Overview
The discrete log module implements the baby-step giant-step algorithm for solving discrete logarithm problems in the Ristretto group. This is essential for decrypting twisted ElGamal ciphertexts, where messages are encoded as scalar exponents.
Problem Statement
Given a generator point G and a target point T, find the scalar x such that:
x * G = T
Key Concepts
Baby-Step Giant-Step Algorithm
The algorithm splits the search into two phases:
- Precomputation (Giant Steps): Build a hash table of points {2^16 * i * G : i} for i = 0..2^16
- Online Search (Baby Steps): For each j = 0..2^16, check whether T - j*G is in the hash table
When a match is found at (i, j), then x = 2^16 * i + j.
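The two phases can be sketched with a toy group — integers under multiplication modulo the Goldilocks prime 2^64 - 2^32 + 1, whose full multiplicative group is generated by 7 — standing in for Ristretto points. Every name below is illustrative, not the module's API:

```python
# Toy baby-step giant-step over integers mod the Goldilocks prime,
# standing in for the Ristretto group. Illustrative only.
P = 2**64 - 2**32 + 1   # prime; 7 generates the full multiplicative group
G = 7
SPLIT = 2**16           # 16/16 bit split, as in the module

def build_table():
    """Giant steps: map G^(2^16 * i) -> i for i in [0, 2^16)."""
    table, point = {}, 1
    giant = pow(G, SPLIT, P)          # "2^16 * G" in additive notation
    for i in range(SPLIT):
        table[point] = i
        point = point * giant % P
    return table

def bsgs(target, table):
    """Baby steps: find x < 2^32 with G^x = target, or None."""
    g_inv = pow(G, P - 2, P)          # "-G": inverse of the generator
    point = target
    for j in range(SPLIT):
        if point in table:            # "T - j*G" hit a giant step
            return table[point] * SPLIT + j
        point = point * g_inv % P
    return None

table = build_table()
assert bsgs(pow(G, 123_456_789, P), table) == 123_456_789
assert bsgs(pow(G, 2**32 + 5, P), table) is None  # outside the 32-bit range
```

As in the real module, a failed search (value outside the supported range) yields `None` rather than an error.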
Performance Characteristics
- Precomputation: One-time computation, stored as static data
- Search Range: 32-bit unsigned integers (0 to 2^32 - 1)
- Time Complexity: O(2^16) online operations
- Memory: ~2 MB precomputed hash table
Core Types
DiscreteLog
Represents a discrete log problem instance.
Methods
Creating Instances
DiscreteLogError
Errors that can occur during configuration.
DecodePrecomputation
Precomputed hash table for the giant steps.
Usage Examples
Basic Discrete Log Solution
Multi-threaded Solving
Integration with ElGamal Decryption
Edge Cases
Configuring Batch Size
Thread Configuration
Algorithm Details
Two-Level Search
The algorithm uses a 16/16 bit split:
- High bits: Precomputed in hash table (2^16 entries)
- Low bits: Computed online (2^16 iterations)
Writing x = x_hi * 2^16 + x_lo:
- Hash table contains {(2^16 * x_hi * G) : x_hi} for x_hi ∈ [0, 2^16)
- Online search checks T - x_lo * G for x_lo ∈ [0, 2^16)
- When a match is found: x = 2^16 * x_hi + x_lo
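For concreteness, the decomposition of a 32-bit value into table index (high bits) and scan offset (low bits) — names here are illustrative:

```python
# 16/16 split of a 32-bit value x into (x_hi, x_lo).
x = 0xDEADBEEF
x_hi = x >> 16       # high 16 bits: looked up in the precomputed table
x_lo = x & 0xFFFF    # low 16 bits: found by the online baby-step scan
assert x == x_hi * 2**16 + x_lo
print(hex(x_hi), hex(x_lo))   # 0xdead 0xbeef
```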
Parallelization
When using n threads:
- Each thread searches from a different starting point
- Thread i starts at T - i*G
- Each thread steps by -n*G instead of -G
- Search space divided evenly among threads
- First thread to find the solution returns the result
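The strided partition can be sketched in the same toy integer group used above (generator 7 mod the Goldilocks prime); thread i scans x_lo candidates i, i+n, i+2n, ..., so the 2^16 baby steps are split evenly. All names are illustrative:

```python
import threading

P = 2**64 - 2**32 + 1   # toy group: integers mod the Goldilocks prime
G = 7
SPLIT = 2**16

def build_table():
    """Giant steps: map G^(2^16 * i) -> i."""
    table, point = {}, 1
    giant = pow(G, SPLIT, P)
    for i in range(SPLIT):
        table[point] = i
        point = point * giant % P
    return table

def search_stride(target, table, offset, stride, found):
    # Thread `offset` starts at "T - offset*G" and steps by "-stride*G",
    # i.e. target * G^-offset, multiplied by G^-stride each iteration.
    g_inv = pow(G, P - 2, P)
    point = target * pow(g_inv, offset, P) % P
    step = pow(g_inv, stride, P)
    for x_lo in range(offset, SPLIT, stride):
        if point in table:
            found[offset] = table[point] * SPLIT + x_lo
            return
        point = point * step % P

table = build_table()
x = 987_654_321
target = pow(G, x, P)
n = 4   # number of threads
found = {}
threads = [threading.Thread(target=search_stride, args=(target, table, i, n, found))
           for i in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert x in found.values()
```

Only the thread whose stride class contains the true low bits records a result; in the real module the first thread to find the solution wins and the others are abandoned.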
Batch Compression
Ristretto points are compressed in batches for efficiency; the batch size is configurable via set_compression_batch_size().
Performance Tuning
Thread Count
More threads can improve performance, with trade-offs:
- More threads = faster searches for large values
- More threads = higher CPU usage
- Diminishing returns beyond 8-16 threads
Batch Size
Larger batch sizes can improve compression throughput.
Typical Performance
Informal measurements on modern hardware:
- Small values (< 2^16): < 1 second
- Medium values (2^16 - 2^24): 1-10 seconds
- Large values (2^24 - 2^32): 10-60 seconds
- Multi-threaded: 2-4x speedup with 4-8 threads
Security Considerations
Not Constant-Time
The implementation is not constant-time:
- Uses hash tables (variable access time)
- Short-circuits on solution (variable iteration count)
- Batching introduces timing variations
- Multi-threading adds non-determinism
Information Leakage
Execution time reveals information about the solution:
- Small values decrypt faster
- Large values take longer
- Thread count affects timing
Possible mitigations include:
- Adding random delays
- Using fixed-time operations
- Limiting the value range
Range Limitations
Only 32-bit unsigned integers are supported:
- Values > 2^32 - 1 will fail to decrypt
- Negative values are not supported
- Returns None for out-of-range values
Precomputation
The hash table is precomputed and embedded in the binary.
Precomputation Contents
- 2^16 = 65,536 hash table entries
- Each entry: 32-byte point → 2-byte index
- Total size: ~2 MB
- Computed for generator G only
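The back-of-the-envelope size estimate for these figures:

```python
# Size estimate for the precomputed table: 2^16 entries, each pairing
# a 32-byte compressed point with a 2-byte (u16) index.
ENTRIES = 2**16
POINT_BYTES = 32   # compressed Ristretto point
INDEX_BYTES = 2    # u16 table index
total = ENTRIES * (POINT_BYTES + INDEX_BYTES)
print(total, total / 2**20)   # 2228224 bytes = 2.125 MiB, i.e. ~2 MB
```

Hash-map overhead in the in-memory representation pushes the real footprint somewhat above this raw payload size.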
Regenerating Precomputation
The precomputation is generated by test code.
Advanced Usage
Custom Configuration
Handling Decryption Failures
Comparison with Other Methods
| Method | Time Complexity | Space | Range |
|---|---|---|---|
| Baby-step Giant-step | O(√n) | O(√n) | Full |
| Pollard’s Rho | O(√n) | O(1) | Full |
| Brute Force | O(n) | O(1) | Limited |
| Lookup Table | O(1) | O(n) | Limited |
This implementation combines:
- Precomputed table (giant steps)
- Online search (baby steps)
- Parallel search (multiple threads)
- Batch compression (optimization)