
testcafe-hammerhead rewrites every JavaScript file it proxies. Rewriting parses the JS, transforms property accesses and global references so they resolve through the proxy, then re-serialises the result. For a page with dozens of large scripts, this is significant CPU work that repeats on every load — even when the script has not changed. Rammerhead’s JS cache short-circuits this by storing the rewritten output keyed by a hash of the original script. On a cache hit, the stored output is returned immediately without re-parsing.

Cache backends

Rammerhead ships two implementations, both exposing the get(key) / set(key, value) interface defined by RammerheadJSAbstractCache: RammerheadJSMemCache, which keeps rewritten scripts in an in-memory LRU, and RammerheadJSFileCache, which writes them to individual files on disk. In the file cache, an LRU marker in memory tracks file sizes so the total cache never exceeds the configured limit; when the limit is reached, the least-recently-used files are deleted automatically.
// src/classes/RammerheadJSFileCache.js
const fs = require('fs');
const path = require('path');
const { LRUCache } = require('lru-cache');

class RammerheadJSFileCache {
    constructor(diskJsCachePath, jsCacheSize, maxItems, enableWorkerMode) {
        this.diskJsCachePath = diskJsCachePath;
        this.lruMarker = new LRUCache({
            max: maxItems,
            maxSize: jsCacheSize,
            sizeCalculation: n => n || 1,
            dispose(_, key) {
                fs.unlinkSync(path.join(diskJsCachePath, key));
            }
        });
        // ...
    }
    async get(key) {
        if (this.isWorker() ? await this.askMasterGet(key) : this.lruMarker?.get(key)) {
            return fs.readFileSync(path.join(this.diskJsCachePath, key), 'utf-8');
        }
        return undefined;
    }
    set(key, value) {
        if (this.isWorker()) {
            this.askMasterSet(key, value);
        } else {
            this.lruMarker.set(key, value.length);
            fs.writeFileSync(path.join(this.diskJsCachePath, key), value, 'utf-8');
        }
    }
}
Characteristics:
  • Survives process restarts — cached files are re-indexed from disk at startup
  • Shareable across worker processes via the master/worker IPC protocol
  • Read latency depends on disk speed (NVMe/SSD strongly recommended)
  • Uses a two-level structure: the LRU marker in memory, actual script content on disk

Default configuration

src/config.js sets the cache backend:
// src/config.js
const enableWorkers = os.cpus().length !== 1;

// recommended: 50mb for memory, 5gb for disk
// jsCache: new RammerheadJSMemCache(50 * 1024 * 1024),
jsCache: new RammerheadJSFileCache(
    path.join(__dirname, '../cache-js'),  // cache directory
    5 * 1024 * 1024 * 1024,              // 5 GB max total size
    50000,                               // max 50,000 cached files
    enableWorkers                        // enable worker-mode IPC on multi-core hosts
),
The commented-out line shows the equivalent memory cache configuration (50 MB).
The cache-js/ directory must exist before Rammerhead starts. The repository ships with an empty cache-js/.gitkeep to create it. If you move the directory, update diskJsCachePath in config.js accordingly.

How the file cache handles multiple workers

On multi-core hosts, enableWorkers is true and Rammerhead forks one worker process per CPU. Because each worker has its own memory space, they cannot share a simple in-memory LRU. RammerheadJSFileCache solves this with an IPC protocol:
  • The master process owns the LRU marker and handles all cache writes. When the LRU evicts an entry, the master deletes the corresponding file.
  • Workers send read and write requests to the master via process.send / worker.send. They do not maintain their own LRU marker.
// worker sends a read request
process.send({ type: 'rjc', id, key });

// master replies with whether the file exists
worker.send({ type: 'rjc', key, id, exists: !!this.lruMarker.get(key) });

// worker then reads the file directly from disk if exists === true
return fs.readFileSync(path.join(this.diskJsCachePath, key), 'utf-8');
This design keeps disk reads distributed across workers while centralising LRU accounting in a single process.
The file cache is not recommended on spinning-disk HDDs. Each cache hit involves at least one readFileSync call, and on an HDD that latency can make the cache slower than re-running the hammerhead rewrite. Use RammerheadJSMemCache or an SSD-backed path instead.

Startup: re-indexing existing cache files

When RammerheadJSFileCache starts (and is running as master), it scans the cache directory and re-populates the LRU marker from any files left over from previous runs:
// src/classes/RammerheadJSFileCache.js
const initFileList = [];
for (const file of fs.readdirSync(diskJsCachePath)) {
    if (file === '.gitkeep') continue;
    const stat = fs.statSync(path.join(diskJsCachePath, file));
    initFileList.push({ key: file, size: stat.size });
}
initFileList.sort((a, b) => a.size - b.size);  // smallest first

for (const file of initFileList) {
    if (!file.size) {
        // zero-byte file: write was interrupted, delete it
        fs.unlinkSync(path.join(diskJsCachePath, file.key));
        continue;
    }
    this.lruMarker.set(file.key, file.size, { noDisposeOnSet: true });
}
Files are inserted into the LRU from smallest to largest, which makes the smallest files the least-recently-used entries. If the leftover files already exceed the configured size limit, the LRU therefore evicts the smallest files first and keeps the larger ones, the scripts that are most expensive to rewrite from scratch.
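The effect of the smallest-first ordering can be seen with a toy size-bounded LRU; this is a simplified stand-in for the lru-cache package that mimics only its oldest-first eviction:

```javascript
// Toy size-bounded LRU: evicts oldest-inserted entries when over capacity.
class SizeBoundedLRU {
    constructor(maxSize) {
        this.maxSize = maxSize;
        this.used = 0;
        this.entries = new Map(); // Map preserves insertion order
        this.evicted = [];
    }
    set(key, size) {
        this.entries.set(key, size);
        this.used += size;
        while (this.used > this.maxSize) {
            // evict the least-recently-used (oldest-inserted) entry
            const [oldKey, oldSize] = this.entries.entries().next().value;
            this.entries.delete(oldKey);
            this.used -= oldSize;
            this.evicted.push(oldKey);
        }
    }
}

const lru = new SizeBoundedLRU(100);
const files = [
    { key: 'large.js', size: 60 },
    { key: 'tiny.js', size: 10 },
    { key: 'medium.js', size: 40 },
    { key: 'small.js', size: 20 }
];
files.sort((a, b) => a.size - b.size); // smallest first, as at startup
for (const f of files) lru.set(f.key, f.size);
```

With 130 bytes of files and a 100-byte limit, the two smallest entries are evicted and the two largest survive.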

Switching from file cache to memory cache

To trade persistence for simplicity, replace the jsCache line in src/config.js:
// src/config.js
// jsCache: new RammerheadJSFileCache(path.join(__dirname, '../cache-js'), 5 * 1024 * 1024 * 1024, 50000, enableWorkers),
jsCache: new RammerheadJSMemCache(50 * 1024 * 1024),  // 50 MB in-memory LRU
The 50 MB default is conservative. On a server with ample RAM and many concurrent users, increasing this to 200–500 MB can measurably reduce CPU usage by keeping more frequently-visited scripts in the LRU.
The cache key is generated by hammerhead from a hash of the original (un-rewritten) script content, so it is content-addressed: the same script served from two different URLs produces the same key and shares a single cache entry, and a script is only reused if its content is identical. If the remote server changes the script, the hash changes and the old entry is simply never requested again; it is eventually evicted by the LRU as the cache fills. There is no explicit time-to-live.
To clear the cache manually: for the file cache, delete the files in cache-js/ (leave .gitkeep in place); for the memory cache, restart the process. There is no hot-reload or flush endpoint.
The startup scan treats zero-byte files as corrupted (write was interrupted by a crash) and deletes them automatically. They will be re-generated on the next cache miss.
