testcafe-hammerhead rewrites every JavaScript file it proxies. Rewriting parses the JS, transforms property accesses and global references so they resolve through the proxy, then re-serialises the result. For a page with dozens of large scripts, this is significant CPU work that repeats on every load, even when the script has not changed. Rammerhead's JS cache short-circuits this by storing the rewritten output keyed by a hash of the original script. On a cache hit, the stored output is returned immediately without re-parsing.
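The short-circuit can be sketched as follows. `cachedRewrite`, `rewriteJs`, and `keyOf` are hypothetical stand-ins for illustration, not hammerhead's actual API:

```javascript
// Sketch of the cache short-circuit around hammerhead's rewriter.
// `rewriteJs` and `keyOf` are hypothetical stand-ins, not real hammerhead APIs.
function cachedRewrite(source, cache, rewriteJs, keyOf) {
    const key = keyOf(source);
    const hit = cache.get(key);
    if (hit != null) return hit; // cache hit: skip parse/transform entirely
    const rewritten = rewriteJs(source); // cache miss: do the expensive work once
    cache.set(key, rewritten);
    return rewritten;
}
```

Any object with `get`/`set` (even a plain `Map`) works as the `cache` argument in this sketch.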
## Cache backends
Rammerhead ships two implementations that share the same `get(key)` / `set(key, value)` interface defined by `RammerheadJSAbstractCache`.
- `RammerheadJSFileCache` (default)
- `RammerheadJSMemCache`
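The shared contract can be sketched like this; the method names come from this page, while the synchronous shape and error behaviour are assumptions:

```javascript
// Hedged sketch of the shared contract; method names are from the docs,
// the synchronous shape and error behaviour are assumptions.
class RammerheadJSAbstractCache {
    get(key) { throw new Error('get(key) must be implemented'); }
    set(key, value) { throw new Error('set(key, value) must be implemented'); }
}

// A minimal backend satisfying the same contract (illustrative only)
class TrivialMemCache extends RammerheadJSAbstractCache {
    constructor() { super(); this.map = new Map(); }
    get(key) { return this.map.has(key) ? this.map.get(key) : null; }
    set(key, value) { this.map.set(key, value); }
}
```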
### RammerheadJSFileCache

Rewritten scripts are written to individual files on disk. An LRU marker in memory tracks file sizes so the total cache never exceeds the configured limit; when the limit is reached, the least-recently-used files are deleted automatically.

Characteristics:
- Survives process restarts — cached files are re-indexed from disk at startup
- Shareable across worker processes via the master/worker IPC protocol
- Read latency depends on disk speed (NVMe/SSD strongly recommended)
- Uses a two-level structure: the LRU marker in memory, actual script content on disk
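The size-bounded LRU marker described above can be sketched as follows; class and callback names are illustrative, not Rammerhead's actual code:

```javascript
// Hedged sketch of a byte-budgeted LRU marker. It tracks sizes only;
// the actual script bytes would live on disk.
class LruSizeMarker {
    constructor(maxBytes, onEvict = () => {}) {
        this.maxBytes = maxBytes;
        this.onEvict = onEvict; // e.g. delete the backing cache file
        this.total = 0;
        this.entries = new Map(); // Map preserves insertion order: oldest first
    }
    touch(key, size) {
        if (this.entries.has(key)) {
            this.total -= this.entries.get(key);
            this.entries.delete(key); // re-insert below to mark as most recent
        }
        this.entries.set(key, size);
        this.total += size;
        while (this.total > this.maxBytes && this.entries.size > 1) {
            // evict the least-recently-used entry until back under budget
            const [oldKey, oldSize] = this.entries.entries().next().value;
            this.entries.delete(oldKey);
            this.total -= oldSize;
            this.onEvict(oldKey);
        }
    }
}
```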
## Default configuration
`src/config.js` sets the cache backend:
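The relevant lines look roughly like this; the constructor arguments and surrounding file contents are assumptions, not the literal source:

```javascript
// Sketch only: argument names and order are assumptions,
// not RammerheadJSFileCache's actual signature.
const path = require('path');
const diskJsCachePath = path.join(__dirname, '../cache-js');

module.exports = {
    diskJsCachePath,
    // default backend: on-disk cache with an in-memory LRU size marker
    jsCache: new RammerheadJSFileCache(diskJsCachePath),
};
```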
The `cache-js/` directory must exist before Rammerhead starts. The repository ships with an empty `cache-js/.gitkeep` to create it. If you move the directory, update `diskJsCachePath` in `src/config.js` accordingly.

## How the file cache handles multiple workers
On multi-core hosts, `enableWorkers` is true and Rammerhead forks one worker process per CPU. Because each worker has its own memory space, they cannot share a simple in-memory LRU. `RammerheadJSFileCache` solves this with an IPC protocol:
- The master process owns the LRU marker and handles all cache writes. When the LRU evicts an entry, the master deletes the corresponding file.
- Workers send read and write requests to the master via `process.send`/`worker.send`. They do not maintain their own LRU marker.
## Startup: re-indexing existing cache files
When `RammerheadJSFileCache` starts (and is running as master), it scans the cache directory and re-populates the LRU marker from any files left over from previous runs:
## Switching from file cache to memory cache
To trade persistence for simplicity, replace the `jsCache` line in `src/config.js`:
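The replacement looks roughly like this; the constructor argument is an assumption, not the actual signature:

```javascript
// Sketch only: the constructor argument is an assumption.
jsCache: new RammerheadJSMemCache(50 * 1024 * 1024), // hypothetical in-memory byte cap
```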
### What is the cache key?
The cache key is generated by hammerhead and is based on a hash of the original (un-rewritten) script content. The same script served from two different URLs will produce the same key and share a single cache entry.
### Does the cache ever become stale?
The cache key is content-addressed, so a script is only reused if its content is identical. If the remote server changes the script, the hash changes and the old entry is simply never requested again — it will eventually be evicted by the LRU when the cache fills up. There is no explicit time-to-live.
### Can I clear the cache manually?
For the file cache, delete the files in `cache-js/` (leave `.gitkeep`). For the memory cache, restart the process. There is no hot-reload or flush endpoint.

### What happens if a cache file is zero bytes?
The startup scan treats zero-byte files as corrupted (write was interrupted by a crash) and deletes them automatically. They will be re-generated on the next cache miss.