Documentation Index
Fetch the complete documentation index at: https://mintlify.com/sandwichfarm/nostr-watch/llms.txt
Use this file to discover all available pages before exploring further.
@nostrwatch/trawler crawls Nostr relays (WebSocket servers that store and forward events) to collect metadata, capabilities, and health data. It uses the nostrawl library as its crawling engine and persists state to a local SQLite database. When paired with relaymon, trawler continuously feeds newly discovered relay URLs into the monitor so that the relay list stays current without manual curation. Configurable batch sizes, concurrency limits, and seed relay sources allow the crawl to be tuned for network conditions. Dependencies are vendored and committed, so the app runs offline after the initial clone.
Prerequisites
- Deno >=1.40
Environment variables
All environment variables are optional. They override the corresponding config.yaml values.
| Variable | Required | Description | Example |
|---|---|---|---|
| TRAWLER_DB_PATH | No | Override the SQLite database file path | ./data/trawler.db |
| TRAWLER_DB_WAL | No | Enable WAL mode for SQLite | true |
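The override precedence can be sketched in shell: when the variable is set it wins, otherwise the config.yaml default applies. This is an illustrative snippet, not trawler's actual resolution code; ./trawler.db is the documented default from the configuration reference below.

```shell
# Illustrative only: an env var overrides the config.yaml default when set.
# ./trawler.db is the documented default for trawler.db.path.
DB_PATH="${TRAWLER_DB_PATH:-./trawler.db}"
echo "resolved db path: $DB_PATH"
```

Running with `TRAWLER_DB_PATH=./data/trawler.db` would resolve to that path instead.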
Installation
@nostrwatch/trawler is a Deno application — no npm install step is required. Clone the monorepo and navigate to the app directory. The vendor/ directory is committed, so Deno dependencies are available offline.
Install Deno
Install Deno >=1.40 from deno.land if you have not already.
Quick start
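A minimal run might look like the following. The repository URL is an assumption based on the docs path above, and the app directory within the monorepo is also an assumption; adjust both to match your checkout.

```shell
# Clone the monorepo (URL assumed from the docs path; verify before use).
git clone https://github.com/sandwichfarm/nostr-watch.git

# Navigate to the trawler app (directory path is an assumption).
cd nostr-watch/apps/trawler

# Vendored dependencies are committed, so this runs offline after the clone.
deno task start
```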
Available tasks
| Task | Description |
|---|---|
| deno task start | Run the trawler |
| deno task compile | Compile to a standalone binary in dist/ |
| deno task test | Run the test suite |
| deno task force-refresh | Clear all caches and restart |
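For deployments, the compile task produces a standalone binary in dist/. The binary name below is an assumption; check dist/ after compiling for the actual output.

```shell
# Build a self-contained executable, then run it directly (no Deno needed
# on the target host). The output name "trawler" is an assumption.
deno task compile
./dist/trawler
```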
Configuration reference
Trawler is configured via config.yaml. Copy the file and edit as needed.
Key configuration options
| Key | Default | Description |
|---|---|---|
| logLevel | info | Logging verbosity: debug, info, warn, error |
| trawler.db.path | ./trawler.db | SQLite database file path |
| trawler.db.enableWAL | true | Enable Write-Ahead Logging for better concurrent access |
| trawler.relaysPerBatch | 5 | Number of relays processed per crawl batch |
| trawler.concurrency | 2 | Maximum concurrent relay checks |
| trawler.seed.interval | 60000 | Seed list refresh interval in milliseconds |
| trawler.seed.sources | ["config"] | Seed sources: config, api, events |
| trawler.seed.options.allowedNetworks | ["clearnet"] | Network types to crawl: clearnet, tor, i2p |
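Putting the table together, a config.yaml using only the defaults above might look like the following sketch. The key nesting is inferred from the dotted names in the table and should be checked against the shipped config.yaml.

```yaml
logLevel: info                      # debug | info | warn | error
trawler:
  db:
    path: ./trawler.db
    enableWAL: true
  relaysPerBatch: 5
  concurrency: 2
  seed:
    interval: 60000                 # milliseconds
    sources: ["config"]             # config | api | events
    options:
      allowedNetworks: ["clearnet"] # clearnet | tor | i2p
```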
TRAWLER_DB_PATH and TRAWLER_DB_WAL environment variables override the corresponding trawler.db.* config values when set.