Alpha Leak is structured as a single Node.js process running 30+ concurrent background services, coordinated through PostgreSQL and Redis. There is no message queue between ingestion and processing: events are handled inline, which keeps latency minimal and eliminates the operational complexity of a separate queue tier. The system is divided into four named phases, each building on the data produced by the one before it.
Pipeline overview
The diagram below shows every service in the system and how data flows through them, from raw WebSocket events to live on-chain execution.

Data stores
Each store has a distinct role. The system does not use a general-purpose cache layer; every store is chosen specifically for the access pattern it serves.

| Store | Role |
|---|---|
| PostgreSQL | Primary store for all trades, signals, wallet profiles, ML scores, detected bundles, copy-trade pairs, regime snapshots, and live trade history. Trades table is monthly-partitioned for query performance. |
| Redis | Pub/Sub channel (trade:signals) for signal fan-out to the live trader; key-value cache for wallet stats (wcache:), crowding ratios (crowding:), market regime (market:regime), and bonding curve state (token:<mint>). Also maintains the tracked_wallets and known_bots sets. |
| GCS | Long-term archive of raw trade data older than 60 days. |
| ONNX model files | Loaded from disk at startup and hot-reloaded every 5 minutes if updated. Stored alongside _metadata.json files describing features, calibration parameters, and PR-AUC. |
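The key patterns in the table can be collected into small helper functions so that every service builds Redis keys the same way. A minimal sketch: the prefixes and the channel name come from the table above, but the helper names themselves are illustrative, not from the codebase.

```javascript
// Key builders for the Redis patterns listed in the table above.
// Prefixes (wcache:, crowding:, market:regime, token:<mint>) and the
// trade:signals channel are from the docs; function names are assumptions.
const SIGNAL_CHANNEL = 'trade:signals';
const MARKET_REGIME_KEY = 'market:regime';

const walletStatsKey = (wallet) => `wcache:${wallet}`;
const crowdingKey = (token) => `crowding:${token}`;
const bondingCurveKey = (mint) => `token:${mint}`;
```

Centralizing the builders keeps a renamed prefix from silently splitting readers and writers onto different keys.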
Concurrency model
Every background service follows the same pattern: a setInterval loop that checks a running flag before each execution, skipping the cycle if the previous run hasn't finished. This prevents overlapping database queries under load without requiring a separate job queue. Memory usage is monitored and logged after every run.
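The guard described above can be sketched as a small wrapper. The `guarded` helper and its return values are illustrative, not from the codebase; only the running-flag pattern itself comes from the description.

```javascript
// Wraps an async cycle so that a new invocation is skipped while the
// previous one is still in flight: the pattern each service's setInterval
// loop follows to avoid overlapping database queries.
function guarded(runCycle) {
  let running = false;
  return async () => {
    if (running) return 'skipped'; // previous run hasn't finished: skip this cycle
    running = true;
    try {
      return await runCycle();
    } finally {
      running = false; // reset even if the cycle throws
    }
  };
}

// A service would then schedule itself roughly as:
//   setInterval(guarded(async () => { /* query, process, log memory */ }), 5000);
```

Because the flag flips synchronously before the first `await`, a second tick that fires mid-run sees `running === true` and returns immediately.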
The WebSocket subscriber maintains a backlog counter. If more than 10 events are pending, a warning is emitted and the count is reported by the metrics interval. This acts as a natural pressure valve during burst periods.
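That pressure valve amounts to a counter incremented on receive and decremented on completion. In this sketch the class and method names are hypothetical; only the threshold of 10 pending events comes from the description above.

```javascript
// Tracks pending WebSocket events and flags backlog above a threshold.
class BacklogCounter {
  constructor(threshold = 10) {
    this.pending = 0;
    this.threshold = threshold;
  }
  onEventReceived() {
    this.pending += 1;
    if (this.isBacklogged()) {
      // In the real system a warning is emitted and the metrics interval logs it.
      console.warn(`event backlog high: ${this.pending} pending`);
    }
  }
  onEventHandled() {
    this.pending -= 1;
  }
  isBacklogged() {
    return this.pending > this.threshold;
  }
}
```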
Inter-service communication
Services communicate in two ways: through shared database state, and through a real-time Redis Pub/Sub channel.

- Database reads
- Redis Pub/Sub
Most intelligence services read each other's output from PostgreSQL. For example, MlInference reads WalletScorer alpha scores, and AntiSignalEmitter reads BundleDetector results. This keeps services decoupled: each service owns its writes and reads only the columns it needs.

The InlineScorer inside the live trader runs ONNX inference directly in the subscription callback; there is no database round-trip between receiving a signal and computing the final execution score.