## Starting the services
Before running BOOM, start the required infrastructure services using Docker Compose:

- Valkey (in-memory queue)
- MongoDB (alert storage)
- Kafka (message streaming)
- BOOM API server
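The services above can be brought up with Docker Compose. This is a sketch that assumes the repository provides a compose file defining these four services:

```shell
# Start Valkey, MongoDB, Kafka, and the BOOM API server in the background
docker compose up -d

# Verify that all containers are running
docker compose ps
```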
## Building BOOM
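Assuming a standard Cargo workspace at the repository root, the optimized build is:

```shell
# Compile all BOOM binaries with optimizations enabled
cargo build --release
```

The resulting binaries land in `target/release/`.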
Build the Rust binaries with optimizations. Use the `--release` flag for better performance; debug builds are significantly slower.

## Pipeline components
### Start the Kafka producer (optional)
For testing with archival data, start a Kafka producer to simulate real-time alerts.

#### Parameters

- `SURVEY`: Survey name (`ztf`, `lsst`, or `decam`)
- `DATE`: UTC date in `YYYYMMDD` format (optional, defaults to yesterday)
- `PROGRAMID`: Program identifier (ZTF: `public`, `partnership`, or `caltech`)

#### Example: ZTF public alerts

This downloads and produces ZTF public alerts from June 17, 2024.
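The exact binary name and argument order are assumptions inferred from the parameters above (check the repository's Cargo targets); a sketch of the June 17, 2024 example:

```shell
# Produce archival ZTF public alerts from 2024-06-17 into Kafka
# (binary name `kafka_producer` is assumed, not confirmed)
cargo run --release --bin kafka_producer -- ztf 20240617 public
```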
#### Additional options

View all available options with `--help`. Commonly used flags:

- `--limit`: Limit the number of alerts produced
- `--server-url`: Override the Kafka broker URL (default: `localhost:9092`)
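For reference, the producer's default `DATE` (yesterday, UTC) in `YYYYMMDD` format can be computed in the shell like this:

```shell
# Yesterday's date in UTC, formatted as YYYYMMDD
# (GNU date; on macOS/BSD use: date -u -v-1d +%Y%m%d)
DATE=$(date -u -d "yesterday" +%Y%m%d)
echo "$DATE"
```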
### Start the Kafka consumer
The consumer reads alerts from Kafka topics and transfers them to Valkey queues:
#### Parameters
- `SURVEY`: Survey name to consume from
- `DATE`: UTC date in `YYYYMMDD` format (optional, defaults to today)
- `--programids`: Comma-separated program IDs (default: `public`)
#### Example: ZTF consumer
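Assuming a `kafka_consumer` binary mirroring the parameters above (the binary name is an assumption; check the Cargo targets), the ZTF example might look like:

```shell
# Consume ZTF alerts for 2024-06-17 and queue them in Valkey
cargo run --release --bin kafka_consumer -- ztf 20240617 --programids public
```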
#### Advanced options

Consumer options:
| Option | Description | Default |
|---|---|---|
| `--config` | Path to configuration file | `config.yaml` |
| `--processes` | Number of parallel Kafka readers | 1 |
| `--max-in-queue` | Maximum alerts in Valkey queue | 15000 |
| `--clear` | Clear the Valkey queue before starting | false |
| `--exit-on-eof` | Exit when topic is empty (testing only) | false |
| `--topics-override` | Override default topic names | None |
| `--instance-id` | UUID for this consumer instance | Auto-generated |
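Combining several of the options above into a single invocation (same hypothetical `kafka_consumer` binary name as in the earlier example):

```shell
# Four parallel readers, a larger Valkey queue cap, clear stale entries
# on startup, and exit once the topic is drained (useful in tests)
cargo run --release --bin kafka_consumer -- ztf 20240617 \
  --config config.yaml \
  --processes 4 \
  --max-in-queue 30000 \
  --clear \
  --exit-on-eof
```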
### Start the scheduler
The scheduler manages worker pools that process alerts. It spawns three types of workers based on your configuration:

- Alert workers: Ingest alerts from Valkey, format to BSON, perform crossmatches, and write to MongoDB
- Enrichment workers: Run classification models on alerts and update MongoDB
- Filter workers: Execute user-defined filters and send matching alerts to Kafka

#### Parameters

- `SURVEY`: Survey to process alerts for (`ztf`, `lsst`, or `decam`)
- `--config`: Path to configuration file (default: `config.yaml`)

#### Example: ZTF scheduler
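A hedged sketch of the ZTF scheduler invocation (the `scheduler` binary name is an assumption; check the Cargo targets):

```shell
# Start the alert, enrichment, and filter worker pools for ZTF
cargo run --release --bin scheduler -- ztf --config config.yaml
```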
#### Scheduler output
You should see log messages confirming that the workers have started. The number of workers for each type is configured in `config.yaml` under the `workers` section.

#### Worker configuration

Edit `config.yaml` to adjust worker counts:
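A sketch of what the `workers` section might look like; the exact key names are assumptions, so match them against the `config.yaml` shipped with the repository:

```yaml
workers:
  alert: 4       # ingest from Valkey, crossmatch, write to MongoDB
  enrichment: 2  # run classification models
  filter: 1      # evaluate user-defined filters
```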
## Stopping BOOM
To gracefully shut down BOOM:

- Stop the consumer with `Ctrl+C`
- Stop the scheduler with `Ctrl+C`
## Clearing Kafka topics
If you need to clear a Kafka topic before restarting the producer, delete it. Replace `YYYYMMDD` with the date and `programid` with your program ID.
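The topic naming scheme shown here (`<survey>_<YYYYMMDD>_<programid>`) is an assumption; check the producer's output for the actual topic names. With the standard Kafka tooling:

```shell
# Delete the topic so the producer can repopulate it from scratch;
# substitute YYYYMMDD and programid before running
kafka-topics.sh --bootstrap-server localhost:9092 \
  --delete --topic ztf_YYYYMMDD_programid
```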
## Next steps
- **Processing alerts**: Learn how alerts flow through the pipeline
- **Creating filters**: Write custom filters to identify interesting alerts
- **Monitoring**: Monitor pipeline performance with Prometheus
- **Logging**: Configure logging and debugging output