BOOM uses the tracing crate for structured logging and instrumentation. This guide explains how to configure logging levels and span events.

Overview

BOOM’s logging system provides:
  • Structured logs: Events with context from active spans
  • Configurable levels: Control verbosity via environment variables
  • Span instrumentation: Track function execution and performance
  • Error context: Automatic error source chain formatting

Log levels

Configure logging verbosity with the RUST_LOG environment variable.

Available levels

Level   Usage                       Example
-----   -------------------------   ------------------------------------
error   Actual errors               Database connection failed
warn    Unexpected situations       Missing optional field
info    Major lifecycle events      Worker started, Alert processed
debug   Detailed diagnostics        Function entry/exit, variable values
trace   Very detailed diagnostics   Loop iterations, all function calls
off     Disable logging             Silence all output

Basic usage

# Without RUST_LOG set, the default applies: info for all crates, error-only for ort
cargo run --bin scheduler -- ztf

Default directive

When RUST_LOG is not set, BOOM uses:
src/utils/o11y/logging.rs
let env_filter = EnvFilter::try_from_default_env()
    .or_else(|_| EnvFilter::try_new("info,ort=error"))?;
This means:
  • info: Log INFO and above for all crates
  • ort=error: Only log ERROR from the ort crate (ONNX Runtime)
The ort crate is significantly noisier than other dependencies, so it’s limited to ERROR by default.

Advanced filtering

The RUST_LOG variable supports complex filtering directives.

Per-crate levels

Set different levels for different crates:
RUST_LOG=boom=debug,mongodb=warn,rdkafka=error cargo run --bin scheduler -- ztf
This sets:
  • debug for BOOM code
  • warn for MongoDB driver
  • error for Kafka client

Per-module levels

Target specific modules:
RUST_LOG=boom::filter=trace,boom::alert=debug cargo run --bin scheduler -- ztf

Overriding defaults

Setting RUST_LOG completely replaces the default directive. If you need DEBUG but still want to filter ort, specify the ort override explicitly:
# Bad: Shows all ort logs
RUST_LOG=debug cargo run --bin scheduler -- ztf

# Good: Debug everywhere except ort
RUST_LOG=debug,ort=error cargo run --bin scheduler -- ztf
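Conceptually, each comma-separated item in RUST_LOG is either a bare default level or a `target=level` override, which is why `debug,ort=error` behaves as above. A simplified, std-only sketch of that structure (illustrative only; the real parsing is done by tracing-subscriber's EnvFilter, which also handles module paths like `boom::filter` and span/field filters):

```rust
use std::collections::HashMap;

/// Split a RUST_LOG-style directive string into a default level and
/// per-target overrides. Simplified for illustration.
fn parse_directives(spec: &str) -> (Option<String>, HashMap<String, String>) {
    let mut default = None;
    let mut per_target = HashMap::new();
    for item in spec.split(',').map(str::trim).filter(|s| !s.is_empty()) {
        match item.split_once('=') {
            // "ort=error" style: level override for one target
            Some((target, level)) => {
                per_target.insert(target.to_string(), level.to_string());
            }
            // bare "debug" style: default level for everything else
            None => default = Some(item.to_string()),
        }
    }
    (default, per_target)
}

fn main() {
    let (default, per_target) = parse_directives("debug,ort=error");
    println!("default = {:?}", default);              // Some("debug")
    println!("ort     = {:?}", per_target.get("ort")); // Some("error")
}
```

This also makes the override pitfall visible: `RUST_LOG=debug` alone yields an empty override map, so nothing limits ort.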

Span events

Spans represent durations (like function execution) and provide context for log events.

BOOM_SPAN_EVENTS variable

Enable span lifecycle events with BOOM_SPAN_EVENTS:
BOOM_SPAN_EVENTS=new,close cargo run --bin scheduler -- ztf

Available span events

Event    Description                       Use case
------   -------------------------------   ------------------------------------
new      Span created                      Track when functions start
enter    Span entered                      Functions with multiple entry points
exit     Span exited                       Functions with multiple exit points
close    Span closed (includes duration)   Performance profiling
active   All events when span is active    Maximum verbosity
full     All span events                   Complete span lifecycle
none     No span events (default)          Normal operation

Profiling with span events

The close event includes execution time, useful for profiling:
BOOM_SPAN_EVENTS=close cargo run --bin scheduler -- ztf
Output includes:
close boom::filter::run_filter time.busy=1.23ms time.idle=45.6μs
Use BOOM_SPAN_EVENTS=new,close to see when functions start and how long they take.

Multiple span events

Combine multiple events with commas:
BOOM_SPAN_EVENTS=new,enter,exit,close cargo run --bin scheduler -- ztf
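A simplified sketch of how such a comma-separated value could map to individual span events (the actual logic is parse_span_events in src/utils/o11y/logging.rs, which maps onto tracing-subscriber's FmtSpan flags; the enum here is a std-only stand-in):

```rust
/// Span lifecycle events, mirroring the table above.
#[derive(Debug, PartialEq, Clone, Copy)]
enum SpanEvent { New, Enter, Exit, Close }

/// Parse a comma-separated BOOM_SPAN_EVENTS value. "active" expands to
/// enter+exit and "full" to all four events; unknown names are silently
/// ignored here (the real parser may reject them instead).
fn parse_span_events(spec: &str) -> Vec<SpanEvent> {
    use SpanEvent::*;
    let mut events = Vec::new();
    for item in spec.split(',').map(str::trim) {
        match item {
            "new" => events.push(New),
            "enter" => events.push(Enter),
            "exit" => events.push(Exit),
            "close" => events.push(Close),
            "active" => events.extend([Enter, Exit]),
            "full" => events.extend([New, Enter, Exit, Close]),
            _ => {} // "none", empty, or unknown: no events
        }
    }
    events
}

fn main() {
    println!("{:?}", parse_span_events("new,close")); // [New, Close]
}
```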

Complete example

Combine RUST_LOG and BOOM_SPAN_EVENTS for detailed debugging:
RUST_LOG=debug,ort=warn BOOM_SPAN_EVENTS=new,close \
  cargo run --bin scheduler -- ztf
This configuration:
  • Sets DEBUG level for all crates
  • Limits ort to WARN
  • Shows span creation and completion with timing

Function instrumentation

BOOM uses the #[instrument] attribute to create spans automatically:
src/bin/scheduler.rs
#[instrument(skip_all, fields(survey = %args.survey))]
async fn run(args: Cli, meter_provider: SdkMeterProvider) {
    // Function body automatically wrapped in a span
}

Skipping fields

The skip parameter excludes fields from span context:
#[instrument(skip(self))]  // Don't log self
pub async fn process_alert(&self, alert: Alert) {
    // ...
}
This avoids logging large or unhelpful values like self.

Custom fields

Add custom fields to spans:
src/filter/base.rs
#[instrument(skip(alert, schema), fields(candid = alert.candid, object_id = alert.object_id), err)]
pub async fn send_alert_to_kafka(
    alert: &Alert,
    schema: &Schema,
    producer: &FutureProducer,
    topic: &str,
) -> Result<(), FilterWorkerError> {
    // ...
}
Logs include:
INFO send_alert_to_kafka{candid=2462315415115010042 object_id="ZTF24aabcdef"}

Error logging

BOOM provides macros for standardized error logging.

log_error! macro

Log errors with automatic source chain formatting:
use boom::utils::o11y::logging::{log_error, WARN};

// Basic error logging (ERROR level)
log_error!(error);

// With context message
log_error!(error, "failed to process alert");

// With format arguments
log_error!(error, "failed to process alert {}", candid);

// Custom log level
log_error!(WARN, error, "retrying operation");

as_error! macro

Create closures for Result methods:
use boom::utils::o11y::logging::as_error;

// Use with inspect_err
foo().inspect_err(as_error!("operation failed"))?;

// Use with unwrap_or_else
foo().unwrap_or_else(as_error!());

Error source chains

BOOM automatically formats error source chains:
src/utils/o11y/logging.rs
error = %$error,
source = %iter_sources(&$error).collect::<Vec<_>>().join(SEP),
debug = ?$error
Output includes:
ERROR filter execution failed error="invalid pipeline" source="field not found | invalid BSON type" debug=...
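The same source-chain walk can be reproduced with nothing but std's Error::source. A self-contained sketch (the error types and the " | " separator are illustrative, chosen to match the output above):

```rust
use std::error::Error;
use std::fmt;

// Illustrative error chain:
// "invalid pipeline" -> "field not found" -> "invalid BSON type"
#[derive(Debug)]
struct BsonTypeError;
#[derive(Debug)]
struct FieldError(BsonTypeError);
#[derive(Debug)]
struct PipelineError(FieldError);

impl fmt::Display for BsonTypeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid BSON type")
    }
}
impl Error for BsonTypeError {}

impl fmt::Display for FieldError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "field not found")
    }
}
impl Error for FieldError {
    fn source(&self) -> Option<&(dyn Error + 'static)> { Some(&self.0) }
}

impl fmt::Display for PipelineError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid pipeline")
    }
}
impl Error for PipelineError {
    fn source(&self) -> Option<&(dyn Error + 'static)> { Some(&self.0) }
}

/// Collect the Display representation of every error *below* `error`
/// in the source chain (excluding `error` itself).
fn sources(error: &dyn Error) -> Vec<String> {
    let mut out = Vec::new();
    let mut current = error.source();
    while let Some(e) = current {
        out.push(e.to_string());
        current = e.source();
    }
    out
}

fn main() {
    let error = PipelineError(FieldError(BsonTypeError));
    // prints: error=invalid pipeline source=field not found | invalid BSON type
    println!("error={} source={}", error, sources(&error).join(" | "));
}
```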

Log output format

BOOM’s log format includes:
src/utils/o11y/logging.rs
let fmt_layer = tracing_subscriber::fmt::layer()
    .with_target(false)       // Don't show module path
    .with_file(true)          // Show source file
    .with_line_number(true)   // Show line number
    .with_span_events(parse_span_events("BOOM_SPAN_EVENTS"));
Example output:
2024-06-17T12:34:56.789Z INFO boom/src/scheduler.rs:45: heartbeat: workers running alert=3/3 enrichment=3/3 filter=3/3

Flame graphs

Generate flame graphs for performance profiling:
1. Install inferno:

   cargo install inferno

2. Run with BOOM_FLAME_FILE:

   BOOM_FLAME_FILE=./tracing.folded cargo run --bin scheduler -- ztf

   Let BOOM run for a while, then terminate with Ctrl+C.

3. Generate the SVG:

   inferno-flamegraph <tracing.folded >tracing-flamegraph.svg

   Open the SVG in your browser for an interactive flame graph.
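For reference, the folded file is plain text in the standard collapsed-stack format: one line per unique stack, frames separated by semicolons, followed by a sample count. The span names and counts below are made up for illustration:

```
boom::scheduler::run 120
boom::scheduler::run;boom::filter::run_filter 950
boom::scheduler::run;boom::alert::process_alert 430
```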
Flame graphs use span instrumentation to track function execution time. Functions without #[instrument] won’t appear in the graph.

Logging conventions

BOOM follows these logging conventions:

When to log at each level

Reserve INFO for major lifecycle events:
  • Worker started/stopped
  • Database connected
  • Alert batch processed
  • Filter executed
Avoid logging every individual alert at INFO level.
Use WARN for unexpected but non-critical issues:
  • Missing optional configuration
  • Retryable errors
  • Deprecated features
  • Performance degradation
Use ERROR for actual errors that require attention:
  • Database connection failures
  • Invalid filter pipelines
  • Unrecoverable processing errors
Only log errors where they’re handled, not at every propagation point.
Use DEBUG for information useful during development:
  • Function entry/exit
  • Intermediate computation results
  • Configuration values
  • Cache hits/misses
Use TRACE for very detailed diagnostics:
  • Loop iterations
  • Individual field values
  • Low-level protocol details

Error handling

From CONTRIBUTING.md:
All other errors are recoverable errors and should be expressed using custom error types (usually enums) that implement std::error::Error. These errors should be propagated to the caller using the ? operator. Each error is propagated until it’s handled, i.e., when a caller intercepts the error value to either use it for control flow or log it.
Key principle: Don’t log errors if you’re propagating them with ?. Only log where you handle the error.
// Bad: Logs at every propagation
fn process() -> Result<()> {
    let result = foo().inspect_err(|e| log_error!(e))?;  // Don't do this
    bar(result).inspect_err(|e| log_error!(e))?;         // Or this
    Ok(())
}

// Good: Log only at handling site
fn process() -> Result<()> {
    let result = foo()?;  // Just propagate
    bar(result)?;         // Just propagate
    Ok(())
}

fn main() {
    if let Err(e) = process() {
        log_error!(e, "processing failed");  // Log here
    }
}

Troubleshooting

No logs appearing

Check that you haven’t set RUST_LOG=off:
echo $RUST_LOG
Unset it if needed:
unset RUST_LOG

Too many logs from dependencies

Filter noisy crates:
RUST_LOG=info,hyper=warn,tokio=warn cargo run --bin scheduler -- ztf

Span events not showing

Verify BOOM_SPAN_EVENTS is set:
echo $BOOM_SPAN_EVENTS
Set it if needed:
export BOOM_SPAN_EVENTS=new,close

Next steps

  • Monitoring: track pipeline performance with Prometheus metrics
  • Contributing: learn BOOM’s development conventions
  • Tracing docs: learn more about the tracing crate
