Historical queries let you retrieve aggregated values over past time ranges. Unlike live queries that stream updates, historical queries execute once and return a single aggregate value.

Executing a Historical Query

Use query_history() to query historical data:
let avg_temp = query_history("AVG:temp:[sensor=1]:[1h]")?;
println!("Average temperature last hour: {}", avg_temp);
Function signature:
pub fn query_history(filter: &str) -> Result<f64>
Parameters:
  • filter: &str - Query in the format OP:SERIES:[TAGS]:[RANGE]
Returns:
  • Ok(f64) - The aggregated value
Important: The host ABI returns 0.0 for both valid zero results and some failures; there is no way to distinguish between them.

Host function:
unsafe extern "C" {
    fn u_query_history(filter_ptr: usize) -> f64;
}
The SDK passes a pointer to a Region containing UTF-8 filter bytes. The host returns the aggregated value.

Query Syntax

OP:SERIES:[TAGS]:[RANGE]

Operators

  • AVG - Average of values in range
  • MIN - Minimum value in range
  • MAX - Maximum value in range
  • SUM - Sum of values in range
  • COUNT - Number of events in range
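Because every query is a plain string in the OP:SERIES:[TAGS]:[RANGE] format, a small helper can assemble it and cut down on typos. A minimal sketch; build_filter is illustrative, not part of the SDK:

```rust
/// Assemble a filter string in the documented OP:SERIES:[TAGS]:[RANGE] format.
/// Illustrative helper; not part of the SDK.
fn build_filter(op: &str, series: &str, tags: &str, range: &str) -> String {
    format!("{op}:{series}:[{tags}]:[{range}]")
}

fn main() {
    assert_eq!(build_filter("AVG", "temp", "sensor=1", "1h"), "AVG:temp:[sensor=1]:[1h]");
    // Empty tag filters render as [], matching the COUNT example below.
    assert_eq!(build_filter("COUNT", "errors", "", "1week"), "COUNT:errors:[]:[1week]");
    println!("ok");
}
```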

Time Ranges

Historical queries typically specify a time range:
// Last hour
query_history("AVG:temp:[sensor=1]:[1h]")?;

// Last 30 minutes
query_history("SUM:requests:[api=v1]:[30min]")?;

// Last day
query_history("MAX:cpu:[host=prod]:[1d]")?;

// Last week
query_history("COUNT:errors:[]:[1week]")?;
Supported units: sec, min, hour, day, week (singular and plural forms), along with the short forms h and d used in the examples above.

Absolute Time Ranges

You can also specify absolute timestamps in microseconds:
let result = query_history(
    "AVG:temp:[sensor=1]:[1609459200000000,1609545600000000]"
)?;
The format is [start_timestamp,end_timestamp] where both timestamps are Unix epoch microseconds.
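Computing microsecond timestamps by hand is error-prone. A couple of illustrative helpers (not part of the SDK) can format an absolute range from std::time:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Format an absolute [start,end] range in Unix epoch microseconds.
/// Illustrative helper; not part of the SDK.
fn absolute_range(start_us: u64, end_us: u64) -> String {
    format!("[{start_us},{end_us}]")
}

/// Absolute range covering the last `window`, ending now.
fn range_ending_now(window: Duration) -> String {
    let end_us = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_micros() as u64;
    let start_us = end_us.saturating_sub(window.as_micros() as u64);
    absolute_range(start_us, end_us)
}

fn main() {
    // 2021-01-01T00:00:00Z through 2021-01-02T00:00:00Z
    assert_eq!(
        absolute_range(1_609_459_200_000_000, 1_609_545_600_000_000),
        "[1609459200000000,1609545600000000]"
    );
    println!("{}", range_ending_now(Duration::from_secs(3600)));
}
```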

Tag Filters

Use the boolean operators AND, OR, and NOT to filter by tags:
query_history("AVG:cpu:[host=prod AND region=us-west]:[1h]")?;
query_history("SUM:requests:[status=200 OR status=201]:[1d]")?;
query_history("MAX:latency:[NOT region=test]:[30min]")?;
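Tag expressions are also plain strings, so a tiny helper (illustrative, not part of the SDK) can join several conditions with AND:

```rust
/// Join tag conditions with AND, as in the examples above.
/// Illustrative helper; not part of the SDK.
fn all_of(conditions: &[&str]) -> String {
    conditions.join(" AND ")
}

fn main() {
    assert_eq!(
        all_of(&["host=prod", "region=us-west"]),
        "host=prod AND region=us-west"
    );
    println!("ok");
}
```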

Common Patterns

Baseline Comparison

Compare current values against historical baselines:
use slung::prelude::*;

#[main]
fn main() -> Result<()> {
    // Get baseline from last 24 hours
    let baseline = query_history("AVG:cpu:[host=prod]:[1d]")?;
    
    // Subscribe to live updates
    let handle = query_live("AVG:cpu:[host=prod]:[5min]")?;
    poll_handle(handle, on_event, baseline)?;
    
    Ok(())
}

fn on_event(event: Event, baseline: f64) -> Result<()> {
    let deviation = ((event.value - baseline) / baseline) * 100.0;
    
    if deviation.abs() > 20.0 {
        println!("CPU deviation: {:.1}% from baseline", deviation);
        for producer in event.producers {
            writeback_ws(producer, &format!("CPU anomaly: {:.1}%", deviation))?;
        }
    }
    
    Ok(())
}

Periodic Aggregation

Query historical data periodically:
use slung::prelude::*;

#[main]
fn main() -> Result<()> {
    loop {
        let hourly_avg = query_history("AVG:requests:[app=api]:[1h]")?;
        let daily_avg = query_history("AVG:requests:[app=api]:[1d]")?;
        
        println!("Hourly avg: {:.2}, Daily avg: {:.2}", hourly_avg, daily_avg);
        
        // Write derived metric
        write_event(
            unix_micros(),
            hourly_avg / daily_avg,
            vec!["series=request_ratio".to_string(), "period=hourly".to_string()]
        )?;
        
        // Wait before next query (pseudo-code, actual implementation varies)
        std::thread::sleep(std::time::Duration::from_secs(60));
    }
}

Multi-Series Aggregation

Query multiple series and correlate:
use slung::prelude::*;

#[main]
fn main() -> Result<()> {
    let cpu_avg = query_history("AVG:cpu:[host=prod]:[1h]")?;
    let memory_avg = query_history("AVG:memory:[host=prod]:[1h]")?;
    let disk_avg = query_history("AVG:disk:[host=prod]:[1h]")?;
    
    let resource_score = (cpu_avg + memory_avg + disk_avg) / 3.0;
    
    println!("Resource utilization score: {:.2}", resource_score);
    
    // Write composite metric
    write_event(
        unix_micros(),
        resource_score,
        vec!["series=resource_score".to_string(), "host=prod".to_string()]
    )?;
    
    Ok(())
}

Downsampling

Create lower-resolution aggregates:
use slung::prelude::*;

#[main]
fn main() -> Result<()> {
    // Query high-resolution data in chunks
    let ranges = vec![
        "[0h,1h]",
        "[1h,2h]",
        "[2h,3h]",
        "[3h,4h]",
    ];
    
    for range in ranges {
        let query = format!("AVG:temp:[sensor=1]:{}", range);
        let avg = query_history(&query)?;
        
        // Write downsampled data
        write_event(
            unix_micros(),
            avg,
            vec!["series=temp_hourly".to_string(), "sensor=1".to_string()]
        )?;
    }
    
    Ok(())
}
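Writing chunk boundaries by hand gets tedious for longer windows. Assuming the [Nh,Mh] offset syntax shown above, the ranges can be generated instead:

```rust
/// Generate consecutive one-hour chunk ranges: [0h,1h], [1h,2h], ...
/// Assumes the [Nh,Mh] offset syntax used in the downsampling example.
fn hourly_chunks(hours: u32) -> Vec<String> {
    (0..hours).map(|h| format!("[{h}h,{}h]", h + 1)).collect()
}

fn main() {
    let ranges = hourly_chunks(4);
    assert_eq!(ranges, vec!["[0h,1h]", "[1h,2h]", "[2h,3h]", "[3h,4h]"]);
    println!("{ranges:?}");
}
```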

Combining with Live Queries

Use historical queries to initialize state, then maintain it with live queries:
use slung::prelude::*;

struct State {
    max_24h: f64,
    avg_7d: f64,
}

#[main]
fn main() -> Result<()> {
    // Initialize state with historical data
    let state = State {
        max_24h: query_history("MAX:temp:[sensor=1]:[1d]")?,
        avg_7d: query_history("AVG:temp:[sensor=1]:[1week]")?,
    };
    
    println!("24h max: {}, 7d avg: {}", state.max_24h, state.avg_7d);
    
    // Subscribe to live updates
    let handle = query_live("AVG:temp:[sensor=1]")?;
    poll_handle(handle, on_event, state)?;
    
    Ok(())
}

fn on_event(event: Event, state: State) -> Result<()> {
    if event.value > state.max_24h * 1.1 {
        println!("Temperature exceeds 24h max by 10%");
    }
    
    if event.value > state.avg_7d * 2.0 {
        println!("Temperature is 2x the weekly average");
    }
    
    Ok(())
}

Performance Considerations

Query Cost

Historical queries scan stored data. Larger time ranges and broader tag filters increase query cost:
// Cheaper: narrow time range and specific tags
query_history("AVG:cpu:[host=prod AND region=us-west]:[1h]")?;

// More expensive: wide time range and broad tags
query_history("AVG:cpu:[host=prod]:[1week]")?;

Caching Results

If you query the same historical range repeatedly, cache the result:
use std::sync::OnceLock;

static CACHED_BASELINE: OnceLock<f64> = OnceLock::new();

fn get_baseline() -> Result<f64> {
    if let Some(&baseline) = CACHED_BASELINE.get() {
        return Ok(baseline);
    }
    let baseline = query_history("AVG:cpu:[host=prod]:[1d]")?;
    // If another caller initialized first, get_or_init keeps that value.
    Ok(*CACHED_BASELINE.get_or_init(|| baseline))
}

Query Frequency

Avoid executing historical queries in tight loops:
// Bad: re-queries history on every event
fn on_event(event: Event, _: ()) -> Result<()> {
    let baseline = query_history("AVG:temp:[sensor=1]:[1d]")?;  // Don't do this
    // ...
}

// Good: query once, pass as state
let baseline = query_history("AVG:temp:[sensor=1]:[1d]")?;
poll_handle(handle, on_event, baseline)?;
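If the baseline still needs an occasional refresh, one illustrative pattern (not part of the SDK) is to gate the re-query so it runs at most once every N events:

```rust
/// Gate that fires at most once every `interval` calls.
/// Illustrative pattern; not part of the SDK.
struct RefreshGate {
    counter: u64,
    interval: u64,
}

impl RefreshGate {
    fn new(interval: u64) -> Self {
        Self { counter: 0, interval }
    }

    /// Returns true on the first call and then on every `interval`-th call.
    fn should_refresh(&mut self) -> bool {
        let fire = self.counter % self.interval == 0;
        self.counter += 1;
        fire
    }
}

fn main() {
    let mut gate = RefreshGate::new(100);
    assert!(gate.should_refresh());                    // event 0: refresh
    assert!(!(1..100).any(|_| gate.should_refresh())); // events 1-99: skip
    assert!(gate.should_refresh());                    // event 100: refresh again
    println!("ok");
}
```

In an event handler, the baseline re-query would run only when should_refresh() returns true.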

Error Handling

The query_history() function returns 0.0 for both:
  1. Valid aggregate results that are actually zero
  2. Host-side failures
There is no way to distinguish these cases in the current API. When a zero result matters, treat it as possibly missing data:
let result = query_history("SUM:bytes:[app=api]:[1h]")?;

if result == 0.0 {
    // Could be:
    // - No events in the time range (valid)
    // - Host query failure
    // - Sum actually equals zero
    println!("Warning: zero result (may indicate no data)");
}
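One way to narrow down a zero result is to pair the query with a COUNT over the same series and range: if the count is also zero, there were simply no events (though this still cannot rule out the host failing on both queries). The interpretation logic can be sketched as a pure function; the query_history calls themselves are omitted, and the type names are illustrative:

```rust
/// What a zero SUM result most likely means, given a COUNT over the
/// same series and range. Names are illustrative, not part of the SDK.
#[derive(Debug, PartialEq)]
enum SumResult {
    NoData,      // count == 0: nothing in the range (or both queries failed)
    GenuineZero, // events exist and genuinely sum to zero
    Value(f64),
}

fn interpret_sum(count: f64, sum: f64) -> SumResult {
    if count == 0.0 {
        SumResult::NoData
    } else if sum == 0.0 {
        SumResult::GenuineZero
    } else {
        SumResult::Value(sum)
    }
}

fn main() {
    assert_eq!(interpret_sum(0.0, 0.0), SumResult::NoData);
    assert_eq!(interpret_sum(42.0, 0.0), SumResult::GenuineZero);
    assert_eq!(interpret_sum(42.0, 1.5), SumResult::Value(1.5));
    println!("ok");
}
```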

Next Steps

Live Queries

Subscribe to real-time stream updates

Writeback

Send results to external systems
