Overview

RTK’s tracking system records every command execution in a local SQLite database, providing comprehensive analytics on token savings, execution times, and usage patterns. The system:
  • Stores command history in SQLite (~/.local/share/rtk/tracking.db)
  • Tracks input/output tokens, savings percentage, and execution time
  • Automatically cleans up records older than 90 days
  • Provides aggregation APIs (daily/weekly/monthly)
  • Exports to JSON/CSV for external integrations
  • Supports project-scoped tracking for multi-project workflows
For CLI usage, see rtk gain Command. This page covers the underlying architecture and programmatic API.

Architecture

Data Flow

rtk command execution
        ↓
TimedExecution::start()
        ↓
[command runs]
        ↓
TimedExecution::track(original_cmd, rtk_cmd, input, output)
        ↓
Tracker::record(original_cmd, rtk_cmd, input_tokens, output_tokens, exec_time_ms)
        ↓
SQLite database (~/.local/share/rtk/tracking.db)
        ↓
Aggregation APIs (get_summary, get_all_days, etc.)
        ↓
CLI output (rtk gain) or JSON/CSV export

Storage Locations

Default paths:
  • Linux: ~/.local/share/rtk/tracking.db
  • macOS: ~/Library/Application Support/rtk/tracking.db
  • Windows: %APPDATA%\rtk\tracking.db
Custom database path (priority order):
  1. RTK_DB_PATH environment variable
  2. tracking.database_path in ~/.config/rtk/config.toml
  3. Default platform-specific location
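The priority order above can be sketched as a small resolution helper. The `config_path` parameter stands in for the value parsed from `~/.config/rtk/config.toml`, and `default` for the platform-specific location; both helpers are hypothetical stand-ins, not RTK's actual internals:

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the tracking database path following RTK's priority order.
/// `config_path` represents tracking.database_path from the config file.
fn resolve_db_path(config_path: Option<PathBuf>, default: PathBuf) -> PathBuf {
    // 1. RTK_DB_PATH environment variable wins
    if let Ok(p) = env::var("RTK_DB_PATH") {
        return PathBuf::from(p);
    }
    // 2. tracking.database_path from ~/.config/rtk/config.toml
    if let Some(p) = config_path {
        return p;
    }
    // 3. Platform-specific default (e.g., ~/.local/share/rtk/tracking.db)
    default
}

fn main() {
    let default = PathBuf::from("/home/user/.local/share/rtk/tracking.db");
    // With no env var and no config entry, the default is used.
    assert_eq!(resolve_db_path(None, default.clone()), default);
    // A config entry overrides the default.
    let cfg = PathBuf::from("/custom/tracking.db");
    assert_eq!(resolve_db_path(Some(cfg.clone()), default), cfg);
    println!("resolution order verified");
}
```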

Data Retention

Records older than 90 days are automatically deleted on each write operation to prevent unbounded database growth.
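Since RTK stores RFC3339 timestamps, the cleanup presumably amounts to a `DELETE FROM commands WHERE timestamp < ?` with a cutoff 90 days in the past. The duration arithmetic for that cutoff, sketched with the standard library (the real cutoff would be formatted back to RFC3339 before comparison):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

const RETENTION_DAYS: u64 = 90;

/// Unix-seconds cutoff: records timestamped before this are purged.
fn retention_cutoff_secs(now: SystemTime) -> u64 {
    let retention = Duration::from_secs(RETENTION_DAYS * 24 * 60 * 60);
    let cutoff = now - retention;
    cutoff.duration_since(UNIX_EPOCH).unwrap().as_secs()
}

fn main() {
    let now = SystemTime::now();
    let cutoff = retention_cutoff_secs(now);
    let now_secs = now.duration_since(UNIX_EPOCH).unwrap().as_secs();
    // 90 days = 7,776,000 seconds
    assert_eq!(now_secs - cutoff, 7_776_000);
    println!("cutoff is {} seconds in the past", now_secs - cutoff);
}
```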

Database Schema

Table: commands

CREATE TABLE commands (
    id INTEGER PRIMARY KEY,
    timestamp TEXT NOT NULL,           -- RFC3339 UTC timestamp
    original_cmd TEXT NOT NULL,        -- Original command (e.g., "ls -la")
    rtk_cmd TEXT NOT NULL,             -- RTK command (e.g., "rtk ls")
    project_path TEXT DEFAULT '',      -- Canonical working directory
    input_tokens INTEGER NOT NULL,     -- Estimated input tokens
    output_tokens INTEGER NOT NULL,    -- Actual output tokens
    saved_tokens INTEGER NOT NULL,     -- input_tokens - output_tokens
    savings_pct REAL NOT NULL,         -- (saved/input) * 100
    exec_time_ms INTEGER DEFAULT 0     -- Execution time in milliseconds
);

CREATE INDEX idx_timestamp ON commands(timestamp);
CREATE INDEX idx_project_path_timestamp ON commands(project_path, timestamp);
Migration support: RTK automatically adds missing columns on first use (e.g., exec_time_ms and project_path, which were introduced in later versions).
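The two derived columns follow directly from the token counts, per the schema comments above. A minimal sketch of that arithmetic (clamping saved_tokens at zero when output exceeds input is an assumption, not confirmed behavior):

```rust
/// saved_tokens = input_tokens - output_tokens (clamped at 0 here)
/// savings_pct  = (saved / input) * 100
fn savings(input_tokens: usize, output_tokens: usize) -> (usize, f64) {
    let saved = input_tokens.saturating_sub(output_tokens);
    let pct = if input_tokens == 0 {
        0.0
    } else {
        (saved as f64 / input_tokens as f64) * 100.0
    };
    (saved, pct)
}

fn main() {
    // 1000 input tokens reduced to 250 output tokens → 750 saved, 75%
    assert_eq!(savings(1000, 250), (750, 75.0));
    // Output larger than input clamps to zero savings
    assert_eq!(savings(100, 120), (0, 0.0));
    println!("savings math checks out");
}
```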

Table: parse_failures

CREATE TABLE parse_failures (
    id INTEGER PRIMARY KEY,
    timestamp TEXT NOT NULL,
    raw_command TEXT NOT NULL,
    error_message TEXT NOT NULL,
    fallback_succeeded INTEGER NOT NULL DEFAULT 0
);

CREATE INDEX idx_pf_timestamp ON parse_failures(timestamp);

Public API

Core Types

Tracker

Main tracking interface for recording and querying command history.
pub struct Tracker {
    conn: Connection, // SQLite connection
}

impl Tracker {
    /// Create new tracker instance (opens/creates database)
    pub fn new() -> Result<Self>;

    /// Record a command execution
    pub fn record(
        &self,
        original_cmd: &str,      // Standard command (e.g., "ls -la")
        rtk_cmd: &str,            // RTK command (e.g., "rtk ls")
        input_tokens: usize,      // Estimated input tokens
        output_tokens: usize,     // Actual output tokens
        exec_time_ms: u64,        // Execution time in milliseconds
    ) -> Result<()>;

    /// Get overall summary statistics
    pub fn get_summary(&self) -> Result<GainSummary>;

    /// Get summary filtered by project path
    pub fn get_summary_filtered(&self, project_path: Option<&str>) -> Result<GainSummary>;

    /// Get daily statistics (all days)
    pub fn get_all_days(&self) -> Result<Vec<DayStats>>;

    /// Get daily statistics filtered by project
    pub fn get_all_days_filtered(&self, project_path: Option<&str>) -> Result<Vec<DayStats>>;

    /// Get weekly statistics (grouped by week)
    pub fn get_by_week(&self) -> Result<Vec<WeekStats>>;
    pub fn get_by_week_filtered(&self, project_path: Option<&str>) -> Result<Vec<WeekStats>>;

    /// Get monthly statistics (grouped by month)
    pub fn get_by_month(&self) -> Result<Vec<MonthStats>>;
    pub fn get_by_month_filtered(&self, project_path: Option<&str>) -> Result<Vec<MonthStats>>;

    /// Get recent command history (limit = max records)
    pub fn get_recent(&self, limit: usize) -> Result<Vec<CommandRecord>>;
    pub fn get_recent_filtered(&self, limit: usize, project_path: Option<&str>) -> Result<Vec<CommandRecord>>;
}

GainSummary

Aggregated statistics across all recorded commands.
pub struct GainSummary {
    pub total_commands: usize,
    pub total_input: usize,
    pub total_output: usize,
    pub total_saved: usize,
    pub avg_savings_pct: f64,
    pub total_time_ms: u64,
    pub avg_time_ms: u64,
    pub by_command: Vec<(String, usize, usize, f64, u64)>, // (cmd, count, saved, avg_pct, avg_time_ms)
    pub by_day: Vec<(String, usize)>,                       // (date, saved_tokens)
}
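How the scalar fields of the summary are derived from raw records can be sketched as a fold. Two details here are assumptions: that avg_savings_pct is the ratio of totals (rather than a mean of per-command percentages) and that avg_time_ms uses integer division:

```rust
/// Hypothetical raw record: (input_tokens, output_tokens, exec_time_ms).
type Rec = (usize, usize, u64);

/// Fold records into the summary's scalar fields:
/// (total_commands, total_input, total_output, total_saved,
///  avg_savings_pct, total_time_ms, avg_time_ms).
fn summarize(records: &[Rec]) -> (usize, usize, usize, usize, f64, u64, u64) {
    let total_commands = records.len();
    let total_input: usize = records.iter().map(|r| r.0).sum();
    let total_output: usize = records.iter().map(|r| r.1).sum();
    let total_saved = total_input.saturating_sub(total_output);
    let avg_savings_pct = if total_input == 0 {
        0.0
    } else {
        (total_saved as f64 / total_input as f64) * 100.0
    };
    let total_time_ms: u64 = records.iter().map(|r| r.2).sum();
    let avg_time_ms = if total_commands == 0 {
        0
    } else {
        total_time_ms / total_commands as u64
    };
    (total_commands, total_input, total_output, total_saved,
     avg_savings_pct, total_time_ms, avg_time_ms)
}

fn main() {
    let recs = [(1000, 200, 40), (500, 100, 20)];
    let s = summarize(&recs);
    assert_eq!(s.0, 2);     // total_commands
    assert_eq!(s.3, 1200);  // total_saved = 1500 - 300
    assert_eq!(s.4, 80.0);  // 1200 / 1500 * 100
    assert_eq!(s.6, 30);    // 60 ms / 2 commands
    println!("summary: {:?}", s);
}
```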

DayStats, WeekStats, MonthStats

Temporal statistics; each struct derives Serialize for JSON export.
#[derive(Debug, Serialize)]
pub struct DayStats {
    pub date: String,            // ISO date (YYYY-MM-DD)
    pub commands: usize,
    pub input_tokens: usize,
    pub output_tokens: usize,
    pub saved_tokens: usize,
    pub savings_pct: f64,
    pub total_time_ms: u64,
    pub avg_time_ms: u64,
}

#[derive(Debug, Serialize)]
pub struct WeekStats {
    pub week_start: String,      // ISO date (YYYY-MM-DD)
    pub week_end: String,
    pub commands: usize,
    pub input_tokens: usize,
    pub output_tokens: usize,
    pub saved_tokens: usize,
    pub savings_pct: f64,
    pub total_time_ms: u64,
    pub avg_time_ms: u64,
}

#[derive(Debug, Serialize)]
pub struct MonthStats {
    pub month: String,           // YYYY-MM format
    pub commands: usize,
    pub input_tokens: usize,
    pub output_tokens: usize,
    pub saved_tokens: usize,
    pub savings_pct: f64,
    pub total_time_ms: u64,
    pub avg_time_ms: u64,
}
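The monthly bucketing behind MonthStats is, in effect, a SQL GROUP BY on the "YYYY-MM" prefix of the ISO timestamp. A pure-Rust sketch of the same idea, using a simplified (date, saved_tokens) pair in place of the full DayStats:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for DayStats: (ISO date "YYYY-MM-DD", saved_tokens).
/// Bucketing on the 7-char "YYYY-MM" prefix mirrors the monthly GROUP BY.
fn group_by_month(days: &[(&str, usize)]) -> BTreeMap<String, usize> {
    let mut months = BTreeMap::new();
    for (date, saved) in days {
        let month = date.get(..7).unwrap_or(date).to_string(); // "YYYY-MM"
        *months.entry(month).or_insert(0) += saved;
    }
    months
}

fn main() {
    let days = [("2025-01-30", 100), ("2025-01-31", 50), ("2025-02-01", 25)];
    let months = group_by_month(&days);
    assert_eq!(months["2025-01"], 150);
    assert_eq!(months["2025-02"], 25);
    println!("{:?}", months);
}
```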

TimedExecution

Helper for timing command execution (preferred API).
pub struct TimedExecution {
    start: Instant,
}

impl TimedExecution {
    /// Start timing a command execution
    pub fn start() -> Self;

    /// Track command with elapsed time
    pub fn track(&self, original_cmd: &str, rtk_cmd: &str, input: &str, output: &str);

    /// Track passthrough commands (timing-only, no token counting)
    pub fn track_passthrough(&self, original_cmd: &str, rtk_cmd: &str);
}

Utility Functions

/// Estimate token count (~4 chars = 1 token)
pub fn estimate_tokens(text: &str) -> usize;

/// Format OsString args for display
pub fn args_display(args: &[OsString]) -> String;
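The estimation heuristic is simple enough to restate in full. A sketch matching the ~4-chars-per-token comment (whether RTK rounds up or truncates is an assumption; this version truncates via integer division):

```rust
/// Rough token estimate: ~4 characters per token (integer division).
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4
}

fn main() {
    assert_eq!(estimate_tokens("abcdefgh"), 2); // 8 chars → 2 tokens
    assert_eq!(estimate_tokens(""), 0);
    // A ~1 KB output estimates to ~256 tokens
    assert_eq!(estimate_tokens(&"x".repeat(1024)), 256);
    println!("ok");
}
```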

Usage Examples

Basic Tracking

use rtk::tracking::{TimedExecution, Tracker};

fn main() -> anyhow::Result<()> {
    // Start timer
    let timer = TimedExecution::start();

    // Execute command
    let input = execute_original_command()?;
    let output = execute_rtk_command()?;

    // Track execution
    timer.track("ls -la", "rtk ls", &input, &output);

    Ok(())
}

Querying Statistics

use rtk::tracking::Tracker;

fn main() -> anyhow::Result<()> {
    let tracker = Tracker::new()?;

    // Get overall summary
    let summary = tracker.get_summary()?;
    println!("Total commands: {}", summary.total_commands);
    println!("Total saved: {} tokens", summary.total_saved);
    println!("Average savings: {:.1}%", summary.avg_savings_pct);

    // Get daily breakdown
    let days = tracker.get_all_days()?;
    for day in days.iter().take(7) {
        println!("{}: {} commands, {} tokens saved",
            day.date, day.commands, day.saved_tokens);
    }

    // Get recent history
    let recent = tracker.get_recent(10)?;
    for cmd in recent {
        println!("{}: {} saved {:.1}%",
            cmd.timestamp, cmd.rtk_cmd, cmd.savings_pct);
    }

    Ok(())
}

Project-Scoped Tracking

use rtk::tracking::Tracker;

fn main() -> anyhow::Result<()> {
    let tracker = Tracker::new()?;
    let project_path = "/Users/foo/Sites/myproject";

    // Get summary for specific project
    let summary = tracker.get_summary_filtered(Some(project_path))?;
    println!("Project: {}", project_path);
    println!("Commands: {}", summary.total_commands);
    println!("Saved: {} tokens", summary.total_saved);

    Ok(())
}

Passthrough Commands

For commands that stream output or run interactively (no output capture):
use rtk::tracking::TimedExecution;

fn main() -> anyhow::Result<()> {
    let timer = TimedExecution::start();

    // Execute streaming command (e.g., git tag --list)
    execute_streaming_command()?;

    // Track timing only (input_tokens=0, output_tokens=0)
    timer.track_passthrough("git tag --list", "rtk git tag --list");

    Ok(())
}

Custom Database Path

Environment Variable

export RTK_DB_PATH="/custom/path/tracking.db"
rtk git status  # Uses custom DB

Config File

# ~/.config/rtk/config.toml
[tracking]
database_path = "/custom/path/tracking.db"

Programmatic

use rtk::tracking::Tracker;
use std::env;

fn main() -> anyhow::Result<()> {
    // Set custom DB path before creating the tracker
    env::set_var("RTK_DB_PATH", "/custom/path/tracking.db");

    let tracker = Tracker::new()?;
    // ... use tracker

    Ok(())
}

Integration Examples

GitHub Actions - Track Savings in CI

# .github/workflows/track-rtk-savings.yml
name: Track RTK Savings

on:
  schedule:
    - cron: '0 0 * * 1'  # Weekly on Monday
  workflow_dispatch:

jobs:
  track-savings:
    runs-on: ubuntu-latest
    steps:
      - name: Install RTK
        run: cargo install --git https://github.com/rtk-ai/rtk

      - name: Export weekly stats
        run: |
          rtk gain --weekly --format json > rtk-weekly.json
          cat rtk-weekly.json

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: rtk-metrics
          path: rtk-weekly.json

      - name: Post to Slack
        if: success()
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
        run: |
          SAVINGS=$(jq -r '.[0].saved_tokens' rtk-weekly.json)
          PCT=$(jq -r '.[0].savings_pct' rtk-weekly.json)
          curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"📊 RTK Weekly: ${SAVINGS} tokens saved (${PCT}%)\"}" \
            "$SLACK_WEBHOOK"

Custom Dashboard Script

#!/usr/bin/env python3
"""
Export RTK metrics to Grafana/Datadog/etc.
"""
import json
import subprocess
from datetime import datetime

def get_rtk_metrics():
    """Fetch RTK metrics as JSON."""
    result = subprocess.run(
        ["rtk", "gain", "--all", "--format", "json"],
        capture_output=True,
        text=True
    )
    return json.loads(result.stdout)

def export_to_datadog(metrics):
    """Send metrics to Datadog."""
    import datadog

    datadog.initialize(api_key="YOUR_API_KEY")

    for day in metrics.get("daily", []):
        datadog.api.Metric.send(
            metric="rtk.tokens_saved",
            points=[(datetime.now().timestamp(), day["saved_tokens"])],
            tags=[f"date:{day['date']}"]
        )

        datadog.api.Metric.send(
            metric="rtk.savings_pct",
            points=[(datetime.now().timestamp(), day["savings_pct"])],
            tags=[f"date:{day['date']}"]
        )

if __name__ == "__main__":
    metrics = get_rtk_metrics()
    export_to_datadog(metrics)
    print(f"Exported {len(metrics.get('daily', []))} days to Datadog")

Rust Integration (Using RTK as Library)

// In your Cargo.toml
// [dependencies]
// rtk = { git = "https://github.com/rtk-ai/rtk" }

use rtk::tracking::{Tracker, TimedExecution};
use anyhow::Result;

fn main() -> Result<()> {
    // Track your own commands
    let timer = TimedExecution::start();

    let input = run_expensive_operation()?;
    let output = run_optimized_operation()?;

    timer.track(
        "expensive_operation",
        "optimized_operation",
        &input,
        &output
    );

    // Query aggregated stats
    let tracker = Tracker::new()?;
    let summary = tracker.get_summary()?;

    println!("Total savings: {} tokens ({:.1}%)",
        summary.total_saved,
        summary.avg_savings_pct
    );

    // Export to JSON for external tools
    let days = tracker.get_all_days()?;
    let json = serde_json::to_string_pretty(&days)?;
    std::fs::write("metrics.json", json)?;

    Ok(())
}

Performance Considerations

  • SQLite WAL mode: Not enabled (may add in future for concurrent writes)
  • Index on timestamp: Enables fast date-range queries
  • Index on project_path: Enables fast project-scoped queries
  • Automatic cleanup: Prevents database from growing unbounded (90-day retention)
  • Token estimation: ~4 chars = 1 token (simple, fast approximation)
  • Aggregation queries: Use SQL GROUP BY for efficient aggregation

Security & Privacy

  • Local storage only: Database never leaves the machine
  • No telemetry: RTK does not phone home or send analytics
  • User control: Users can delete ~/.local/share/rtk/tracking.db anytime
  • 90-day retention: Old data automatically purged
  • Project paths: Stored as canonical absolute paths (e.g., /Users/foo/project)
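Storing canonical absolute paths is what makes project-scoped filtering reliable across symlinks and relative invocations. In the standard library this is `std::fs::canonicalize` (shown here as a minimal sketch; RTK's exact canonicalization call is not confirmed):

```rust
use std::env;
use std::fs;

fn main() -> std::io::Result<()> {
    // Canonicalize the current directory: the result is absolute,
    // with symlinks and "." / ".." components resolved.
    let cwd = env::current_dir()?;
    let canonical = fs::canonicalize(&cwd)?;
    assert!(canonical.is_absolute());
    println!("project path: {}", canonical.display());
    Ok(())
}
```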

Troubleshooting

Database locked error

If you see “database is locked” errors:
  • Ensure only one RTK process writes at a time
  • Check file permissions on ~/.local/share/rtk/tracking.db
  • Delete and recreate (this erases all tracking history): rm ~/.local/share/rtk/tracking.db && rtk gain

Missing exec_time_ms column

Older databases may not have the exec_time_ms column. RTK automatically migrates on first use, but you can force it:
sqlite3 ~/.local/share/rtk/tracking.db \
  "ALTER TABLE commands ADD COLUMN exec_time_ms INTEGER DEFAULT 0"

Incorrect token counts

Token estimation uses ~4 chars = 1 token. This is approximate. For precise counts, integrate with your LLM’s tokenizer API:
import tiktoken

enc = tiktoken.get_encoding('cl100k_base')
text = "your output here"
actual_tokens = len(enc.encode(text))
estimated_tokens = len(text) // 4

print(f"Actual: {actual_tokens}, Estimated: {estimated_tokens}")

Future Enhancements

Planned improvements (contributions welcome):
  • Export to Prometheus/OpenMetrics format
  • Support for custom retention periods (not just 90 days)
  • SQLite WAL mode for concurrent writes
  • Per-project tracking with multiple databases
  • Integration with Claude API for precise token counts
  • Web dashboard (localhost) for visualizing trends

See Also

rtk gain Command

CLI interface for viewing token savings analytics

rtk discover Command

Find missed savings opportunities in Claude Code session history

Build docs developers (and LLMs) love