## Quick Start

### Run All Tests

```bash
# Run all tests with default features (uses an embedded libSQL temp database)
cargo test
```

No external database is required; tests use embedded libSQL by default.
### Run Tests with Logging

```bash
# Enable debug logging
RUST_LOG=ironclaw=debug cargo test

# Show test output
cargo test -- --nocapture

# Both logging and output
RUST_LOG=ironclaw=debug cargo test -- --nocapture
```
### Run Specific Tests

```bash
# Run a specific test
cargo test test_name

# Run tests matching a pattern
cargo test workspace

# Run tests in a specific module
cargo test agent::tests
```
## Test Categories

### Unit Tests

Unit tests live in the same file as the code they test:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_feature() {
        // Synchronous test
        assert_eq!(2 + 2, 4);
    }

    #[tokio::test]
    async fn test_async_feature() {
        // Asynchronous test
        let result = some_async_function().await;
        assert!(result.is_ok());
    }
}
```
Run unit tests with `cargo test --lib`.
### Integration Tests

Integration tests are in the `tests/` directory:

```text
tests/
├── integration/
│   ├── agent_test.rs
│   ├── memory_test.rs
│   └── sandbox_test.rs
└── html_to_markdown.rs
```
Run integration tests with `cargo test --tests`.
### Feature-Gated Tests

Some tests require specific features:

```bash
# HTML to Markdown tests
cargo test --features html-to-markdown

# All features
cargo test --all-features
```
## Database Testing

### libSQL (Default)

Tests use embedded libSQL by default, so plain `cargo test` needs no setup. Each test gets a temporary database that is automatically cleaned up.
### PostgreSQL Integration Tests

For PostgreSQL-specific tests:

```bash
# Create the test database
createdb ironclaw_test

# Enable pgvector
psql ironclaw_test -c "CREATE EXTENSION IF NOT EXISTS vector;"

# Run integration tests
cargo test --features integration
```

Set the test database URL:

```bash
export DATABASE_URL="postgresql://localhost/ironclaw_test"
cargo test --features integration
```
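As a sketch of how a test might choose its database, the helper below prefers `DATABASE_URL` when set and otherwise falls back to an embedded libSQL file. The helper name and fallback path are assumptions for illustration, not IronClaw's actual API:

```rust
use std::env;

// Hypothetical helper: use DATABASE_URL for PostgreSQL runs,
// otherwise fall back to an embedded libSQL file (illustrative path).
fn test_database_url() -> String {
    env::var("DATABASE_URL").unwrap_or_else(|_| "file:ironclaw_test.db".to_string())
}
```

With this shape, the same test body runs unchanged against either backend; only the environment decides which database is used.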
### Test Containers

Some integration tests use testcontainers for PostgreSQL:

```bash
# Requires Docker
cargo test --features integration
```

Docker automatically starts a PostgreSQL container for these tests.
## Test Organization

### Module Tests

Tests in the same file as the code:

```rust
// src/agent/mod.rs
pub struct Agent {
    // ...
}

impl Agent {
    pub fn new() -> Self {
        // ...
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_agent_creation() {
        let agent = Agent::new();
        assert!(agent.is_valid());
    }
}
```
### Separate Test Files

Large test suites live in `tests/`:

```rust
// tests/integration/memory_test.rs
use ironclaw::memory::Workspace;

#[tokio::test]
async fn test_workspace_operations() {
    // Test implementation
}
```
## Writing Tests

### Basic Test Structure

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use pretty_assertions::assert_eq;

    #[test]
    fn test_basic_feature() {
        // Arrange
        let input = "test";

        // Act
        let result = process_input(input);

        // Assert
        assert_eq!(result, "expected");
    }
}
```
### Async Tests

```rust
#[tokio::test]
async fn test_async_operation() {
    let result = async_function().await;
    assert!(result.is_ok());
}
```
### Testing Errors

```rust
#[test]
fn test_error_handling() {
    let result = fallible_function();
    assert!(result.is_err());

    let err = result.unwrap_err();
    assert_eq!(err.to_string(), "expected error message");
}
```
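When an error type exposes structured information, matching on the variant is more robust than comparing message strings, which can change between releases. A sketch using std's `ParseIntError` as a stand-in for a project-specific error type:

```rust
use std::num::{IntErrorKind, ParseIntError};

// Stand-in fallible function; a real project would return its own error enum.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

#[test]
fn test_error_kind() {
    let err = parse_port("not-a-number").unwrap_err();
    // matches! checks the variant without depending on the Display text.
    assert!(matches!(err.kind(), IntErrorKind::InvalidDigit));
}
```

The same `matches!` pattern works for any enum-shaped error, including ones carrying data.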
### Testing with Mock Data

```rust
#[tokio::test]
async fn test_with_temp_db() {
    use tempfile::tempdir;

    // Create a temporary directory
    let temp_dir = tempdir().unwrap();
    let db_path = temp_dir.path().join("test.db");

    // Test with the temporary database
    let workspace = Workspace::new(&db_path).await.unwrap();

    // Run tests
    // ...

    // Cleanup happens automatically
}
```
### Snapshot Testing

IronClaw uses insta for snapshot testing:

```rust
use insta::assert_snapshot;

#[test]
fn test_output_format() {
    let output = generate_output();
    assert_snapshot!(output);
}
```
Update snapshots with `cargo insta accept`, or review changes interactively with `cargo insta review`.
## Test Utilities

### Test Fixtures

Create reusable test data:

```rust
#[cfg(test)]
mod fixtures {
    use super::*;

    pub fn sample_message() -> Message {
        Message {
            id: "test-id".to_string(),
            content: "test content".to_string(),
            // ...
        }
    }

    pub async fn test_workspace() -> Workspace {
        // Create a workspace with test data
    }
}
```
### Pretty Assertions

For better diff output:

```rust
use pretty_assertions::assert_eq;

#[test]
fn test_with_better_diffs() {
    let expected = vec![1, 2, 3, 4, 5];
    let actual = compute_result();
    assert_eq!(expected, actual); // Shows a readable diff on failure
}
```
## Running Specific Test Suites

- All tests: `cargo test`
- Library tests only: `cargo test --lib`
- Integration tests only: `cargo test --tests`
- Doc tests: `cargo test --doc`
- Benchmarks: `cargo bench`
- All targets: `cargo test --all-targets`
## Test Configuration

### Parallel Execution

```bash
# Run tests in parallel (the default)
cargo test

# Run tests sequentially
cargo test -- --test-threads=1
```
### Test Timeout

The default test harness does not enforce per-test timeouts, so a hung test stalls the whole run. For async tests, wrap slow operations in `tokio::time::timeout`; for synchronous tests, enforce a deadline inside the test itself.
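Because the harness has no timeout flag, a std-only guard can fail a test fast instead of hanging it. The helper below is illustrative, not part of IronClaw:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `f` on a worker thread and give up after `timeout`.
// On timeout the detached worker keeps running in the background.
fn run_with_timeout<T: Send + 'static>(
    timeout: Duration,
    f: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx.recv_timeout(timeout).ok()
}

#[test]
fn test_finishes_quickly() {
    let result = run_with_timeout(Duration::from_secs(1), || 2 + 2);
    assert_eq!(result, Some(4));
}
```

Note the leaked worker thread on timeout; for production code, prefer cooperative cancellation or an async timeout.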
### Ignore Tests

```rust
#[test]
#[ignore]
fn expensive_test() {
    // This test is skipped by default
}
```

Run ignored tests:

```bash
cargo test -- --ignored

# Run all tests, including ignored ones
cargo test -- --include-ignored
```
## Coverage

```bash
# Using cargo-llvm-cov
cargo install cargo-llvm-cov
```

### Generate Coverage Report

```bash
# HTML report
cargo llvm-cov --html

# Open in a browser
open target/llvm-cov/html/index.html
```

### Coverage with Specific Features

```bash
cargo llvm-cov --all-features --html
```

### CI Coverage

```bash
# Generate lcov format for CI
cargo llvm-cov --lcov --output-path lcov.info
```
## Continuous Integration

### GitHub Actions

IronClaw uses GitHub Actions for CI:

```yaml
# .github/workflows/test.yml
name: Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo test --all-features
```
### Local CI Testing

Run the same checks as CI:

```bash
# Format check
cargo fmt --check

# Clippy lints
cargo clippy --all --benches --tests --examples --all-features

# Run tests
cargo test --all-features

# Check docs
cargo doc --no-deps
```
## Benchmarking

### Criterion Benchmarks

For performance testing:

```rust
// benches/my_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| {
        b.iter(|| fibonacci(black_box(20)))
    });
}

criterion_group!(benches, fibonacci_benchmark);
criterion_main!(benches);
```
Run benchmarks with `cargo bench`.
## Debugging Tests

### Run with Debugger

```bash
# Build the test binaries without running them
cargo test --no-run

# Find the test binary
find target/debug/deps -name 'ironclaw-*' -type f

# Run it under lldb or gdb
lldb target/debug/deps/ironclaw-<hash>
```
### Print Debugging

```rust
#[test]
fn debug_test() {
    let value = compute_value();
    println!("Debug: {:?}", value);
    assert_eq!(value, expected);
}
```

Run with output:

```bash
cargo test -- --nocapture
```
### Conditional Logging

```rust
#[tokio::test]
async fn test_with_logging() {
    let _ = tracing_subscriber::fmt()
        .with_env_filter("ironclaw=debug")
        .try_init();

    // Test code with logging
}
```
## Best Practices

### Test Naming

```rust
#[test]
fn test_feature_returns_expected_result() {
    // Clear, descriptive name
}

#[test]
fn test_feature_handles_empty_input() {
    // Describes the scenario
}

#[test]
fn test_feature_errors_on_invalid_data() {
    // Describes the expected behavior
}
```
### Arrange-Act-Assert Pattern

```rust
#[test]
fn test_example() {
    // Arrange: set up test data
    let input = "test";
    let expected = "result";

    // Act: execute the operation
    let actual = process(input);

    // Assert: verify the result
    assert_eq!(actual, expected);
}
```
### Test Independence

Each test should be independent:

```rust
#[test]
fn test_a() {
    // Don't rely on test_b running first
    let state = create_clean_state();
    // ...
}

#[test]
fn test_b() {
    // Create its own state
    let state = create_clean_state();
    // ...
}
```
### Use Test Helpers

```rust
#[cfg(test)]
mod test_helpers {
    pub fn create_test_context() -> Context {
        // Shared setup logic
    }
}

#[test]
fn test_with_helper() {
    let ctx = test_helpers::create_test_context();
    // Use ctx
}
```
## Common Test Patterns

### Testing Result Types

```rust
#[test]
fn test_result_success() {
    let result = fallible_operation();
    assert!(result.is_ok());
    assert_eq!(result.unwrap(), expected_value);
}

#[test]
fn test_result_error() {
    let result = invalid_operation();
    assert!(result.is_err());
}
```
### Testing Option Types

```rust
#[test]
fn test_option_some() {
    let result = find_item();
    assert!(result.is_some());
    assert_eq!(result.unwrap(), expected);
}

#[test]
fn test_option_none() {
    let result = find_nonexistent();
    assert!(result.is_none());
}
```
### Testing Panics

```rust
#[test]
#[should_panic(expected = "invalid input")]
fn test_panics_on_invalid() {
    process_invalid_input();
}
```
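Tests can also return `Result`, which lets `?` replace `unwrap()` chains; a returned `Err` fails the test. A minimal sketch (the `double` function is a stand-in for any fallible operation):

```rust
// Stand-in fallible operation for illustration.
fn double(n: i32) -> Result<i32, String> {
    n.checked_mul(2).ok_or_else(|| "overflow".to_string())
}

// The harness treats Err(..) as a test failure, so `?` propagates cleanly.
#[test]
fn test_double() -> Result<(), String> {
    assert_eq!(double(21)?, 42);
    Ok(())
}
```

This style keeps happy-path tests free of `unwrap()` noise while still failing loudly on errors.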
## Troubleshooting

### Test Database Conflicts

**Issue**: Tests fail due to database conflicts.

**Solution**: Use isolated test databases:

```bash
# Use a different database for each test run ($$ expands to the shell's PID)
export DATABASE_URL="postgresql://localhost/ironclaw_test_$$"
cargo test
```
### Flaky Tests

**Issue**: Tests pass or fail intermittently.

**Solutions**:

- Check for race conditions
- Use proper synchronization
- Avoid time-dependent tests
- Make tests deterministic
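One way to remove time dependence is to inject the clock instead of reading real time. The `Clock` trait, `FixedClock`, and `is_expired` below are illustrative sketches, not IronClaw types:

```rust
// Injecting a clock makes time-dependent logic deterministic in tests.
trait Clock {
    fn now_ms(&self) -> u64;
}

// A clock frozen at a fixed millisecond timestamp.
struct FixedClock(u64);

impl Clock for FixedClock {
    fn now_ms(&self) -> u64 {
        self.0
    }
}

fn is_expired(created_ms: u64, ttl_ms: u64, clock: &dyn Clock) -> bool {
    clock.now_ms() >= created_ms + ttl_ms
}

#[test]
fn test_expiry_is_deterministic() {
    // No sleeping and no real clock, so the outcome never flakes.
    let clock = FixedClock(10_000);
    assert!(is_expired(0, 5_000, &clock));
    assert!(!is_expired(9_000, 5_000, &clock));
}
```

Production code passes a clock backed by real time; tests pass a `FixedClock` pinned wherever the scenario needs it.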
### Slow Tests

**Issue**: The test suite takes too long.

**Solutions**:

```bash
# Build with optimizations (faster execution, longer compile)
cargo test --release

# Profile test execution
cargo test -- --show-output --test-threads=1

# Use a faster feature set
cargo test --no-default-features --features libsql
```