
Kael is built around the principle that the GPU should only redraw what changed. Its dirty-tracking system means that at idle — when no state has been mutated and no animations are running — the render loop never executes, which is why a quiescent Kael app consumes 0% CPU. Understanding when to cooperate with that system, and when to reach for heavier tools like virtualized lists or explicit caching, lets you keep both startup time and long-session energy impact well below Electron baselines.

How dirty tracking works

Every entity mutation that should produce a visual update must eventually result in a notification reaching the current window. Kael provides two paths:
  • Automatic notifications — any model update performed inside a context-provided closure (e.g., cx.update, an event handler, a Task result) is observed by the framework. If the closure mutated a model that a rendered view depends on, Kael marks the window dirty and schedules a repaint.
  • Manual notifications — call cx.notify() when you mutate state outside a tracked context, or when you need to signal downstream subscribers without going through an entity update.
```rust
// Automatic: the framework sees the mutation and schedules a repaint.
cx.update(|cx| {
    my_entity.update(cx, |model, cx| {
        model.value += 1;
        // cx.notify() is not needed here; the update closure handles it.
    });
});

// Manual: state was mutated by an external callback and the framework
// cannot see it automatically.
some_external_callback(move || {
    cx.notify();
});
```
Calling cx.notify() unconditionally on every tick defeats dirty tracking and reintroduces the idle-CPU problem. Only call it when you know state has changed outside a tracked context.
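The idle-CPU guarantee follows from a simple invariant: the render loop does work only when a dirty flag has been set since the last frame. The following is a minimal self-contained sketch of that idea — an illustration of the concept, not Kael's actual internals:

```rust
/// Minimal sketch of a dirty flag driving a render loop.
/// Illustrative only; Kael's real window state is more involved.
struct Window {
    dirty: bool,
    frames_rendered: u32,
}

impl Window {
    fn new() -> Self {
        Window { dirty: false, frames_rendered: 0 }
    }

    /// Called by cx.notify() or an observed entity update.
    fn mark_dirty(&mut self) {
        self.dirty = true;
    }

    /// Called once per vsync tick; renders only if something changed.
    fn tick(&mut self) {
        if self.dirty {
            self.frames_rendered += 1; // stand-in for the real draw
            self.dirty = false;
        }
        // Not dirty: the tick does no work, so an idle app burns no CPU.
    }
}
```

Calling mark_dirty on every tick in this sketch would force a draw every frame — exactly the failure mode the warning above describes.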

Virtualizing large lists

Rendering thousands of rows naively causes layout to scale linearly with item count. Kael ships two virtualized list primitives.
Use uniform_list when all items have the same height. It measures one item once, then renders only the visible range.
```rust
use kael::prelude::*;

fn render_file_list(&self, cx: &mut ViewContext<Self>) -> impl IntoElement {
    uniform_list(
        cx.view().clone(),
        "files",
        self.files.len(),
        |this, range, cx| {
            range
                .map(|i| div().child(this.files[i].name.clone()))
                .collect()
        },
    )
    .flex_grow()
}
```

Subtree caching with cached()

Wrap a subtree in cached() to memoize its element tree between frames. Kael skips re-rendering the subtree entirely until the cache key changes.
```rust
use kael::prelude::*;

fn render(&self, cx: &mut ViewContext<Self>) -> impl IntoElement {
    div()
        .child(cached(
            self.theme_version,  // cache key — any PartialEq type
            || self.render_heavy_sidebar(cx),
        ))
        .child(self.render_editor(cx))
}
```
Use cached() for static or rarely-changing subtrees such as sidebars, toolbars, and icon grids. Avoid it for subtrees that change on every frame — the key comparison overhead is not worth it.
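The mechanism behind cached() is key-based memoization: keep the last (key, value) pair and rebuild only when the key no longer compares equal. A self-contained sketch of that pattern (a hypothetical helper, not Kael's implementation):

```rust
/// Sketch of key-based subtree memoization, in the spirit of cached().
/// Hypothetical helper for illustration, not Kael's implementation.
struct Cached<K: PartialEq, V> {
    entry: Option<(K, V)>,
}

impl<K: PartialEq, V: Clone> Cached<K, V> {
    fn new() -> Self {
        Cached { entry: None }
    }

    /// Rebuild the value only when the key differs from the cached one.
    fn get_or_render(&mut self, key: K, render: impl FnOnce() -> V) -> V {
        match &self.entry {
            Some((k, v)) if *k == key => v.clone(),
            _ => {
                let v = render();
                self.entry = Some((key, v.clone()));
                v
            }
        }
    }
}
```

The cost model in the guidance above falls out directly: a stable key costs one PartialEq comparison per frame, while a key that changes every frame pays that comparison on top of the full rebuild.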

Benchmark utilities

The benchmark module provides a full harness for measuring startup, memory, latency, and smoothness against predefined product-level scenarios.

Scenarios

```rust
use kael::benchmark::BenchmarkScenario;

// Three scenarios with realistic complexity scores:
// Messaging  -> 500 elements (chat UI)
// Workspace  -> 1200 elements (IDE-like layout)
// MediaControl -> 800 elements (OBS-style dashboard)
println!("{}", BenchmarkScenario::Workspace.description());
// "IDE-like workspace with sidebar, editor tabs, and terminal panel"
println!("{}", BenchmarkScenario::Workspace.complexity_score()); // 1200
```

Running a benchmark with BenchmarkHarness

```rust
use kael::benchmark::{
    BenchmarkHarness, BenchmarkMetric, BenchmarkMeasurement,
    BenchmarkScenario, MetricUnit,
};
use std::time::Duration;

let mut harness = BenchmarkHarness::new();

let result = harness.run(BenchmarkScenario::Messaging, "kael", |measurements| {
    measurements.push(BenchmarkMeasurement {
        metric: BenchmarkMetric::ColdStart,
        value: 120.0,
        unit: MetricUnit::Milliseconds,
        elapsed: Duration::from_secs(1),
    });
});

println!("cold start: {}ms", result.measurements[0].value);
```

Using metric collectors

Kael ships collectors that measure specific metrics automatically:
```rust
use kael::benchmark::{BenchmarkHarness, BenchmarkScenario, ColdStartCollector, MetricCollector};

let mut harness = BenchmarkHarness::new();
let mut cold_start = ColdStartCollector::new();
let mut collectors: [&mut dyn MetricCollector; 1] = [&mut cold_start];

harness.run_with_collectors(
    BenchmarkScenario::Workspace,
    "kael",
    &mut collectors,
    |_| { /* exercise your startup path here */ },
);
```
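A custom collector typically brackets the scenario run and reports a measurement when it finishes. The exact MetricCollector signature isn't shown above, so the following self-contained sketch defines its own local trait and types purely for illustration — the real shapes in kael::benchmark may differ:

```rust
use std::time::{Duration, Instant};

/// Local stand-in for illustration; the real measurement type lives in
/// kael::benchmark and may differ.
struct Measurement {
    name: &'static str,
    value_ms: f64,
}

/// Hypothetical collector shape: bracket the scenario run, then report.
trait Collector {
    fn start(&mut self);
    fn finish(&mut self) -> Measurement;
}

/// Measures wall-clock time across the bracketed region.
struct WallClockCollector {
    started_at: Option<Instant>,
}

impl Collector for WallClockCollector {
    fn start(&mut self) {
        self.started_at = Some(Instant::now());
    }

    fn finish(&mut self) -> Measurement {
        let elapsed: Duration = self
            .started_at
            .take()
            .map_or(Duration::ZERO, |t| t.elapsed());
        Measurement {
            name: "wall_clock",
            value_ms: elapsed.as_secs_f64() * 1000.0,
        }
    }
}
```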

Comparing results and detecting regressions

```rust
use kael::benchmark::{compare_results, check_regressions, load_results_from_json};

// Load a saved baseline from a previous CI run.
let baseline_json = std::fs::read_to_string("baseline.json")?;
let baseline = load_results_from_json(&baseline_json)?;

// Run the current candidate.
let mut harness = BenchmarkHarness::new();
// ... populate harness ...
let candidate_json = harness.export_to_json()?;
let candidate = load_results_from_json(&candidate_json)?;

// Flag any metric that regressed by more than 10 %.
let regressions = check_regressions(&baseline, &candidate, 10.0);
for r in &regressions {
    eprintln!("{:?} regressed by {:.1}%", r.metric, r.percent_change);
}
```
You can also capture snapshots from the command line using the perf harness:
```bash
cargo perf-test -p kael -- --json=baseline
cargo perf-test -p kael -- --json=candidate
cargo perf-compare candidate baseline
```

Memory measurement

MemoryCollector::resident_mb() is a static method that reads resident memory on the current platform (/proc/self/status on Linux, getrusage on macOS, GetProcessMemoryInfo on Windows):
```rust
use kael::benchmark::MemoryCollector;

let resident = MemoryCollector::resident_mb();
println!("resident: {:.1} MB", resident);
```
Add a MemoryCollector to your harness to track idle memory automatically alongside other metrics.
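On Linux, the resident number comes from the VmRSS line of /proc/self/status, which the kernel reports in kB. A hedged sketch of that parsing, with sample input inlined (the real collector reads the live file):

```rust
/// Extract resident memory in MB from /proc/self/status-style content.
/// Illustrative sketch of the Linux path; VmRSS is reported in kB.
fn resident_mb_from_status(status: &str) -> Option<f64> {
    for line in status.lines() {
        if let Some(rest) = line.strip_prefix("VmRSS:") {
            // Line looks like "VmRSS:\t  102400 kB".
            let kb: f64 = rest.trim().trim_end_matches("kB").trim().parse().ok()?;
            return Some(kb / 1024.0);
        }
    }
    None
}
```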

Profiling with the profiling crate and the inspector feature

Kael integrates with the profiling crate. Enable your preferred backend in Cargo.toml and Kael’s internal render, layout, and input spans will appear in your profiler:
```toml
[features]
# Choose one backend:
profile-with-puffin  = ["profiling/profile-with-puffin"]
profile-with-tracy   = ["profiling/profile-with-tracy"]
profile-with-optick  = ["profiling/profile-with-optick"]
```
Enable the built-in visual inspector — which renders an overlay showing frame timing and entity update counts — with the inspector feature flag:
```toml
[dependencies]
kael = { version = "*", features = ["inspector"] }
```
For ad hoc frame timing during development, set the ZED_MEASUREMENTS environment variable before running any example:
```bash
ZED_MEASUREMENTS=1 cargo run -p kael --example perf_bench
```

Tracing frame events

Install a Tracer at startup to enable Kael’s built-in probes for window.draw_frame, window.present, window.dispatch_event, and text.layout. Export to JSON and open the result in Chrome Trace or Perfetto:
```rust
use kael::{Tracer, TracePhase};

let tracer = Tracer::default();
tracer.enable();
tracer.install_global();

// ... exercise the UI ...

tracer.write_to_file("trace.json")?;
```
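Chrome Trace and Perfetto both read the Trace Event JSON format: an array of event objects with a name, a phase (ph), a microsecond timestamp (ts), and process/thread ids. The sketch below hand-rolls one complete-duration event (ph = "X") to show the shape of what ends up in trace.json — the real Tracer emits this for you:

```rust
/// Render a single Trace Event Format "complete" event (ph = "X").
/// Per the Chrome trace spec, ts and dur are in microseconds.
fn trace_event_json(name: &str, ts_us: u64, dur_us: u64) -> String {
    format!(
        r#"{{"name":"{name}","ph":"X","ts":{ts_us},"dur":{dur_us},"pid":1,"tid":1}}"#
    )
}
```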
