
Documentation Index

Fetch the complete documentation index at: https://mintlify.com/Augani/kael/llms.txt

Use this file to discover all available pages before exploring further.

Kael lets you embed full web content, play audio and video files, and capture screens or input devices — all from the same GPU-accelerated render tree. Web content is rendered through platform-native engines so you get excellent fidelity with no bundled browser. Media playback and screen capture are optional feature-flagged modules that add zero overhead when you do not need them.

WebView

Platform backends

Kael’s WebView element delegates to a platform-native engine:
| Platform | Backend |
| --- | --- |
| macOS | WKWebView |
| Windows | WebView2 |
| Linux | WebKitGTK (via wry) |
No additional dependency is required beyond enabling the right platform feature flag in your Cargo.toml (see below).

Enabling WebView

[dependencies]
kael = { version = "*", features = ["webview"] }

Using the WebView element

Place a WebView in your render tree like any other element. Give it a unique string id and an initial url:
use kael::prelude::*;
use kael::elements::WebView;

// Inside your view's Render implementation:
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
    div()
        .flex_1()
        .child(
            WebView::new("main-webview")
                .url("https://example.com")
                .w_full()
                .h_full(),
        )
}
Send navigation commands to a WebView through the PlatformWebViewCommand enum. Commands are dispatched by matching on the id you assigned to the element:
use kael::webview::PlatformWebViewCommand;

window.dispatch_webview_command(PlatformWebViewCommand::Navigate {
    id:  "main-webview".into(),
    url: "https://docs.example.com".into(),
});
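The dispatch-by-id model can be illustrated on its own. The sketch below uses hypothetical stand-in types (not Kael's actual internals) to show how commands queued against a string id are routed to the matching webview:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for PlatformWebViewCommand, for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum Command {
    Navigate { url: String },
}

/// Routes commands to per-webview queues keyed by the element's string id.
struct Dispatcher {
    queues: HashMap<String, Vec<Command>>,
}

impl Dispatcher {
    fn new() -> Self {
        Self { queues: HashMap::new() }
    }

    /// Queue a command for the webview with the given id.
    fn dispatch(&mut self, id: &str, cmd: Command) {
        self.queues.entry(id.to_string()).or_default().push(cmd);
    }

    /// Drain all pending commands for one webview.
    fn drain(&mut self, id: &str) -> Vec<Command> {
        self.queues.remove(id).unwrap_or_default()
    }
}

fn main() {
    let mut d = Dispatcher::new();
    d.dispatch(
        "main-webview",
        Command::Navigate { url: "https://docs.example.com".into() },
    );
    // Only the webview whose id matches receives the command.
    assert_eq!(d.drain("main-webview").len(), 1);
    assert!(d.drain("other-webview").is_empty());
}
```

Because routing is keyed on the id string, commands sent to an id with no matching element are simply dropped, so double-check the id you pass matches the one given to `WebView::new`.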

Injecting JavaScript

Use EvaluateJavaScript to run a script in the webview’s page context:
use kael::webview::PlatformWebViewCommand;

window.dispatch_webview_command(PlatformWebViewCommand::EvaluateJavaScript {
    id:     "main-webview".into(),
    script: "document.title = 'Hello from Kael';".into(),
});

Posting messages from Rust

Send structured JSON data to the page with PostMessage. On the JavaScript side, receive it through the window.__kael_message__ hook (or whatever bridge your app defines):
use kael::webview::PlatformWebViewCommand;
use serde_json::json;

window.dispatch_webview_command(PlatformWebViewCommand::PostMessage {
    id:      "main-webview".into(),
    message: json!({ "type": "theme-changed", "value": "dark" }),
});

Controlling navigation policy

Implement a WebViewNavigationHandler to intercept navigations and decide whether to allow or deny them:
use kael::webview::{NavigationPolicy, WebViewNavigationHandler};
use std::rc::Rc;

let handler: WebViewNavigationHandler = Rc::new(|url, _window, _cx| {
    if url.starts_with("https://trusted.example.com") {
        NavigationPolicy::Allow
    } else {
        NavigationPolicy::Deny
    }
});
| NavigationPolicy | Meaning |
| --- | --- |
| Allow | The navigation proceeds normally. |
| Deny | The navigation is cancelled before the browser fetches the URL. |
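The allow/deny decision is a pure function of the URL, which makes it easy to unit-test in isolation. A minimal sketch, with a local enum standing in for kael::webview::NavigationPolicy:

```rust
#[derive(Debug, PartialEq)]
enum NavigationPolicy {
    Allow,
    Deny,
}

/// Allow navigation only within the trusted origin.
fn decide(url: &str) -> NavigationPolicy {
    if url.starts_with("https://trusted.example.com") {
        NavigationPolicy::Allow
    } else {
        NavigationPolicy::Deny
    }
}

fn main() {
    assert_eq!(decide("https://trusted.example.com/docs"), NavigationPolicy::Allow);
    assert_eq!(decide("https://evil.example.net/"), NavigationPolicy::Deny);
    println!("policy checks passed");
}
```

One caution: a plain prefix check like this also matches hosts such as `trusted.example.com.evil.net`; in production, parse the URL and compare the host component exactly.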

Media playback

Audio and video playback are provided by the media feature flag, which pulls in the kael-media crate for decoding.

Enabling the feature

[dependencies]
kael = { version = "*", features = ["media"] }

Audio playback

Create an AudioHandle from any MediaSource and call .play():
use kael::media_playback::{audio, MediaPlaybackState};

// From a file path
let handle = audio("assets/background.mp3");
handle.play().unwrap();

// Pause and resume
handle.pause();
handle.play().unwrap();

// Stop entirely
handle.stop();

// Check state
match handle.state() {
    MediaPlaybackState::Playing => {}
    MediaPlaybackState::Paused  => {}
    _                           => {}
}
AudioHandle is Clone — store it in your view’s state to control playback from UI events.
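The play/pause/stop calls above follow a small state machine. Here is a self-contained sketch of those transitions, using a local enum that mirrors MediaPlaybackState (an illustration of the assumed behavior, not Kael's actual implementation):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum MediaPlaybackState {
    Stopped,
    Playing,
    Paused,
}

/// Minimal player modeling the play/pause/stop transitions.
struct Player {
    state: MediaPlaybackState,
}

impl Player {
    fn new() -> Self {
        Self { state: MediaPlaybackState::Stopped }
    }
    fn play(&mut self) {
        self.state = MediaPlaybackState::Playing;
    }
    fn pause(&mut self) {
        // Pausing only makes sense while playing.
        if self.state == MediaPlaybackState::Playing {
            self.state = MediaPlaybackState::Paused;
        }
    }
    fn stop(&mut self) {
        self.state = MediaPlaybackState::Stopped;
    }
    fn state(&self) -> MediaPlaybackState {
        self.state
    }
}

fn main() {
    let mut p = Player::new();
    p.play();
    assert_eq!(p.state(), MediaPlaybackState::Playing);
    p.pause();
    assert_eq!(p.state(), MediaPlaybackState::Paused);
    p.play(); // resume from Paused
    p.stop();
    assert_eq!(p.state(), MediaPlaybackState::Stopped);
}
```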

Binding media keys

Call bind_audio_media_keys to wire the system media keys (play/pause, stop) to your AudioHandle automatically:
use kael::media_playback::bind_audio_media_keys;

bind_audio_media_keys(cx, &my_audio_handle);

Video playback

Use the video() constructor to create a Video element and embed it in your render tree:
use kael::media_playback::{audio, video};

// Audio handle drives the playback clock
let audio_handle = audio("assets/movie.mp3");

div()
    .w(px(1280.))
    .h(px(720.))
    .child(
        video("assets/movie.mp4")
            .autoplay()
            .sync_to(&audio_handle)
            .object_fit(ObjectFit::Contain)
            .w_full()
            .h_full(),
    )
| Method | Description |
| --- | --- |
| .autoplay() | Begin playback as soon as the media has loaded. |
| .sync_to(&AudioHandle) | Synchronize video frame rendering to the given audio clock. |
| .object_fit(ObjectFit) | How to fit the decoded frame within the element’s bounds (Contain, Cover, Fill, etc.). |
Always pair a Video element with an AudioHandle via .sync_to() when your file contains an audio track. Without synchronization, audio and video can drift over time.
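Why the audio clock? Audio hardware consumes samples at a fixed rate, so the audio position is the most reliable timeline; the video side then displays whichever frame matches that position. A rough sketch of the frame-selection math (assumed behavior for illustration, not Kael's actual scheduler):

```rust
/// Pick the video frame to display for a given audio clock position.
/// `clock_secs` is the current audio playback position in seconds.
fn frame_for_clock(clock_secs: f64, fps: f64, total_frames: usize) -> usize {
    let idx = (clock_secs * fps).floor() as usize;
    // Clamp so a clock past the end of the file holds the last frame.
    idx.min(total_frames.saturating_sub(1))
}

fn main() {
    // 24 fps video: at t = 1.0 s the renderer should be on frame 24.
    assert_eq!(frame_for_clock(1.0, 24.0, 1000), 24);
    // Clock past the end of the file clamps to the final frame.
    assert_eq!(frame_for_clock(60.0, 24.0, 100), 99);
}
```

Because the frame index is recomputed from the clock on every render rather than advanced by a fixed step, small hiccups in video decoding cannot accumulate into drift.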

Decoding a video to a static image

decode_video_image decodes every frame of a video file into an animated RenderImage that you can use anywhere an image source is accepted:
use kael::media_playback::decode_video_image;

let render_image = decode_video_image("assets/animation.gif").unwrap();
img(render_image).w_32().h_32()

Screen and media capture

Screen, window, microphone, camera, and system-audio capture are gated behind the screen-capture feature flag.

Enabling the feature

[dependencies]
kael = { version = "*", features = ["screen-capture"] }
On Linux, the screen-capture feature also requires either the x11 or wayland feature flag, and capture goes through PipeWire via the scap_screen_capture module. On macOS, Kael uses ScreenCaptureKit; on Windows, the Windows Graphics Capture API.

Capture device kinds

CaptureDeviceKind identifies what you want to capture:
| Variant | Description |
| --- | --- |
| Screen | A physical or virtual display. |
| Window | A specific application window. |
| Microphone | A microphone input. |
| Camera | A camera or virtual camera. |
| SystemAudio | System audio output (loopback). |

Enumerating devices

Register a backend, then call manager.devices(kind):
use kael::media_capture::{CaptureManager, CaptureDeviceKind};

let manager = CaptureManager::new();
// (register your platform backend here)

let screens = manager.devices(CaptureDeviceKind::Screen).unwrap();
for device in &screens {
    println!("{}: {}", device.id, device.name);
}

Creating a capture session

Build a CaptureConfig and call manager.create_session:
use kael::media_capture::{CaptureConfig, CaptureDeviceKind, CaptureFrame};
use std::sync::Arc;

let config = CaptureConfig::new(screens[0].id.clone(), CaptureDeviceKind::Screen)
    .frame_rate(30.0)
    .resolution(1920, 1080)
    .include_audio(false);

let mut session = manager.create_session(&config).unwrap();

session.start(config, Arc::new(|frame| {
    match frame {
        CaptureFrame::Video { width, height, data, .. } => {
            // process BGRA / RGBA pixel data
            let _ = (width, height, data);
        }
        CaptureFrame::Audio { channels, sample_rate, data, .. } => {
            let _ = (channels, sample_rate, data);
        }
    }
})).unwrap();

Session lifecycle

session.pause().unwrap();
session.resume().unwrap();
session.stop().unwrap();

// Query state at any time
let state = session.state(); // CaptureSessionState::Running | Paused | Stopped | …

Permissions

Capture operations go through Kael’s PermissionBroker. Use cx.microphone_status(), cx.camera_status(), and cx.accessibility_status() to check permission state before starting a session, and call the corresponding request_*_permission methods to prompt the user:
use kael::PermissionStatus;

match cx.microphone_status() {
    PermissionStatus::Granted => {
        // start microphone capture
    }
    PermissionStatus::Denied => {
        // show an error — user must grant permission in system settings
    }
    PermissionStatus::NotDetermined => {
        cx.request_microphone_permission(|granted| {
            if granted {
                // start microphone capture
            }
        });
    }
}
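The three-way match above generalizes into a small gating helper. The sketch below uses a local PermissionStatus enum mirroring the one in the example, and a synchronous `request` closure standing in for cx.request_microphone_permission (which in Kael delivers its answer via a callback):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum PermissionStatus {
    Granted,
    Denied,
    NotDetermined,
}

/// Returns true if capture may start now. When the status is undetermined,
/// `request` is invoked to prompt the user and its answer decides the result.
fn may_capture<F>(status: PermissionStatus, request: F) -> bool
where
    F: FnOnce() -> bool,
{
    match status {
        PermissionStatus::Granted => true,
        // A denied permission must be changed in system settings; never re-prompt.
        PermissionStatus::Denied => false,
        PermissionStatus::NotDetermined => request(),
    }
}

fn main() {
    assert!(may_capture(PermissionStatus::Granted, || false));
    assert!(!may_capture(PermissionStatus::Denied, || true));
    // Undetermined: the user's answer to the prompt decides.
    assert!(may_capture(PermissionStatus::NotDetermined, || true));
}
```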
