
Documentation Index

Fetch the complete documentation index at: https://mintlify.com/Effectful-Tech/clanka/llms.txt

Use this file to discover all available pages before exploring further.

Clanka is a TypeScript library for building AI coding agents that are fully integrated with the Effect ecosystem. Instead of exposing a sprawling set of tools to the LLM, Clanka follows a single execute tool pattern: the model generates JavaScript, that code runs in a sandboxed Node.js VM, and the results are streamed back to drive the next turn. Everything — agents, providers, executors, and output — composes as Effect layers.

Key features

  • Single execute tool — the LLM writes JavaScript rather than calling dozens of named tools, giving it full procedural flexibility within a controlled sandbox
  • Sandboxed VM execution — generated scripts run in node:vm with no access to require, import, or process, preventing accidental side-effects outside the working directory
  • Effect layer composition — providers, executors, semantic search, and subagent models are all Effect Layer values you combine with Layer.provide and Layer.merge
  • Streaming output — agent.send() returns a Stream of typed Output events (reasoning deltas, script deltas, script output, token usage) so you can render progress in real time
  • Subagent support — a primary agent can spin up isolated subagents for parallel work, each with its own model and conversation history
  • Semantic search — an optional SQLite-backed vector index lets the agent search the codebase by meaning rather than just text patterns
  • Provider agnostic — ships built-in layers for OpenAI Codex and GitHub Copilot; any OpenAI-compatible endpoint works through @effect/ai-openai

How the agent loop works

Each call to agent.send() starts a continuous agentic loop:
  1. The agent receives your prompt and the accumulated conversation history
  2. The configured language model streams a response; Clanka exposes exactly one tool to the model — execute
  3. When the model calls execute, the JavaScript argument is compiled and run inside a fresh node:vm sandbox
  4. Console output from the script is collected and returned to the model as the tool result
  5. The loop continues until the model calls the built-in taskComplete function (or, in conversation mode, until the model stops calling the tool)

The executor layer injects typed helper functions — readFile, ls, bash, applyPatch, search, and others — into the sandbox globals so the model can interact with the filesystem and run commands without importing any Node.js modules directly.
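
The sandbox mechanism above can be sketched with Node's built-in node:vm module. This is an illustrative reduction, not Clanka's actual executor: the helper names readFile and ls mirror the injected helpers listed above, and the wrapper shape is an assumption.

```typescript
import * as vm from "node:vm";
import * as fs from "node:fs";

// Sketch of the single-execute-tool pattern: model-generated JavaScript runs
// in a vm context whose globals are ONLY the injected helpers — no require,
// no import, no process.
function execute(script: string, cwd: string): string {
  const logs: string[] = [];
  const sandbox = {
    // Typed helpers injected as globals (names taken from the list above).
    readFile: (path: string) => fs.readFileSync(`${cwd}/${path}`, "utf8"),
    ls: (path = ".") => fs.readdirSync(`${cwd}/${path}`),
    // console.log output is collected and becomes the tool result.
    console: { log: (...args: unknown[]) => logs.push(args.join(" ")) },
  };
  vm.runInNewContext(script, sandbox, { timeout: 5_000 });
  return logs.join("\n");
}

// A "model-generated" script: list the working directory's entries.
const output = execute(`console.log(ls().length, "entries")`, ".");
console.log(output);
```

Anything the script prints with console.log is what the model sees on the next turn, which is why step 4 of the loop returns collected console output as the tool result.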

Quickstart

Build and run your first Clanka agent in under five minutes.

Agent concept

Understand the Agent service, the send/steer API, and conversation history.

Providers

Configure Codex, Copilot, or a custom OpenAI-compatible model provider.

Agent API reference

Full type signatures for Agent, AgentExecutor, and related services.
