Documentation Index
Fetch the complete documentation index at: https://mintlify.com/Effectful-Tech/clanka/llms.txt
Use this file to discover all available pages before exploring further.
The Agent service is the primary entry point for interacting with a language model in Clanka. It owns the conversation history, sends prompts to the configured LanguageModel, and returns a live Stream of typed Output events that you can observe or render as the model responds.
The Agent interface
The Agent service is provided via Context.Service under the key "clanka/Agent". Retrieve it inside an Effect with yield* Agent.Agent.
send(options)
Sends a prompt and returns an Effect that resolves to a Stream<Output, AgentFinished | AiError>. The stream emits typed output events as the model reasons and executes tools. Once the agent calls taskComplete, the stream fails with AgentFinished (carrying the final summary), which is the signal that the turn is done.
| Option | Type | Description |
|---|---|---|
| prompt | Prompt.RawInput | The user message or structured prompt to send |
| system | string \| function \| undefined | Additional system instructions, or a function that receives toolInstructions and agentsMd and returns a string |
If system is a function, the agent calls it with the generated tool instructions and any AGENTS.md content read from the working directory. This lets you position your instructions around the built-in instructions.
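As a sketch of the function form of system (the module path "clanka" and the exact option shape are assumptions inferred from this page), you can wrap the built-in instructions with your own:

```typescript
import { Effect } from "effect"
import { Agent } from "clanka" // assumed module name

// Sketch: surround the generated tool instructions and any AGENTS.md
// content with project-specific guidance. The callback arguments follow
// the description above; their exact types are not shown here.
const program = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stream = yield* agent.send({
    prompt: "Summarise the open TODOs in this repository",
    system: (toolInstructions: string, agentsMd: string | undefined) =>
      [
        "You are a concise release-notes assistant.",
        toolInstructions,
        agentsMd ?? "",
      ].join("\n\n"),
  })
  return stream
})
```

Returning a single string keeps you in control of where the built-in instructions appear relative to your own.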
steer(message)
Injects a user message into the current turn. The effect completes once the message has been accepted into the pending queue. Interrupting the effect withdraws the message before it is sent to the model.
history
A MutableRef<Prompt> that holds the full conversation transcript. The agent appends each request and response to this ref automatically, so subsequent send calls share context.
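Since history is a plain MutableRef, you can read the transcript with the standard Effect MutableRef API. A minimal sketch (the "clanka" module name is an assumption):

```typescript
import { Effect, MutableRef } from "effect"
import { Agent } from "clanka" // assumed module name

// Sketch: inspect the conversation transcript after a turn.
// MutableRef.get is standard Effect; the agent keeps the ref current.
const inspectHistory = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const transcript = MutableRef.get(agent.history)
  yield* Effect.log(transcript)
})
```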
Output stream events
The stream emitted by send produces a union of typed events:
AgentStart → ReasoningStart / ReasoningDelta / ReasoningEnd → ScriptStart / ScriptDelta / ScriptEnd → ScriptOutput → (repeat for multiple tool calls) → AgentFinished (as a stream error).
Context references
The agent reads several context references that let you tune its behaviour without touching the layer wiring:

| Reference | Default | Description |
|---|---|---|
| SubagentModel | — | The Layer used to provide a LanguageModel for spawned sub-agents. Required when the agent delegates work via delegate(). |
| ConversationMode | false | When true, the agent does not wait for taskComplete and ends the turn after the first non-tool-call response. Useful for chat applications. |
| TurnTimeout | Duration.minutes(5) | Inactivity duration after which the current turn is retried. Resets each time a tool call returns. |
| AgentModelConfig | {} | Low-level model configuration. Currently supports systemPromptTransform. |
AgentModelConfig.systemPromptTransform exists for models such as OpenAI Codex that use an instructions field instead of the standard system message. When this transform is set, the agent skips the normal Prompt.setSystem call and defers to the transform instead.
Setting up Agent layers
Local executor (recommended for getting started)
Agent.layerLocal is a convenience that composes Agent.layer with AgentExecutor.layerLocal. It runs the JavaScript sandbox in the same process.
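A minimal sketch of wiring it up — the "clanka" module name is assumed, and MyLanguageModelLayer is a hypothetical placeholder for whichever LanguageModel layer your application already configures:

```typescript
import { Layer } from "effect"
import { Agent } from "clanka" // assumed module name

// Hypothetical placeholder: any layer that provides a LanguageModel.
declare const MyLanguageModelLayer: Layer.Layer<never>

// Sketch: Agent.layerLocal bundles Agent.layer with the in-process
// sandbox executor; only the model remains to be provided.
const AgentLive = Agent.layerLocal.pipe(Layer.provide(MyLanguageModelLayer))
```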
Custom executor
If you need to run the executor out-of-process or over RPC, compose Agent.layer with your own AgentExecutor layer:
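A sketch of the composition, assuming the "clanka" module name; AgentExecutorRemote stands in for a hypothetical layer you would implement against the AgentExecutor interface:

```typescript
import { Layer } from "effect"
import { Agent } from "clanka" // assumed module name

// Hypothetical: your own executor layer, e.g. proxying tool
// execution to a separate process over RPC.
declare const AgentExecutorRemote: Layer.Layer<never>

// Sketch: same Agent.layer, different executor underneath.
const AgentLive = Agent.layer.pipe(Layer.provide(AgentExecutorRemote))
```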
Providing a sub-agent model
When your agent uses the delegate() tool, it spawns sub-agents under a separate model. Provide that model with Agent.layerSubagentModel:
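A sketch, assuming Agent.layerSubagentModel accepts the model layer directly (this page does not show its exact signature); SubagentModelLayer is a hypothetical LanguageModel layer, often a cheaper model than the parent agent's:

```typescript
import { Layer } from "effect"
import { Agent } from "clanka" // assumed module name

// Hypothetical: a LanguageModel layer reserved for sub-agents.
declare const SubagentModelLayer: Layer.Layer<never>

// Sketch: sub-agents spawned via delegate() will use this model.
const AgentLive = Agent.layerLocal.pipe(
  Layer.provide(Agent.layerSubagentModel(SubagentModelLayer)),
)
```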
Sending a prompt and streaming output
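A sketch of a full turn — assuming the "clanka" module name, and that output events carry a _tag discriminant as is conventional for Effect tagged unions. Recall that the stream fails with AgentFinished when the agent calls taskComplete, so the final summary arrives on the error channel:

```typescript
import { Effect, Stream } from "effect"
import { Agent } from "clanka" // assumed module name

// Sketch: send a prompt, log each event, and treat AgentFinished
// as the successful end of the turn rather than a real failure.
const run = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stream = yield* agent.send({
    prompt: "List the files in the project root",
  })
  yield* Stream.runForEach(stream, (event) => Effect.log(event._tag)).pipe(
    Effect.catchTag("AgentFinished", (finished) =>
      Effect.log("Turn complete", finished),
    ),
  )
})
```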
Steering a running agent
Call agent.steer from a concurrent fiber to inject follow-up instructions mid-turn:
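A sketch (the "clanka" module name is assumed): consume the output stream in a forked fiber so the parent fiber stays free to steer.

```typescript
import { Effect, Fiber, Stream } from "effect"
import { Agent } from "clanka" // assumed module name

// Sketch: fork the streaming turn, steer it, then wait for it to end.
const steered = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stream = yield* agent.send({ prompt: "Refactor src/index.ts" })
  // Drain the output in the background so we can keep interacting.
  const fiber = yield* Effect.fork(Stream.runDrain(stream))
  // Inject a follow-up instruction; completes once it is queued.
  yield* agent.steer("Also update the tests to match")
  yield* Fiber.join(fiber)
})
```

Remember that interrupting the steer effect before it completes withdraws the message from the pending queue.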