The Agent service is the primary entry point for interacting with a language model in Clanka. It owns the conversation history, sends prompts to the configured LanguageModel, and returns a live Stream of typed Output events that you can observe or render as the model responds.

The Agent interface

interface Agent {
  readonly history: MutableRef<Prompt>

  send(options: {
    readonly prompt: Prompt.RawInput
    readonly system?:
      | string
      | ((options: {
          readonly toolInstructions: string
          readonly agentsMd: string
        }) => string)
      | undefined
  }): Effect<Stream<Output, AgentFinished | AiError>, never, Scope | LanguageModel | ProviderName | ModelName | SubagentModel>

  steer(message: string): Effect<void>
}

The Agent service is provided via Context.Service under the key "clanka/Agent". Retrieve it inside an Effect with yield* Agent.Agent.

send(options)

Sends a prompt and returns an Effect that resolves to a Stream<Output, AgentFinished | AiError>. The stream emits typed output events as the model reasons and executes tools. Once the agent calls taskComplete, the stream fails with AgentFinished (carrying the final summary), which is the signal that the turn is done.

- prompt (Prompt.RawInput): The user message or structured prompt to send.
- system (string | function | undefined): Additional system instructions, or a function that receives toolInstructions and agentsMd and returns a string.
When system is a function, the agent calls it with the generated tool instructions and any AGENTS.md content read from the working directory. This lets you position your instructions around the built-in instructions.
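
As a sketch of the function form (assuming the send signature above; the prompt text and instruction strings are illustrative):

```typescript
import { Agent } from "clanka"
import { Effect } from "effect"

const program = Effect.gen(function* () {
  const agent = yield* Agent.Agent

  const stream = yield* agent.send({
    prompt: "Summarise the repository structure.",
    // Sandwich the built-in instructions between custom ones.
    system: ({ toolInstructions, agentsMd }) =>
      [
        "You are a concise assistant.",
        toolInstructions,
        agentsMd,
        "Prefer bullet points in your answers.",
      ].join("\n\n"),
  })
  return stream
})
```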

steer(message)

Injects a user message into the current turn. The effect completes once the message has been accepted into the pending queue. Interrupting the effect withdraws the message before it is sent to the model.
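
Because interruption withdraws the message, a timeout gives steer best-effort semantics. A sketch (assuming Agent.Agent is also usable as the service's type):

```typescript
import type { Agent } from "clanka"
import { Effect } from "effect"

// If the message is not accepted within 5 seconds, the timeout
// interrupts the steer effect, withdrawing the pending message.
const steerBestEffort = (agent: Agent.Agent, message: string) =>
  agent.steer(message).pipe(Effect.timeoutOption("5 seconds"))
```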

history

A MutableRef<Prompt> that holds the full conversation transcript. The agent appends each request and response to this ref automatically, so subsequent send calls share context.
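
For instance, a sketch of inspecting the transcript between turns (MutableRef.get is the standard effect accessor; what the Prompt value exposes beyond that is not shown here):

```typescript
import { Agent } from "clanka"
import { Effect, MutableRef } from "effect"

const inspectHistory = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  // Read the current transcript without modifying it.
  const transcript = MutableRef.get(agent.history)
  yield* Effect.log(`history so far: ${JSON.stringify(transcript)}`)
})
```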

Output stream events

The stream emitted by send produces a union of typed events:
type Output =
  | AgentStart       // turn begins — carries agent id, model, provider
  | ReasoningStart   // model begins a reasoning/thinking block
  | ReasoningDelta   // incremental reasoning text
  | ReasoningEnd     // reasoning block complete
  | ScriptStart      // model starts emitting JavaScript to execute
  | ScriptDelta      // incremental script text
  | ScriptEnd        // script submitted to the executor
  | ScriptOutput     // stdout from the executed script
  | Usage            // token counts for the turn so far
  | ErrorRetry       // a retryable error occurred; the agent is retrying
  | SubagentStart    // a sub-agent was spawned
  | SubagentPart     // output from a running sub-agent
  | SubagentComplete // sub-agent finished

A turn typically follows this sequence: AgentStart → ReasoningStart / ReasoningDelta / ReasoningEnd → ScriptStart / ScriptDelta / ScriptEnd → ScriptOutput → (repeat for multiple tool calls) → AgentFinished (as a stream error).
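
As an illustration of consuming these events, here is a minimal renderer over hypothetical event shapes. The _tag discriminator follows the usual Effect convention, but the field names (text, stdout, token counts) are assumptions, not the real Output schema:

```typescript
// Hypothetical, simplified event shapes for illustration only.
type OutputEvent =
  | { _tag: "ReasoningDelta"; text: string }
  | { _tag: "ScriptDelta"; text: string }
  | { _tag: "ScriptOutput"; stdout: string }
  | { _tag: "Usage"; inputTokens: number; outputTokens: number }

// Map each event to a display string.
const render = (event: OutputEvent): string => {
  switch (event._tag) {
    case "ReasoningDelta":
    case "ScriptDelta":
      return event.text
    case "ScriptOutput":
      return `\n[script] ${event.stdout}`
    case "Usage":
      return `\n[usage] in=${event.inputTokens} out=${event.outputTokens}`
  }
}
```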

Context references

The agent reads several context references that let you tune its behaviour without touching the layer wiring:

- SubagentModel (no default): The Layer used to provide a LanguageModel for spawned sub-agents. Required when the agent delegates work via delegate().
- ConversationMode (default: false): When true, the agent does not wait for taskComplete and ends the turn after the first non-tool-call response. Useful for chat applications.
- TurnTimeout (default: Duration.minutes(5)): Inactivity duration after which the current turn is retried. Resets each time a tool call returns.
- AgentModelConfig (default: {}): Low-level model configuration. Currently supports systemPromptTransform.
AgentModelConfig.systemPromptTransform exists for models such as OpenAI Codex that use an instructions field instead of the standard system message. When this transform is set, the agent skips the normal Prompt.setSystem call and defers to the transform instead.
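
A sketch of overriding these references, assuming they are Context.Reference values exported from the Agent module and can be supplied with Effect.provideService (the exact export names may differ):

```typescript
import { Agent } from "clanka"
import { Duration, Effect } from "effect"

const chatTurn = Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stream = yield* agent.send({ prompt: "Hello!" })
  return stream
}).pipe(
  // End the turn after the first non-tool-call response (chat-style).
  Effect.provideService(Agent.ConversationMode, true),
  // Allow longer pauses before the turn is retried.
  Effect.provideService(Agent.TurnTimeout, Duration.minutes(10)),
)
```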

Setting up Agent layers

Agent.layerLocal is a convenience that composes Agent.layer with AgentExecutor.layerLocal. It runs the JavaScript sandbox in the same process:
import { Agent } from "clanka"
import { NodeServices, NodeHttpClient } from "@effect/platform-node"
import { Layer } from "effect"

const AgentLayer = Agent.layerLocal({
  directory: process.cwd(), // working directory for file tools
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)

Custom executor

If you need to run the executor out-of-process or over RPC, compose Agent.layer with your own AgentExecutor layer:
import { Agent, AgentExecutor } from "clanka"
import { Layer } from "effect"

const AgentLayer = Agent.layer.pipe(
  Layer.provide(AgentExecutor.layerRpc),
  // layerRpc requires an RpcClient.Protocol in context
)

Providing a sub-agent model

When your agent uses the delegate() tool, it spawns sub-agents under a separate model. Provide that model with Agent.layerSubagentModel:
import { Agent, Codex } from "clanka"
import { Effect, Layer } from "effect"

const SubAgentModel = Codex.model("gpt-5.4", {
  reasoning: { effort: "low", summary: "auto" },
}).pipe(Layer.provide(ModelServices))

Effect.gen(function* () {
  const agent = yield* Agent.Agent

  const stream = yield* agent.send({
    prompt: "Refactor the auth module and write tests.",
  })
  // consume stream …
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, PrimaryModel, Agent.layerSubagentModel(SubAgentModel)]),
)

Sending a prompt and streaming output

import { Agent, OutputFormatter } from "clanka"
import { Effect, Stream } from "effect"

Effect.gen(function* () {
  const agent = yield* Agent.Agent

  const output = yield* agent.send({
    prompt: "List all TypeScript files and count the lines in each.",
  })

  yield* output.pipe(
    OutputFormatter.pretty(),
    Stream.runForEachArray((chunk) => {
      for (const out of chunk) process.stdout.write(out)
      return Effect.void
    }),
  )
}).pipe(Effect.scoped, Effect.provide([AgentLayer, ModelLayer]))

Steering a running agent

Call agent.steer from a concurrent fiber to inject follow-up instructions mid-turn:
Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stream = yield* agent.send({ prompt: "Analyse the codebase." })

  // In a concurrent fiber, inject a mid-turn message
  yield* Effect.fork(
    Effect.delay(agent.steer("Focus only on the src/ directory."), "2 seconds"),
  )

  yield* Stream.runDrain(stream)
}).pipe(Effect.scoped)
