Documentation Index
Fetch the complete documentation index at: https://mintlify.com/Effectful-Tech/clanka/llms.txt
Use this file to discover all available pages before exploring further.
The Agent service is the primary entry point for running the Clanka AI agent. It manages conversation history, streams structured output events, and supports real-time steering while the agent is executing.
TypeId
export type TypeId = "~clanka/Agent"
export const TypeId: TypeId = "~clanka/Agent"
Every Agent instance carries the brand [TypeId]: TypeId to enable nominal type-checking at runtime.
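The brand makes a runtime guard straightforward to write. A minimal, self-contained sketch (the isAgent helper is illustrative, not part of the published API; TypeId and its value are restated from the declarations above):

```typescript
// Illustrative only: a user-defined guard built on the documented brand.
type TypeId = "~clanka/Agent"
const TypeId: TypeId = "~clanka/Agent"

// Narrows an unknown value to something carrying the Agent brand.
const isAgent = (u: unknown): u is { readonly [TypeId]: TypeId } =>
  typeof u === "object" && u !== null && TypeId in u

// An object carrying the brand passes; anything else does not.
console.log(isAgent({ [TypeId]: TypeId })) // true
console.log(isAgent({})) // false
```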
Agent service
export const Agent = Context.Service<Agent>("clanka/Agent")
Agent is an Effect Context.Service tag. Provide it to your Effect runtime via one of the layer constructors below.
Interface
export interface Agent {
readonly [TypeId]: TypeId
readonly history: MutableRef.MutableRef<Prompt.Prompt>
send(options: {
readonly prompt: Prompt.RawInput
readonly system?:
| string
| ((options: {
readonly toolInstructions: string
readonly agentsMd: string
}) => string)
| undefined
}): Effect.Effect<
Stream.Stream<Output, AgentFinished | AiError.AiError>,
never,
| Scope.Scope
| LanguageModel.LanguageModel
| Model.ProviderName
| Model.ModelName
| SubagentModel
>
steer(message: string): Effect.Effect<void>
}
Properties
history
readonly history: MutableRef.MutableRef<Prompt.Prompt>
A mutable reference to the full conversation history. You can read the current value with MutableRef.get(agent.history) or update it directly when you need to reset or prepopulate the conversation.
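For example, inspecting and seeding the history before a run. A sketch, assuming clanka builds on @effect/ai's Prompt module (the import path and the message shape passed to Prompt.make are assumptions based on the options.prompt description under send()):

```typescript
import { Agent } from "clanka"
// Assumption: Prompt is re-exported from @effect/ai; adjust to your setup.
import { Prompt } from "@effect/ai"
import { Effect, MutableRef } from "effect"

const inspectAndSeed = Effect.gen(function* () {
  const agent = yield* Agent.Agent

  // Read the current conversation history.
  const current = MutableRef.get(agent.history)
  console.log(current)

  // Prepopulate the conversation before the next send().
  MutableRef.set(
    agent.history,
    Prompt.make([{ role: "user", content: "Earlier context..." }])
  )
})
```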
Methods
send()
Sends a prompt to the agent and returns an Effect that succeeds with a Stream of output events. The stream can fail with AgentFinished (a tagged error that carries the final summary) or with AiError.
send(options: {
readonly prompt: Prompt.RawInput
readonly system?:
| string
| ((options: {
readonly toolInstructions: string
readonly agentsMd: string
}) => string)
| undefined
}): Effect.Effect<
Stream.Stream<Output, AgentFinished | AiError.AiError>,
never,
| Scope.Scope
| LanguageModel.LanguageModel
| Model.ProviderName
| Model.ModelName
| SubagentModel
>
options.prompt
Prompt.RawInput
The prompt to send. Accepts any value accepted by Prompt.make() — a plain string, a user/assistant message object, or an array of message parts.
options.system
string | ((options: { toolInstructions: string, agentsMd: string }) => string)
Optional system instructions. Pass a string to append it to the default system prompt. Pass a function to replace the entire system section — the function receives toolInstructions (rendered tool TypeScript declarations) and agentsMd (contents of AGENTS.md if present).
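The two forms behave differently: a string appends to the default prompt, while a function replaces the whole system section. A minimal, self-contained sketch of a replacement builder (the buildSystem name and persona text are illustrative):

```typescript
// Illustrative builder for the function form of `system`. It receives
// the rendered tool declarations and AGENTS.md contents and returns
// the complete replacement system prompt.
const buildSystem = (options: {
  readonly toolInstructions: string
  readonly agentsMd: string
}): string =>
  [
    "You are a cautious release engineer.",
    options.toolInstructions,
    options.agentsMd
  ].join("\n\n")

// Passed as: agent.send({ prompt, system: buildSystem })
```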
steer()
Sends an in-flight steering message to the agent while it is executing. The Effect completes only after the message has been queued for the agent’s next turn. Interrupting the effect withdraws the message before it is sent.
steer(message: string): Effect.Effect<void>
message
string
A plain-text instruction or feedback string delivered to the agent at the start of its next turn.
Constructors
make
export const make: Effect.Effect<
Agent,
never,
Scope.Scope | AgentExecutor.AgentExecutor
>
Low-level constructor. Requires an AgentExecutor and a Scope in context. Prefer the layer helpers below for most use cases.
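A wiring sketch for the low-level path (MyExecutorLayer is a placeholder for whatever AgentExecutor layer you supply; Effect.scoped satisfies the Scope requirement):

```typescript
import { Agent } from "clanka"
import { Effect } from "effect"

const program = Effect.gen(function* () {
  // Requires AgentExecutor and Scope in context.
  const agent = yield* Agent.make
  const stream = yield* agent.send({ prompt: "Summarize README.md" })
  // ... consume the stream
}).pipe(Effect.scoped)

// Effect.provide(program, MyExecutorLayer)
```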
Layers
layer
export const layer: Layer.Layer<Agent, never, AgentExecutor.AgentExecutor>
Wraps make in a Layer. Use this when you already have an AgentExecutor layer composed separately.
layerLocal()
export const layerLocal: <Toolkit extends Toolkit.Any = never>(options: {
readonly directory: string
readonly tools?: Toolkit | undefined
}) => Layer.Layer<
Agent,
never,
| FileSystem.FileSystem
| Path.Path
| ChildProcessSpawner.ChildProcessSpawner
| HttpClient.HttpClient
| Exclude<
Toolkit extends Toolkit.Toolkit<infer T>
? Tool.HandlersFor<T> | Tool.HandlerServices<T[keyof T]>
: never,
CurrentDirectory | SubagentExecutor | TaskCompleter
>
>
Creates an Agent layer backed by a local in-process AgentExecutor. This is the most common setup for CLI tools and single-process servers.
options.directory
string
Absolute path to the working directory the agent will use for file operations and tool execution.
options.tools
Toolkit | undefined
Optional additional Toolkit of custom tools to expose to the agent alongside the built-in tools.
layerSubagentModel()
export const layerSubagentModel: <E, R>(
layer: Layer.Layer<
LanguageModel.LanguageModel | Model.ProviderName | Model.ModelName,
E,
R
>
) => Layer.Layer<SubagentModel, E, R>
Wraps a language model layer so that it is used specifically for sub-agent calls spawned by spawnSubagent. When not provided, sub-agents inherit the outer LanguageModel context.
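A wiring sketch: routing sub-agent calls to their own model while the main agent keeps the model from the outer context (CheapModelLayer is a placeholder for any layer providing LanguageModel, ProviderName, and ModelName):

```typescript
import { Agent } from "clanka"
import { Layer } from "effect"

// CheapModelLayer: any Layer.Layer<LanguageModel | ProviderName | ModelName>
const SubagentLayer = Agent.layerSubagentModel(CheapModelLayer)

const AppLayer = Agent.layerLocal({ directory: "/workspace" }).pipe(
  Layer.provide(SubagentLayer)
  // ... plus platform and main-model layers
)
```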
Configuration references
ConversationMode
export class ConversationMode extends Context.Reference<boolean>(
"clanka/Agent/ConversationMode",
{ defaultValue: () => false }
) {
static readonly layer: (enabled: boolean) => Layer.Layer<ConversationMode>
}
A Context.Reference that switches the agent into interactive conversation mode. When true, the agent stops after each response instead of looping until taskComplete is called. Default is false.
In conversation mode the agent does not require a taskComplete call — the turn ends as soon as the model stops producing tool calls.
TurnTimeout
export class TurnTimeout extends Context.Reference<Duration.Duration>(
"clanka/Agent/TurnTimeout",
{ defaultValue: () => Duration.minutes(5) }
) {
static readonly layer: (timeout: Duration.Input) => Layer.Layer<TurnTimeout>
}
Inactivity timeout before the current turn is retried. If the model produces no response part within this window the agent emits an ErrorRetry event and restarts the turn. Default is 5 minutes.
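For example, raising the timeout for slow providers (a configuration sketch; the string duration form follows the Duration inputs used elsewhere in these examples):

```typescript
import { Agent } from "clanka"
import { Layer } from "effect"

const AppLayer = Agent.layerLocal({ directory: "/workspace" }).pipe(
  Layer.provide(Agent.TurnTimeout.layer("10 minutes"))
  // ... plus platform layers
)
```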
AgentModelConfig
export class AgentModelConfig extends Context.Reference<{
readonly systemPromptTransform?:
| (<A, E, R>(
system: string,
effect: Effect.Effect<A, E, R>
) => Effect.Effect<A, E, R>)
| undefined
}>("clanka/Agent/SystemPromptTransform", {
defaultValue: () => ({})
}) {
static readonly layer: (
options: typeof AgentModelConfig.Service
) => Layer.Layer<AgentModelConfig>
}
Holds optional per-model configuration. The systemPromptTransform field, when set, receives the generated system string and wraps the model call — useful for providers that require the system prompt to be injected via a different mechanism (e.g., Anthropic’s system parameter vs. a system role message).
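A sketch of a transform that observes the generated system prompt without altering the model call (the logging is illustrative; a real transform might instead inject the prompt through a provider-specific mechanism):

```typescript
import { Agent } from "clanka"
import { Effect } from "effect"

const ConfigLayer = Agent.AgentModelConfig.layer({
  // Runs the wrapped model call after logging the system prompt length.
  systemPromptTransform: (system, effect) =>
    Effect.zipRight(
      Effect.log(`system prompt (${system.length} chars)`),
      effect
    )
})
```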
SubagentModel service
export class SubagentModel extends Context.Service<
SubagentModel,
Layer.Layer<
LanguageModel.LanguageModel | Model.ProviderName | Model.ModelName
>
>()("clanka/Agent/SubagentModel") {}
A Context.Service whose value is itself a Layer that provides LanguageModel, ProviderName, and ModelName for sub-agents. Build this service with layerSubagentModel().
Usage examples
Basic usage with a local executor
import { Agent } from "clanka"
import { NodeServices, NodeHttpClient, NodeRuntime } from "@effect/platform-node"
import { Effect, Stream, Layer } from "effect"
const program = Effect.gen(function* () {
const agent = yield* Agent.Agent
const stream = yield* agent.send({
prompt: "List all TypeScript files in the src directory.",
})
yield* stream.pipe(
Stream.runForEach((event) =>
Effect.sync(() => console.log(event))
)
)
}).pipe(Effect.scoped)
const AppLayer = Agent.layerLocal({ directory: process.cwd() }).pipe(
Layer.provide(NodeServices.layer),
Layer.provide(NodeHttpClient.layerUndici),
)
Effect.provide(program, AppLayer).pipe(NodeRuntime.runMain)
Steering the agent mid-run
const program = Effect.gen(function* () {
const agent = yield* Agent.Agent
const stream = yield* agent.send({
prompt: "Refactor the authentication module.",
})
// Fork the stream consumer
const fiber = yield* stream.pipe(Stream.runDrain, Effect.fork)
// Send steering message while the agent is working
yield* Effect.sleep("2 seconds")
yield* agent.steer("Focus only on the login flow for now.")
yield* Fiber.join(fiber) // Fiber is imported from "effect"
})
Custom system prompt
yield* agent.send({
prompt: "Review the codebase for security issues.",
system: ({ toolInstructions, agentsMd }) =>
`You are a security-focused code reviewer.\n\n${toolInstructions}\n\n${agentsMd}`,
})
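For comparison, the string form keeps the default system prompt and appends to it (a fragment in the same style as the example above):

```typescript
// String form: appended to the default system prompt (see options.system).
yield* agent.send({
  prompt: "Review the codebase for security issues.",
  system: "Prioritize injection vulnerabilities and secrets handling."
})
```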
Conversation mode
const ConversationLayer = Agent.ConversationMode.layer(true)
const AppLayer = Agent.layerLocal({ directory: "/workspace" }).pipe(
Layer.provide(ConversationLayer),
// ... platform layers
)