

By default, Clanka operates as a task-completion agent: it keeps running until it calls taskComplete with a final summary. Conversation mode changes that behaviour so that the agent ends its turn as soon as it produces a reply containing no tool calls. This makes it natural to build interactive REPLs and chat-style interfaces where the user sends messages and reads responses in a loop.

Task mode vs conversation mode

|                | Task mode (default)                | Conversation mode                |
| -------------- | ---------------------------------- | -------------------------------- |
| Turn ends when | Agent calls taskComplete           | Agent replies with no tool calls |
| System prompt  | Includes taskComplete instructions | Omits taskComplete instructions  |
| Use case       | Autonomous coding tasks            | Interactive chat / REPL          |
In conversation mode the taskComplete instruction is removed from the system prompt, so the agent is not prompted to call it. The turn ends naturally as soon as the model produces a text reply without invoking any tools.
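The two stopping conditions can be sketched as a single predicate. This is an illustrative model with made-up types, not the real Clanka API:

```typescript
// Illustrative reply shape (not Clanka's actual types).
type Reply = { text: string; toolCalls: Array<{ name: string }> }

// Sketch of the turn-ending rule in each mode.
function turnEnds(reply: Reply, conversationMode: boolean): boolean {
  if (conversationMode) {
    // Conversation mode: any reply with no tool calls ends the turn.
    return reply.toolCalls.length === 0
  }
  // Task mode: the turn ends only when the agent calls taskComplete.
  return reply.toolCalls.some((call) => call.name === "taskComplete")
}
```

In task mode a plain text reply keeps the loop running; in conversation mode it is the exit condition.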

Enabling conversation mode

Provide Agent.ConversationMode.layer(true) anywhere in your Effect layer stack.
import * as Agent from "clanka/Agent"
import * as Layer from "effect/Layer"

const ConversationLayer = Agent.ConversationMode.layer(true)

// Merge it with the rest of your agent layers:
const AppLayer = Layer.mergeAll(
  Agent.layerLocal({ directory: process.cwd() }),
  ConversationLayer,
  // ...model and platform layers
)
ConversationMode is a Context.Reference with a default value of false, so omitting the layer is equivalent to Agent.ConversationMode.layer(false).
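The default-value behaviour can be illustrated with a plain TypeScript sketch (not the Effect `Context.Reference` API itself): resolving the reference falls back to false when nothing provides it.

```typescript
// Plain sketch of Context.Reference semantics: a keyed value with a default.
const defaults = { conversationMode: false }

function resolveConversationMode(provided: Partial<typeof defaults>): boolean {
  // Omitting the layer is the same as providing the default (false).
  return provided.conversationMode ?? defaults.conversationMode
}
```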

Building an interactive REPL

The CLI built into Clanka uses Agent.ConversationMode together with Prompt.text to create a simple read-eval-print loop. Here is the pattern drawn directly from src/cli.ts:
import * as Effect from "effect/Effect"
import * as Prompt from "effect/unstable/cli/Prompt"
import * as Stream from "effect/Stream"
import * as OutputFormatter from "clanka/OutputFormatter"
import * as Stdio from "effect/Stdio"
import { pipe } from "effect/Function"
import * as Agent from "clanka/Agent"

Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const stdio = yield* Stdio.Stdio

  while (true) {
    const prompt = yield* Prompt.text({ message: ">" })

    yield* pipe(
      agent.send({ prompt }),
      Stream.unwrap,
      OutputFormatter.pretty({ outputTruncation: 20 }),
      Stream.run(stdio.stdout()),
    )

    console.log("")
  }
})
Each iteration of the loop:
  1. Reads a line of input from the terminal.
  2. Sends the prompt to the agent with agent.send.
  3. Streams formatted output to stdout until the turn ends.
  4. Returns control to the loop so the user can type the next message.

Maintaining history across turns

Agent exposes a history field typed as MutableRef.MutableRef<Prompt.Prompt>. It accumulates the full conversation — both user messages and assistant responses — across every agent.send call.
import * as Agent from "clanka/Agent"
import * as MutableRef from "effect/MutableRef"
import * as Prompt from "effect/unstable/ai/Prompt"

// Inside an Effect.gen block:
const agent = yield* Agent.Agent

// Read current history
const current = MutableRef.get(agent.history)

// Clear history between conversations
MutableRef.set(agent.history, Prompt.empty)
Clearing history resets the conversation context completely. The agent will not remember anything from previous turns after a reset.
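The accumulation behaviour can be modelled with an ordinary mutable reference, standing in for MutableRef<Prompt.Prompt>. The message shape and the echoing "model" are both stand-ins:

```typescript
// Illustrative message shape (not Clanka's Prompt type).
type Message = { role: "user" | "assistant"; content: string }

// A mutable reference holding the conversation so far.
const history: { current: Message[] } = { current: [] }

// Each send appends the user message and the assistant reply to history,
// so later turns see the full conversation.
function send(prompt: string): string {
  history.current.push({ role: "user", content: prompt })
  const reply = `echo: ${prompt}` // stand-in for the model's response
  history.current.push({ role: "assistant", content: reply })
  return reply
}
```

Resetting `history.current = []` plays the role of `MutableRef.set(agent.history, Prompt.empty)`: the next turn starts with no prior context.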

Steering the agent mid-turn

agent.steer(message) injects a user message into an ongoing turn without waiting for the current response to finish. This is useful for providing feedback or corrections while the agent is still thinking.
// Inside an Effect.gen block:
const agent = yield* Agent.Agent

// Send a prompt to start a turn
const stream = yield* agent.send({ prompt: "Refactor the auth module." })

// From another fiber, steer the agent if needed:
yield* agent.steer("Focus on the login flow only, skip logout.")
The steer effect completes once the message has been accepted. Interrupting the effect withdraws the message before it is delivered.
agent.steer is only meaningful while a turn is in progress. Calling it between turns has no effect on the next agent.send call.
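The steer semantics described above can be sketched as a small state machine. This is an illustrative model, not the real agent internals:

```typescript
// Sketch of steer semantics: a steer message is queued only while a turn is
// in progress; between turns it has no effect.
class Turn {
  private active = false
  private queue: string[] = []

  start(): void {
    this.active = true
  }

  finish(): void {
    // Undelivered steer messages do not carry over to the next turn.
    this.active = false
    this.queue = []
  }

  // Returns true when the message was accepted into the ongoing turn.
  steer(message: string): boolean {
    if (!this.active) return false // between turns: no effect
    this.queue.push(message)
    return true
  }

  pending(): string[] {
    return [...this.queue]
  }
}
```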

Choosing the right mode

Use conversation mode when:
  • Building a terminal REPL or chat interface
  • Users ask short questions and expect immediate replies
  • You want natural back-and-forth without requiring taskComplete
  • Prompts in the CLI are provided interactively (Option.isNone(prompt) in cli.ts)

Use task mode when:
  • Running autonomous coding tasks (examples/cli.ts passes a prompt via argv)
  • The agent needs to complete multi-step work before surfacing a result
  • You want a definitive final summary returned via taskComplete

Wiring conversation mode in the CLI

The Clanka CLI enables conversation mode only when no --prompt flag is provided, so non-interactive invocations still run in task mode:
Command.provide(({ prompt }) =>
  Agent.ConversationMode.layer(Option.isNone(prompt)),
)
You can apply the same pattern in your own application: check whether the user supplied a one-shot prompt at startup, and set conversation mode accordingly.
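Outside of Effect's Option type, the same decision reduces to a one-line check. This sketch assumes the startup prompt arrives as `string | undefined`:

```typescript
// Conversation mode is on exactly when no one-shot prompt was supplied,
// mirroring Option.isNone(prompt) in the Clanka CLI.
function conversationModeFor(prompt: string | undefined): boolean {
  return prompt === undefined // no --prompt flag → interactive REPL
}
```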
