This guide walks you through building a minimal but fully functional Clanka agent. You will configure a provider, compose Effect layers, send a prompt, and stream formatted output to your terminal — all based on the patterns in the official examples/cli.ts.
Clanka requires Effect and several @effect/* peer packages. See the installation guide for exact version ranges and tsconfig requirements before continuing.
1. Install Clanka and peer dependencies

Install clanka and its required peer packages. The example uses pnpm, but npm and yarn work the same way.
pnpm add clanka effect @effect/ai-openai @effect/ai-openai-compat \
  @effect/platform-node @effect/sql-sqlite-node
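Since npm and yarn accept the same package list, the equivalent commands would be:

```shell
# npm
npm install clanka effect @effect/ai-openai @effect/ai-openai-compat \
  @effect/platform-node @effect/sql-sqlite-node

# yarn
yarn add clanka effect @effect/ai-openai @effect/ai-openai-compat \
  @effect/platform-node @effect/sql-sqlite-node
```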
2. Configure the Codex provider layer

The Codex provider authenticates through the ChatGPT web API. Build the shared model services layer by composing Codex.layerClient with the Node.js platform services and a persistent key-value store for credentials.
import { Layer } from "effect"
import { Codex } from "clanka"
import {
  NodeHttpClient,
  NodeServices,
  NodeSocket,
} from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const XDG_CONFIG_HOME =
  process.env.XDG_CONFIG_HOME ||
  NodePath.join(process.env.HOME || "", ".config")

const ModelServices = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(XDG_CONFIG_HOME, "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS),
)
3. Choose a model

Use Codex.modelWebSocket to create a model layer. The model layer provides the LanguageModel service that the agent consumes; satisfy its dependencies by providing ModelServices with Layer.provide.
const Gpt54 = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: {
    effort: "high",
  },
}).pipe(Layer.provide(ModelServices))
You can configure a separate, lighter model for subagents to keep costs down. Pass it to Agent.layerSubagentModel alongside your main model layer.
const SubAgentModel = Codex.model("gpt-5.4", {
  reasoning: {
    effort: "low",
    summary: "auto",
  },
}).pipe(Layer.provide(ModelServices))
4. Create the agent layer

Agent.layerLocal wires together the Agent service, the AgentExecutor that manages the sandbox, and the Node.js filesystem and process services. Point it at the directory the agent should work in — typically process.cwd().
import { Agent } from "clanka"
import { NodeHttpClient, NodeServices } from "@effect/platform-node"
import { Layer } from "effect"

const AgentLayer = Agent.layerLocal({
  directory: process.cwd(),
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)
5. Send a prompt and stream output

Use Effect.gen to access the Agent service, call agent.send() with your prompt, and pipe the resulting stream through OutputFormatter.pretty() to get human-readable terminal output.
import { Effect, Stream } from "effect"
import { Agent, OutputFormatter } from "clanka"
import { NodeRuntime } from "@effect/platform-node"

Effect.gen(function* () {
  const agent = yield* Agent.Agent

  const output = yield* agent.send({
    prompt: process.argv.slice(2).join(" "),
  })

  yield* output.pipe(
    OutputFormatter.pretty(),
    Stream.runForEachArray((chunk) => {
      for (const out of chunk) {
        process.stdout.write(out)
      }
      return Effect.void
    }),
  )
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, Gpt54, Agent.layerSubagentModel(SubAgentModel)]),
  NodeRuntime.runMain,
)
6. Run your agent

Compile the file with tsc or run it directly with a TypeScript runner, passing your task as command-line arguments.
# Pass a prompt directly (non-interactive)
npx tsx agent.ts "Refactor the fetchUser function to use async/await"
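If you prefer compiling first, a tsc-based invocation might look like the following. The flags shown are illustrative assumptions; in a real project the tsconfig from the installation guide should be the source of truth:

```shell
# Compile to JavaScript, then run the emitted file with Node
npx tsc agent.ts --module nodenext --moduleResolution nodenext --target es2022
node agent.js "Refactor the fetchUser function to use async/await"
```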
The agent prints reasoning steps, the generated JavaScript, sandbox output, and a final task-complete summary as it works through your request.

Complete example

Here is the full file combining all the steps above, mirroring examples/cli.ts:
import { Effect, Layer, Stream } from "effect"
import { Agent, Codex, OutputFormatter } from "clanka"
import {
  NodeHttpClient,
  NodeRuntime,
  NodeServices,
  NodeSocket,
} from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const XDG_CONFIG_HOME =
  process.env.XDG_CONFIG_HOME ||
  NodePath.join(process.env.HOME || "", ".config")

const ModelServices = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(XDG_CONFIG_HOME, "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS),
)

const Gpt54 = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high" },
}).pipe(Layer.provide(ModelServices))

const SubAgentModel = Codex.model("gpt-5.4", {
  reasoning: { effort: "low", summary: "auto" },
}).pipe(Layer.provide(ModelServices))

const AgentLayer = Agent.layerLocal({
  directory: process.cwd(),
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)

Effect.gen(function* () {
  const agent = yield* Agent.Agent

  const output = yield* agent.send({
    prompt: process.argv.slice(2).join(" "),
  })

  yield* output.pipe(
    OutputFormatter.pretty(),
    Stream.runForEachArray((chunk) => {
      for (const out of chunk) {
        process.stdout.write(out)
      }
      return Effect.void
    }),
  )
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, Gpt54, Agent.layerSubagentModel(SubAgentModel)]),
  NodeRuntime.runMain,
)

Next steps

Installation

Full peer dependency list, version ranges, and tsconfig requirements.

Executor concept

How the sandboxed VM executor works and what tools it exposes.

Conversation mode

Keep history across multiple turns for interactive chat-style agents.

Codex provider

Authentication flow, model options, and WebSocket vs. HTTP modes.
