The Codex provider connects Clanka to OpenAI’s chatgpt.com backend API (https://chatgpt.com/backend-api/codex). It handles OAuth device-flow authentication automatically, persists tokens to disk via KeyValueStore, and exposes two model factories for HTTP streaming and WebSocket streaming.

Authentication

CodexAuth manages the full OpenAI device-flow lifecycle:
  1. On first use, it opens a device-authorization request and calls your DeviceCodeHandler with the verification URL (https://auth.openai.com/codex/device) and user code.
  2. It polls for authorization, then exchanges the authorization code for an access token.
  3. The token (including refresh token and account ID) is persisted to KeyValueStore under the key prefix codex.auth/.
  4. On subsequent runs, the stored token is loaded from disk. If it is expired, the refresh token is used automatically. If the refresh fails, the device flow runs again.
The default storage path (when using KeyValueStore.layerFileSystem) is ~/.config/clanka.
CodexAuth uses a semaphore internally, so concurrent requests never trigger multiple simultaneous device flows or token refreshes.
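
For a CLI, a handler that prints the verification details is usually enough. The sketch below is illustrative only: the service tag (assumed here to be Codex.DeviceCodeHandler) and the shape of the handler's argument are assumptions inferred from the flow above, not confirmed API.

import { Codex } from "clanka"
import { Effect, Layer } from "effect"

// Assumption: DeviceCodeHandler is a service tag on the Codex module whose
// handler receives the verification URL and user code from step 1.
const DeviceCodeHandlerLive = Layer.succeed(Codex.DeviceCodeHandler, (info) =>
  Effect.sync(() => {
    console.log(`Open ${info.verificationUrl} and enter code: ${info.userCode}`)
  }),
)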

Setting up the client layer

Codex.layerClient wires together the OpenAiClient and CodexAuth. Provide it with HTTP client, file system, and (for WebSocket mode) WebSocket constructor services:
import { Codex } from "clanka"
import { Layer } from "effect"
import { NodeHttpClient, NodeServices, NodeSocket } from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const ModelServices = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(
      NodePath.join(process.env.HOME!, ".config", "clanka"),
    ),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS), // required for WebSocket mode
)

NodeSocket.layerWebSocketConstructorWS is only required when using Codex.modelWebSocket. You can omit it if you only use Codex.model.
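
If you only use Codex.model, the same wiring works without the WebSocket constructor layer; this is the block above minus the final Layer.merge:

const ModelServicesHttpOnly = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(
      NodePath.join(process.env.HOME!, ".config", "clanka"),
    ),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
)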

Model factories

Clanka exposes two model factory functions. Both accept the same arguments: a model name string and an optional options object.

Codex.model — HTTP streaming

export const model = (
  model: (string & {}) | OpenAiLanguageModel.Model,
  options?: OpenAiLanguageModel.Config["Service"] & typeof AgentModelConfig.Service,
): Model.Model<"openai", LanguageModel.LanguageModel, OpenAiClient.OpenAiClient>

Uses standard server-sent events over HTTPS. Choose this when you do not need WebSocket support or cannot provide a WebSocketConstructor.

Codex.modelWebSocket — WebSocket streaming

export const modelWebSocket = (
  model: (string & {}) | OpenAiLanguageModel.Model,
  options?: OpenAiLanguageModel.Config["Service"] & typeof AgentModelConfig.Service,
): Model.Model<
  "openai",
  LanguageModel.LanguageModel | OpenAiClient.OpenAiSocket | ResponseIdTracker.ResponseIdTracker,
  OpenAiClient.OpenAiClient | Socket.WebSocketConstructor
>

Opens a persistent WebSocket connection to the Codex backend. This reduces latency for the first token compared to starting a new HTTPS request per turn. It requires NodeSocket.layerWebSocketConstructorWS (or any other WebSocketConstructor layer) in the dependency graph.
Prefer Codex.modelWebSocket for interactive agents. Use Codex.model for batch workloads or environments where WebSocket connections are not available.
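
For example, the same model name can back either transport; only the required services differ:

import { Codex } from "clanka"

// Interactive agent: persistent WebSocket, lower first-token latency per turn.
const interactive = Codex.modelWebSocket("gpt-5.3-codex")

// Batch workload: plain HTTPS/SSE streaming, no WebSocketConstructor needed.
const batch = Codex.model("gpt-5.3-codex")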

Model options

Both factory functions accept an optional options object. All fields are optional.
reasoning.effort ("low" | "medium" | "high", default "medium")
  Controls how much reasoning the model performs before generating a response.

reasoning.summary (string, default "auto")
  Controls whether a reasoning summary is included in the response.

Other fields from OpenAiLanguageModel.Config["Service"] are passed through to the underlying @effect/ai-openai layer. The store field is always set to false by Clanka.
The Codex API expects system prompts in the instructions field of the request body, not in the system message role. Clanka handles this automatically via AgentModelConfig’s systemPromptTransform — you do not need to do anything special.
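
Putting the options together, a fully specified call might look like the sketch below. The reasoning fields are documented above; temperature is shown only as an example of a passed-through field and is an assumption about the underlying @effect/ai-openai config.

import { Codex } from "clanka"

const model = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high", summary: "auto" },
  // Assumption: temperature is among the OpenAiLanguageModel.Config fields
  // passed through to @effect/ai-openai.
  temperature: 0.2,
})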

Creating a model layer

Call the factory and pipe the result through Layer.provide (or Layer.provideMerge) with your ModelServices:
import { Codex } from "clanka"
import { Layer } from "effect"

const Gpt54 = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high" },
}).pipe(Layer.provide(ModelServices))

This produces a Layer that provides LanguageModel.LanguageModel, Model.ProviderName, and Model.ModelName — the three services consumed by Agent.send.

Sub-agent model

For tasks where the main agent spawns sub-agents, you can designate a lighter (and cheaper) model as the sub-agent executor using Agent.layerSubagentModel:
import { Agent, Codex } from "clanka"
import { Effect, Layer } from "effect"

const SubAgentModel = Codex.model("gpt-5.4", {
  reasoning: {
    effort: "low",
    summary: "auto",
  },
}).pipe(Layer.provide(ModelServices))

// Provide both the primary model and the sub-agent model
Effect.gen(function* () {
  // ...
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, Gpt54, Agent.layerSubagentModel(SubAgentModel)]),
)

Agent.layerSubagentModel captures the current Effect context and wraps the layer so sub-agents inherit all services from the parent scope.

Complete example

The following is the Codex setup from examples/cli.ts:
import { Agent, Codex } from "clanka"
import { Config, Effect, Layer, Stream } from "effect"
import { NodeHttpClient, NodeRuntime, NodeServices, NodeSocket } from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const XDG_CONFIG_HOME =
  process.env.XDG_CONFIG_HOME ||
  NodePath.join(process.env.HOME || "", ".config")

const ModelServices = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(XDG_CONFIG_HOME, "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS),
)

const Gpt54 = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high" },
}).pipe(Layer.provide(ModelServices))

const SubAgentModel = Codex.model("gpt-5.4", {
  reasoning: { effort: "low", summary: "auto" },
}).pipe(Layer.provide(ModelServices))

const AgentLayer = Agent.layerLocal({
  directory: process.cwd(),
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)

Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const output = yield* agent.send({ prompt: process.argv.slice(2).join(" ") })
  // consume output ...
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, Gpt54, Agent.layerSubagentModel(SubAgentModel)]),
  NodeRuntime.runMain,
)
