The Codex provider connects Clanka to OpenAI's chatgpt.com backend API (https://chatgpt.com/backend-api/codex). It handles OAuth device-flow authentication automatically, persists tokens to disk via `KeyValueStore`, and exposes two model factories: one for HTTP streaming and one for WebSocket streaming.
## Authentication
`CodexAuth` manages the full OpenAI device-flow lifecycle:
- On first use, it opens a device-authorization request and calls your `DeviceCodeHandler` with the verification URL (https://auth.openai.com/codex/device) and user code (a sketch of such a handler follows this list).
- It polls for authorization, then exchanges the authorization code for an access token.
- The token (including refresh token and account ID) is persisted to `KeyValueStore` under the key prefix `codex.auth/`.
- On subsequent runs, the stored token is loaded from disk. If it is expired, the refresh token is used automatically. If the refresh fails, the device flow runs again.
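As a rough illustration, a handler might simply surface the verification details to the user. The handler shape below is an assumption for illustration, not Clanka's actual `DeviceCodeHandler` signature:

```ts
import { Console } from "effect"

// Hypothetical handler: print the verification URL and user code so the
// user can complete the device flow in a browser.
const deviceCodeHandler = (info: { verificationUrl: string; userCode: string }) =>
  Console.log(`Visit ${info.verificationUrl} and enter code: ${info.userCode}`)
```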
The default storage location (when using `KeyValueStore.layerFileSystem`) is `~/.config/clanka`.
`CodexAuth` uses a semaphore internally, so concurrent requests never trigger multiple simultaneous device flows or token refreshes.

## Setting up the client layer
`Codex.layerClient` wires together the `OpenAiClient` and `CodexAuth`. Provide it with HTTP client, file system, and (for WebSocket mode) WebSocket constructor services:
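A minimal sketch of that wiring on Node, assuming a `clanka` import path (the package name is an assumption):

```ts
import { FetchHttpClient } from "@effect/platform"
import { NodeFileSystem, NodeSocket } from "@effect/platform-node"
import { Layer } from "effect"
import { Codex } from "clanka" // import path assumed

const CodexClientLive = Codex.layerClient.pipe(
  Layer.provide(FetchHttpClient.layer), // HTTP client
  Layer.provide(NodeFileSystem.layer), // file system for token persistence
  Layer.provide(NodeSocket.layerWebSocketConstructorWS) // WebSocket constructor (WebSocket mode only)
)
```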
`NodeSocket.layerWebSocketConstructorWS` is only required when using `Codex.modelWebSocket`. You can omit it if you only use `Codex.model`.
## Model factories
Clanka exposes two model factory functions. Both accept the same model name string and options object.

### Codex.model — HTTP streaming
`Codex.model` streams responses over HTTP and does not require a `WebSocketConstructor`.
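For example (the model name here is illustrative):

```ts
// HTTP-streaming model; works with only the HTTP client and file system provided.
const HttpModel = Codex.model("gpt-5-codex", {
  reasoning: { effort: "medium" }
})
```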
### Codex.modelWebSocket — WebSocket streaming (recommended)
`Codex.modelWebSocket` streams responses over a WebSocket connection and requires `NodeSocket.layerWebSocketConstructorWS` (or any other `WebSocketConstructor` layer) in the dependency graph.
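For example (again with an illustrative model name):

```ts
// WebSocket-streaming model; NodeSocket.layerWebSocketConstructorWS must be
// somewhere in the dependency graph (see the client layer above).
const WsModel = Codex.modelWebSocket("gpt-5-codex", {
  reasoning: { effort: "high", summary: "auto" }
})
```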
## Model options
Both factory functions accept an optional `options` object. All fields are optional.
| Field | Type | Default | Description |
|---|---|---|---|
| `reasoning.effort` | `"low" \| "medium" \| "high"` | `"medium"` | Controls how much reasoning the model performs before generating a response. |
| `reasoning.summary` | `string` | `"auto"` | Controls whether a reasoning summary is included in the response. |
Any remaining fields of `OpenAiLanguageModel.Config["Service"]` are passed through to the underlying `@effect/ai-openai` layer. The `store` field is always set to `false` by Clanka.
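As an illustration of the pass-through (whether `temperature` actually exists on the config type is an assumption here):

```ts
// Fields beyond "reasoning" flow through to @effect/ai-openai unchanged,
// except "store", which Clanka always forces to false.
const TunedModel = Codex.model("gpt-5-codex", {
  reasoning: { effort: "low" },
  temperature: 0.2 // assumed pass-through field
})
```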
The Codex API expects system prompts in the `instructions` field of the request body, not in the system message role. Clanka handles this automatically via `AgentModelConfig`'s `systemPromptTransform` — you do not need to do anything special.

## Creating a model layer
Call the factory and pipe the result through `Layer.provide` (or `Layer.provideMerge`) with your `ModelServices`:
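A sketch, assuming `ModelServices` bundles the client layer from the setup step above:

```ts
const CodexModelLive = Codex.modelWebSocket("gpt-5-codex", {
  reasoning: { effort: "medium" }
}).pipe(
  // provideMerge also re-exports the client services to downstream layers
  Layer.provideMerge(ModelServices)
)
```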
The result is a `Layer` that provides `LanguageModel.LanguageModel`, `Model.ProviderName`, and `Model.ModelName` — the three services consumed by `Agent.send`.
## Sub-agent model
For tasks where the main agent spawns sub-agents, you can designate a lighter (and cheaper) model as the sub-agent executor using `Agent.layerSubagentModel`:
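A sketch; the exact call shape of `Agent.layerSubagentModel` and the model name are assumptions:

```ts
// Hypothetical: run sub-agents on a cheaper, lower-effort model.
const SubagentModelLive = Agent.layerSubagentModel(
  Codex.model("gpt-5-mini", { reasoning: { effort: "low" } })
)
```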
`Agent.layerSubagentModel` captures the current Effect context and wraps the layer so sub-agents inherit all services from the parent scope.
## Complete example
The following is the Codex setup from `examples/cli.ts`:
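A reconstruction assembled from the pieces above; the `clanka` import path, model name, and surrounding CLI code are assumptions:

```ts
import { FetchHttpClient } from "@effect/platform"
import { NodeFileSystem, NodeSocket } from "@effect/platform-node"
import { Layer } from "effect"
import { Codex } from "clanka" // import path assumed

// Client layer: HTTP client + file system (token persistence) + WebSocket constructor.
const CodexClient = Codex.layerClient.pipe(
  Layer.provide(FetchHttpClient.layer),
  Layer.provide(NodeFileSystem.layer),
  Layer.provide(NodeSocket.layerWebSocketConstructorWS)
)

// WebSocket-streaming model layer for the main agent.
const CodexModel = Codex.modelWebSocket("gpt-5-codex", {
  reasoning: { effort: "medium" }
}).pipe(Layer.provideMerge(CodexClient))
```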