Clanka supports two AI providers out of the box: OpenAI Codex and GitHub Copilot. Both expose an OpenAI-compatible chat completions API, so they share the same Model.make pattern and integrate with the Agent.layerLocal composition pipeline in identical ways. The main differences are how they authenticate and which transport options they offer.

Provider comparison

OpenAI Codex

Connects to the chatgpt.com backend API. Requires an OpenAI account and authenticates via a device-flow token that is persisted locally. Supports standard HTTP streaming and a lower-latency WebSocket mode.
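
The transport is chosen purely by which factory you call; a minimal sketch, assuming the options argument shown later on this page is optional:

// Same model, two transports (the model name matches the example below)
const overHttp = Codex.model("gpt-5.3-codex")               // standard HTTP streaming
const overWebSocket = Codex.modelWebSocket("gpt-5.3-codex") // lower-latency WebSocket mode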

GitHub Copilot

Connects to api.githubcopilot.com. Uses GitHub's OAuth device flow, so no OpenAI account is required. On first run, it prints a verification URL and code; after authorization, the token is cached locally.
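
The factory call itself mirrors Codex's; a minimal sketch with an illustrative model name:

// On first run, the device flow prints a verification URL and code;
// the resulting token is cached in the configured KeyValueStore.
const copilotModel = Copilot.model("claude-sonnet-4")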

                    OpenAI Codex                        GitHub Copilot
Account required    OpenAI account                      GitHub account
Auth mechanism      OpenAI device flow                  GitHub OAuth device flow
Token persistence   ~/.config/clanka (KeyValueStore)    ~/.config/clanka (KeyValueStore)
Transport           HTTP streaming or WebSocket         HTTP streaming only
Package             @effect/ai-openai                   @effect/ai-openai-compat
Sub-agent support   Yes (Codex.model)                   Yes (Copilot.model)

How providers are structured

Each provider module exposes two things: a client layer that handles authentication and HTTP wiring, and one or more model factories that return Model.Model values.

// A Model.Model value bundles a provider tag, model name, and a Layer
const myModel: Model.Model<"openai", LanguageModel.LanguageModel, OpenAiClient.OpenAiClient> =
  Codex.modelWebSocket("gpt-5.3-codex", { reasoning: { effort: "high" } })

The Model.make call inside each factory stores the provider tag ("openai"), the model name string, and the internal layer that satisfies LanguageModel.LanguageModel. You never call Model.make directly; use Codex.model, Codex.modelWebSocket, or Copilot.model instead.
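
A factory result is used like any other Layer: provide its client requirement and the model is ready to hand to Effect.provide. A sketch, with options omitted as in the sketch above (Codex.layerClient itself still needs the services wired up in the next section):

// Providing the client satisfies the model layer's requirement
const httpModel = Codex.model("gpt-5.3-codex").pipe(
  Layer.provide(Codex.layerClient),
)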

Composing providers with Agent.layerLocal

Providers are Effect Layers. You pass a model layer to Effect.provide alongside Agent.layerLocal to wire everything together:

import { Agent, Codex } from "clanka"
import { Effect, Layer } from "effect"
import { NodeHttpClient, NodeServices, NodeSocket } from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

// 1. Build the shared client infrastructure layer
const ModelServices = Codex.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(process.env.HOME!, ".config", "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS),
)

// 2. Create a model layer by providing client services to a model factory
const MyModel = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high" },
}).pipe(Layer.provide(ModelServices))

// 3. Build the agent layer
const AgentLayer = Agent.layerLocal({
  directory: process.cwd(),
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)

// 4. Run, providing both the agent layer and model layer
Effect.gen(function* () {
  const agent = yield* Agent.Agent
  // ...
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, MyModel]),
)

Both providers store their tokens under ~/.config/clanka when you point KeyValueStore.layerFileSystem at that path, as in the example above. Tokens are refreshed automatically, so you only go through the device flow once per machine.
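
To persist tokens somewhere else, point the KeyValueStore at a different directory; a sketch with an illustrative path:

// Same wiring as above, different token directory (path is illustrative)
const CustomModelServices = Codex.layerClient.pipe(
  Layer.provide(KeyValueStore.layerFileSystem("/srv/clanka/tokens")),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.merge(NodeSocket.layerWebSocketConstructorWS),
)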

Choosing a provider

  • Use Codex if you have an OpenAI account and want the lowest streaming latency via the WebSocket transport.
  • Use Copilot if you want to use models exposed through your GitHub Copilot subscription (including Claude models) without an OpenAI account.
Both providers can be mixed in the same program: for example, Codex as the primary model and Copilot (or a lighter Codex variant) as the sub-agent model, as sketched below. See the Codex page and the Copilot page for complete setup examples.
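
A minimal sketch of that mix, reusing ModelServices from the example above and assuming Copilot exposes a Copilot.layerClient analogous to Codex.layerClient:

// Copilot client wired to its own infrastructure (the layer name is an assumption)
const CopilotServices = Copilot.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(process.env.HOME!, ".config", "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
)

// Codex as the primary model, Copilot as the sub-agent model (name illustrative)
const PrimaryModel = Codex.modelWebSocket("gpt-5.3-codex", {
  reasoning: { effort: "high" },
}).pipe(Layer.provide(ModelServices))

const SubAgentModel = Copilot.model("claude-sonnet-4").pipe(
  Layer.provide(CopilotServices),
)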
