Clanka supports two AI providers out of the box: OpenAI Codex and GitHub Copilot. Both expose an OpenAI-compatible chat completions API, so they share the same `Model.make` pattern and integrate with the `Agent.layerLocal` composition pipeline in identical ways. The main differences are how they authenticate and which transport options they offer.
## Provider comparison
### OpenAI Codex

Connects to the chatgpt.com backend API. Requires an OpenAI account and authenticates via a device-flow token that is persisted locally. Supports both standard HTTP streaming and a WebSocket mode for lower-latency streaming.

### GitHub Copilot

Connects to api.githubcopilot.com. Uses GitHub's OAuth device flow, so no OpenAI account is required. On first run, Clanka prints a verification URL and code; after authorization, the token is cached locally.

| | OpenAI Codex | GitHub Copilot |
|---|---|---|
| Account required | OpenAI account | GitHub account |
| Auth mechanism | OpenAI device flow | GitHub OAuth device flow |
| Token persistence | `~/.config/clanka` (KeyValueStore) | `~/.config/clanka` (KeyValueStore) |
| Transport | HTTP streaming or WebSocket | HTTP streaming only |
| Package | `@effect/ai-openai` | `@effect/ai-openai-compat` |
| Sub-agent support | Yes (`Codex.model`) | Yes (`Copilot.model`) |
## How providers are structured
Each provider module exposes two things: a client layer that handles authentication and HTTP wiring, and one or more model factories that return `Model.Model` objects.

The `Model.make` call inside each factory stores the provider tag (`"openai"`), the model name string, and the internal layer that satisfies `LanguageModel.LanguageModel`. You never call `Model.make` directly; use `Codex.model`, `Codex.modelWebSocket`, or `Copilot.model` instead.
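
For illustration, a minimal sketch of calling the factories. The import path and model name strings here are placeholders, not Clanka's verbatim API:

```ts
import { Codex, Copilot } from "clanka" // hypothetical import path

// Each factory calls Model.make internally, capturing the provider tag,
// the model name, and a layer satisfying LanguageModel.LanguageModel.
const codexHttp = Codex.model("gpt-5")           // HTTP streaming
const codexWs = Codex.modelWebSocket("gpt-5")    // WebSocket transport
const copilot = Copilot.model("claude-sonnet-4") // via Copilot subscription
```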
## Composing providers with `Agent.layerLocal`
Providers are Effect Layers. You pass a model layer to `Effect.provide` alongside `Agent.layerLocal` to wire everything together:
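
A sketch of that wiring. Only `Agent.layerLocal` and the model factories come from the docs above; the agent service accessor, its `run` method, and the model name are assumptions for illustration:

```ts
import { Effect } from "effect"
import { Agent, Codex } from "clanka" // hypothetical import path

// The agent resolves its LanguageModel dependency from the model layer.
const program = Effect.gen(function* () {
  const agent = yield* Agent.Agent                      // assumed service accessor
  return yield* agent.run("Summarize this repository")  // assumed method
})

program.pipe(
  Effect.provide(Agent.layerLocal),
  Effect.provide(Codex.model("gpt-5")), // or Copilot.model(...) / Codex.modelWebSocket(...)
  Effect.runPromise,
)
```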
## Choosing a provider
- Use Codex if you have an OpenAI account and want the lowest streaming latency via WebSocket mode.
- Use Copilot if you want to use models exposed through your GitHub Copilot subscription (including Claude models) without an OpenAI account; see the sketch below this list.
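
Either way, the choice is a one-line swap at composition time. A minimal sketch, assuming a hypothetical `CLANKA_PROVIDER` environment variable and placeholder model names:

```ts
import { Codex, Copilot } from "clanka" // hypothetical import path

// Select the provider layer at startup; CLANKA_PROVIDER is an assumed convention.
const modelLayer =
  process.env.CLANKA_PROVIDER === "copilot"
    ? Copilot.model("claude-sonnet-4") // GitHub account only
    : Codex.modelWebSocket("gpt-5")    // lowest-latency streaming
```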