The Copilot provider connects Clanka to the GitHub Copilot API (https://api.githubcopilot.com) using GitHub's OAuth device flow. No OpenAI account or API key is needed; authentication is tied to your GitHub account and an active Copilot subscription.

Authentication

GithubCopilotAuth implements the GitHub OAuth device authorization flow:
  1. On first use, Clanka requests a device code from GitHub and calls your DeviceCodeHandler with the verification URL (https://github.com/login/device) and the user code.
  2. The user opens the URL in a browser, enters the code, and approves access.
  3. Clanka polls GitHub until authorization is granted, then stores the access token locally.
  4. The token is persisted in KeyValueStore under the prefix github-copilot.auth/. On the next run, the stored token is loaded from disk.
The default storage path (when using KeyValueStore.layerFileSystem) is ~/.config/clanka.
Unlike the Codex provider's credentials, GitHub Copilot access tokens come without a refresh token. If the stored token has expired, Clanka automatically re-runs the device flow.
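
If you need to force a fresh device flow, for example after revoking Clanka's access on GitHub, deleting the persisted token is enough. Below is a minimal sketch, assuming the KeyValueStore service exposes a remove method (as the @effect/platform version does); the exact key name under the github-copilot.auth/ prefix is a hypothetical stand-in:
import { Effect } from "effect"
import { KeyValueStore } from "effect/unstable/persistence"

// Hedged sketch: force a fresh device flow by deleting the persisted token.
// "github-copilot.auth/token" is a hypothetical key name; only the
// "github-copilot.auth/" prefix is documented above.
const resetCopilotAuth = Effect.gen(function* () {
  const store = yield* KeyValueStore.KeyValueStore
  yield* store.remove("github-copilot.auth/token")
})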

Displaying the verification prompt

DeviceCodeHandler.layerConsole is a built-in layer that prints the verification URL and user code to stdout:
import { DeviceCodeHandler } from "clanka"

// Prints: "Open https://github.com/login/device and enter code XXXX-XXXX."
DeviceCodeHandler.layerConsole
You can replace layerConsole with a custom DeviceCodeHandler layer to display the prompt differently, for example by opening the browser automatically or showing a UI dialog; a sketch follows the interface below. The DeviceCodeHandler service interface is:
interface DeviceCodeHandler {
  onCode(options: {
    readonly verifyUrl: string
    readonly deviceCode: string
  }): Effect.Effect<void>
}
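
As an illustration, here is a hedged sketch of a handler that opens the verification URL automatically. It assumes DeviceCodeHandler.DeviceCodeHandler is the service tag and can be provided with Layer.succeed, like any Effect service; the browser-opening logic is plain Node and not part of Clanka:
import { DeviceCodeHandler } from "clanka"
import { Effect, Layer } from "effect"
import { spawn } from "node:child_process"

// Open a URL with the platform's opener (Windows omitted for brevity).
const openInBrowser = (url: string) =>
  Effect.sync(() => {
    const opener = process.platform === "darwin" ? "open" : "xdg-open"
    spawn(opener, [url], { stdio: "ignore", detached: true }).unref()
  })

// Hedged sketch: assumes DeviceCodeHandler.DeviceCodeHandler is the tag.
export const DeviceCodeHandlerBrowser = Layer.succeed(
  DeviceCodeHandler.DeviceCodeHandler,
  {
    onCode: ({ verifyUrl, deviceCode }) =>
      Effect.zipRight(
        Effect.log(`Enter code ${deviceCode} at ${verifyUrl}`),
        openInBrowser(verifyUrl),
      ),
  },
)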

Setting up the client layer

Copilot.layerClient wires together the OpenAiClient (from @effect/ai-openai-compat) and GithubCopilotAuth. Provide it with HTTP client and file system services, plus a DeviceCodeHandler:
import { Copilot, DeviceCodeHandler } from "clanka"
import { Layer } from "effect"
import { NodeHttpClient, NodeServices } from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const ModelServices = Copilot.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(
      NodePath.join(process.env.HOME!, ".config", "clanka"),
    ),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.provide(DeviceCodeHandler.layerConsole),
)
The Copilot provider uses @effect/ai-openai-compat for API compatibility with the GitHub Copilot endpoint. This package provides the same OpenAiClient and OpenAiLanguageModel interfaces as @effect/ai-openai, so the programming model is identical.
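
In practice this means code written against @effect/ai-openai should port over by changing only the import. A minimal sketch, assuming the compat package exposes the same OpenAiLanguageModel.model constructor:
import { OpenAiLanguageModel } from "@effect/ai-openai-compat"

// Hedged sketch: build a language model directly from the compat package,
// bypassing Copilot.model. Assumes the constructor matches @effect/ai-openai.
const DirectGpt4o = OpenAiLanguageModel.model("gpt-4o")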

Model factory

Copilot.model creates a Model.Model value from a model name string and optional options:
export const model = (
  model: string,
  options?: OpenAiLanguageModel.Config["Service"] & typeof AgentModelConfig.Service,
): Model.Model<"openai", LanguageModel.LanguageModel, OpenAiClient.OpenAiClient>
GitHub Copilot exposes whatever models GitHub has enabled for your subscription. Any model string is accepted — Clanka passes it directly to the API. Examples of model names observed in practice include claude-opus-4.6, claude-sonnet-4.5, gpt-4o, and others depending on your Copilot plan.

Model options

The options object accepts any field from OpenAiLanguageModel.Config["Service"]. Provider-specific parameters (such as thinking for Claude models) can be included and are forwarded as-is to the API.
thinking.thinking_budget: For Claude models, sets the extended thinking token budget.
Other OpenAI-compatible fields: Forwarded directly to the GitHub Copilot API.

Creating a model layer

import { Copilot } from "clanka"
import { Layer } from "effect"

export const Opus = Copilot.model("claude-opus-4.6", {
  thinking: { thinking_budget: 4000 },
}).pipe(Layer.provideMerge(ModelServices))
Layer.provideMerge is used here (instead of Layer.provide) so that ModelServices remains available in the output layer — useful when you want to reuse the same ModelServices for other models or services in the same program.
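
For example, a second model can be built from the same ModelServices value; because layers are memoized by reference, the underlying client and auth services are constructed once when both models are provided together:
import { Copilot } from "clanka"
import { Layer } from "effect"

// Reuses the ModelServices layer defined above; claude-sonnet-4.5 is one of
// the model names listed earlier.
export const Sonnet = Copilot.model("claude-sonnet-4.5").pipe(
  Layer.provideMerge(ModelServices),
)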

Complete example

The following shows the Copilot model setup from examples/cli.ts, which also wires a Codex primary model (hence the Codex and NodeSocket imports), demonstrating that both providers can be used in the same program:
import { Agent, Codex, Copilot, DeviceCodeHandler } from "clanka"
import { Effect, Layer, Stream } from "effect"
import { NodeHttpClient, NodeRuntime, NodeServices, NodeSocket } from "@effect/platform-node"
import { KeyValueStore } from "effect/unstable/persistence"
import * as NodePath from "node:path"

const XDG_CONFIG_HOME =
  process.env.XDG_CONFIG_HOME ||
  NodePath.join(process.env.HOME || "", ".config")

// Copilot client services (note: no NodeSocket needed — HTTP streaming only)
const CopilotServices = Copilot.layerClient.pipe(
  Layer.provide(
    KeyValueStore.layerFileSystem(NodePath.join(XDG_CONFIG_HOME, "clanka")),
  ),
  Layer.provideMerge(NodeServices.layer),
  Layer.provideMerge(NodeHttpClient.layerUndici),
  Layer.provide(DeviceCodeHandler.layerConsole),
)

export const Opus = Copilot.model("claude-opus-4.6", {
  thinking: { thinking_budget: 4000 },
}).pipe(Layer.provideMerge(CopilotServices))

const AgentLayer = Agent.layerLocal({
  directory: process.cwd(),
}).pipe(
  Layer.provide(NodeServices.layer),
  Layer.provide(NodeHttpClient.layerUndici),
)

Effect.gen(function* () {
  const agent = yield* Agent.Agent
  const output = yield* agent.send({ prompt: process.argv.slice(2).join(" ") })
  // consume output ...
}).pipe(
  Effect.scoped,
  Effect.provide([AgentLayer, Opus]),
  NodeRuntime.runMain,
)
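
The consumption step is elided in the example. As a hedged sketch, assuming agent.send returns a Stream of events (the Stream import above suggests this, but the event type is not documented here):
import { Console, Stream } from "effect"

// Hedged sketch of the "consume output" step: the event type is an assumption.
const consume = <A>(output: Stream.Stream<A>) =>
  Stream.runForEach(output, (event) => Console.log(event))

// In the program above: yield* consume(output)
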
You can mix Copilot and Codex models in the same program. For example, use Opus as the primary model and a Codex model as the sub-agent model (or vice versa) by providing both layers and using Agent.layerSubagentModel.
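
A hedged sketch of that wiring, assuming Codex.model mirrors Copilot.model and that Agent.layerSubagentModel accepts a model layer; both signatures, the placeholder model name, and CodexServices are assumptions (see the Codex provider docs for the real client setup):
import { Agent, Codex } from "clanka"
import { Layer } from "effect"

// Hypothetical placeholder for a Codex client layer built per the Codex docs.
declare const CodexServices: Layer.Layer<never>

// "gpt-5-codex" is a placeholder model name, not taken from the docs above.
const SubagentModel = Agent.layerSubagentModel(
  Codex.model("gpt-5-codex").pipe(Layer.provideMerge(CodexServices)),
)

// Provide alongside the primary model:
// Effect.provide([AgentLayer, Opus, SubagentModel])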
