
In the default setup, both the language model calls and the tool execution happen in the same process. Clanka also supports a split architecture where the agent (language model) runs on one machine and the AgentExecutor (tool runner) runs on a separate server that has access to the codebase. This is useful when the codebase lives on a remote development server and you want to keep the AI API calls local.

Architecture overview

```
┌──────────────────────────┐     RPC over HTTP/WS       ┌───────────────────────────────┐
│  Local process           │  ◄──────────────────────►  │  Remote server                │
│                          │                            │                               │
│  Agent (LLM calls)       │                            │  AgentExecutor (tool runner)  │
│  layerRpc ──────────────►│                            │◄── layerRpcServer             │
│                          │                            │                               │
│  RpcClient.Protocol      │                            │  RpcServer.Protocol           │
└──────────────────────────┘                            └───────────────────────────────┘
```
The RPC protocol is defined by the Rpcs group in AgentExecutor.ts and uses Effect’s effect/unstable/rpc system.

The Rpcs protocol

AgentExecutor.Rpcs is an RpcGroup that declares four endpoints:
| Endpoint | Payload | Success | Notes |
| --- | --- | --- | --- |
| `capabilities()` | – | `Capabilities` | Returns the tool TypeScript declarations and AGENTS.md content |
| `execute({ script })` | `{ script: string }` | `ExecuteOutput` stream | Streams text output, task-complete signals, and subagent requests |
| `subagentOutput({ id, output })` | `{ id: number, output: string }` | `void` | Returns output for a pending subagent by ID |
| `executeUnsafe({ tool, params })` | `{ tool: string, params: Json }` | `Json` | Calls a single tool directly by name |
ExecuteOutput is a tagged union:
```typescript
export const ExecuteOutput = Schema.TaggedUnion({
  Text:         { text: Schema.String },
  TaskComplete: { summary: Schema.String },
  Subagent:     { id: Schema.Finite, prompt: Schema.String },
})
```
Subagent output is correlated across the RPC boundary by the numeric id that the server assigns to each subagent request. The client forks a fiber for each Subagent message, runs the sub-agent locally, then sends the result back via subagentOutput.
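To illustrate how a consumer of the `execute` stream dispatches on these messages, here is a minimal sketch. It models the three variants as a plain TypeScript discriminated union (the tag and field names mirror the schema above; the `describe` function and its return strings are purely illustrative, not Clanka's API):

```typescript
// Illustrative stand-ins for the decoded ExecuteOutput messages.
type ExecuteOutput =
  | { _tag: "Text"; text: string }
  | { _tag: "TaskComplete"; summary: string }
  | { _tag: "Subagent"; id: number; prompt: string }

// Dispatch on the tag. TypeScript narrows `msg` in each branch,
// and the exhaustive switch means no default case is needed.
function describe(msg: ExecuteOutput): string {
  switch (msg._tag) {
    case "Text":
      return `print: ${msg.text}`
    case "TaskComplete":
      return `done: ${msg.summary}`
    case "Subagent":
      return `spawn subagent #${msg.id}: ${msg.prompt}`
  }
}
```

In the real client, the `Text` branch forwards output to the caller, `TaskComplete` ends the run, and `Subagent` forks a fiber as described above.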

Setting up the RPC server

On the remote machine, use AgentExecutor.layerRpcServer. It wraps a local executor in the Rpcs handler and serves it via RpcServer.Protocol.
```typescript
import * as AgentExecutor from "clanka/AgentExecutor"
import * as RpcServer from "effect/unstable/rpc/RpcServer"
import { NodeHttpServer, NodeServices } from "@effect/platform-node"
import * as Layer from "effect/Layer"
import { createServer } from "node:http"

const ServerLayer = AgentExecutor.layerRpcServer({
  directory: "/home/user/myproject",
}).pipe(
  // Provide the RpcServer transport — HTTP in this example
  Layer.provide(
    NodeHttpServer.layer(createServer, { port: 4000 }),
  ),
  Layer.provide(NodeServices.layer),
  // ...other platform layers (FileSystem, Path, ChildProcessSpawner, HttpClient)
)
```
layerRpcServer returns Layer<never, ...> — it has no output services. All it does is start the RPC server.

Connecting the client

On the local machine, use AgentExecutor.layerRpc together with RpcClient.Protocol to connect to the remote server.
```typescript
import * as AgentExecutor from "clanka/AgentExecutor"
import * as Agent from "clanka/Agent"
import * as RpcClient from "effect/unstable/rpc/RpcClient"
import { NodeHttpClient, NodeServices } from "@effect/platform-node"
import * as Layer from "effect/Layer"

const RemoteExecutor = AgentExecutor.layerRpc.pipe(
  Layer.provide(
    // Wire up the RpcClient transport that speaks to your server
    RpcClient.layerHttp("http://remote-server:4000"),
  ),
  Layer.provide(NodeHttpClient.layerUndici),
)

const AgentLayer = Agent.layer.pipe(
  Layer.provide(RemoteExecutor),
  Layer.provide(NodeServices.layer),
  // ...model layers
)
```
layerRpc is a Layer<AgentExecutor, never, RpcClient.Protocol>, so it fulfills the same AgentExecutor requirement that Agent.layer expects. Swap it in place of AgentExecutor.layerLocal and the rest of your agent code remains unchanged.

Subagent output correlation

When the remote executor needs to spawn a sub-agent, it emits a Subagent message on the execute stream with a unique id. The client picks up that message and:
  1. Forks a fiber that runs the sub-agent locally (using the same language model layer).
  2. Sends the resulting output back to the server with client.subagentOutput({ id, output }).
  3. The server resumes the suspended handler for that subagent ID.
This happens transparently inside makeRpc in AgentExecutor.ts:
```typescript
case "Subagent": {
  const id = part.id
  return pipe(
    opts.onSubagent(part.prompt),
    Effect.flatMap((output) =>
      client.subagentOutput({ id, output }),
    ),
    Effect.forkIn(scope),
  )
}
```
Each execute call gets its own scope for subagent fibers. Fibers are automatically cleaned up when the execute stream ends.
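The correlation pattern itself is independent of Effect: the server keeps pending continuations keyed by subagent id, and the client resumes them when output arrives. A minimal sketch in plain TypeScript, using a `Map` and promises (all names here are illustrative, not Clanka's internals):

```typescript
// "Server" side: continuations waiting for subagent output, keyed by id.
const pending = new Map<number, (output: string) => void>()
let nextId = 0

// Emit a Subagent request: assign a fresh id and suspend on a promise
// that subagentOutput will later resolve.
function requestSubagent(prompt: string): { id: number; result: Promise<string> } {
  const id = nextId++
  const result = new Promise<string>((resolve) => pending.set(id, resolve))
  return { id, result }
}

// "Client" side: deliver the locally computed output back by id,
// resuming exactly one suspended handler.
function subagentOutput(id: number, output: string): void {
  const resume = pending.get(id)
  if (resume === undefined) throw new Error(`no pending subagent ${id}`)
  pending.delete(id)
  resume(output)
}
```

In Clanka the same idea is expressed with fibers and a scope instead of raw promises, which is what gives the automatic cleanup when the `execute` stream ends.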

Choosing between local and RPC mode

Use the default local mode when:
  • The codebase and the LLM API calls are on the same machine
  • You want the simplest possible setup
  • You are prototyping or running tests

Use RPC mode when:
  • The codebase lives on a remote server (e.g. a cloud dev environment)
  • You want to keep LLM API keys local and never expose them to the server
  • You need to run multiple agents against the same remote executor
The RPC protocol itself does not include authentication. Before exposing the server to a network you do not control, add authentication at the transport layer, for example mutual TLS or an authenticated reverse proxy.
