In the default setup, both the language model calls and the tool execution happen in the same process. Clanka also supports a split architecture where the agent (language model) runs on one machine and the
AgentExecutor (tool runner) runs on a separate server that has access to the codebase. This is useful when the codebase lives on a remote development server and you want to keep the AI API calls local.
## Architecture overview
The RPC protocol is defined as the `Rpcs` group in `AgentExecutor.ts` and uses Effect's `effect/unstable/rpc` system.
## The Rpcs protocol
`AgentExecutor.Rpcs` is an `RpcGroup` that declares four endpoints:
| Endpoint | Payload | Success | Notes |
|---|---|---|---|
| `capabilities()` | — | `Capabilities` | Returns the tool TypeScript declarations and AGENTS.md content |
| `execute({ script })` | `{ script: string }` | `ExecuteOutput` stream | Streams text output, task-complete signals, and subagent requests |
| `subagentOutput({ id, output })` | `{ id: number, output: string }` | `void` | Returns output for a pending subagent by ID |
| `executeUnsafe({ tool, params })` | `{ tool: string, params: Json }` | `Json` | Calls a single tool directly by name |
`ExecuteOutput` is a tagged union covering the three kinds of message on the `execute` stream: plain text output, a task-complete signal, and a subagent request. The subagent variant carries an `id` that the server assigns to each subagent request. The client forks a fiber for each `Subagent` message, runs the sub-agent locally, then sends the result back via `subagentOutput`.
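To make the shape concrete, here is a sketch of what the group declaration might look like, using the `RpcGroup`/`Rpc` API from `@effect/rpc` (the `effect/unstable/rpc` module has a similar shape); the variant and field names here are guesses, not Clanka's actual definitions:

```ts
import { Rpc, RpcGroup } from "@effect/rpc"
import { Schema } from "effect"

// Illustrative stand-ins for Clanka's real schemas.
const ExecuteOutput = Schema.Union(
  Schema.TaggedStruct("Text", { text: Schema.String }),
  Schema.TaggedStruct("TaskComplete", {}),
  Schema.TaggedStruct("Subagent", { id: Schema.Number, prompt: Schema.String })
)

const Capabilities = Schema.Struct({
  toolDeclarations: Schema.String, // TypeScript declarations for the tools
  agentsMd: Schema.String // AGENTS.md content
})

export class Rpcs extends RpcGroup.make(
  Rpc.make("capabilities", { success: Capabilities }),
  Rpc.make("execute", {
    payload: { script: Schema.String },
    success: ExecuteOutput,
    stream: true // success values arrive as a stream, not a single reply
  }),
  Rpc.make("subagentOutput", {
    payload: { id: Schema.Number, output: Schema.String }
  }),
  Rpc.make("executeUnsafe", {
    payload: { tool: Schema.String, params: Schema.Unknown },
    success: Schema.Unknown
  })
) {}
```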
## Setting up the RPC server
On the remote machine, use `AgentExecutor.layerRpcServer`. It wraps a local executor in the `Rpcs` handler and serves it via `RpcServer.Protocol`.
`layerRpcServer` returns `Layer<never, ...>`: it has no output services. All it does is start the RPC server.
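For example, serving over HTTP with Node; the transport, serialization, and port are arbitrary choices for this sketch, and the `clanka` import path is assumed (imports shown from `@effect/rpc` and `@effect/platform-node`):

```ts
import { NodeHttpServer, NodeRuntime } from "@effect/platform-node"
import { RpcSerialization, RpcServer } from "@effect/rpc"
import { Layer } from "effect"
import { createServer } from "node:http"
import { AgentExecutor } from "clanka" // hypothetical import path

const Main = AgentExecutor.layerRpcServer.pipe(
  // Any RpcServer.Protocol works; here the server speaks NDJSON over HTTP.
  Layer.provide(RpcServer.layerProtocolHttp({ path: "/rpc" })),
  Layer.provide(RpcSerialization.layerNdjson),
  Layer.provide(NodeHttpServer.layer(createServer, { port: 3000 }))
)

// The layer has no output services, so launching it is all there is to do.
NodeRuntime.runMain(Layer.launch(Main))
```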
## Connecting the client
On the local machine, use `AgentExecutor.layerRpc` together with `RpcClient.Protocol` to connect to the remote server.
`layerRpc` is a `Layer<AgentExecutor, never, RpcClient.Protocol>`, so it fulfills the same `AgentExecutor` requirement that `Agent.layer` expects. Swap it in place of `AgentExecutor.layerLocal` and the rest of your agent code remains unchanged.
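A sketch of the wiring on the local machine, again assuming HTTP transport and hypothetical `clanka` exports; the transport and serialization must match the server's:

```ts
import { FetchHttpClient } from "@effect/platform"
import { RpcClient, RpcSerialization } from "@effect/rpc"
import { Layer } from "effect"
import { Agent, AgentExecutor } from "clanka" // hypothetical import path

// Point the RPC client at the remote executor.
const ProtocolLive = RpcClient.layerProtocolHttp({
  url: "http://remote-dev-box:3000/rpc"
}).pipe(Layer.provide([FetchHttpClient.layer, RpcSerialization.layerNdjson]))

// Drop-in replacement for AgentExecutor.layerLocal.
const ExecutorLive = AgentExecutor.layerRpc.pipe(Layer.provide(ProtocolLive))

// Agent.layer only sees an AgentExecutor; it cannot tell local from remote.
export const AgentLive = Agent.layer.pipe(Layer.provide(ExecutorLive))
```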
## Subagent output correlation
When the remote executor needs to spawn a sub-agent, it emits a `Subagent` message on the `execute` stream with a unique `id`. The client picks up that message and:
- Forks a fiber that runs the sub-agent locally (using the same language model layer).
- Sends the resulting output back to the server with `client.subagentOutput({ id, output })`.
- The server resumes the suspended handler for that subagent ID.
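A sketch of that client-side loop, assuming an already connected `client` and a local `runSubagent` helper; the shapes and variant names are illustrative, not Clanka's actual types:

```ts
import { Effect, Stream } from "effect"

// Illustrative stand-ins for the RPC client and the local sub-agent runner.
declare const client: {
  execute: (payload: { script: string }) => Stream.Stream<
    | { _tag: "Text"; text: string }
    | { _tag: "TaskComplete" }
    | { _tag: "Subagent"; id: number; prompt: string }
  >
  subagentOutput: (payload: { id: number; output: string }) => Effect.Effect<void>
}
declare const runSubagent: (prompt: string) => Effect.Effect<string>

export const run = (script: string) =>
  client.execute({ script }).pipe(
    Stream.runForEach((message) =>
      message._tag === "Subagent"
        ? // Fork so the main output stream keeps flowing while the
          // sub-agent runs locally, then report its result back by id.
          Effect.forkScoped(
            Effect.flatMap(runSubagent(message.prompt), (output) =>
              client.subagentOutput({ id: message.id, output })
            )
          )
        : Effect.void
    ),
    // Closing the scope interrupts any subagent fibers still running
    // once the execute stream ends.
    Effect.scoped
  )
```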
The server side of this correlation is implemented by `makeRpc` in `AgentExecutor.ts`.
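A hypothetical sketch of the mechanism it might use, with Effect's `Deferred` standing in for the suspended handler; the names and structure here are illustrative, not the real implementation:

```ts
import { Deferred, Effect, Ref } from "effect"

// Correlate subagent requests with the outputs the client sends back.
const makePendingSubagents = Effect.gen(function* () {
  const nextId = yield* Ref.make(0)
  const pending = yield* Ref.make(
    new Map<number, Deferred.Deferred<string>>()
  )

  // When the executor needs a sub-agent: allocate an id, register a
  // Deferred, and let the caller emit the Subagent message and await it.
  const request = Effect.gen(function* () {
    const id = yield* Ref.updateAndGet(nextId, (n) => n + 1)
    const deferred = yield* Deferred.make<string>()
    yield* Ref.update(pending, (m) => new Map(m).set(id, deferred))
    return { id, awaitOutput: Deferred.await(deferred) }
  })

  // Handler for the subagentOutput RPC: resume the matching request.
  const resolve = (id: number, output: string) =>
    Effect.flatMap(Ref.get(pending), (map) => {
      const deferred = map.get(id)
      return deferred === undefined
        ? Effect.void
        : Deferred.succeed(deferred, output)
    })

  return { request, resolve } as const
})
```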
Each `execute` call gets its own scope for subagent fibers. Fibers are automatically cleaned up when the `execute` stream ends.

## Choosing between local and RPC mode
### Use `layerLocal` when
- The codebase and the LLM API calls are on the same machine
- You want the simplest possible setup
- You are prototyping or running tests
### Use `layerRpcServer` + `layerRpc` when
- The codebase lives on a remote server (e.g. a cloud dev environment)
- You want to keep LLM API keys local and never expose them to the server
- You need to run multiple agents against the same remote executor