pi-agent-core tools extend the LLM's capabilities with custom execution logic. Each tool is an `AgentTool` object that describes itself to the LLM and provides an `execute` function the agent calls when the LLM requests it.
```typescript
interface AgentTool<TParams = unknown> {
  name: string;
  label?: string;
  description: string;
  parameters: TSchema; // TypeBox schema
  executionMode?: "parallel" | "sequential";
  execute: (
    toolCallId: string,
    params: TParams,
    signal: AbortSignal,
    onUpdate?: (update: { content: ContentBlock[]; details: Record<string, unknown> }) => void
  ) => Promise<{
    content: ContentBlock[];
    details?: Record<string, unknown>;
    terminate?: boolean;
  }>;
}
```
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Identifier the LLM uses to call the tool |
| `label` | No | Human-readable name for UI display |
| `description` | Yes | Tells the LLM what the tool does and when to use it |
| `parameters` | Yes | TypeBox object schema describing the tool's inputs |
| `executionMode` | No | Per-tool override for parallel vs. sequential execution |
| `execute` | Yes | Async function that performs the work |
```typescript
import { promises as fs } from "node:fs";
import { Type } from "typebox";
import type { AgentTool } from "@earendil-works/pi-agent-core";

const readFileTool: AgentTool<{ path: string }> = {
  name: "read_file",
  label: "Read File",
  description: "Read a file's contents from the filesystem.",
  parameters: Type.Object({
    path: Type.String({ description: "Absolute or relative file path" }),
  }),
  execute: async (toolCallId, params, signal, onUpdate) => {
    const content = await fs.readFile(params.path, "utf-8");
    return {
      content: [{ type: "text", text: content }],
      details: { path: params.path, size: content.length },
    };
  },
};

// Register on an existing Agent instance
agent.state.tools = [readFileTool];
```
## Streaming progress with onUpdate

Call `onUpdate` to emit `tool_execution_update` events while the tool is running. This lets your UI show incremental progress before the tool completes.
```typescript
execute: async (toolCallId, params, signal, onUpdate) => {
  onUpdate?.({
    content: [{ type: "text", text: "Connecting to database..." }],
    details: { step: "connect" },
  });

  const rows = await db.query(params.sql);

  onUpdate?.({
    content: [{ type: "text", text: `Fetched ${rows.length} rows` }],
    details: { step: "fetch", count: rows.length },
  });

  return {
    content: [{ type: "text", text: JSON.stringify(rows) }],
    details: { rowCount: rows.length },
  };
},
```
## Error handling

Throw an error when a tool fails. Do not return error content: thrown errors are caught by the agent and reported to the LLM as a tool error with `isError: true`.
```typescript
execute: async (toolCallId, params, signal, onUpdate) => {
  // existsSync comes from "node:fs"; readFile from "node:fs/promises"
  if (!existsSync(params.path)) {
    throw new Error(`File not found: ${params.path}`);
  }
  const content = await fs.readFile(params.path, "utf-8");
  return { content: [{ type: "text", text: content }] };
},
```
Returning an error message as content tells the LLM the tool succeeded with error text. Throw instead so the agent marks the result as `isError: true`.
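The agent-side behavior can be pictured as a wrapper that converts a thrown error into an error-flagged result. This is a simplified illustration, not pi-agent-core's actual internals; the `runTool` helper and its outcome shape are hypothetical:

```typescript
type ToolOutcome = { content: { type: string; text: string }[]; isError: boolean };

// Hypothetical wrapper: a thrown error becomes a result flagged isError: true,
// while a normal return passes through with isError: false.
async function runTool(
  execute: () => Promise<{ content: { type: string; text: string }[] }>
): Promise<ToolOutcome> {
  try {
    const result = await execute();
    return { ...result, isError: false };
  } catch (err) {
    const text = err instanceof Error ? err.message : String(err);
    return { content: [{ type: "text", text }], isError: true };
  }
}
```

Either way the LLM receives a tool result; the difference is whether it is marked as an error.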
## Terminating the loop

Return `terminate: true` from `execute` to hint that the agent should skip the automatic follow-up LLM call. The loop only stops early when every finalized result in the batch sets `terminate: true`; mixed batches (some terminating, some not) continue normally.
```typescript
execute: async (toolCallId, params, signal, onUpdate) => {
  await sendNotification(params.message);
  return {
    content: [{ type: "text", text: "Notification sent." }],
    terminate: true,
  };
},
```
`terminate: true` is a runtime hint only. The emitted `toolResult` transcript messages remain standard LLM tool results.
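The batch rule amounts to an every-result check. A minimal sketch of that rule; the `shouldSkipFollowUp` helper is hypothetical, not part of the library:

```typescript
interface FinalizedResult {
  terminate?: boolean;
}

// Hypothetical: the follow-up LLM call is skipped only when every
// finalized result in the batch opted in with terminate: true.
function shouldSkipFollowUp(results: FinalizedResult[]): boolean {
  return results.length > 0 && results.every((r) => r.terminate === true);
}
```

A single non-terminating result is enough to keep the loop going.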
## Global setting

Set the default execution mode on the `Agent`:

```typescript
const agent = new Agent({ toolExecution: "parallel" }); // default
// or:
const agent = new Agent({ toolExecution: "sequential" });
```
Override the mode for a specific tool via `executionMode`:

```typescript
const writeFileTool: AgentTool = {
  name: "write_file",
  executionMode: "sequential", // forces the entire batch to run sequentially
  // ...
};
```
If any tool in a batch has `executionMode: "sequential"`, the entire batch runs sequentially regardless of the global setting.
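That resolution rule fits in a few lines; the `resolveBatchMode` helper below is hypothetical, sketched from the behavior described above:

```typescript
type ExecutionMode = "parallel" | "sequential";

interface BatchTool {
  executionMode?: ExecutionMode;
}

// Hypothetical: a single sequential tool forces the whole batch to run
// sequentially; otherwise the agent's global setting applies.
function resolveBatchMode(batch: BatchTool[], globalMode: ExecutionMode): ExecutionMode {
  return batch.some((t) => t.executionMode === "sequential") ? "sequential" : globalMode;
}
```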
## Parallel mode ordering

1. **Preflight sequentially.** `beforeToolCall` runs for each tool in source order before any tool starts executing.
2. **Execute concurrently.** All allowed tools in the batch execute at the same time.
3. **Emit `tool_execution_end` as tools finish.** Events fire in completion order; the fastest tool emits first.
4. **Persist `toolResult` messages in source order.** The `toolResult` messages added to the transcript and `turn_end.toolResults` follow the assistant's original tool call order, not completion order.
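The four steps above can be sketched as a standalone simulation; the names and timings are illustrative, not the library's internals:

```typescript
interface SimTool {
  name: string;
  delayMs: number;
}

// Simulates parallel-mode ordering: sequential preflight, concurrent
// execution, completion-order end events, source-order persistence.
async function runParallelBatch(batch: SimTool[]) {
  const events: string[] = [];

  // 1. Preflight sequentially, in source order.
  for (const tool of batch) events.push(`before:${tool.name}`);

  // 2. Execute concurrently; 3. end events fire in completion order.
  const results = await Promise.all(
    batch.map(async (tool) => {
      await new Promise((resolve) => setTimeout(resolve, tool.delayMs));
      events.push(`end:${tool.name}`);
      return tool.name;
    })
  );

  // 4. Promise.all preserves input order, so results persist in source order.
  return { events, persisted: results };
}
```

Running it with a slow tool listed first shows the end events arriving fastest-first while the persisted results keep the original source order.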
## Custom message types

Extend `AgentMessage` via declaration merging to add app-specific message types:
```typescript
declare module "@earendil-works/pi-agent-core" {
  interface CustomAgentMessages {
    notification: { role: "notification"; text: string; timestamp: number };
  }
}

// Now valid
const msg: AgentMessage = { role: "notification", text: "Info", timestamp: Date.now() };
```
Filter custom types out in `convertToLlm` so the LLM never sees them:

```typescript
const agent = new Agent({
  convertToLlm: (messages) =>
    messages.flatMap((m) => {
      if (m.role === "notification") return [];
      return [m];
    }),
});
```
## Low-level API

Use `AgentContext` and `AgentLoopConfig` with the bare `agentLoop()` function when you need direct control:
```typescript
import { agentLoop } from "@earendil-works/pi-agent-core";
import type { AgentContext, AgentLoopConfig } from "@earendil-works/pi-agent-core";

const context: AgentContext = {
  systemPrompt: "You are helpful.",
  messages: [],
  tools: [readFileTool],
};

const config: AgentLoopConfig = {
  model: getModel("openai", "gpt-4o"),
  convertToLlm: (msgs) =>
    msgs.filter((m) => ["user", "assistant", "toolResult"].includes(m.role)),
  toolExecution: "parallel",
  beforeToolCall: async ({ toolCall, args, context }) => undefined,
  afterToolCall: async ({ toolCall, result, isError, context }) => undefined,
};

const userMessage = { role: "user", content: "Read package.json", timestamp: Date.now() };

for await (const event of agentLoop([userMessage], context, config)) {
  console.log(event.type, event);
}
```
The `agentLoop` function takes an array of user messages to prepend, the mutable `AgentContext`, and an `AgentLoopConfig`. It returns an async generator of typed agent events.
## Related pages

- **pi-agent-core overview**: `Agent` class, constructor options, state management, and proxy usage.
- **Agent events and lifecycle**: `beforeToolCall`, `afterToolCall`, tool execution events, and `subscribe()`.
- **Tool definitions (pi-ai)**: low-level tool schema definitions the agent builds on.