@earendil-works/pi-agent-core is a stateful agent runtime built on @earendil-works/pi-ai. It manages conversation history, drives the LLM loop, executes tools, and emits typed events your UI can subscribe to.

Installation

npm install @earendil-works/pi-agent-core

Quick start

import { Agent } from "@earendil-works/pi-agent-core";
import { getModel } from "@earendil-works/pi-ai";

const agent = new Agent({
  initialState: {
    systemPrompt: "You are a helpful assistant.",
    model: getModel("anthropic", "claude-sonnet-4-20250514"),
  },
});

agent.subscribe((event) => {
  if (event.type === "message_update" && event.assistantMessageEvent.type === "text_delta") {
    // Stream just the new text chunk
    process.stdout.write(event.assistantMessageEvent.delta);
  }
});

await agent.prompt("Hello!");

Core concepts

AgentMessage vs LLM message

The agent works with AgentMessage, a flexible type that can include standard LLM messages (user, assistant, toolResult) and custom app-specific message types added via declaration merging. LLMs only understand user, assistant, and toolResult. The convertToLlm function bridges this gap by filtering and transforming messages before each LLM call.
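As a sketch of this bridging, using simplified stand-in types rather than the real pi-agent-core exports, and a hypothetical "banner" custom message type:

```typescript
// Simplified stand-ins for the real AgentMessage/Message types.
type LlmRole = "user" | "assistant" | "toolResult";
type LlmMessage = { role: LlmRole; content: string };
// App-specific message the LLM should never see (hypothetical example).
type BannerMessage = { role: "banner"; content: string };
type AgentMessage = LlmMessage | BannerMessage;

// A convertToLlm-style function: drop UI-only messages, pass the rest through.
function convertToLlm(messages: AgentMessage[]): LlmMessage[] {
  return messages.filter((m): m is LlmMessage => m.role !== "banner");
}

const history: AgentMessage[] = [
  { role: "banner", content: "Session started" },
  { role: "user", content: "Hello" },
];
console.log(convertToLlm(history)); // the banner message is filtered out
```

In real code the custom role would be registered via declaration merging so the extra message type is known to the AgentMessage union.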

Message flow

AgentMessage[] → transformContext() → AgentMessage[] → convertToLlm() → Message[] → LLM
                    (optional)                           (required)
  1. transformContext — prune old messages, inject external context
  2. convertToLlm — filter out UI-only messages, convert custom types to LLM format
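A minimal transformContext-style pruner, sketched over plain message objects; the keep-last-N policy and the helper name are illustrative, not part of the library:

```typescript
type AgentMessage = { role: string; content: string; timestamp: number };

// Step 1 of the flow: shrink the context before it is converted for the LLM.
// Keeps only the most recent `limit` messages (illustrative policy).
async function pruneOldMessages(
  messages: AgentMessage[],
  limit = 50,
): Promise<AgentMessage[]> {
  return messages.length <= limit ? messages : messages.slice(-limit);
}
```

The output of this stage is still AgentMessage[]; convertToLlm then filters and converts it into the plain LLM message format.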

Constructor options

initialState

Seed the agent’s state before the first prompt. All fields are optional.
initialState: {
  systemPrompt: string,
  model: Model<any>,
  thinkingLevel: "off" | "minimal" | "low" | "medium" | "high" | "xhigh",
  tools: AgentTool<any>[],
  messages: AgentMessage[],
}

convertToLlm

Convert AgentMessage[] to the Message[] format the LLM understands. Required when you have custom message types.
convertToLlm: (messages) => messages.filter(
  m => ["user", "assistant", "toolResult"].includes(m.role)
)

transformContext

Transform the message list before convertToLlm is called. Use it for pruning, compaction, or injecting external context.
transformContext: async (messages, signal) => pruneOldMessages(messages)

steeringMode / followUpMode

Control how many queued steering or follow-up messages are injected per turn.
  • "one-at-a-time" (default) — inject one message per turn
  • "all" — inject all queued messages at once
steeringMode: "one-at-a-time",
followUpMode: "one-at-a-time",
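The two modes can be illustrated with a small stand-alone queue; this is a sketch of the documented behavior, not the library's internals:

```typescript
type InjectMode = "one-at-a-time" | "all";

// Decide which queued messages get injected this turn,
// removing them from the queue.
function drainQueue<T>(queue: T[], mode: InjectMode): T[] {
  if (queue.length === 0) return [];
  return mode === "all" ? queue.splice(0) : queue.splice(0, 1);
}

const queue = ["fix the bug", "add tests", "update docs"];
drainQueue(queue, "one-at-a-time"); // injects only "fix the bug"
// With mode "all", all three would be injected in a single turn.
```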

streamFn

Replace the default LLM stream function. Used to route requests through a proxy backend.
streamFn: (model, context, options) =>
  streamProxy(model, context, {
    ...options,
    authToken: "...",
    proxyUrl: "https://your-server.com",
  })

sessionId

Session identifier forwarded to providers that support prompt caching.
sessionId: "session-123"

getApiKey

Dynamically resolve the API key before each call. Use this for short-lived OAuth tokens.
getApiKey: async (provider) => refreshToken()
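For short-lived tokens, a getApiKey callback would typically cache the token and refresh it only near expiry. A sketch, where fetchOAuthToken and the 60-second margin are assumptions for illustration:

```typescript
type Token = { value: string; expiresAt: number };

// Hypothetical token fetcher standing in for your real OAuth flow.
async function fetchOAuthToken(provider: string): Promise<Token> {
  return { value: `tok-${provider}`, expiresAt: Date.now() + 3_600_000 };
}

const tokenCache = new Map<string, Token>();

// Shape matches the getApiKey option: (provider) => Promise<string>.
async function getApiKey(provider: string): Promise<string> {
  const cached = tokenCache.get(provider);
  // Refresh when missing or within 60s of expiry (illustrative margin).
  if (!cached || cached.expiresAt - Date.now() < 60_000) {
    const fresh = await fetchOAuthToken(provider);
    tokenCache.set(provider, fresh);
    return fresh.value;
  }
  return cached.value;
}
```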

toolExecution

Global tool execution mode. "parallel" (default) runs tools concurrently; "sequential" runs them one at a time. A per-tool executionMode: "sequential" overrides this for the whole batch.
toolExecution: "parallel"

beforeToolCall

Preflight hook called after argument validation. Return { block: true, reason } to prevent execution.
beforeToolCall: async ({ toolCall, args, context }) => {
  if (toolCall.name === "bash") {
    return { block: true, reason: "bash is disabled" };
  }
}

afterToolCall

Postprocess hook called after execution finishes, before final tool events are emitted. Return { terminate: true } to stop the agent after the batch, or { details: {...} } to augment the result details.
afterToolCall: async ({ toolCall, result, isError, context }) => {
  if (toolCall.name === "notify_done" && !isError) {
    return { terminate: true };
  }
  if (!isError) {
    return { details: { ...result.details, audited: true } };
  }
}

thinkingBudgets

Override token budgets for thinking levels on token-based providers.
thinkingBudgets: {
  minimal: 128,
  low: 512,
  medium: 1024,
  high: 2048,
}

AgentState interface

Access the current state via agent.state.
interface AgentState {
  systemPrompt: string;
  model: Model<any>;
  thinkingLevel: ThinkingLevel;
  tools: AgentTool<any>[];
  messages: AgentMessage[];
  readonly isStreaming: boolean;
  readonly streamingMessage?: AgentMessage;
  readonly pendingToolCalls: ReadonlySet<string>;
  readonly errorMessage?: string;
}
Assigning to agent.state.tools or agent.state.messages stores a shallow copy of the assigned array, but the arrays you read back are live: mutating them mutates current agent state. agent.state.isStreaming stays true until the run fully settles, including awaited agent_end subscribers.

Methods

prompt()

Send a message and start the agent loop.
// Text prompt
await agent.prompt("Hello");

// With images
await agent.prompt("What's in this image?", [
  { type: "image", data: base64Data, mimeType: "image/jpeg" }
]);

// AgentMessage directly
await agent.prompt({ role: "user", content: "Hello", timestamp: Date.now() });

continue()

Resume from existing context without adding a new message. The last message in context must be user or toolResult. Use it for retries after errors.
await agent.continue();
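One common retry pattern is wrapping continue() in a small backoff loop. A sketch with a generic helper; retryWithBackoff and the delay values are illustrative, and in real code you would pass () => agent.continue():

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry an async operation (e.g. () => agent.continue()) with linear backoff.
async function retryWithBackoff(
  op: () => Promise<void>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      if (i === attempts - 1) throw err;
      await sleep(baseDelayMs * (i + 1));
    }
  }
}
```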

reset()

Clear all messages and reset the agent to its initial state.
agent.reset();

abort()

Cancel the current operation.
agent.abort();

waitForIdle()

Wait until the agent is fully settled, including any awaited agent_end subscribers.
await agent.waitForIdle();

subscribe()

Register an async event listener. Returns an unsubscribe function.
const unsubscribe = agent.subscribe(async (event, signal) => {
  if (event.type === "agent_end") {
    await flushSessionState(signal);
  }
});
unsubscribe();

State management

You can mutate state between or during runs:
agent.state.systemPrompt = "New prompt";
agent.state.model = getModel("openai", "gpt-4o");
agent.state.thinkingLevel = "medium";
agent.state.tools = [myTool];
agent.state.messages = newMessages;
agent.state.messages.push(message);
agent.toolExecution = "sequential";
agent.beforeToolCall = async ({ toolCall }) => undefined;
agent.afterToolCall = async ({ toolCall, result }) => undefined;
agent.sessionId = "session-123";
agent.thinkingBudgets = { minimal: 128, low: 512, medium: 1024, high: 2048 };

Steering and follow-up

Steering messages interrupt the agent while tools are running. Follow-up messages queue work after the agent would otherwise stop.
// While agent is running tools
agent.steer({
  role: "user",
  content: "Stop! Do this instead.",
  timestamp: Date.now(),
});

// After the agent finishes its current work
agent.followUp({
  role: "user",
  content: "Also summarize the result.",
  timestamp: Date.now(),
});

agent.clearSteeringQueue();
agent.clearFollowUpQueue();
agent.clearAllQueues();
  • Steering: when steering messages are detected after a turn completes, all tool calls from the current assistant message have already finished. The steering messages are then injected and the LLM responds on the next turn.
  • Follow-up: follow-up messages are checked only when there are no more tool calls and no steering messages. If any are queued, they are injected and another turn runs.
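That per-turn priority can be summarized as a small pure function; this is a sketch of the documented order, not library code:

```typescript
type NextAction = "run_tools" | "inject_steering" | "inject_follow_up" | "stop";

// Priority per turn: pending tool calls finish first, then steering
// messages, then follow-ups; otherwise the agent stops.
function nextAction(state: {
  pendingToolCalls: number;
  steeringQueue: number;
  followUpQueue: number;
}): NextAction {
  if (state.pendingToolCalls > 0) return "run_tools";
  if (state.steeringQueue > 0) return "inject_steering";
  if (state.followUpQueue > 0) return "inject_follow_up";
  return "stop";
}
```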

Proxy usage

For browser apps that route LLM calls through a backend:
import { Agent, streamProxy } from "@earendil-works/pi-agent-core";

const agent = new Agent({
  streamFn: (model, context, options) =>
    streamProxy(model, context, {
      ...options,
      authToken: "...",
      proxyUrl: "https://your-server.com",
    }),
});

Agent events and lifecycle

Full event sequence diagrams and event type reference.

Defining agent tools

AgentTool interface, execute signature, error handling, and parallel mode.

LLM streaming API

The underlying streaming primitives pi-agent-core is built on.

Tool definitions

Low-level tool schema definitions used by AgentTool.