Documentation Index
Fetch the complete documentation index at: https://mintlify.com/earendil-works/pi/llms.txt
Use this file to discover all available pages before exploring further.
@earendil-works/pi-agent-core is a stateful agent runtime built on @earendil-works/pi-ai. It manages conversation history, drives the LLM loop, executes tools, and emits typed events your UI can subscribe to.
Installation
Quick start
Core concepts
AgentMessage vs LLM message
The agent works with AgentMessage, a flexible type that can include standard LLM messages (user, assistant, toolResult) as well as custom app-specific message types added via declaration merging.
LLMs only understand user, assistant, and toolResult. The convertToLlm function bridges this gap by filtering and transforming messages before each LLM call.
Message flow
- transformContext — prune old messages, inject external context
- convertToLlm — filter out UI-only messages, convert custom types to LLM format
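The flow above can be sketched end to end. Everything here is a local illustration (including the assumption that custom messages carry a distinguishing role field); the library's real type shapes may differ:

```typescript
// Local stand-in types for the two-stage pipeline; not the library's exports.
type LlmMessage =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string }
  | { role: "toolResult"; toolCallId: string; content: string };

// An AgentMessage may additionally be a custom, UI-only message type.
type AgentMessage = LlmMessage | { role: "uiNote"; content: string };

// Stage 1 (transformContext): prune old messages before conversion.
function transformContext(messages: AgentMessage[]): AgentMessage[] {
  const MAX_MESSAGES = 50; // hypothetical pruning window
  return messages.slice(-MAX_MESSAGES);
}

// Stage 2 (convertToLlm): drop messages the LLM cannot understand.
function convertToLlm(messages: AgentMessage[]): LlmMessage[] {
  return messages.filter((m): m is LlmMessage => m.role !== "uiNote");
}

const context: AgentMessage[] = [
  { role: "user", content: "hi" },
  { role: "uiNote", content: "rendered only in the app" },
  { role: "assistant", content: "hello" },
];
const forLlm = convertToLlm(transformContext(context)); // 2 messages
```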
Constructor options
initialState
Seed the agent’s state before the first prompt. All fields are optional.
convertToLlm
Convert AgentMessage[] to the Message[] format the LLM understands. Required when you have custom message types.
transformContext
Transform the message list before convertToLlm is called. Use it for pruning, compaction, or injecting external context.
steeringMode / followUpMode
Controls how many queued steering or follow-up messages are injected per turn.
"one-at-a-time"(default) — inject one message per turn"all"— inject all queued messages at once
streamFn
Replace the default LLM stream function. Used to route requests through a proxy backend.
sessionId
Session identifier forwarded to providers that support prompt caching.
getApiKey
Dynamically resolve the API key before each call. Use this for short-lived OAuth tokens.
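For example, a hypothetical resolver that caches a short-lived OAuth token and refreshes it near expiry. The helper and token shape are invented for illustration; only the "resolve before each call" behavior comes from the description above:

```typescript
// Stand-in token shape; your OAuth flow defines the real one.
type Token = { value: string; expiresAt: number };

// Returns a getApiKey-style function that re-fetches only when the cached
// token is within refreshMarginMs of expiring.
function makeGetApiKey(
  fetchFreshToken: () => Promise<Token>,
  refreshMarginMs = 30_000,
) {
  let cached: Token | undefined;
  return async (): Promise<string> => {
    if (!cached || cached.expiresAt - Date.now() < refreshMarginMs) {
      cached = await fetchFreshToken(); // refresh only when needed
    }
    return cached.value;
  };
}
```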
toolExecution
Global tool execution mode.
"parallel" (default) runs tools concurrently; "sequential" runs them one at a time. A per-tool executionMode: "sequential" overrides this for the whole batch.beforeToolCall
beforeToolCall
Preflight hook called after argument validation. Return { block: true, reason } to prevent execution.
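For instance, a hypothetical preflight hook that blocks destructive shell commands. The { block, reason } shape follows the description above; the tool name, argument shape, and policy are invented:

```typescript
// Stand-in call info; the library's hook receives richer data.
type ToolCallInfo = { toolName: string; args: Record<string, unknown> };
type PreflightResult = { block: true; reason: string } | undefined;

function beforeToolCall(call: ToolCallInfo): PreflightResult {
  // Hypothetical policy: refuse obviously destructive bash commands.
  if (call.toolName === "bash" && String(call.args.command).includes("rm -rf")) {
    return { block: true, reason: "destructive command blocked" };
  }
  return undefined; // allow execution
}
```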
afterToolCall
Postprocess hook called after execution finishes, before final tool events are emitted. Return { terminate: true } to stop the agent after the batch, or { details: {...} } to augment the result details.
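A hypothetical postprocess hook following the return shapes described above; the tool names, timing field, and details payload are invented:

```typescript
// Stand-in result shape; the real hook receives the library's result type.
type ToolResult = { toolName: string; output: string; durationMs: number };
type PostflightResult =
  | { terminate: true }
  | { details: Record<string, unknown> }
  | undefined;

function afterToolCall(result: ToolResult): PostflightResult {
  // Stop the agent once a hypothetical "submit" tool has run.
  if (result.toolName === "submit") return { terminate: true };
  // Annotate unusually slow tool results.
  if (result.durationMs > 1000) {
    return { details: { slow: true, durationMs: result.durationMs } };
  }
  return undefined; // leave the result unchanged
}
```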
thinkingBudgets
Override token budgets for thinking levels on token-based providers.
AgentState interface
Access the current state via agent.state.
Assigning agent.state.tools or agent.state.messages copies the top-level array before storing it. Mutating the returned array mutates current agent state. agent.state.isStreaming remains true until the run fully settles, including awaited agent_end subscribers.
Methods
prompt()
Send a message and start the agent loop.
continue()
Resume from existing context without adding a new message. The last message in context must be user or toolResult. Use it for retries after errors.
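That precondition can be expressed as a small guard, using local types rather than the library's:

```typescript
// Stand-in role type; the real AgentMessage union is richer.
type Role = "user" | "assistant" | "toolResult";

// continue() is only valid when the last message is user or toolResult.
function canContinue(context: { role: Role }[]): boolean {
  const last = context[context.length - 1];
  return last !== undefined && (last.role === "user" || last.role === "toolResult");
}
```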
reset()
Clear all messages and reset the agent to its initial state.
abort()
Cancel the current operation.
waitForIdle()
Wait until the agent is fully settled, including any awaited agent_end subscribers.
subscribe()
Register an async event listener. Returns an unsubscribe function.
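A minimal sketch of this subscribe/unsubscribe pattern with a homemade emitter. The event names here are invented; the agent's real event types are richer (see the event reference linked below):

```typescript
// Invented event union for illustration only.
type AgentEvent =
  | { type: "agent_start" }
  | { type: "agent_end"; reason: string };

type Listener = (e: AgentEvent) => void | Promise<void>;

function makeEmitter() {
  const listeners = new Set<Listener>();
  return {
    subscribe(fn: Listener): () => void {
      listeners.add(fn);
      return () => listeners.delete(fn); // the unsubscribe function
    },
    async emit(e: AgentEvent) {
      // Async subscribers are awaited, matching the "awaited agent_end
      // subscribers" behavior described for waitForIdle().
      for (const fn of listeners) await fn(e);
    },
  };
}

const events = makeEmitter();
const off = events.subscribe(async (e) => { /* update UI from e */ });
off(); // stop listening
```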
State management
You can mutate state between or during runs:
Steering and follow-up
Steering messages interrupt the agent while tools are running. Follow-up messages queue work after the agent would otherwise stop.
Proxy usage
For browser apps that route LLM calls through a backend:
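A hypothetical streamFn that forwards requests to a backend endpoint. The request shape, the /api/llm-stream route, and the injected transport are all invented for illustration; only the idea of swapping in a custom stream function comes from the streamFn option above:

```typescript
// Invented request shape; the real streamFn signature is the library's.
type StreamRequest = { model: string; messages: unknown[] };

// The transport is injected so the sketch stays testable; in a browser it
// would wrap fetch() against your backend, which holds the real API key.
function makeProxyStreamFn(
  post: (url: string, body: StreamRequest) => Promise<string[]>,
) {
  return async function* streamFn(req: StreamRequest) {
    const chunks = await post("/api/llm-stream", req);
    for (const chunk of chunks) yield chunk; // re-stream to the agent
  };
}
```

Keeping the provider key server-side is the main point of this pattern: the browser only ever talks to your own endpoint.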
Related pages
Agent events and lifecycle
Full event sequence diagrams and event type reference.
Defining agent tools
AgentTool interface, execute signature, error handling, and parallel mode.
LLM streaming API
The underlying streaming primitives pi-agent-core is built on.
Tool definitions
Low-level tool schema definitions used by AgentTool.