
The Agent class emits typed events for every phase of LLM interaction. Subscribe to these events to drive streaming UI, track tool progress, and perform barrier work at the end of a run.

Event types

| Event | Description |
| --- | --- |
| `agent_start` | Agent begins processing a prompt or continuation |
| `agent_end` | Final event for the run; awaited subscribers count toward settlement |
| `turn_start` | A new turn begins (one LLM call plus its tool executions) |
| `turn_end` | Turn completes with the assistant message and collected tool results |
| `message_start` | Any message begins — user, assistant, or toolResult |
| `message_update` | Assistant only; contains `assistantMessageEvent` with a streaming delta |
| `message_end` | A message is complete |
| `tool_execution_start` | A tool call begins |
| `tool_execution_update` | A tool streams progress |
| `tool_execution_end` | A tool call completes |

message_update and assistantMessageEvent

message_update wraps the underlying pi-ai streaming event in assistantMessageEvent. Check its type field to handle each delta kind:
```typescript
agent.subscribe((event) => {
  if (event.type !== "message_update") return;

  const e = event.assistantMessageEvent;

  if (e.type === "text_delta") {
    process.stdout.write(e.delta);
  } else if (e.type === "thinking_delta") {
    // Extended thinking chunk
    console.log("[thinking]", e.delta);
  }
});
```

Event sequence diagrams

prompt() without tool calls

```text
prompt("Hello")
├─ agent_start
├─ turn_start
├─ message_start   { message: userMessage }
├─ message_end     { message: userMessage }
├─ message_start   { message: assistantMessage }
├─ message_update  { message: partial..., assistantMessageEvent }
├─ message_update  { message: partial..., assistantMessageEvent }
├─ message_end     { message: assistantMessage }
├─ turn_end        { message, toolResults: [] }
└─ agent_end       { messages: [...] }
```

prompt() with tool calls

When the assistant calls tools, the loop adds a second turn for the follow-up LLM response:
```text
prompt("Read config.json")
├─ agent_start
├─ turn_start
├─ message_start/end  { userMessage }
├─ message_start      { assistantMessage with toolCall }
├─ message_update...
├─ message_end        { assistantMessage }
├─ tool_execution_start  { toolCallId, toolName, args }
├─ tool_execution_update { partialResult }      // if tool streams progress
├─ tool_execution_end    { toolCallId, result }
├─ message_start/end  { toolResultMessage }
├─ turn_end           { message, toolResults: [toolResult] }
│
├─ turn_start                                   // next turn
├─ message_start      { assistantMessage }
├─ message_update...
├─ message_end
├─ turn_end
└─ agent_end
```
When using the Agent class, message_end processing acts as a barrier before tool preflight begins. beforeToolCall therefore sees agent state that already includes the assistant message that requested the tool call.
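
This barrier behavior can be modeled in miniature. The sketch below is an illustrative simulation of the ordering guarantee, not the Agent implementation:

```typescript
// Miniature model of the barrier: the producer awaits message_end handling
// before tool preflight, so a beforeToolCall-style hook reads state that
// already contains the assistant message that requested the tool call.
type Message = { role: string; content: string };

const messages: Message[] = [];

// message_end listener: appends the finished message to shared state.
async function onMessageEnd(message: Message): Promise<void> {
  messages.push(message);
}

// beforeToolCall-style hook: inspects state at preflight time.
function beforeToolCall(): boolean {
  return messages.some((m) => m.role === "assistant");
}

async function runTurn(): Promise<boolean> {
  // Barrier: the listener settles before preflight starts.
  await onMessageEnd({ role: "assistant", content: "calling read_file" });
  return beforeToolCall();
}

runTurn().then((sawAssistant) => console.log(sawAssistant)); // true
```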

subscribe() and agent_end settlement

subscribe() returns an unsubscribe function. Listeners are awaited in registration order.
```typescript
const unsubscribe = agent.subscribe(async (event, signal) => {
  if (event.type === "agent_end") {
    // Barrier work — agent.state.isStreaming stays true until this resolves
    await flushSessionState(signal);
  }
});

// Remove the listener when no longer needed
unsubscribe();
```
agent_end signals that no more loop events will be emitted. await agent.waitForIdle() and await agent.prompt(...) only settle after all awaited agent_end listeners finish.
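
The settlement rule can be sketched as a tiny emitter that awaits its listeners before resolving. This mirrors the documented behavior but is not the library's code:

```typescript
// Tiny model of agent_end settlement: run() resolves only after all
// registered async listeners have finished, in registration order.
type Listener = (event: { type: string }) => Promise<void> | void;

const listeners: Listener[] = [];
function subscribe(fn: Listener): () => void {
  listeners.push(fn);
  return () => {
    const i = listeners.indexOf(fn);
    if (i >= 0) listeners.splice(i, 1);
  };
}

let flushed = false;

subscribe(async (event) => {
  if (event.type === "agent_end") {
    await new Promise((r) => setTimeout(r, 10)); // stand-in for flushSessionState
    flushed = true;
  }
});

async function run(): Promise<void> {
  for (const fn of listeners) {
    await fn({ type: "agent_end" }); // awaited in registration order
  }
}

run().then(() => console.log("settled, flushed =", flushed)); // settled, flushed = true
```

In the real API, `agent.prompt(...)` and `agent.waitForIdle()` play the role of `run()` here: they do not settle until every awaited `agent_end` listener has finished.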

Tool execution modes

Tool execution is configurable globally and per-tool.
Tools in a batch are preflighted sequentially, then executed concurrently. tool_execution_end events fire in completion order. toolResult messages and turn_end.toolResults follow assistant source order.
```typescript
const agent = new Agent({ toolExecution: "parallel" });
```
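
The ordering rule above (end events in completion order, results in source order) can be illustrated with plain promises; the tool names and timings here are made up:

```typescript
// Concurrent execution: end events fire as each tool finishes,
// but collected results are reordered to match the assistant's source order.
type ToolCall = { id: string; name: string; delayMs: number };

const calls: ToolCall[] = [
  { id: "a", name: "slow_tool", delayMs: 30 },
  { id: "b", name: "fast_tool", delayMs: 5 },
];

const endOrder: string[] = [];

async function execute(call: ToolCall): Promise<{ id: string; result: string }> {
  await new Promise((r) => setTimeout(r, call.delayMs));
  endOrder.push(call.id); // tool_execution_end: completion order
  return { id: call.id, result: `${call.name} done` };
}

async function runBatch(): Promise<string[]> {
  const settled = await Promise.all(calls.map(execute)); // run concurrently
  // Promise.all preserves input order, so results follow source order.
  return settled.map((r) => r.id);
}

runBatch().then((resultOrder) => {
  console.log("end order:", endOrder);       // ["b", "a"] (completion order)
  console.log("result order:", resultOrder); // ["a", "b"] (source order)
});
```

`Promise.all` is what makes the two orders diverge safely: settlement order is whatever the tools' runtimes dictate, while the resolved array always matches the order of the input calls.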

beforeToolCall hook

Runs after tool_execution_start and after argument validation. Return { block: true, reason } to prevent execution.
```typescript
const agent = new Agent({
  beforeToolCall: async ({ toolCall, args, context }) => {
    if (toolCall.name === "bash") {
      return { block: true, reason: "bash is disabled" };
    }
    // Return undefined to allow execution
  },
});
```
When a call is blocked, the tool is not executed and the result is marked as blocked with the provided reason.

afterToolCall hook

Runs after execution finishes, before tool_execution_end and final tool result message events are emitted.
```typescript
const agent = new Agent({
  afterToolCall: async ({ toolCall, result, isError, context }) => {
    // Stop the agent after the batch (only if every result terminates)
    if (toolCall.name === "notify_done" && !isError) {
      return { terminate: true };
    }
    // Augment result details
    if (!isError) {
      return { details: { ...result.details, audited: true } };
    }
  },
});
```
Return values:

- `{ terminate: true }` — hint that the agent should stop after this batch (only takes effect when every result in the batch terminates)
- `{ details: {...} }` — replace or augment the result's details object
- `undefined` — no change

shouldStopAfterTurn

The low-level agentLoop() API exposes shouldStopAfterTurn for graceful loop exit after any turn.
```typescript
const stream = agentLoop(prompts, context, {
  model,
  convertToLlm,
  shouldStopAfterTurn: async ({ message, toolResults, context, newMessages }) => {
    return shouldCompactBeforeNextTurn(context.messages);
  },
});
```
shouldStopAfterTurn runs after turn_end is emitted and after the assistant response and tool executions have completed normally. If it returns true, the loop emits agent_end and exits before polling steering or follow-up queues, and before starting another LLM call. It does not abort the provider stream, cancel running tools, or alter the assistant message stop reason.

Low-level API

Use agentLoop() and agentLoopContinue() when you need direct control without the Agent class.
```typescript
// AgentContext and AgentLoopConfig are assumed to be exported from the same package.
import {
  agentLoop,
  agentLoopContinue,
  type AgentContext,
  type AgentLoopConfig,
} from "@earendil-works/pi-agent-core";

const context: AgentContext = {
  systemPrompt: "You are helpful.",
  messages: [],
  tools: [],
};

const config: AgentLoopConfig = {
  model: getModel("openai", "gpt-4o"), // getModel comes from your model provider
  convertToLlm: (msgs) =>
    msgs.filter((m) => ["user", "assistant", "toolResult"].includes(m.role)),
};

const userMessage = { role: "user", content: "Hello", timestamp: Date.now() };

for await (const event of agentLoop([userMessage], context, config)) {
  console.log(event.type);
}

// Continue from existing context
for await (const event of agentLoopContinue(context, config)) {
  console.log(event.type);
}
```
Low-level streams are observational. They preserve event order but do not wait for your async event handling to settle before later producer phases continue. If you need message processing to act as a barrier before tool preflight, use the Agent class instead.
