Documentation Index
Fetch the complete documentation index at: https://mintlify.com/earendil-works/pi/llms.txt
Use this file to discover all available pages before exploring further.
pi-ai tools use TypeBox schemas for type-safe definitions and automatic validation. TypeBox schemas serialize as plain JSON, which makes them portable across processes, services, and storage systems. All TypeBox exports (Type, Static, TSchema) are re-exported from @earendil-works/pi-ai.
Only models that support tool calling are included in the registry. The tools field on a Context is passed to the model as-is; validation of arguments returned by the model is your responsibility (or use validateToolCall).
```typescript
import { Type, Tool, StringEnum } from '@earendil-works/pi-ai';

// Basic tool with optional parameter
const weatherTool: Tool = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: Type.Object({
    location: Type.String({ description: 'City name or coordinates' }),
    units: StringEnum(['celsius', 'fahrenheit'], { default: 'celsius' })
  })
};

// Tool with required fields and array parameter
const bookMeetingTool: Tool = {
  name: 'book_meeting',
  description: 'Schedule a meeting',
  parameters: Type.Object({
    title: Type.String({ minLength: 1 }),
    startTime: Type.String({ format: 'date-time' }),
    endTime: Type.String({ format: 'date-time' }),
    attendees: Type.Array(Type.String({ format: 'email' }), { minItems: 1 })
  })
};
```
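Because the schemas serialize as plain JSON, a tool's parameter schema can be round-tripped through ordinary serialization. A minimal sketch (the serialized shape shown here is an assumption for illustration; the exact output of `Type.Object` may differ in detail):

```typescript
// Plain-object stand-in for what a Type.Object(...) schema serializes to.
// The exact shape is an assumption; the point is that it is ordinary JSON,
// so it can cross process and storage boundaries.
const parameters = {
  type: 'object',
  properties: {
    location: { type: 'string', description: 'City name or coordinates' },
    units: { type: 'string', enum: ['celsius', 'fahrenheit'], default: 'celsius' }
  },
  required: ['location']
};

// Serialize for transport or storage, then restore on the other side
const wire = JSON.stringify(parameters);
const restored = JSON.parse(wire);
console.log(JSON.stringify(restored) === wire); // true
```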
For Google provider compatibility, use the StringEnum helper instead of Type.Enum. Type.Enum generates anyOf/const patterns that the Google Generative AI API does not support.
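The difference matters because the two helpers serialize differently. A rough sketch of the two shapes (both are assumptions based on typical TypeBox output, not taken from the library):

```typescript
// Stand-in for how a Type.Enum schema tends to serialize: an anyOf of
// const values, which the Google Generative AI API rejects.
const enumStyle = {
  anyOf: [
    { const: 'celsius', type: 'string' },
    { const: 'fahrenheit', type: 'string' }
  ]
};

// Stand-in for how a StringEnum schema serializes: a plain string enum,
// which every provider accepts.
const stringEnumStyle = {
  type: 'string',
  enum: ['celsius', 'fahrenheit']
};
```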
After a model response with stopReason === 'toolUse', iterate over response.content and filter for toolCall blocks. Push the results back as toolResult messages and call the model again to continue:
```typescript
import { readFileSync } from 'fs';
import { complete, Context } from '@earendil-works/pi-ai';

const context: Context = {
  messages: [{ role: 'user', content: 'What is the weather in London?' }],
  tools: [weatherTool]
};

const response = await complete(model, context);
context.messages.push(response);

for (const block of response.content) {
  if (block.type === 'toolCall') {
    const result = await executeWeatherApi(block.arguments);
    // Add text tool result
    context.messages.push({
      role: 'toolResult',
      toolCallId: block.id,
      toolName: block.name,
      content: [{ type: 'text', text: JSON.stringify(result) }],
      isError: false,
      timestamp: Date.now()
    });
  }
}

// Tool results can also include images (for vision-capable models)
const imageBuffer = readFileSync('chart.png');
context.messages.push({
  role: 'toolResult',
  toolCallId: 'tool_xyz',
  toolName: 'generate_chart',
  content: [
    { type: 'text', text: 'Generated chart showing temperature trends' },
    { type: 'image', data: imageBuffer.toString('base64'), mimeType: 'image/png' }
  ],
  isError: false,
  timestamp: Date.now()
});
```
| Field | Type | Description |
|---|---|---|
| `role` | `'toolResult'` | Must be `'toolResult'` |
| `toolCallId` | `string` | The `id` from the ToolCall block |
| `toolName` | `string` | The `name` from the ToolCall block |
| `content` | `(TextContent \| ImageContent)[]` | One or more text or image blocks |
| `isError` | `boolean` | Set to `true` to signal a tool execution error |
| `timestamp` | `number` | Unix timestamp in milliseconds |
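Putting the fields together, a complete error result looks like the sketch below (the `ToolResultMessage` type name is an assumption for illustration; the field names and types follow the table above):

```typescript
// Shape of a tool result message per the field table above
// (the type name ToolResultMessage is an assumption for this sketch).
interface ToolResultMessage {
  role: 'toolResult';
  toolCallId: string;
  toolName: string;
  content: Array<
    | { type: 'text'; text: string }
    | { type: 'image'; data: string; mimeType: string }
  >;
  isError: boolean;
  timestamp: number;
}

// A minimal error result, signalling a failed execution to the model
const failure: ToolResultMessage = {
  role: 'toolResult',
  toolCallId: 'tool_abc',
  toolName: 'get_weather',
  content: [{ type: 'text', text: 'Upstream weather API timed out' }],
  isError: true,
  timestamp: Date.now()
};
```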
During streaming, tool arguments arrive as partial JSON. This lets you update UI before the full arguments are available. Always check for field existence before accessing nested values.
```typescript
import { stream } from '@earendil-works/pi-ai';

const s = stream(model, context);

for await (const event of s) {
  if (event.type === 'toolcall_delta') {
    const toolCall = event.partial.content[event.contentIndex];
    // toolCall.arguments contains the best-effort parse of partial JSON
    if (toolCall.type === 'toolCall' && toolCall.arguments) {
      if (toolCall.name === 'write_file' && toolCall.arguments.path) {
        console.log(`Writing to: ${toolCall.arguments.path}`);
        // Content might be partial or missing — always guard
        if (toolCall.arguments.content) {
          console.log(`Preview: ${toolCall.arguments.content.substring(0, 100)}...`);
        }
      }
    }
  }
  if (event.type === 'toolcall_end') {
    // toolCall.arguments is fully parsed here (not yet validated)
    const toolCall = event.toolCall;
    console.log(`Tool completed: ${toolCall.name}`, toolCall.arguments);
  }
}
```
During `toolcall_delta`, arguments may have missing fields, truncated strings, incomplete arrays, or partially populated objects. `arguments` is always at least `{}` (never `undefined`), but treat every field as possibly absent.

The Google provider does not support function call streaming. You will receive a single `toolcall_delta` with the full arguments instead of incremental chunks.
Use validateToolCall to validate arguments against the tool’s TypeBox schema before executing the tool. It throws on invalid arguments, which you can catch and return as a tool error so the model can retry.
```typescript
import { stream, validateToolCall, Tool } from '@earendil-works/pi-ai';

const tools: Tool[] = [weatherTool, calculatorTool];
const s = stream(model, { messages, tools });

for await (const event of s) {
  if (event.type === 'toolcall_end') {
    const toolCall = event.toolCall;
    try {
      // Throws if arguments don't match the tool's schema
      const validatedArgs = validateToolCall(tools, toolCall);
      const result = await executeMyTool(toolCall.name, validatedArgs);
      context.messages.push({
        role: 'toolResult',
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        content: [{ type: 'text', text: JSON.stringify(result) }],
        isError: false,
        timestamp: Date.now()
      });
    } catch (error) {
      // Return the validation error so the model can retry with corrected arguments
      const message = error instanceof Error ? error.message : String(error);
      context.messages.push({
        role: 'toolResult',
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        content: [{ type: 'text', text: message }],
        isError: true,
        timestamp: Date.now()
      });
    }
  }
}
```
When using agentLoop, tool arguments are validated automatically before execution. If validation fails, the error is returned to the model as a tool result. You only need to call validateToolCall manually when implementing your own tool execution loop with stream() or complete().
Models with vision capabilities can process images. Check model.input.includes('image') before sending image content. Images passed to non-vision models are silently ignored.
```typescript
import { readFileSync } from 'fs';
import { getModel, complete } from '@earendil-works/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');
if (model.input.includes('image')) {
  console.log('Model supports vision');
}

const imageBuffer = readFileSync('image.png');
const base64Image = imageBuffer.toString('base64');

const response = await complete(model, {
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this image?' },
      { type: 'image', data: base64Image, mimeType: 'image/png' }
    ]
  }]
});

for (const block of response.content) {
  if (block.type === 'text') {
    console.log(block.text);
  }
}
```