Flue uses `'provider/model-id'` strings to identify models. The built-in providers read their API keys from the standard environment variables (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.) by default. Use `configureProvider` or `registerProvider` in `app.ts` when you need to override endpoints, inject custom headers, or add providers that pi-ai doesn't ship.
Provider precedence
Model resolution follows this precedence, from highest to lowest:

- Call-level — `model` passed to `prompt()`, `skill()`, or `task()`
- Role-level — `model` set in the role definition
- Harness-level — `model` passed to `init()`
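As a sketch of two of those levels (the import path and option shapes here are assumptions; only the function names and the `'provider/model-id'` format come from this page):

```typescript
// Hypothetical sketch: import path and option shapes are assumptions.
import { init, prompt } from '@flue/runtime';

// Harness-level default (lowest precedence).
await init({ model: 'anthropic/claude-sonnet-4-5' });

// Call-level override (highest precedence): wins for this call only,
// regardless of role- or harness-level settings.
await prompt('Summarize the latest run', { model: 'openai/gpt-4o-mini' });
```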
configureProvider
Patch transport-level settings on a built-in provider without replacing its catalog metadata (context window, cost table, token limits). Common uses include enterprise API gateways, audit logging proxies, traffic routing, and managed credentials.
Import from @flue/runtime/app:
.flue/app.ts
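A minimal sketch of pointing the built-in OpenAI provider at an enterprise gateway; the option names (`baseUrl`, `headers`, `apiKey`) are assumptions:

```typescript
// .flue/app.ts
// Hypothetical sketch: the baseUrl/headers/apiKey key names are assumptions.
import { configureProvider } from '@flue/runtime/app';

configureProvider('openai', {
  // Route all OpenAI traffic through an internal gateway.
  baseUrl: 'https://llm-gateway.internal.example/v1',
  // Gateway-specific auth header merged into every request.
  headers: { 'X-Custom-Auth': process.env.GATEWAY_TOKEN ?? '' },
  // The gateway holds the real credentials.
  apiKey: 'dummy',
});
```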
Settings

- Override the provider's API endpoint. Use this to point at an enterprise gateway, a regional endpoint, or a self-hosted OpenAI-compatible server.
- Extra headers merged into every outgoing request for this provider. Useful for gateway-specific authentication headers (`X-Custom-Auth`, `X-Api-Key`, etc.).
- Override the API key for this provider. Pass `'dummy'` when your proxy or gateway manages authentication and the underlying model server doesn't require a real key.
- Sends `store: true` on OpenAI Responses API requests. Only enable this when you need OpenAI-hosted item persistence and accept its data retention policy.

registerProvider
Register a brand-new provider prefix that Flue doesn’t know about. Use this for self-hosted OpenAI-compatible servers (Ollama, LM Studio, vLLM, etc.), custom inference infrastructure, or Cloudflare Workers AI.
Import from @flue/runtime/app:
Models registered this way resolve as `'<name>/model-id'` strings.
HTTP providers
Register any OpenAI-compatible (or other pi-ai-compatible) endpoint:
.flue/app.ts
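For example, a local Ollama server might be registered like this (a sketch: apart from `contextWindow`, `maxTokens`, and `models`, which the settings below name, the key names are assumptions):

```typescript
// .flue/app.ts
// Hypothetical sketch: key names other than contextWindow, maxTokens,
// and models are assumptions.
import { registerProvider } from '@flue/runtime/app';

registerProvider('ollama', {
  api: 'openai-completions',            // pi-ai wire-protocol handler
  baseUrl: 'http://localhost:11434/v1', // endpoint root
  apiKey: 'dummy',                      // Ollama needs no real key
  contextWindow: 8192,                  // default for models under this prefix
  maxTokens: 4096,
  models: {
    // Per-model overrides for a model with larger limits.
    'llama3.1:70b': { contextWindow: 131072, maxTokens: 8192 },
  },
});
```

Models then resolve as `'ollama/llama3.1:70b'` and so on.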
- The pi-ai wire-protocol handler to use. Use `'openai-completions'` for any OpenAI-compatible endpoint; use `'anthropic'`, `'google-gemini'`, etc. for other supported protocols.
- The endpoint root (e.g. `'http://localhost:11434/v1'`).
- Optional API key. Falls back to pi-ai's normal environment-variable lookup if unset. Pass a dummy value if the server doesn't require authentication.
- Default headers sent on every request routed through this provider.
- Default context window size in tokens for every model resolved through this registration; overridden per-model via `models`. Defaults to 0 (unknown) when unset; compaction's threshold trigger is disabled for unknown window sizes.
- Default maximum output tokens for every model resolved through this registration; overridden per-model via `models`.
- Per-model overrides for `contextWindow` and `maxTokens`, keyed by model id. Useful when a self-hosted server hosts multiple models with different limits.

Cloudflare Workers AI
On the Cloudflare target, `cloudflare/...` model strings route through `env.AI.run()`; no API key is required. The `cloudflare` prefix is registered automatically when your `wrangler.jsonc` contains `"ai": { "binding": "AI" }`. To override the defaults, register `cloudflare` yourself in `app.ts` before the auto-registration runs:
.flue/app.ts
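A sketch of such a manual registration (apart from the `gateway: false` flag, the option shape here is an assumption):

```typescript
// .flue/app.ts
// Hypothetical sketch: everything except the gateway flag is assumed.
import { registerProvider } from '@flue/runtime/app';

registerProvider('cloudflare', {
  gateway: false, // opt out of the gateway entirely
  models: {
    // Assumed per-model override shape, mirroring HTTP providers.
    '@cf/meta/llama-3.1-8b-instruct': { contextWindow: 131072 },
  },
});
```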
Pass `gateway: false` to opt out of the gateway entirely. Using `cloudflare/...` model strings with `--target node` raises a clear error.
registerApiProvider shorthand
If you need to register a completely new wire-protocol handler (not just a new endpoint for an existing protocol), use registerApiProvider first, then registerProvider:
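A sketch of the two-step registration; the handler object expected by `registerApiProvider` is defined by pi-ai and is not reproduced here, so the placeholder below is purely illustrative:

```typescript
// Hypothetical sketch: the handler value and the registerProvider
// option names are placeholders, not pi-ai's real interface.
import { registerApiProvider, registerProvider } from '@flue/runtime/app';

// 1) Teach pi-ai the new wire protocol. Re-registering the same
//    api string overwrites the old handler, so this is isolate-safe.
declare const myProtocolHandler: Parameters<typeof registerApiProvider>[1];
registerApiProvider('my-protocol', myProtocolHandler);

// 2) Register a provider prefix that speaks the new protocol.
registerProvider('myserver', {
  api: 'my-protocol',
  baseUrl: 'http://localhost:8080',
});
```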
`registerApiProvider` is a direct re-export of pi-ai's function. Calling it with the same `api` string overwrites the previous registration, so you can safely call it on every isolate boot.