Documentation Index
Fetch the complete documentation index at: https://mintlify.com/withastro/flue/llms.txt
Use this file to discover all available pages before exploring further.
AgentInit is the options object passed to ctx.init(). Calling init() returns a FlueHarness that manages model defaults, sandbox, tools, session store, and sessions for the current run.
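For orientation, a minimal setup might look like the sketch below. The ctx object's shape and the option values are assumptions for illustration; only AgentInit, init(), and FlueHarness come from this page.

```typescript
// Hypothetical sketch: ctx comes from the surrounding runtime; the option
// values are illustrative, not defaults.
const harness = await ctx.init({
  model: 'anthropic/claude-opus-4-20250514', // harness-wide default model
  cwd: '/workspace/app',                     // root for context discovery
});
// harness is a FlueHarness scoped to this run.
```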
Options
Model

Default model for every prompt(), skill(), and task() call in this harness. Format: 'provider/model-id' (e.g. 'anthropic/claude-opus-4-20250514', 'openai/gpt-4.1-mini').

Pass false to require every call to resolve a model from a role or a per-call model option. This is useful when different sessions or calls need different models and you want the code to fail loudly if none is set.

Precedence (highest wins): per-call model > role model > harness model.

Harness name

Use unique names when one run needs multiple isolated harness scopes (e.g. a setup harness and a project harness pointed at different working directories).
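A hedged sketch of the fail-loudly pattern described above; the call shapes and the per-call options bag are assumptions, not documented here:

```typescript
// Hypothetical: with model: false, any call that resolves no model from a
// role or per-call option should fail loudly rather than pick a default.
const harness = await ctx.init({ model: false });

// Per-call model takes highest precedence; the options-bag shape is assumed.
await harness.prompt('Review this diff', { model: 'openai/gpt-4.1-mini' });
```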
Working directory (cwd)

Working directory for context discovery (AGENTS.md, .agents/skills/, roles), built-in tools (bash, read, write, etc.), and shell calls. Defaults to the sandbox connector's native working directory.

Set cwd when you want the agent to discover project context from a specific location, for example after cloning a repository into a sandbox.

Sandbox

Sandbox mode for this harness:
- Omitted / false: a virtual in-memory sandbox powered by just-bash. No host filesystem access; the fastest and most scalable option.
- SandboxFactory: a remote sandbox connector (Daytona, E2B, Cloudflare Containers, etc.). The connector's createSessionEnv() is called once per session.
- BashFactory: a () => BashLike | Promise<BashLike> factory for a custom just-bash instance. Share a filesystem object in the closure to persist files across sessions.
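The three forms above might be passed as follows; every constructor and connector name here is a placeholder assumption, not a documented API:

```typescript
// 1. Omitted/false: virtual in-memory sandbox (the default).
const local = await ctx.init({ sandbox: false });

// 2. SandboxFactory: a remote connector (placeholder constructor name).
const remote = await ctx.init({ sandbox: createDaytonaSandbox({ apiKey }) });

// 3. BashFactory: share state in the closure to persist files across sessions.
const sharedFs = {};                             // placeholder shared filesystem
const custom = await ctx.init({
  sandbox: () => createBash({ fs: sharedFs }),   // placeholder just-bash factory
});
```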
Session store

Custom session store for persisting conversation history. Defaults to in-memory on Node.js and Durable Object SQLite on Cloudflare. Implement the SessionStore interface to provide your own storage backend.

Role

Harness-wide default role. Applies to every prompt(), skill(), and task() call unless overridden at the session or call level.

Precedence (highest wins): per-call role > session role > harness role.

Roles are defined in .flue/roles/<name>.md (or roles/<name>.md for the root layout). Each role file provides a system prompt overlay and, optionally, a default model and thinking level.

Thinking level

Default reasoning effort for every
prompt(), skill(), and task() call. Forwarded to the underlying model's reasoning/thinking capability. Models that do not support reasoning silently ignore this setting after the level is clamped.

- 'off': disable extended reasoning even on models that support it.
- 'low' / 'medium' / 'high': progressively more reasoning effort and token budget.

Precedence (highest wins): per-call thinkingLevel > role thinkingLevel > harness thinkingLevel. When nothing is set, the harness defaults to 'medium'.

Tools

Harness-wide custom tools, available to every prompt(), skill(), and task() call in this harness. Per-call tools are added on top; their names must not overlap with harness tools or built-in tool names. See ToolDef for the interface and BUILTIN_TOOL_NAMES for the reserved names.

Compaction

Context window compaction tuning. When a session's message history approaches the model's context limit, Flue automatically summarizes older messages so the session can continue. This is compaction.
- Omitted: model-aware defaults apply. Compaction triggers at approximately 96% of the context window; 8,000 tokens of recent history are preserved verbatim.
- false: disable automatic threshold compaction. Overflow recovery and explicit session.compact() still run.
- CompactionConfig object: override individual fields.
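The default trigger described above can be sketched as a plain predicate. The 96% threshold and the 8,000-token verbatim reserve come from this page; the functions themselves are illustrative, not Flue's internals.

```typescript
// Illustrative only: mirrors the documented defaults, not Flue's implementation.
const COMPACT_THRESHOLD = 0.96;   // compaction triggers near 96% of the window
const KEEP_RECENT_TOKENS = 8_000; // recent history preserved verbatim

function shouldCompact(historyTokens: number, contextWindow: number): boolean {
  return historyTokens >= contextWindow * COMPACT_THRESHOLD;
}

function tokensToSummarize(historyTokens: number): number {
  // Everything older than the preserved tail is eligible for summarization.
  return Math.max(0, historyTokens - KEEP_RECENT_TOKENS);
}
```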
Precedence rules summary
| Setting | Precedence (highest → lowest) |
|---|---|
| model | per-call → role → harness |
| role | per-call → session → harness |
| thinkingLevel | per-call → role → harness (default: 'medium') |