AgentInit is the options object passed to ctx.init(). Calling init() returns a FlueHarness that manages model defaults, sandbox, tools, session store, and sessions for the current run.
const harness = await ctx.init({
  model: 'anthropic/claude-sonnet-4-6',
  sandbox: local(),
  role: 'coder',
});

Options

model
string | false
required
Default model for every prompt(), skill(), and task() call in this harness. Format: 'provider/model-id' (e.g. 'anthropic/claude-opus-4-20250514', 'openai/gpt-4.1-mini').

Pass false to require every call to resolve a model from a role or a per-call model option. This is useful when different sessions or calls need different models and you want the code to fail loudly if none is set.

Precedence (highest wins): per-call model > role model > harness model.
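Passing false can be sketched as follows; the per-call model option on prompt() is assumed here from the precedence rules above:

```typescript
// model: false — no harness-wide fallback; every call must name a model.
const harness = await ctx.init({ model: false, sandbox: local() });
const session = await harness.session();

// Resolves via the per-call option:
await session.prompt('Summarize the diff.', { model: 'openai/gpt-4.1-mini' });

// With no role and no per-call model, this call would fail loudly:
// await session.prompt('Summarize the diff.');
```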
name
string
default:"default"
Harness name. Use unique names when one run needs multiple isolated harness scopes (e.g. a setup harness and a project harness pointed at different working directories).
const setup = await ctx.init({ name: 'setup', model: 'anthropic/claude-sonnet-4-6' });
const project = await ctx.init({ name: 'project', model: 'anthropic/claude-sonnet-4-6', cwd: '/workspace/project' });
cwd
string
Working directory for context discovery (AGENTS.md, .agents/skills/, roles), built-in tools (bash, read, write, etc.), and shell calls. Defaults to the sandbox connector’s native working directory.

Set cwd when you want the agent to discover project context from a specific location, for example after cloning a repository into a sandbox.
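For example, after cloning a repository into the sandbox, pointing cwd at the checkout makes the agent discover that project’s context files. A sketch using the local() connector from the full example on this page:

```typescript
import { local } from '@flue/runtime/node';

const harness = await ctx.init({
  model: 'anthropic/claude-sonnet-4-6',
  sandbox: local(),
  cwd: '/workspace/project', // AGENTS.md, .agents/skills/, and roles are read from here
});
```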
sandbox
false | SandboxFactory | BashFactory
Sandbox mode for this harness:
  • Omitted / false: virtual in-memory sandbox powered by just-bash. No host filesystem access. Fastest and most scalable option.
  • SandboxFactory: a remote sandbox connector (Daytona, E2B, Cloudflare Containers, etc.). The connector’s createSessionEnv() is called once per session.
  • BashFactory: a () => BashLike | Promise<BashLike> factory for a custom just-bash instance. Share a filesystem object in the closure to persist files across sessions.
// Virtual sandbox (default)
await ctx.init({ model: 'anthropic/claude-sonnet-4-6' });

// Remote sandbox via connector
await ctx.init({ sandbox: daytona(sandboxInstance), model: 'anthropic/claude-sonnet-4-6' });

// Custom just-bash factory
import { Bash, InMemoryFs } from 'just-bash';
const fs = new InMemoryFs();
await ctx.init({
  sandbox: () => new Bash({ fs, cwd: '/workspace' }),
  model: 'anthropic/claude-sonnet-4-6',
});
persist
SessionStore
Custom session store for persisting conversation history. Defaults to in-memory on Node.js and Durable Object SQLite on Cloudflare. Implement the SessionStore interface to provide your own storage backend.
interface SessionStore {
  save(id: string, data: SessionData): Promise<void>;
  load(id: string): Promise<SessionData | null>;
  delete(id: string): Promise<void>;
}
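As an illustration, a minimal Map-backed store satisfying this interface. MapSessionStore is a hypothetical name, and SessionData is treated as opaque; the interface is repeated so the snippet is self-contained:

```typescript
// Hypothetical in-memory store; Flue's real SessionData type is treated as opaque here.
type SessionData = unknown;

interface SessionStore {
  save(id: string, data: SessionData): Promise<void>;
  load(id: string): Promise<SessionData | null>;
  delete(id: string): Promise<void>;
}

class MapSessionStore implements SessionStore {
  private store = new Map<string, SessionData>();

  async save(id: string, data: SessionData): Promise<void> {
    this.store.set(id, data);
  }

  async load(id: string): Promise<SessionData | null> {
    return this.store.get(id) ?? null;
  }

  async delete(id: string): Promise<void> {
    this.store.delete(id);
  }
}
```

An instance would then be passed via persist: new MapSessionStore() in ctx.init().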
role
string
Harness-wide default role. Applies to every prompt(), skill(), and task() call unless overridden at the session or call level.

Precedence (highest wins): per-call role > session role > harness role.

Roles are defined in .flue/roles/<name>.md (or roles/<name>.md for the root layout). Each role file provides a system prompt overlay, and optionally a default model and thinking level.
thinkingLevel
'off' | 'low' | 'medium' | 'high'
default:"medium"
Default reasoning effort for every prompt(), skill(), and task() call. Forwarded to the underlying model’s reasoning/thinking capability. Models that do not support reasoning silently ignore this setting after the level is clamped.
  • 'off' — disable extended reasoning even on models that support it.
  • 'low' / 'medium' / 'high' — progressively more reasoning effort and token budget.
Precedence (highest wins): per-call thinkingLevel > role thinkingLevel > harness thinkingLevel. When nothing is set, the harness defaults to 'medium'.
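The precedence rules above can be sketched end to end; the per-call role and thinkingLevel options on prompt() are assumed from those rules:

```typescript
const harness = await ctx.init({
  model: 'anthropic/claude-sonnet-4-6',
  role: 'coder',        // a role file may also set a model and thinking level
  thinkingLevel: 'low', // harness-wide default
});
const session = await harness.session();

await session.prompt('Rename this variable.');                           // resolves to 'low'
await session.prompt('Refactor the module.', { thinkingLevel: 'high' }); // per-call wins
```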
tools
ToolDef[]
Harness-wide custom tools. Available to every prompt(), skill(), and task() call in this harness. Per-call tools are added on top; their names must not overlap with harness tools or built-in tool names.

See ToolDef for the interface and BUILTIN_TOOL_NAMES for the reserved names.
import { Type } from '@flue/runtime';

const harness = await ctx.init({
  model: 'anthropic/claude-sonnet-4-6',
  tools: [
    {
      name: 'fetch_weather',
      description: 'Fetch current weather for a city.',
      parameters: Type.Object({
        city: Type.String(),
      }),
      async execute(args) {
        // Encode the city so names with spaces produce a valid URL.
        const res = await fetch(`https://wttr.in/${encodeURIComponent(args.city)}?format=3`);
        return res.text();
      },
    },
  ],
});
compaction
false | CompactionConfig
Context window compaction tuning.

When a session’s message history approaches the model’s context limit, Flue automatically summarizes older messages so the session can continue. This is compaction.
  • Omitted: model-aware defaults apply. Compaction triggers at approximately 96% of the context window; 8,000 tokens of recent history are preserved verbatim.
  • false: disable automatic threshold compaction. Overflow recovery and explicit session.compact() still run.
  • CompactionConfig object: override individual fields.
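The two override styles can be sketched as follows. The keepRecentTokens and model fields are taken from the full example at the end of this page; the comment about the summary pass is an assumption:

```typescript
// Tune compaction: keep more recent history verbatim, summarize with a cheaper model.
await ctx.init({
  model: 'anthropic/claude-sonnet-4-6',
  compaction: {
    keepRecentTokens: 12_000,            // default is about 8,000
    model: 'anthropic/claude-haiku-4-5', // assumed: model used for the summary pass
  },
});

// Disable automatic threshold compaction entirely:
await ctx.init({ model: 'anthropic/claude-sonnet-4-6', compaction: false });
```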

Precedence rules summary

Setting          Precedence (highest → lowest)
model            per-call → role → harness
role             per-call → session → harness
thinkingLevel    per-call → role → harness (default: 'medium')

Full example

// .flue/agents/code.ts
import { type FlueContext } from '@flue/runtime';
import { local } from '@flue/runtime/node';

export const triggers = { webhook: true };

export default async function ({ init, payload, env }: FlueContext) {
  const harness = await init({
    name: 'project',
    model: 'anthropic/claude-opus-4-20250514',
    sandbox: local({ env: { GH_TOKEN: env.GH_TOKEN } }),
    cwd: '/workspace/project',
    role: 'coder',
    thinkingLevel: 'high',
    tools: [/* custom tools */],
    compaction: {
      keepRecentTokens: 12000,
      model: 'anthropic/claude-haiku-4-5',
    },
  });

  const session = await harness.session();
  return await session.prompt(payload.prompt);
}
