Flue agents run inside a sandbox — the execution environment that provides a filesystem and shell for the agent’s tools. You choose the sandbox when you call init(). Three modes are available.
Comparison
| | Virtual | Local | Remote |
|---|---|---|---|
| Startup time | Instant | Instant | Seconds |
| Isolation | In-process | Host process | Full container |
| Filesystem | In-memory (empty by default) | Host filesystem | Container filesystem |
| Persistence across runs | No (unless backed by R2) | No | Yes (container survives) |
| When to use | High-scale API agents, translation, classification, support bots | CI runners with gh, git, npm on $PATH | Coding agents, long-running tasks, browser automation |
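The table maps directly onto what you pass as `sandbox` to `init()`. A condensed sketch of all three modes side by side (the zero-argument `local()` call and the `remoteSandbox` value are illustrative assumptions; the full setup for each mode follows below):

```typescript
import { local } from '@flue/runtime/node';
import { daytona } from '../connectors/daytona';

// Virtual (default): omit the sandbox option entirely
const virtualHarness = await init({
  model: 'anthropic/claude-sonnet-4-6',
});

// Local: host filesystem and shell (zero-arg call shown as an assumption)
const localHarness = await init({
  sandbox: local(),
  model: 'anthropic/claude-sonnet-4-6',
});

// Remote: wrap a provider sandbox with a connector adapter
const remoteHarness = await init({
  sandbox: daytona(remoteSandbox),
  model: 'anthropic/claude-sonnet-4-6',
});
```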
Virtual (default)
The virtual sandbox runs in-process, powered by just-bash. No container is started. Startup is instant, cost is minimal, and it scales with your server. It's the default when you don't pass `sandbox` to `init()`:

```ts
const harness = await init({
  model: 'anthropic/claude-sonnet-4-6',
  // No sandbox option — uses the virtual sandbox
});
```
Files are in-memory and empty by default. The agent's bash, read, write, edit, grep, and glob tools all work against this in-memory filesystem.

Custom just-bash factory

To customize the virtual sandbox — for example, to share a filesystem instance across multiple sessions — pass a `BashFactory`:

```ts
import { Bash, InMemoryFs } from 'just-bash';

const fs = new InMemoryFs();
const harness = await init({
  sandbox: () => new Bash({ fs, cwd: '/workspace', python: true }),
  model: 'anthropic/claude-sonnet-4-6',
});
```
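Because the factory closes over `fs`, every session created from this harness sees the same files. A minimal sketch of that sharing, assuming the harness API shown above:

```typescript
// Both sessions run against the InMemoryFs captured in the factory closure
const first = await harness.session();
await first.prompt('Write a TODO list to /workspace/todo.md');

const second = await harness.session();
// The second session can read what the first one wrote
await second.prompt('Read /workspace/todo.md and complete the first item');
```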
The factory is called once to construct the runtime. Share the `InMemoryFs` instance in the closure to persist files across sessions and prompts in the same run.

R2-backed virtual sandbox on Cloudflare
For Cloudflare deployments, mount an R2 bucket as the virtual filesystem. The agent can then search a knowledge base with its built-in tools (grep, glob, read) without spinning up a container:

```ts
import { getVirtualSandbox } from '@flue/runtime/cloudflare';
import type { FlueContext } from '@flue/runtime';

export const triggers = { webhook: true };

export default async function ({ init, env }: FlueContext) {
  const sandbox = await getVirtualSandbox(env.KNOWLEDGE_BASE);
  const harness = await init({
    sandbox,
    model: 'openrouter/moonshotai/kimi-k2.6',
  });
  const session = await harness.session();
  return await session.prompt('Search the knowledge base and answer: ...');
}
```
Local (Node.js)

The local sandbox gives the agent direct access to the host filesystem and shell. It's the right choice for CI runners where gh, git, npm, or other tools are already on $PATH.

```ts
import { local } from '@flue/runtime/node';
import type { FlueContext } from '@flue/runtime';

export const triggers = {};

export default async function ({ init, payload }: FlueContext) {
  const harness = await init({
    sandbox: local({
      env: { GH_TOKEN: process.env.GH_TOKEN },
    }),
    model: 'anthropic/claude-opus-4-7',
  });
  const session = await harness.session();
  return await session.skill('triage', {
    args: { issueNumber: payload.issueNumber },
  });
}
```
The host runner is the isolation boundary. Use this only in environments where you trust the agent's tool calls to run on the host directly.

Environment variables

By default, only a small allowlist of shell-essential variables (PATH, HOME, locale, etc.) is inherited from process.env. Pass additional variables explicitly in `env`:

```ts
local({
  env: {
    GH_TOKEN: process.env.GH_TOKEN,
    NPM_TOKEN: process.env.NPM_TOKEN,
  },
})
```
This keeps secrets out of the agent's environment unless you explicitly opt them in.

Remote (container)

Remote sandboxes provide full Linux containers with persistent filesystems and shells. Each session gets a real environment where you can run browsers, databases, or arbitrary system tools.

Install a connector with:

```sh
flue add daytona | claude
# or
flue add https://e2b.dev --category sandbox | claude
```
The CLI fetches installation instructions for the connector and pipes them to your coding agent, which writes a small TypeScript adapter at .flue/connectors/<name>.ts.

Daytona example
```ts
import type { FlueContext } from '@flue/runtime';
import { Daytona } from '@daytona/sdk';
import { daytona } from '../connectors/daytona';

export const triggers = { webhook: true };

export default async function ({ init, payload, env }: FlueContext) {
  const client = new Daytona({ apiKey: env.DAYTONA_API_KEY });
  const sandbox = await client.create();
  const harness = await init({
    sandbox: daytona(sandbox),
    model: 'openai/gpt-5.5',
  });
  const session = await harness.session();
  await session.shell(`git clone ${payload.repo} /workspace/project`);
  return await session.prompt(payload.prompt);
}
```
Available connectors include Daytona, E2B, Cloudflare Containers, and more. Run `flue add` to list all available options.
session.shell() vs harness.shell()
Both run commands in the sandbox. The difference is whether the result appears in the conversation.
Use session.shell() when the command’s output should be visible to the model in its next turn. Use harness.shell() for setup work — cloning, installing dependencies, preparing files — that the model doesn’t need to reason about.
```ts
// Recorded in conversation — model sees the output
const diff = await session.shell('git diff HEAD~1');

// NOT recorded — plumbing the model doesn't need to see
await harness.shell('npm install', { cwd: '/workspace/project' });
```
Both return `{ stdout, stderr, exitCode }`.
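Because the return shape is the same, you can branch on failures regardless of which variant ran the command. A sketch, assuming the `harness` and `session` objects from the examples above:

```typescript
// Setup step: run out-of-band so the transcript stays clean
const install = await harness.shell('npm install', {
  cwd: '/workspace/project',
});

if (install.exitCode !== 0) {
  // Only on failure, surface the error output to the model
  await session.prompt(`npm install failed:\n${install.stderr}`);
}
```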
session.fs / harness.fs
FlueFs provides out-of-band file operations that are never recorded in the conversation. Use it for staging files before a prompt, or capturing artifacts after one.
```ts
// Stage a file before prompting
await harness.fs.writeFile('/workspace/data.json', JSON.stringify(data));

// Capture output after the model writes it
const report = await session.fs.readFile('/workspace/report.md');
```
If you want the model to see the contents of a file you write, prompt it to read the file itself with its read tool — don’t inject the content via fs.
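Combining the two: stage data out-of-band, then prompt the agent to read it with its own tool, so the contents enter the conversation through a recorded tool call. A sketch, assuming `harness`, `session`, and a `data` value are in scope:

```typescript
// Stage input silently, outside the conversation
await harness.fs.writeFile('/workspace/data.json', JSON.stringify(data));

// The model's read tool call IS recorded, so it can reason about the contents
const answer = await session.prompt(
  'Read /workspace/data.json and summarize the records.'
);
```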