

Flue agents run inside a sandbox — the execution environment that provides a filesystem and shell for the agent’s tools. You choose the sandbox when you call init(). Three modes are available: virtual, local, and remote.

## Comparison

| | Virtual | Local | Remote |
|---|---|---|---|
| Startup time | Instant | Instant | Seconds |
| Isolation | In-process | Host process | Full container |
| Filesystem | In-memory (empty by default) | Host filesystem | Container filesystem |
| Persistence across runs | No (unless backed by R2) | No | Yes (container survives) |
| When to use | High-scale API agents, translation, classification, support bots | CI runners with `gh`, `git`, `npm` on `$PATH` | Coding agents, long-running tasks, browser automation |

The virtual sandbox runs in-process, powered by just-bash. No container is started. Startup is instant, cost is minimal, and it scales with your server. It’s the default when you don’t pass `sandbox` to `init()`:

```typescript
const harness = await init({
  model: 'anthropic/claude-sonnet-4-6',
  // No sandbox option — uses the virtual sandbox
});
```
Files are in-memory and empty by default. The agent’s bash, read, write, edit, grep, and glob tools all work against this in-memory filesystem.

## Custom just-bash factory

To customize the virtual sandbox — for example, to share a filesystem instance across multiple sessions — pass a BashFactory:
```typescript
import { Bash, InMemoryFs } from 'just-bash';

const fs = new InMemoryFs();

const harness = await init({
  sandbox: () => new Bash({ fs, cwd: '/workspace', python: true }),
  model: 'anthropic/claude-sonnet-4-6',
});
```
The factory is called once to construct the runtime. Share the InMemoryFs instance in the closure to persist files across sessions and prompts in the same run.
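As a sketch of that sharing pattern, two harnesses built from the same factory closure could see each other’s files (this assumes `init()` may be called more than once in a run; the file path and shell commands are illustrative):

```typescript
import { Bash, InMemoryFs } from 'just-bash';

// One in-memory filesystem, captured by the factory closure,
// so every sandbox the factory builds shares the same files.
const fs = new InMemoryFs();
const sandbox = () => new Bash({ fs, cwd: '/workspace' });

// The first harness writes a file into the shared filesystem...
const writer = await init({ sandbox, model: 'anthropic/claude-sonnet-4-6' });
await writer.shell('echo "hello" > /workspace/note.txt');

// ...and a second harness created later can read it back.
const reader = await init({ sandbox, model: 'anthropic/claude-sonnet-4-6' });
const { stdout } = await reader.shell('cat /workspace/note.txt');
```

Without the shared `fs` in the closure, each `init()` would start from an empty in-memory filesystem.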

## R2-backed virtual sandbox on Cloudflare

For Cloudflare deployments, mount an R2 bucket as the virtual filesystem. The agent can then search a knowledge base with its built-in tools (grep, glob, read) without spinning up a container:
```typescript
import { getVirtualSandbox } from '@flue/runtime/cloudflare';
import type { FlueContext } from '@flue/runtime';

export const triggers = { webhook: true };

export default async function ({ init, env }: FlueContext) {
  const sandbox = await getVirtualSandbox(env.KNOWLEDGE_BASE);
  const harness = await init({
    sandbox,
    model: 'openrouter/moonshotai/kimi-k2.6',
  });
  const session = await harness.session();
  return await session.prompt('Search the knowledge base and answer: ...');
}
```

## `session.shell()` vs `harness.shell()`

Both run commands in the sandbox. The difference is whether the result appears in the conversation.
Use session.shell() when the command’s output should be visible to the model in its next turn. Use harness.shell() for setup work — cloning, installing dependencies, preparing files — that the model doesn’t need to reason about.
```typescript
// Recorded in conversation — model sees the output
const diff = await session.shell('git diff HEAD~1');

// NOT recorded — plumbing the model doesn't need to see
await harness.shell('npm install', { cwd: '/workspace/project' });
```

Both return `{ stdout, stderr, exitCode }`.
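Because the result carries an exit code rather than throwing, setup failures are easy to miss. A minimal sketch of guarding a setup command (assuming the call resolves, rather than rejects, on a nonzero exit):

```typescript
// Setup step: fail fast if the install doesn't succeed,
// instead of prompting the model against a broken workspace.
const result = await harness.shell('npm install', { cwd: '/workspace/project' });
if (result.exitCode !== 0) {
  // Surface the sandbox's stderr so the failure is debuggable.
  throw new Error(`npm install failed (exit ${result.exitCode}):\n${result.stderr}`);
}
```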

## `session.fs` / `harness.fs`

FlueFs provides out-of-band file operations that are never recorded in the conversation. Use it for staging files before a prompt, or capturing artifacts after one.
```typescript
// Stage a file before prompting
await harness.fs.writeFile('/workspace/data.json', JSON.stringify(data));

// Capture output after the model writes it
const report = await session.fs.readFile('/workspace/report.md');
```
If you want the model to see the contents of a file you write, prompt it to read the file itself with its read tool — don’t inject the content via fs.
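Putting the two halves together, one way to stage data out-of-band and still get it into the conversation (the variable `records`, the file path, and the prompt wording are illustrative):

```typescript
// Stage input without recording anything in the conversation.
await harness.fs.writeFile('/workspace/data.json', JSON.stringify(records));

const session = await harness.session();

// Have the model pull the file in with its own read tool,
// so the contents enter the conversation through a recorded tool call.
const answer = await session.prompt(
  'Read /workspace/data.json and summarize the three largest entries.'
);
```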
