Overview
Magpie integrates with Daytona to execute pipeline runs in isolated remote sandboxes via REST API. This enables:
Concurrent pipelines without filesystem conflicts
Reproducible environments via pre-built snapshots
Warm sandbox pools for sub-second acquisition
Persistent build caches via volume attachments
Daytona sandboxes replace LocalSandbox when DAYTONA_API_KEY is set and the daytona feature is enabled.
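The selection rule can be sketched as a pure predicate. This is a hedged illustration only; `use_daytona` and its parameters are not Magpie's actual API — the real dispatch lives in Magpie's run setup:

```rust
// Hedged sketch of the backend selection rule described above.
fn use_daytona(feature_enabled: bool, api_key: Option<&str>) -> bool {
    // Daytona is used only when the `daytona` feature is compiled in
    // AND DAYTONA_API_KEY is set; otherwise LocalSandbox is used.
    feature_enabled && api_key.is_some()
}

fn main() {
    assert!(use_daytona(true, Some("dtn_...")));
    assert!(!use_daytona(true, None));       // no key: LocalSandbox
    assert!(!use_daytona(false, Some("k"))); // feature off: LocalSandbox
}
```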
Configuration
Environment Variables
DAYTONA_BASE_URL
string
default: "https://app.daytona.io/api"
Daytona API base URL. Use the default for Daytona Cloud.
Organization ID for multi-tenant deployments.
Sandbox size class: small, medium, large. Determines CPU/RAM allocation.
Pre-built snapshot to create sandboxes from. Enables warm starts.
DAYTONA_ENV
Comma-separated key=value pairs injected into sandboxes at creation time. Example: ANTHROPIC_API_KEY=sk-ant-...,GH_TOKEN=github_pat_...
Persistent volume UUID for build cache (e.g., cargo target/ dir).
DAYTONA_VOLUME_MOUNT_PATH
Mount point for the persistent volume inside the sandbox.
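As an illustration of how the base URL default applies, a resolver might look like this (`resolve_base_url` is a hypothetical helper, not Magpie's API; an explicitly configured value takes precedence, otherwise the documented Daytona Cloud default is used):

```rust
// Hedged sketch: resolve the Daytona API base URL, falling back to the
// documented default when no value is configured.
fn resolve_base_url(configured: Option<&str>) -> String {
    configured
        .map(str::to_string)
        .unwrap_or_else(|| "https://app.daytona.io/api".to_string())
}

fn main() {
    // No override: the Daytona Cloud default applies.
    assert_eq!(resolve_base_url(None), "https://app.daytona.io/api");
    // Self-hosted deployments would pass their own endpoint.
    assert_eq!(resolve_base_url(Some("http://localhost:3986")), "http://localhost:3986");
}
```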
Build Configuration
Enable the daytona feature in your Cargo.toml:
[dependencies]
magpie-core = { path = "../magpie-core", features = ["daytona"] }
Or build with:
cargo build -p magpie-discord --features daytona
DaytonaSandbox
Creation Modes
Cold Clone
Snapshot-Based
Creates a fresh sandbox and clones the repo using gh repo clone:
pub async fn create(config: &DaytonaConfig, repo_full_name: &str) -> Result<Self> {
    let client = DaytonaClient::new(config)
        .context("failed to create Daytona client")?;
    let sandbox = client
        .sandboxes()
        .create(CreateSandboxParams {
            class: Some(config.sandbox_class.clone()),
            env: Some(config.env_vars.clone()),
            ..Default::default()
        })
        .await?;
    let sandbox_id = sandbox.id;
    let working_dir = format!("/workspace/{}", repo_full_name.replace('/', "-"));

    // Clone the repo inside the sandbox
    let clone_cmd = format!("gh repo clone {repo_full_name} {working_dir}");
    let result = client
        .process()
        .execute_command(&sandbox_id, &clone_cmd)
        .await?;
    if result.exit_code != 0 {
        let _ = client.sandboxes().delete(&sandbox_id).await;
        anyhow::bail!("gh repo clone failed: {}", result.result);
    }
    Ok(Self { client, sandbox_id, working_dir })
}
Use case: First run, or when no snapshot exists.
Time: 30-60 seconds (sandbox creation + clone)

Creates a sandbox from a pre-built snapshot with repo + dependencies baked in:
pub async fn create_from_snapshot(
    config: &DaytonaConfig,
    snapshot_name: &str,
    working_dir: &str,
    env: HashMap<String, String>,
    volumes: Vec<SandboxVolumeAttachment>,
) -> Result<Self> {
    let client = DaytonaClient::new(config)?;
    let sandbox = client
        .sandboxes()
        .create(CreateSandboxParams {
            snapshot: Some(snapshot_name.to_string()),
            class: Some(config.sandbox_class.clone()),
            env: if env.is_empty() { None } else { Some(env) },
            volumes: if volumes.is_empty() { None } else { Some(volumes) },
            ..Default::default()
        })
        .await?;
    let sandbox_id = sandbox.id;

    // Wait for the sandbox to reach the Started state (large images take up to 5 min)
    client
        .sandboxes()
        .wait_for_state(&sandbox_id, SandboxState::Started, 300)
        .await?;

    // Configure git and auth inside the sandbox
    let setup_cmd = format!(
        "sh -c 'sudo git config --system --add safe.directory {working_dir}; \
         sudo git config --system user.email magpie@bot; \
         sudo git config --system user.name Magpie; \
         sudo chmod -R 777 {working_dir} 2>/dev/null || true; \
         cd {working_dir} && git checkout -- . 2>/dev/null || true; \
         gh auth setup-git 2>/dev/null || true'"
    );
    client.process().execute_command(&sandbox_id, &setup_cmd).await?;
    Ok(Self { client, sandbox_id, working_dir: working_dir.to_string() })
}
Use case: Production deployments with warm pools.
Time: 5-10 seconds (snapshot start + git setup)
Sandbox Trait Implementation
DaytonaSandbox implements the Sandbox trait:
#[async_trait]
impl Sandbox for DaytonaSandbox {
    fn name(&self) -> &str {
        "daytona"
    }

    fn working_dir(&self) -> &str {
        &self.working_dir
    }

    async fn exec(&self, command: &str, args: &[&str]) -> Result<ExecOutput> {
        // Daytona's execute_command does NOT use a shell, so we wrap the
        // command in `sh -c '...'` to support shell operators (&&, |, etc.)
        let inner = if args.is_empty() {
            format!("cd {} && {}", self.working_dir, command)
        } else {
            let args_str = args.iter().map(|a| shell_escape(a)).collect::<Vec<_>>().join(" ");
            format!("cd {} && {} {}", self.working_dir, command, args_str)
        };
        let full_cmd = format!("sh -c {}", shell_escape(&inner));
        let result = self.client
            .process()
            .execute_command(&self.sandbox_id, &full_cmd)
            .await?;
        Ok(ExecOutput {
            stdout: result.result.clone(),
            stderr: String::new(), // Daytona combines output into `result`
            exit_code: result.exit_code,
        })
    }

    async fn read_file(&self, path: &str) -> Result<Vec<u8>> {
        let full_path = if path.starts_with('/') {
            path.to_string()
        } else {
            format!("{}/{}", self.working_dir, path)
        };
        self.client.files().download(&self.sandbox_id, &full_path).await
    }

    async fn write_file(&self, path: &str, content: &[u8]) -> Result<()> {
        let full_path = if path.starts_with('/') {
            path.to_string()
        } else {
            format!("{}/{}", self.working_dir, path)
        };
        self.client.files().upload(&self.sandbox_id, &full_path, content).await
    }

    async fn destroy(&self) -> Result<()> {
        self.client.sandboxes().delete(&self.sandbox_id).await
    }
}
Daytona’s execute_command does not use a shell, so commands with &&, |, or cd must be wrapped in sh -c '...'. The implementation handles this automatically.
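The exec implementation above relies on a shell_escape helper. A minimal POSIX single-quote escaper would look like this; it is an illustrative sketch, not necessarily Magpie's exact implementation:

```rust
/// Wrap a string in single quotes, escaping embedded single quotes with
/// the POSIX '\'' idiom so `sh -c` receives one literal argument.
fn shell_escape(s: &str) -> String {
    format!("'{}'", s.replace('\'', r"'\''"))
}

fn main() {
    // Shell operators survive as literal text inside the quotes.
    assert_eq!(shell_escape("cargo test && echo done"), "'cargo test && echo done'");
    // Embedded single quotes: close quote, escaped quote, reopen quote.
    assert_eq!(shell_escape("it's"), r"'it'\''s'");
}
```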
Warm Sandbox Pool
The warm pool pre-provisions sandboxes from snapshots and maintains them in an idle state. When a pipeline run starts, it acquires a warm sandbox in milliseconds instead of waiting 30+ seconds for a cold start.
Architecture
Provision: At startup, the pool creates pool_size sandboxes per repo from the specified snapshot.
Acquire: When a pipeline run starts, it calls pool.acquire(repo_full_name) to get an idle sandbox.
Execute: The pipeline runs in the acquired sandbox. Git state is modified (new branch, commits).
Release: After the run completes, the sandbox is released back to the pool and git state is reset to the base branch.
Refresh: A background loop periodically runs git fetch && cargo check on idle sandboxes to keep them warm.
Configuration
use magpie_core::sandbox::pool::{WarmPool, WarmPoolConfig, RepoPoolConfig};

let pool_config = WarmPoolConfig {
    pool_size: 3,               // 3 sandboxes per repo
    refresh_interval_secs: 300, // refresh every 5 minutes
    daytona: daytona_config.clone(),
    repos: vec![
        RepoPoolConfig {
            repo_name: "api-service".to_string(),
            repo_full_name: "myorg/api-service".to_string(),
            snapshot_name: "myorg-api-service-snapshot".to_string(),
            sandbox_class: "large".to_string(), // override default class
            base_branch: "main".to_string(),
            build_check_cmd: Some("cargo check --all-features".to_string()),
            volume_id: Some("vol-abc123".to_string()),
            volume_mount_path: Some("/workspace/api-service/target".to_string()),
            env_vars: HashMap::new(),
        },
    ],
};

let pool = Arc::new(WarmPool::new(pool_config));
pool.provision().await?;
pool.start_refresh_loop().await;
Workspace States
Sandbox is being created from snapshot. Not yet ready for acquisition.
Ready for acquisition. Git state is clean (on base branch, no uncommitted changes).
Currently in use by a pipeline run. Cannot be acquired.
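The state machine above can be sketched as an enum plus a first-idle scan. Only the Idle variant is confirmed by the release code elsewhere on this page; the other variant names and the `acquire_idle` helper are illustrative assumptions:

```rust
// Illustrative state enum; only Idle appears in the release code on this page.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum WorkspaceStatus {
    Provisioning, // being created from snapshot, not yet acquirable
    Idle,         // clean git state, ready for acquisition
    Busy,         // in use by a pipeline run
}

// First-idle scan: mark an idle workspace Busy and return its index, if any.
fn acquire_idle(statuses: &mut [WorkspaceStatus]) -> Option<usize> {
    let i = statuses.iter().position(|s| *s == WorkspaceStatus::Idle)?;
    statuses[i] = WorkspaceStatus::Busy;
    Some(i)
}

fn main() {
    let mut pool = [WorkspaceStatus::Busy, WorkspaceStatus::Idle];
    assert_eq!(acquire_idle(&mut pool), Some(1));
    assert_eq!(pool[1], WorkspaceStatus::Busy);
    // No idle sandbox left: returns None immediately (no queuing).
    assert_eq!(acquire_idle(&mut pool), None);
}
```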
Acquisition
Pipelines try to acquire a warm sandbox before falling back to cold creation:
let pooled: Option<Box<dyn Sandbox>> = if let Some(ref pool) = config.pool {
    match pool.acquire(&full_name).await {
        Some(ws) => {
            info!(org, repo = %repo_name, "acquired warm pooled sandbox");
            Some(Box::new(crate::sandbox::pooled::PooledSandbox::new(
                ws,
                Arc::clone(pool),
                config.base_branch.clone(),
            )))
        }
        None => {
            info!(org, repo = %repo_name, "no idle pooled sandbox — falling back to cold clone");
            None
        }
    }
} else {
    None
};

if let Some(sandbox) = pooled {
    sandbox
} else if let Some(ref daytona_cfg) = config.daytona {
    // Cold creation from snapshot or clone...
}
The pool does not block when all sandboxes are busy. It immediately returns None, and the pipeline falls back to cold creation. This ensures zero queuing delays.
Release and Reset
After a pipeline run completes, the PooledSandbox wrapper automatically releases the workspace back to the pool:
#[async_trait]
impl Sandbox for PooledSandbox {
    // ...other trait methods delegate to the inner sandbox...

    async fn destroy(&self) -> Result<()> {
        // Release back to the pool instead of destroying
        self.pool.release(&self.workspace, &self.base_branch).await?;
        Ok(())
    }
}
The release logic resets git state but preserves build artifacts:
pub async fn release(&self, workspace: &Arc<Workspace>, base_branch: &str) -> Result<()> {
    // Reset git state but preserve build artifacts
    let reset_cmd = format!(
        "cd {} && git checkout {} && git clean -fd -e target/ -e node_modules/ -e .next/ && git reset --hard origin/{}",
        workspace.working_dir, base_branch, base_branch
    );
    workspace.sandbox.exec_shell(&reset_cmd).await?;
    workspace.set_status(WorkspaceStatus::Idle);
    Ok(())
}
The -e flags in git clean exclude build directories from deletion. This preserves incremental compilation artifacts between pipeline runs.
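For illustration, the same command string can be built from a list of excluded directories. `reset_command` is a hypothetical helper, not Magpie's API; the real release logic inlines the string:

```rust
// Build the git reset command used on release, excluding build dirs
// from `git clean` so incremental compilation artifacts survive.
fn reset_command(working_dir: &str, base_branch: &str, keep: &[&str]) -> String {
    let excludes: String = keep.iter().map(|d| format!(" -e {d}")).collect();
    format!(
        "cd {working_dir} && git checkout {base_branch} && git clean -fd{excludes} && git reset --hard origin/{base_branch}"
    )
}

fn main() {
    let cmd = reset_command("/workspace/api-service", "main", &["target/", "node_modules/"]);
    assert!(cmd.contains("git clean -fd -e target/ -e node_modules/"));
    assert!(cmd.ends_with("git reset --hard origin/main"));
}
```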
Refresh Loop
A background tokio task periodically refreshes idle workspaces:
pub async fn start_refresh_loop(&self) {
    let workspaces = Arc::clone(&self.workspaces);
    let shutdown = Arc::clone(&self.shutdown);
    let interval = Duration::from_secs(self.config.refresh_interval_secs);
    let handle = tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = tokio::time::sleep(interval) => {},
                _ = shutdown.notified() => return,
            }
            let lock = workspaces.lock().await;
            let idle: Vec<Arc<Workspace>> = lock
                .iter()
                .filter(|ws| ws.status() == WorkspaceStatus::Idle)
                .cloned()
                .collect();
            drop(lock);
            for ws in idle {
                // git fetch + reset; base_branch and build_cmd come from the
                // workspace's repo pool config (elided here). Errors are
                // ignored rather than propagated, since the task returns ().
                let fetch_cmd = format!(
                    "cd {} && git fetch origin {} && git reset --hard origin/{}",
                    ws.working_dir, base_branch, base_branch
                );
                let _ = ws.sandbox.exec_shell(&fetch_cmd).await;
                // Optional build check to keep incremental caches warm
                if let Some(ref cmd) = build_cmd {
                    let full_cmd = format!("cd {} && {}", ws.working_dir, cmd);
                    let _ = ws.sandbox.exec_shell(&full_cmd).await;
                }
            }
        }
    });
}
Snapshot Specs
Snapshots are pre-built Docker images that include:
Base OS (Ubuntu 22.04 or Debian-based)
Toolchain (Rust, Node.js, Python, etc.)
CLI tools (gh, git, claude)
Repository clone (optional, but recommended)
Build cache (optional: pre-compiled dependencies)
Creating Snapshots
Magpie uses the aurumintel/magpie-devbox:latest Docker image as a base:
FROM ubuntu:22.04
# Install base dependencies
RUN apt-get update && apt-get install -y \
git curl build-essential pkg-config libssl-dev \
&& rm -rf /var/lib/apt/lists/*
# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Install GitHub CLI
RUN curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg \
| dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" \
| tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& apt-get update && apt-get install -y gh
# Install Claude CLI
RUN curl -fsSL https://download.claudeai.dev/install.sh | bash
# Pre-clone repo (optional)
WORKDIR /workspace
RUN git clone https://github.com/myorg/api-service.git
# Pre-fetch dependencies (optional)
WORKDIR /workspace/api-service
RUN cargo fetch
Registering Snapshots
After building the Docker image, push it and register a Daytona snapshot:
# Build and push image
docker build -t myregistry/api-service-devbox:latest .
docker push myregistry/api-service-devbox:latest
# Register snapshot (Daytona CLI or API)
daytona snapshot create \
--name api-service-snapshot \
--image myregistry/api-service-devbox:latest
Or use Magpie’s ensure_snapshot() helper:
use magpie_core::sandbox::snapshots::ensure_snapshot;

let snapshot_name = ensure_snapshot(&daytona_config, "api-service-snapshot").await?;
Overlay Filesystem Constraints
Daytona sandboxes use OverlayFS with ~3 GB of writable space. Any file modified from the snapshot image triggers copy-on-write, consuming overlay space. Critical rules:
NEVER run chmod -R or chown -R on large directories at runtime (causes disk full)
Pre-compile dependencies in the snapshot image (run cargo fetch during Docker build)
Use persistent volumes for build caches (target/ dir)
Run git config --system (not --global) to avoid HOME dependency
Environment Variables in Sandboxes
Daytona sandboxes require API keys and tokens for authentication:
ANTHROPIC_API_KEY: Claude API key for agent calls inside the sandbox.
GH_TOKEN: GitHub personal access token for gh repo clone and authenticated git push.
OAuth token for claude CLI if using OAuth instead of API key.
Injection via DAYTONA_ENV
Set DAYTONA_ENV to inject variables at sandbox creation time:
export DAYTONA_ENV="ANTHROPIC_API_KEY=sk-ant-...,GH_TOKEN=github_pat_..."
Magpie parses this and passes it to CreateSandboxParams:
let env: HashMap<String, String> = std::env::var("DAYTONA_ENV")
    .unwrap_or_default()
    .split(',')
    .filter_map(|pair| {
        let (k, v) = pair.split_once('=')?;
        Some((k.trim().to_string(), v.trim().to_string()))
    })
    .collect();
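A self-contained version of this parser, for reference. Note one limitation of the comma-separated scheme: a value that itself contains ',' would be split incorrectly.

```rust
use std::collections::HashMap;

// Same parsing scheme as above, factored into a testable function.
fn parse_daytona_env(raw: &str) -> HashMap<String, String> {
    raw.split(',')
        .filter_map(|pair| {
            let (k, v) = pair.split_once('=')?;
            Some((k.trim().to_string(), v.trim().to_string()))
        })
        .collect()
}

fn main() {
    let env = parse_daytona_env("ANTHROPIC_API_KEY=sk-ant-x, GH_TOKEN=tok");
    assert_eq!(env.get("GH_TOKEN").map(String::as_str), Some("tok"));
    // Pairs without '=' are silently dropped.
    assert!(parse_daytona_env("novalue").is_empty());
}
```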
Cold Start (No Snapshot)
Create sandbox: API call to Daytona (5-10s)
Clone repo: gh repo clone (10-30s, depends on repo size)
Warm Start (Snapshot, No Pool)
Create from snapshot: API call + image pull (5-10s)
Git setup: configure safe.directory + auth (1-2s)
Pooled Warm Start
Acquire from pool: lock + status check (under 100ms)
Checkout base branch: already on base, no-op (under 100ms)
Total: under 200ms (sub-second)
Testing
Unit Tests
cargo test -p magpie-core --features daytona -- daytona
Integration Tests (Requires API Key)
export DAYTONA_API_KEY="..."
cargo test -p magpie-core --features daytona -- daytona --ignored
Key test cases:
✅ test_daytona_sandbox_connectivity
✅ test_magpie_devbox (end-to-end snapshot test)
✅ test_sandbox_env_vars (verify secret injection)
✅ test_pipeline_sandbox_flow (git ops)
✅ test_claude_in_sandbox (agent call)
Org-Scoped Repos: dynamic repo resolution for multi-repo deployments
Sandbox Abstraction: trait design and LocalSandbox comparison