The Wizard CLI (alpasim_wizard) is AlpaSim’s configuration and deployment tool. It generates service configurations, manages Docker containers, and orchestrates distributed simulations.

Installation

The wizard is installed as part of the AlpaSim workspace:
# From repository root
./setup_local_env.sh
source .venv/bin/activate

Usage

The wizard is invoked with a YAML configuration file:
python -m alpasim_wizard --config path/to/config.yaml

Configuration File

The wizard configuration is structured using OmegaConf/Hydra:
# config.yaml
defines:
  ALPASIM_ROOT: "/path/to/alpasim"
  LOG_ROOT: "/path/to/logs"

wizard:
  run_name: "my_experiment"
  run_method: "docker_compose"  # or "slurm", "none"
  run_mode: "batch"              # or "attach_bash", "attach_vscode"
  log_level: "INFO"
  log_dir: "${LOG_ROOT}/runs/${wizard.run_name}"
  baseport: 50000
  dry_run: false
  latest_symlink: true

scenes:
  test_suite_id: "validation_set_v1"
  # OR: scene_ids: ["scene_001", "scene_002"]
  limit_to_first_n: 0  # 0 = no limit
  scene_cache: "/data/scenes"
  scenes_csv: "${ALPASIM_ROOT}/data/sim_scenes.csv"
  suites_csv: "${ALPASIM_ROOT}/data/sim_suites.csv"

services:
  driver:
    image: "alpasim/driver:latest"
    instances: 4
    gpus_per_instance: 0
    
  sensorsim:
    image: "alpasim/sensorsim:latest"
    instances: 2
    gpus_per_instance: 1
    
  physics:
    image: "alpasim/physics:latest"
    instances: 2
    gpus_per_instance: 0
    
  trafficsim:
    image: "alpasim/traffic:latest"
    instances: 1
    gpus_per_instance: 1
    
  controller:
    image: "alpasim/controller:latest"
    instances: 4
    gpus_per_instance: 0
    
  runtime:
    image: "alpasim/runtime:latest"
    instances: 1
    cpus_per_instance: 8
    env:
      PYTHONUNBUFFERED: "1"

runtime:
  enable_rendering: true
  enable_eval: true
  physics_update_mode: "once_per_step"
  vehicle:
    aabb_x_m: 4.5
    aabb_y_m: 2.0
    aabb_z_m: 1.5
  cameras:
    - logical_id: "camera_front_wide_120fov"
      height: 720
      width: 1280
      frame_interval_us: 33000

eval:
  scorers:
    - collision
    - comfort
    - progress
  video:
    render_video: false

Configuration Sections

Wizard Section

run_name (string, required): Unique identifier for this experiment run
run_method (RunMethod, required): Deployment method:
  • docker_compose: Docker Compose orchestration
  • slurm: SLURM cluster deployment
  • none: Generate configs without running
run_mode (RunMode): Execution mode:
  • batch: Normal batch processing
  • attach_bash: Interactive bash session in the container
  • attach_vscode: VS Code remote debugging
log_dir (string, required): Output directory for logs, configs, and results
log_level (string): Global log level: DEBUG, INFO, WARNING, or ERROR
baseport (int, required): Starting port for service allocation (increments for each service)
external_services (dict[str, list[str]]): External service addresses, for hybrid local/container setups:
external_services:
  driver: ["localhost:6789"]
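To illustrate the baseport behavior described above, here is a minimal sketch of sequential port allocation; allocate_ports is a hypothetical helper for illustration only, not part of alpasim_wizard:

```python
# Hypothetical sketch: assign one port per service instance,
# incrementing from baseport. Not part of alpasim_wizard.
def allocate_ports(baseport: int, instances: dict[str, int]) -> dict[str, list[int]]:
    ports: dict[str, list[int]] = {}
    next_port = baseport
    for name, count in instances.items():
        ports[name] = list(range(next_port, next_port + count))
        next_port += count
    return ports

# With baseport 50000, driver (4 instances) gets 50000-50003,
# then sensorsim (2 instances) gets 50004-50005.
allocate_ports(50000, {"driver": 4, "sensorsim": 2})
```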

Scenes Section

test_suite_id (string): Test suite identifier from suites_csv (mutually exclusive with scene_ids)
scene_ids (list[string]): Explicit list of scene IDs to simulate (mutually exclusive with test_suite_id)
limit_to_first_n (int): Limit the number of scenes (0 = no limit)
scene_cache (string, required): Directory containing scene USDZ files
local_usdz_dir (string): Local directory with USDZ files (bypasses the CSV files)

Services Section

Each service accepts the same configuration fields:
image (string, required): Docker image name and tag
instances (int, required): Number of service instances (for load balancing)
gpus_per_instance (int): GPUs to allocate per instance (default: 0)
cpus_per_instance (int): CPUs to allocate per instance (default: 4)
mem_gb (int): Memory in GB (SLURM only)
env (dict[string, string]): Environment variables for the service
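Putting these fields together, a fully specified service entry might look like this (values are illustrative, not recommendations):

```yaml
services:
  sensorsim:
    image: "alpasim/sensorsim:latest"
    instances: 2
    gpus_per_instance: 1
    cpus_per_instance: 4
    mem_gb: 32        # SLURM only
    env:
      PYTHONUNBUFFERED: "1"
```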

Core Functionality

AlpasimWizard Class

Main entry point for the wizard:
from alpasim_wizard.schema import AlpasimConfig
from alpasim_wizard.wizard import AlpasimWizard
from alpasim_wizard.setup_omegaconf import main_wrapper

def run_wizard(cfg: AlpasimConfig) -> None:
    wizard = AlpasimWizard.create(cfg)
    wizard.cast()  # Execute deployment

main_wrapper(run_wizard)  # Load config and call run_wizard
Methods:
create (staticmethod): Factory method to create a wizard from configuration
cast (method): Main execution method:
  1. Clone driver code (if configured)
  2. Generate Docker Compose / SLURM configs
  3. Deploy services based on run_method
maybe_clone_driver_code (method): Clone the driver repository into the log directory if driver_code_hash is set

WizardContext

Shared context for wizard operations:
from alpasim_wizard.context import WizardContext

context = WizardContext.create(cfg)

print(context.run_uuid)           # Unique run identifier
print(context.service_manager)    # Service configuration manager
print(context.cfg)                # Parsed AlpasimConfig

ConfigurationManager

Generates service-specific configurations:
from alpasim_wizard.configuration import ConfigurationManager

config_manager = ConfigurationManager(log_dir)
config_manager.generate_all(container_set, context)

# Generates:
# - generated-network-config.yaml (service endpoints)
# - runtime-config.yaml (runtime parameters)
# - eval-config.yaml (evaluation settings)

Deployment Methods

Docker Compose

Generates docker-compose.yaml and run.sh:
python -m alpasim_wizard --config config.yaml
# Outputs:
# - ${log_dir}/docker-compose.yaml
# - ${log_dir}/run.sh

# Services start automatically with run_method: docker_compose
# Or manually:
cd ${log_dir}
./run.sh

SLURM

Generates SLURM batch scripts and submits jobs:
wizard:
  run_method: "slurm"
  slurm_gpu_partition: "gpu_short"
  slurm_cpu_partition: "cpu_short"
python -m alpasim_wizard --config config.yaml
# Submits SLURM jobs for each service
# Outputs job IDs and status

None (Config Only)

Generate configurations without running:
wizard:
  run_method: "none"
python -m alpasim_wizard --config config.yaml
# Generates configs in ${log_dir}
# Run manually: ${log_dir}/run.sh

Advanced Features

Debug Flags

Enable debugging features:
wizard:
  debug_flags:
    use_localhost: true  # Use localhost instead of container names

Driver Code Cloning

Clone specific driver version:
wizard:
  driver_code_repo: "[email protected]:company/driver.git"
  driver_code_hash: "abc123def456"
The repository is cloned to ${log_dir}/driver_code/.

Hybrid Deployments

Run some services locally, others in containers:
wizard:
  external_services:
    driver: ["localhost:6789"]  # Driver running on host
  debug_flags:
    use_localhost: true

services:
  driver:
    instances: 0  # Don't start driver container

Scene Selection

scenes:
  test_suite_id: "validation_v1"
  suites_csv: "data/sim_suites.csv"
  scenes_csv: "data/sim_scenes.csv"
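Alternatively, scenes can be listed explicitly instead of referencing a suite (the two options are mutually exclusive):

```yaml
scenes:
  scene_ids: ["scene_001", "scene_002"]
  scenes_csv: "data/sim_scenes.csv"
  scene_cache: "/data/scenes"
```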

Generated Files

The wizard creates the following structure in ${log_dir}:
${log_dir}/
├── docker-compose.yaml          # Container orchestration
├── run.sh                       # Startup script
├── generated-network-config.yaml # Service endpoints
├── runtime-config.yaml          # Runtime parameters
├── eval-config.yaml             # Evaluation settings
├── driver_code/                 # Cloned driver (if configured)
├── logs/                        # Service logs
└── rollouts/                    # Simulation outputs
    ├── scene_001/
    │   ├── batch_0/
    │   │   ├── rollout_0/
    │   │   │   ├── simulation.asl
    │   │   │   └── metrics.parquet

Output Structure

Simulation results follow this hierarchy:
  • rollouts/: Top-level output directory
    • <scene_id>/: Per-scene directory
      • batch_<n>/: Batch of parallel rollouts
        • rollout_<n>/: Individual rollout
          • simulation.asl: AlpaSim Simulation Log
          • metrics.parquet: Evaluation metrics
          • *.jpg: Rendered images (if enabled)
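For post-processing, this hierarchy can be walked with the standard library alone; the sketch below assumes the directory layout shown above:

```python
from pathlib import Path

def find_metrics(log_dir: str) -> list[Path]:
    """Collect every metrics.parquet under the rollouts hierarchy.

    Assumes the layout rollouts/<scene_id>/batch_<n>/rollout_<n>/.
    """
    return sorted(Path(log_dir).glob("rollouts/*/batch_*/rollout_*/metrics.parquet"))
```

Each returned path encodes its scene, batch, and rollout in its parent directories, so results can be grouped without extra bookkeeping.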

CLI Arguments

python -m alpasim_wizard \
  --config config.yaml \
  --overrides "wizard.run_name=experiment_v2" \
              "runtime.enable_rendering=false"
--config (string, required): Path to the YAML configuration file
--overrides (string...): Hydra-style configuration overrides

Example Workflows

# Use local services with debugging
python -m alpasim_wizard \
  --config configs/local_dev.yaml \
  --overrides "wizard.run_method=none" \
              "wizard.debug_flags.use_localhost=true"

# Manually start services
cd ${log_dir}
./run.sh

Environment Variables

The wizard respects these environment variables:
SLURM_JOB_ID (string): Automatically set by SLURM (used for job tracking)
SLURM_JOB_ACCOUNT (string): SLURM account for the GPU partition (if not overridden in config)

CLI Commands

alpasim_wizard

Main wizard command for configuration and deployment.
alpasim_wizard +deploy=local wizard.log_dir=$PWD/tutorial
See Configuration section above for full usage.

alpasim_check_config

Validates wizard configuration without running simulation. Useful for quick config checks on login nodes.
alpasim_check_config --config wizard-config.yaml
This command:
  • Validates YAML syntax and schema
  • Checks NRE version compatibility
  • Verifies scene availability
  • Validates service configurations
  • Does not launch any services
Use alpasim_check_config before submitting large batch jobs to catch configuration errors early.
