Run GR00T in a Docker container with all dependencies pre-configured, including CUDA support, PyTorch, PyTorch3D, and the complete GR00T codebase.
## Prerequisites

Install Docker version 20.10 or later:

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```

Configure Docker to run without `sudo`:

```bash
sudo usermod -aG docker $USER
newgrp docker
```

Install the NVIDIA Container Toolkit for GPU access:

```bash
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

Verify GPU access from a container:

```bash
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```
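Before building, a quick sanity check can catch a missing tool up front instead of halfway through the build. The `preflight` helper below is a sketch, not part of the GR00T repo:

```bash
# Sketch of a preflight check (not part of the GR00T repo): report any
# command-line tools from the argument list that are not installed.
preflight() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Typical check for this guide:
# preflight docker nvidia-container-toolkit || exit 1
```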
## Building the image

The Docker image is based on NVIDIA’s PyTorch container (`nvcr.io/nvidia/pytorch:25.04-py3`) and includes all GR00T dependencies.

### Build command

From the project root:

```bash
cd docker
sudo bash build.sh
```

Make sure you are using a bash environment. The build takes several minutes and requires several GB of disk space.
### Build output

The build creates an image named `gr00t-dev` with:

- NVIDIA PyTorch 25.04 base
- CUDA 12.x support
- Python 3.10
- PyTorch3D
- All dependencies from `pyproject.toml`
- GR00T codebase at `/workspace/gr00t/`
### Rebuild with no cache

Force a clean rebuild:

```bash
cd docker
sudo bash build.sh --no-cache
```
## Running the container

### Interactive shell (baked code)

Run with the code baked into the image:

```bash
docker run -it --rm --gpus all gr00t-dev /bin/bash
```

This starts an interactive shell in `/workspace/gr00t/` with all dependencies ready.
### Development mode (mounted code)

Mount your local codebase for live editing:

```bash
cd docker  # Must run from the docker/ directory
docker run -it --rm --gpus all \
  -v $(pwd)/..:/workspace/gr00t \
  gr00t-dev /bin/bash
```

Changes to your local GR00T code are reflected immediately inside the container, which makes this mode ideal for development and debugging.
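One caveat: an unquoted `$(pwd)` breaks when the checkout path contains spaces. A safer variant of the same mount resolves and quotes the path first:

```bash
# Resolve the repo root to an absolute path and quote it, so the bind mount
# also works when the checkout lives under a path containing spaces.
repo_root="$(cd "$(pwd)/.." && pwd)"
echo "mounting: $repo_root"

# docker run -it --rm --gpus all \
#   -v "$repo_root:/workspace/gr00t" \
#   gr00t-dev /bin/bash
```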
### Custom working directory

Start in a specific directory:

```bash
docker run -it --rm --gpus all \
  -w /workspace/gr00t/examples/LIBERO \
  gr00t-dev /bin/bash
```
## Running inference in Docker

### Start policy server

Run the GR00T server inside the container:

```bash
docker run -it --rm --gpus all \
  -p 5555:5555 \
  gr00t-dev /bin/bash -c "
    uv run python gr00t/eval/run_gr00t_server.py \
      --embodiment-tag GR1 \
      --model-path nvidia/GR00T-N1.6-3B \
      --host 0.0.0.0 \
      --port 5555
  "
```

The `-p 5555:5555` flag publishes the server port to the host machine.
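The server needs time to load the model, so a client launched immediately after `docker run` may fail to connect. A small helper like the following (a sketch, not part of the repo) waits until the port accepts TCP connections:

```bash
# Hypothetical helper: block until a TCP port accepts connections, polling
# once per second, so clients don't race the container start.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30}
  local i
  for ((i = 0; i < retries; i++)); do
    # bash's /dev/tcp pseudo-device attempts a TCP connect
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage after starting the container:
# wait_for_port localhost 5555 && echo "server is up"
```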
### Run inference script

Execute standalone inference:

```bash
docker run -it --rm --gpus all \
  -v /path/to/data:/data \
  gr00t-dev /bin/bash -c "
    uv run python scripts/deployment/standalone_inference_script.py \
      --model-path nvidia/GR00T-N1.6-3B \
      --dataset-path /data/gr1.PickNPlace \
      --embodiment-tag GR1 \
      --inference-mode pytorch
  "
```
### Mount datasets

Mount external data directories:

```bash
docker run -it --rm --gpus all \
  -v /path/to/datasets:/workspace/datasets \
  -v /path/to/checkpoints:/workspace/checkpoints \
  gr00t-dev /bin/bash
```
## Training in Docker

### Mount output directory

```bash
docker run -it --rm --gpus all \
  -v /path/to/datasets:/workspace/datasets \
  -v /path/to/outputs:/workspace/outputs \
  gr00t-dev /bin/bash -c "
    CUDA_VISIBLE_DEVICES=0 uv run python gr00t/experiment/launch_finetune.py \
      --base-model-path nvidia/GR00T-N1.6-3B \
      --dataset-path /workspace/datasets/my_dataset \
      --embodiment-tag GR1 \
      --output-dir /workspace/outputs/run_1 \
      --max-steps 2000
  "
```
### Multi-GPU training

Specify GPU devices:

```bash
docker run -it --rm --gpus '"device=0,1"' \
  gr00t-dev /bin/bash -c "
    export NUM_GPUS=2
    CUDA_VISIBLE_DEVICES=0,1 uv run python gr00t/experiment/launch_finetune.py \
      --num-gpus 2 \
      --base-model-path nvidia/GR00T-N1.6-3B \
      --dataset-path /workspace/datasets/my_dataset
  "
```
## Docker Compose

Create a `docker-compose.yml` for easier container management:

```yaml
version: '3.8'
services:
  gr00t-server:
    image: gr00t-dev
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    ports:
      - "5555:5555"
    volumes:
      - ./datasets:/workspace/datasets
      - ./checkpoints:/workspace/checkpoints
    command: >
      uv run python gr00t/eval/run_gr00t_server.py
      --embodiment-tag GR1
      --model-path /workspace/checkpoints/gr00t-n1.6-3b
      --host 0.0.0.0
      --port 5555
```

Launch with:

```bash
docker compose up -d
```
## Advanced usage

### Persistent container

Keep a container running in the background:

```bash
docker run -d --name gr00t-dev --gpus all \
  -v $(pwd)/..:/workspace/gr00t \
  gr00t-dev tail -f /dev/null
```

Execute commands in the running container:

```bash
docker exec -it gr00t-dev bash
docker exec gr00t-dev uv run python gr00t/eval/run_gr00t_server.py --embodiment-tag GR1
```

Stop and remove:

```bash
docker stop gr00t-dev
docker rm gr00t-dev
```
### Environment variables

Pass environment variables with `-e`:

```bash
docker run -it --rm --gpus all \
  -e WANDB_API_KEY=your_key \
  -e CUDA_VISIBLE_DEVICES=0 \
  gr00t-dev /bin/bash
```
### Network configuration

Use host networking for minimal latency:

```bash
docker run -it --rm --gpus all \
  --network host \
  gr00t-dev /bin/bash
```

Host networking removes network isolation between the container and the host, so only use it on trusted networks.
## Troubleshooting

### GPU not detected

Verify the NVIDIA Container Toolkit is installed:

```bash
nvidia-container-toolkit --version
```

Restart the Docker daemon:

```bash
sudo systemctl restart docker
```

Test GPU access:

```bash
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` fails inside the container, check:

- NVIDIA drivers on the host: `nvidia-smi`
- Docker daemon configuration: `cat /etc/docker/daemon.json`

The daemon configuration should include:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
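A syntax error in `daemon.json` prevents the Docker daemon from starting at all, so it is worth validating the file before restarting. This sketch assumes `python3` is available on the host:

```bash
# Check that a daemon.json edit left the file as valid JSON before
# restarting Docker; json.tool exits nonzero on a parse error.
validate_daemon_json() {
  python3 -m json.tool "$1" >/dev/null
}

# validate_daemon_json /etc/docker/daemon.json && sudo systemctl restart docker
```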
### Permission errors

Add your user to the docker group:

```bash
sudo usermod -aG docker $USER
newgrp docker
```

Or run with sudo:

```bash
sudo docker run -it --rm --gpus all gr00t-dev /bin/bash
```
### Build failures

Check disk space:

```bash
df -h
```

Clean the Docker build cache:

```bash
docker system prune -a
```

Rebuild without cache:

```bash
cd docker
sudo bash build.sh --no-cache
```
### Out of memory during build

On Docker Desktop, increase the memory and CPU limits under Settings → Resources. Note that `/etc/docker/daemon.json` does not accept `memory` or `cpus` keys; on a Linux host, builds use host resources directly. A typical `daemon.json` for GPU workloads looks like:

```json
{
  "default-runtime": "nvidia"
}
```

Restart Docker after changing the daemon configuration:

```bash
sudo systemctl restart docker
```

Check container logs:

```bash
docker logs <container_id>
```

Run with shell command tracing (`-x` prints each command as it executes) for verbose output:

```bash
docker run -it --rm --gpus all gr00t-dev /bin/bash -x
```
## Best practices

### Use .dockerignore

Create a `.dockerignore` to exclude unnecessary files from the build context:

```
__pycache__/
*.pyc
.git/
.venv/
outputs/
checkpoints/
```

### Tag images by version

```bash
docker build -t gr00t-dev:v1.6 .
docker build -t gr00t-dev:latest .
```
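Version tags can also be derived automatically from the current git commit, so every build is traceable. A hypothetical helper (falls back to `dev` outside a git checkout):

```bash
# Hypothetical convenience: build an image tag from the short git commit
# hash, defaulting to "dev" when git or a repository is unavailable.
image_tag() {
  local sha
  sha=$(git rev-parse --short HEAD 2>/dev/null) || sha=dev
  echo "gr00t-dev:$sha"
}

# docker build -t "$(image_tag)" .
```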
### Resource limits

Limit container resources:

```bash
docker run -it --rm --gpus all \
  --memory="32g" \
  --cpus="8" \
  gr00t-dev /bin/bash
```

### Clean up unused images

Remove old images:

```bash
docker image prune -a
```
## Deployment scenarios

### Cloud deployment (AWS, GCP, Azure)

Push the image to a container registry:

```bash
# Tag for the registry
docker tag gr00t-dev:latest your-registry.com/gr00t-dev:latest

# Push
docker push your-registry.com/gr00t-dev:latest
```

Run on a cloud GPU instance:

```bash
docker pull your-registry.com/gr00t-dev:latest
docker run -it --rm --gpus all \
  -p 5555:5555 \
  your-registry.com/gr00t-dev:latest
```
### Kubernetes deployment

Create a deployment manifest (note that `apps/v1` Deployments require a `selector` and matching pod labels):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gr00t-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gr00t-server
  template:
    metadata:
      labels:
        app: gr00t-server
    spec:
      containers:
        - name: gr00t
          image: your-registry.com/gr00t-dev:latest
          resources:
            limits:
              nvidia.com/gpu: 1
          ports:
            - containerPort: 5555
```
### CI/CD integration

Use the image in automated testing:

```bash
# In a CI pipeline
docker build -t gr00t-dev:ci .
docker run --rm --gpus all gr00t-dev:ci \
  uv run pytest tests/
```