
Feynman supports three compute environments for running research code and experiments: Docker for local isolated execution, Modal for serverless burst GPU workloads, and RunPod for persistent GPU pods with SSH access.

Docker

Docker runs research code inside isolated containers while Feynman stays on the host. The container receives the project files, runs the commands, and writes results to the mounted directory, where they are immediately visible on the host. When to use Docker:
  • Running untrusted code from a paper’s repository
  • Experiments that install packages or modify system state
  • Any time you need safe, isolated local execution
  • Replication workflows in /replicate or /autoresearch

Running commands in a container

For Python research code:
docker run --rm -v "$(pwd)":/workspace -w /workspace python:3.11 bash -c "
  pip install -r requirements.txt &&
  python train.py
"
For projects with a Dockerfile:
docker build -t feynman-experiment .
docker run --rm -v "$(pwd)/results":/workspace/results feynman-experiment
For GPU workloads (requires NVIDIA Container Toolkit):
docker run --rm --gpus all -v "$(pwd)":/workspace -w /workspace pytorch/pytorch:latest bash -c "
  pip install -r requirements.txt &&
  python train.py
"

Choosing a base image

| Research type | Base image |
| --- | --- |
| Python ML/DL | pytorch/pytorch:latest or tensorflow/tensorflow:latest-gpu |
| Python general | python:3.11 |
| Node.js | node:20 |
| R / statistics | rocker/r-ver:4 |
| Julia | julia:1.10 |
| Multi-language | ubuntu:24.04 with manual installs |
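For the multi-language row, a minimal sketch of the manual-install approach (the apt packages and script names are illustrative, not from the docs):
docker run --rm -v "$(pwd)":/workspace -w /workspace ubuntu:24.04 bash -c "
  apt-get update && apt-get install -y python3 r-base &&
  python3 analyze.py &&
  Rscript stats.R
"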

Persistent containers

For iterative experiments, create a named container rather than using --rm:
docker create --name my-experiment -v "$(pwd)":/workspace -w /workspace python:3.11 tail -f /dev/null
docker start my-experiment
docker exec my-experiment bash -c "pip install -r requirements.txt"
docker exec my-experiment bash -c "python train.py"
This preserves installed packages across iterations. Clean up after:
docker stop my-experiment && docker rm my-experiment
Containers have network access by default. Add --network none for full isolation. The workspace is bind-mounted, so results written there are immediately available on the host.
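For example, to rerun an experiment fully offline (a sketch: pip install needs network access, so dependencies must already be baked into the image, e.g. the feynman-experiment image built above):
docker run --rm --network none -v "$(pwd)":/workspace -w /workspace feynman-experiment python train.py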

Modal

Modal provides serverless GPU compute. You write a decorated Python script and run it; there is no pod lifecycle to manage. Modal is the right choice for stateless burst workloads such as training runs, inference jobs, and benchmarks. When to use Modal:
  • Burst GPU jobs (training, inference, benchmarks)
  • Stateless work where no persistent state is needed between runs
  • Jobs where you want to avoid managing instance lifecycle

Setup

pip install modal
modal setup
Alternatively, set credentials as environment variables:
export MODAL_TOKEN_ID=your-token-id
export MODAL_TOKEN_SECRET=your-token-secret

Commands

| Command | Description |
| --- | --- |
| modal run script.py | Run a script on Modal (ephemeral) |
| modal run --detach script.py | Run detached in the background |
| modal deploy script.py | Deploy persistently |
| modal serve script.py | Serve with hot-reload (dev) |
| modal shell --gpu a100 | Interactive shell with GPU |
| modal app list | List deployed apps |

GPU types

T4, L4, A10G, L40S, A100, A100-80GB, H100, H200, B200

For multi-GPU jobs, append a count to the GPU type: "H100:4" requests 4× H100s.

Script pattern

import modal

app = modal.App("experiment")
image = modal.Image.debian_slim(python_version="3.11").pip_install("torch==2.8.0")

@app.function(gpu="A100", image=image, timeout=600)
def train():
    import torch
    # training code here

@app.local_entrypoint()
def main():
    train.remote()
Run it:
modal run script.py
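To request multiple GPUs, only the gpu argument changes. A sketch (the function body is illustrative):
@app.function(gpu="H100:4", image=image, timeout=600)
def train_multi_gpu():
    import torch
    # All four GPUs are visible to this single container.
    assert torch.cuda.device_count() == 4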

RunPod

RunPod provides persistent GPU pods with SSH access. It is suited for long-running experiments, large dataset processing, and multi-step work where you need to SSH in between iterations. When to use RunPod:
  • Long-running experiments that need persistent state
  • Large dataset processing
  • Multi-step work where SSH access between iterations is needed
  • Experiments that cannot fit into a stateless function model

Setup

brew install runpod/runpodctl/runpodctl
runpodctl config --apiKey=YOUR_KEY

Commands

| Command | Description |
| --- | --- |
| runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" --name experiment | Create a pod |
| runpodctl get pod | List all pods |
| runpodctl get pod <id> | Get details and connection info for a specific pod |
| runpodctl stop pod <id> | Stop a pod (preserves its volume) |
| runpodctl start pod <id> | Resume a stopped pod |
| runpodctl remove pod <id> | Terminate and delete a pod |
| runpodctl gpu list | List available GPU types and prices |
| runpodctl send <file> | Send a file (prints a one-time transfer code) |
| runpodctl receive <code> | Receive a sent file using its transfer code |
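A typical end-to-end lifecycle looks like this (the pod id abc123 is illustrative; read the real id from runpodctl get pod):
runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" \
  --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" \
  --name experiment
runpodctl get pod             # note the new pod's id
runpodctl stop pod abc123     # compute billing stops; the volume persists
runpodctl remove pod abc123   # delete the pod once results are retrieved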

SSH access

ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519
Get the IP and port from runpodctl get pod <id>. Pods must expose port 22/tcp.
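To pull results back to the host, scp works over the same connection (the remote path is illustrative):
scp -P <PORT> -i ~/.ssh/id_ed25519 root@<IP>:/workspace/results.tar.gz .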

Available GPU types

NVIDIA GeForce RTX 4090, NVIDIA RTX A6000, NVIDIA A40, NVIDIA A100 80GB PCIe, NVIDIA H100 80GB HBM3
Always stop or remove RunPod pods after experiments. Running pods continue to incur charges even when idle.

Choosing between Modal and RunPod

| | Modal | RunPod |
| --- | --- | --- |
| Best for | Burst GPU, stateless jobs | Persistent, SSH, long-running |
| State | Ephemeral (no persistent state) | Persistent volume |
| Access | Python function calls | SSH |
| Lifecycle | Managed automatically | You manage start/stop |
| Credentials | MODAL_TOKEN_ID, MODAL_TOKEN_SECRET | RUNPOD_API_KEY |
