rfx is HuggingFace Hub-native. Push and pull policies and datasets like you push code to GitHub.

Quick Start

1. Login to HuggingFace

huggingface-cli login
Or set the token in your environment:
export HF_TOKEN=hf_...
2. Push a policy

rfx push runs/my-policy rfx-community/my-policy
Or from Python:
import rfx

url = rfx.push_policy("runs/my-policy", "my-org/my-policy")
print(f"Pushed to {url}")
3. Deploy from Hub

rfx deploy hf://rfx-community/go2-walk-v1 --robot go2

Push Policies to Hub

From CLI

# Push a saved policy
rfx push runs/so101-pick-v1 my-org/so101-pick-v1

# Create a private repo
rfx push runs/my-policy my-org/my-policy --private

From Python

import rfx

# Push a policy directory
url = rfx.push_policy(
    path="runs/so101-pick-v1",
    repo_id="my-org/so101-pick-v1",
    private=False,
)

print(f"Model uploaded to {url}")
# https://huggingface.co/my-org/so101-pick-v1
The function:
  1. Creates the repo if it doesn’t exist
  2. Uploads all files in the policy directory
  3. Returns the HuggingFace Hub URL

What Gets Uploaded

All files in the policy directory:
runs/so101-pick-v1/
├── rfx_config.json       # Architecture + robot + training metadata
├── model.safetensors     # Weights
├── normalizer.json       # Observation normalizer (if present)
└── README.md             # Optional model card (if present)
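For reference, a minimal rfx_config.json might look like the following. The top-level keys mirror the metadata that rfx.inspect_policy prints later on this page; the nested values (such as hidden_sizes) are illustrative, not the actual schema:

```json
{
  "policy_type": "mlp",
  "robot_config": {"name": "so101"},
  "training": {"total_steps": 100000},
  "architecture": {"hidden_sizes": [256, 256]}
}
```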

Load Policies from Hub

Deploy Directly

The simplest way is to deploy straight from the Hub:
rfx deploy hf://rfx-community/go2-walk-v1 --robot go2
Or from Python:
import rfx

stats = rfx.deploy(
    "hf://rfx-community/go2-walk-v1",
    robot="go2",
    duration=60.0,
)

Load for Inspection

Load without deploying:
import rfx

# Load from Hub
loaded = rfx.load_policy("hf://rfx-community/go2-walk-v1")

print(f"Policy type: {loaded.policy_type}")
print(f"Robot: {loaded.robot_config.name}")
print(f"Training info: {loaded.training_info}")

# Use the policy
obs = robot.observe()
action = loaded(obs)

Inspect Without Loading Weights

Quickly check metadata without downloading model weights:
import rfx

config = rfx.inspect_policy("hf://rfx-community/go2-walk-v1")

print(config)
# {
#   "policy_type": "mlp",
#   "robot_config": {...},
#   "training": {"total_steps": 100000},
#   "architecture": {...}
# }

Push Datasets to Hub

From Collection

Push datasets during collection:
rfx record --robot so101 --repo-id my-org/demos --episodes 10 --push
Or from Python:
import rfx.collection

dataset = rfx.collection.collect(
    "so101",
    "my-org/pick-place-demos",
    episodes=50,
    push_to_hub=True,  # Auto-push after collection
)

Push Existing Datasets

from rfx.collection import open_dataset, push

# Open local dataset
dataset = open_dataset("my-org/demos", root="datasets")

# Push to Hub
url = push(dataset, repo_id="my-org/demos")
print(f"Dataset uploaded to {url}")
Or use the dataset’s push() method:
from rfx.collection import Dataset

dataset = Dataset.open("my-org/demos", root="datasets")
dataset.push()  # Uses the repo_id from the dataset

# Or push to a different repo
dataset.push("my-org/new-repo-name")

Pull Datasets from Hub

Download for Training

# Using huggingface-cli
huggingface-cli download my-org/demos --repo-type dataset --local-dir datasets/my-org/demos

Pull with Python API

from rfx.collection import pull, from_hub

# Pull dataset from Hub
dataset = pull("my-org/demos", root="datasets")
print(f"Downloaded {dataset.num_episodes} episodes")

# Alias: from_hub
dataset = from_hub("my-org/demos", root="datasets")

Use in LeRobot Training

LeRobot can load datasets directly from Hub:
python -m lerobot.scripts.train \
  policy=act \
  dataset_repo_id=my-org/demos \
  training.num_epochs=500

Repository Structure

Policy Repos

A policy repo on HuggingFace Hub:
my-org/so101-pick-v1/
├── rfx_config.json       # Required: policy metadata
├── model.safetensors     # Required: weights
├── normalizer.json       # Optional: observation normalizer
├── README.md             # Optional: model card
└── .gitattributes        # Auto-generated by HF

Dataset Repos

A dataset repo on HuggingFace Hub:
my-org/pick-place-demos/
├── data/
│   ├── chunk-000/
│   │   ├── observation.state.parquet
│   │   ├── action.parquet
│   │   └── observation.images.cam0/
│   │       ├── episode_000000.mp4
│   │       └── ...
│   └── ...
├── meta/
│   └── info.json
├── videos/              # Decoded videos
└── README.md            # Dataset card
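The meta/info.json file follows the LeRobot dataset format. A trimmed, illustrative example (field names are from the LeRobot v2 layout; the values here are hypothetical):

```json
{
  "codebase_version": "v2.0",
  "robot_type": "so101",
  "fps": 30,
  "total_episodes": 50,
  "total_frames": 15000
}
```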

Model Cards

Add a README.md to your policy directory before pushing:
---
tags:
- rfx
- robotics
- so101
- imitation-learning
license: mit
---

# SO-101 Pick and Place Policy

Trained with ACT on 50 demonstrations of pick-and-place tasks.

## Usage

```bash
rfx deploy hf://my-org/so101-pick-v1 --robot so101

```

## Training Details

- Architecture: ACT
- Episodes: 50
- Training steps: 50,000
- Success rate: 85%

## Citation

```bibtex
@misc{so101-pick-v1,
  author = {Your Name},
  title = {SO-101 Pick and Place Policy},
  year = {2024},
}
```

This README will appear on the HuggingFace model page.

Private Repositories

Create private repos for proprietary policies:
import rfx

url = rfx.push_policy(
    "runs/my-secret-policy",
    "my-org/my-secret-policy",
    private=True,  # Create a private repo
)
Or from the CLI:
rfx push runs/my-policy my-org/my-policy --private
Private models can still be loaded with your HF token:
export HF_TOKEN=hf_...
rfx deploy hf://my-org/my-secret-policy --robot so101

Versioning and Revisions

Use HuggingFace’s revision system for versioning:
import rfx

# Load a specific revision
loaded = rfx.load_policy(
    "hf://my-org/[email protected]"  # Revision/tag
)

# Load from a branch
loaded = rfx.load_policy(
    "hf://my-org/my-policy@experimental"
)

Create Tags

cd /tmp
git clone https://huggingface.co/my-org/my-policy
cd my-policy
git tag v1.0.0
git push origin v1.0.0

Caching and Offline Use

HuggingFace automatically caches downloaded models:
import rfx

# First call: downloads from Hub
loaded = rfx.load_policy("hf://rfx-community/go2-walk-v1")

# Second call: loads from cache (instant)
loaded = rfx.load_policy("hf://rfx-community/go2-walk-v1")
Cache location: ~/.cache/huggingface/hub/

Clear Cache

# Clear all HuggingFace cache
rm -rf ~/.cache/huggingface/hub/

# Clear specific model
rm -rf ~/.cache/huggingface/hub/models--rfx-community--go2-walk-v1
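The models--{org}--{name} folder name follows huggingface_hub's cache layout, with slashes in the repo ID replaced by "--". A small helper to build the expected path, assuming the default cache location:

```python
from pathlib import Path


def model_cache_dir(repo_id: str, cache_home: str = "~/.cache/huggingface/hub") -> Path:
    """Return the expected cache folder for a model repo.

    Follows the hub cache naming scheme: models--{org}--{name}.
    """
    folder = "models--" + repo_id.replace("/", "--")
    return Path(cache_home).expanduser() / folder


print(model_cache_dir("rfx-community/go2-walk-v1").name)
# models--rfx-community--go2-walk-v1
```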

Community Models

Explore community-shared models:

Example Community Models

# Go2 walking policy
rfx deploy hf://rfx-community/go2-walk-v1 --robot go2

# SO-101 pick-place policy  
rfx deploy hf://rfx-community/so101-pick-v1 --robot so101

# G1 locomotion policy
rfx deploy hf://rfx-community/g1-walk-v1 --robot g1

Best Practices

Use descriptive repo names: {robot}-{task}-{version} (e.g., so101-pick-v1)
Include a README.md with training details, success rates, and usage instructions.
Use git tags for versions: v1.0.0, v2.0.0. Never delete old versions.
Always save policies with robot_config=... for zero-config deployment.
Add tags like rfx, robotics, robot type, and task type for discoverability.
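The {robot}-{task}-{version} naming convention can be checked mechanically before pushing. A hypothetical validator sketch (the pattern is illustrative, not enforced by rfx):

```python
import re

# {robot}-{task}-{version}, e.g. so101-pick-v1 (pattern is illustrative)
NAME_PATTERN = re.compile(r"^[a-z0-9]+-[a-z0-9-]+-v\d+$")


def follows_convention(name: str) -> bool:
    """Return True if a repo name matches the {robot}-{task}-{version} scheme."""
    return bool(NAME_PATTERN.match(name))


print(follows_convention("so101-pick-v1"))  # True
print(follows_convention("MyPolicy"))       # False
```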

Low-Level API

For advanced use cases, use HuggingFace Hub’s API directly:
from huggingface_hub import HfApi, snapshot_download

api = HfApi()

# Create repo
api.create_repo("my-org/my-policy", exist_ok=True, private=False)

# Upload specific files
api.upload_file(
    path_or_fileobj="runs/my-policy/rfx_config.json",
    path_in_repo="rfx_config.json",
    repo_id="my-org/my-policy",
)

# Upload entire folder
api.upload_folder(
    folder_path="runs/my-policy",
    repo_id="my-org/my-policy",
)

# Download model
path = snapshot_download("my-org/my-policy")
print(f"Downloaded to {path}")

Next Steps

Deploy Policy

Deploy policies from HuggingFace Hub to real hardware

Train Policy

Train policies using datasets from Hub
