Docker lets you run OpenVINS in a fully isolated environment with all dependencies pre-installed, without touching your host system. The OpenVINS repository ships Dockerfiles for every supported ROS version and Ubuntu combination. Your local workspace and dataset directories are mounted into the container as bind mounts, so edits made on the host are immediately reflected inside the container and vice versa. This page walks through installing Docker, building an OpenVINS image, launching the container with GUI and GPU support, and optionally wiring up JetBrains CLion for remote development.
Install Docker
The instructions below are for Linux (Ubuntu). For other platforms, see the official Get Docker guide and the ROS and Docker getting started guide.
Install Docker Engine
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
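To confirm the Engine is installed and the daemon is running, you can launch Docker's standard test image:

```shell
# Pulls a tiny test image and prints a "Hello from Docker!" message
sudo docker run hello-world
```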
Install NVIDIA Container Toolkit (optional, for GPU access)
Skip this step if you do not have an NVIDIA GPU.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
| sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi # verify install
Allow X11 connections from containers
Run this once per host session (before launching any container with a GUI):
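A minimal way to do this, matching the `xhost + &> /dev/null` line used in the launch helper alias later in this page:

```shell
# Disable X11 access control so containers sharing /tmp/.X11-unix can render
xhost +
```

Note that `xhost +` allows any client to connect; `xhost +local:docker` is a tighter alternative if you only need local container access.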
Build an OpenVINS Docker image
OpenVINS provides separate Dockerfiles for each ROS/Ubuntu combination. Choose the one that matches your target:
| Dockerfile | ROS version | Ubuntu |
|---|---|---|
| Dockerfile_ros1_20_04 | ROS 1 Noetic | 20.04 |
| Dockerfile_ros1_18_04 | ROS 1 Melodic | 18.04 |
| Dockerfile_ros2_22_04 | ROS 2 | 22.04 |
Create a workspace and clone OpenVINS
mkdir -p ~/workspace/catkin_ws_ov/src
cd ~/workspace/catkin_ws_ov/src
git clone https://github.com/rpng/open_vins.git
cd open_vins
Build the Docker image
Set VERSION to whichever Dockerfile you want to use:
export VERSION=ros1_20_04 # ros1_20_04, ros1_18_04, ros2_22_04, etc.
docker build -t ov_$VERSION -f Dockerfile_$VERSION .
The Dockerfile installs all system dependencies but does not build the OpenVINS workspace — you do that inside the container.
If the image build fails or you want to start fresh, remove the existing image first:
docker image list
docker image rm ov_ros1_20_04 --force
Set up the launch helper alias
You must use absolute paths for the workspace and dataset directories. Relative paths will not be resolved correctly by Docker bind mounts.
Add the following to your ~/.bashrc. It creates an ov_docker alias that launches the container with GUI passthrough, GPU access, and your workspace and dataset folders mounted.
xhost + &> /dev/null
export DOCKER_CATKINWS=/home/username/workspace/catkin_ws_ov
export DOCKER_DATASETS=/home/username/datasets
alias ov_docker="docker run -it --net=host --gpus all \
--env=\"NVIDIA_DRIVER_CAPABILITIES=all\" --env=\"DISPLAY\" \
--env=\"QT_X11_NO_MITSHM=1\" --volume=\"/tmp/.X11-unix:/tmp/.X11-unix:rw\" \
--mount type=bind,source=$DOCKER_CATKINWS,target=/catkin_ws \
--mount type=bind,source=$DOCKER_DATASETS,target=/datasets \$1"
Then reload your shell:
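Reloading is just sourcing the file in your current shell:

```shell
# Re-read ~/.bashrc so the ov_docker alias and exports take effect
source ~/.bashrc
```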
Replace /home/username/workspace/catkin_ws_ov and /home/username/datasets with the actual absolute paths on your machine.
Build and run OpenVINS in the container
Open separate terminals on the host and use ov_docker to launch each process:

# Terminal 1: ROS master
ov_docker ov_ros1_20_04 roscore
# Terminal 2: RViz
ov_docker ov_ros1_20_04 rosrun rviz rviz -d /catkin_ws/src/open_vins/ov_msckf/launch/display.rviz
To get a bash shell inside the container for building and launching:
ov_docker ov_ros1_20_04 bash
Once inside:
cd catkin_ws
catkin build
source devel/setup.bash
# Run the simulation example
roslaunch ov_msckf simulation.launch
# Or plot a trajectory
rosrun ov_eval plot_trajectories none src/open_vins/ov_data/sim/udel_gore.txt
For ROS 2, get a bash shell inside the container:
ov_docker ov_ros2_22_04 bash
Once inside:
cd catkin_ws
colcon build --event-handlers console_cohesion+
source install/setup.bash
# Run the simulation example
ros2 run ov_msckf run_simulation src/open_vins/config/rpng_sim/estimator_config.yaml
# Or plot a trajectory
ros2 run ov_eval plot_trajectories none src/open_vins/ov_data/sim/udel_gore.txt
Running inside Docker may not be real-time on all machines. Use the serial playback nodes (e.g., run_serial_msckf) rather than the subscribe nodes when performance is a concern.
Verify GUI passthrough
To confirm that GUI applications can render on your host, you can run a quick RViz or Gazebo test against a stock ROS image:
# Launch an interactive bash shell and then run rviz
docker run -it --net=host --gpus all \
--env="NVIDIA_DRIVER_CAPABILITIES=all" \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
osrf/ros:noetic-desktop-full \
bash
# Inside the container:
# rviz
CLion remote development
JetBrains CLion can connect to a running container over SSH, letting you use the IDE’s indexing and debugger against the container’s compiler and libraries.
Start the container in detached mode with SSH exposed
export DOCKER_CATKINWS=/home/username/workspace/catkin_ws_ov
export DOCKER_DATASETS=/home/username/datasets
docker run -d --cap-add sys_ptrace -p 127.0.0.1:2222:22 \
--mount type=bind,source=$DOCKER_CATKINWS,target=/catkin_ws \
--mount type=bind,source=$DOCKER_DATASETS,target=/datasets \
--name clion_remote_env ov_ros1_20_04
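Before configuring CLion, it can help to confirm the container's SSH server is reachable from the host. This sketch assumes the image's sshd accepts the same user/password credentials listed in the toolchain setup below:

```shell
# Should prompt for the SSH password and drop into a shell in the container
ssh user@localhost -p 2222
```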
Add a remote toolchain in CLion
- Open Settings → Build, Execution, Deployment → Toolchains.
- Add a new entry of type Remote Host.
- Click the Credentials section and fill in:
  - Host: localhost
  - Port: 2222
  - Username: user
  - Password: password
  - CMake: /usr/local/bin/cmake
- Confirm that the detected CMake version is greater than 3.12.
- Create a CMake profile that uses this toolchain and set it as the active profile.
Set ROS environment variables in the CMake profile
Go to Settings → Build, Execution, Deployment → CMake → (your profile) → Environment and paste the following (adjust the ROS distro name as needed):

LD_PATH_LIB=/catkin_ws/devel/lib:/opt/ros/noetic/lib;PYTHON_EXECUTABLE=/usr/bin/python3;PYTHON_INCLUDE_DIR=/usr/include/python3.8;ROS_VERSION=1;CMAKE_PREFIX_PATH=/catkin_ws/devel:/opt/ros/noetic;LD_LIBRARY_PATH=/catkin_ws/devel/lib:/opt/ros/noetic/lib;PATH=/opt/ros/noetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin;PKG_CONFIG_PATH=/catkin_ws/devel/lib/pkgconfig:/opt/ros/noetic/lib/pkgconfig;PYTHONPATH=/opt/ros/noetic/lib/python3/dist-packages;ROSLISP_PACKAGE_DIRECTORIES=/catkin_ws/devel/share/common-lisp;ROS_PACKAGE_PATH=/catkin_ws/src/open_vins/ov_core:/catkin_ws/src/open_vins/ov_data:/catkin_ws/src/open_vins/ov_eval:/catkin_ws/src/open_vins/ov_msckf:/opt/ros/noetic/share
When you trigger a build in CLion, docker stats will show the clion_remote_env container maxing out the CPU as it compiles. See the JetBrains CLion Docker guide for full details.
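When you are done with remote development, you can stop and remove the container (using the clion_remote_env name from the docker run command above):

```shell
# Stop the detached container and delete it so the name can be reused
docker stop clion_remote_env
docker rm clion_remote_env
```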