VINS-Fusion is an optimization-based multi-sensor state estimator that achieves accurate self-localization for autonomous applications such as drones, ground vehicles, and AR/VR systems. It extends VINS-Mono with support for multiple visual-inertial sensor configurations and global GPS fusion.
Documentation Index
Fetch the complete documentation index at: https://mintlify.com/HKUST-Aerial-Robotics/Vins-Fusion/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
System requirements, ROS setup, and dependency installation before building
Installation
Clone the repository and build VINS-Fusion with catkin in your ROS workspace
EuRoC Example
Run VINS-Fusion on the EuRoC MAV dataset with mono, stereo, or stereo+IMU
KITTI Example
Evaluate on KITTI odometry benchmark and run GPS fusion on raw sequences
Configuration
Write a config file for your sensor setup and tune estimation parameters
Core Concepts
Understand the VIO pipeline, sliding-window optimization, and loop closure
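As a concrete example of the EuRoC workflow listed above, a typical stereo+IMU run looks roughly like the following. The launch file, node names, and config filename match the upstream repository's examples; the workspace path (`~/catkin_ws`) and dataset location are assumptions for illustration.

```shell
# Terminal 1: start RViz visualization (launch file shipped with VINS-Fusion)
roslaunch vins vins_rviz.launch

# Terminal 2: run the estimator with the EuRoC stereo+IMU config
rosrun vins vins_node \
    ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml

# Terminal 3 (optional): enable loop closure on the same config
rosrun loop_fusion loop_fusion_node \
    ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml

# Terminal 4: play back an EuRoC bag (path is a placeholder)
rosbag play ~/datasets/euroc/MH_01_easy.bag
```

Mono+IMU and stereo-only runs follow the same pattern with a different config file from the same `config/euroc` directory.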
Key features
Multiple sensor modes
Supports monocular camera + IMU, stereo cameras + IMU, and stereo-only configurations from a single codebase
Online calibration
Estimates camera-IMU spatial and temporal offsets online — no perfect pre-calibration required
Visual loop closure
DBoW2-based loop detection corrects drift over long trajectories using a persistent pose graph
GPS global fusion
Fuses VIO odometry with GPS measurements for globally consistent position estimates
Quick start
Install prerequisites
Install Ubuntu 16.04 or 18.04, ROS Kinetic or Melodic, and the Ceres Solver. See Prerequisites for full instructions.
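With prerequisites in place, building follows the standard catkin workflow described on the Installation page. The sketch below assumes a catkin workspace at `~/catkin_ws`; adjust the path to your setup.

```shell
# Clone VINS-Fusion into the workspace's source directory
cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git

# Build the workspace and source the resulting environment
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash
```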
Run on your own device
Write a config file for your cameras and IMU — see Custom Device Setup for guidance on calibration and parameter tuning.
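A config file ties your topics, calibration, and noise parameters together. The fragment below is a minimal sketch modeled on the example configs shipped in the repository's `config` directory; key names and values should be checked against those examples for your sensor mode, and the topic names and calibration filenames here are placeholders.

```yaml
# Sensor mode: stereo cameras + IMU (sketch; verify keys against shipped examples)
imu: 1
num_of_cam: 2

imu_topic: "/imu0"                  # placeholder topic names
image0_topic: "/cam0/image_raw"
image1_topic: "/cam1/image_raw"

cam0_calib: "cam0_pinhole.yaml"     # placeholder per-camera intrinsics files
cam1_calib: "cam1_pinhole.yaml"

estimate_extrinsic: 1               # 1: refine camera-IMU extrinsics online from an initial guess
estimate_td: 1                      # 1: estimate camera-IMU time offset online
td: 0.0                             # initial time offset (seconds)
```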
VINS-Fusion is a research system. Hardware quality significantly affects accuracy. For best results, use global shutter cameras with hardware-synchronized IMU.