The REBUILT robot uses two complementary vision approaches: Limelight 4 cameras that report AprilTag-based pose estimates via the MegaTag protocol, and a PhotonVision-based simulation layer (VisionSystem.java) that validates camera logic before hardware is available. Both feed pose corrections into WPILib's swerve pose estimator so the robot knows where it is on the field at all times.
Hardware
The robot mounts Limelight 4 cameras at fixed positions defined in the robot CAD. Each Limelight has its own IP address and pipeline configuration managed through the Limelight web interface. The physical offset of each camera from the robot center (translation and rotation) must be entered in the Limelight web interface or set in code to produce accurate field-relative pose estimates.
MegaTag1 vs MegaTag2
Limelight exposes two MegaTag pose formats:
| Format | Method | Output type | Notes |
|---|---|---|---|
| MegaTag1 | getMegaTag1_Pose3d() | Pose3d | Raw 3D pose; rotation can be ambiguous with fewer tags visible |
| MegaTag2 | getMegaTag2_Pose2d() | Pose2d | Uses the Limelight 4’s onboard gyro for rotation; preferred for odometry fusion |
Adding vision measurements
Vision pose corrections are fed into the swerve pose estimator inside a subsystem's periodic() method via WPILib's addVisionMeasurement API.
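As a rough sketch of the fusion step (not the team's actual code), each Limelight frame is typically gated and weighted before being handed to the estimator. The real WPILib call is SwerveDrivePoseEstimator.addVisionMeasurement(pose, timestampSeconds, stdDevs); the helper below and its scaling constants are hypothetical, shown as plain Java so the weighting logic is clear:

```java
// Hypothetical helper: scale vision trust by tag distance and count.
// Real robot code would feed the result into something like:
//   poseEstimator.addVisionMeasurement(visionPose2d, timestampSeconds,
//       VecBuilder.fill(xyStdDev, xyStdDev, thetaStdDev));
public class VisionStdDevs {
    /** Returns {xyStdDevMeters, thetaStdDevRadians} for one vision frame. */
    public static double[] forFrame(double avgTagDistanceMeters, int tagCount) {
        if (tagCount <= 0) {
            // No tags visible: effectively reject the measurement.
            return new double[] {Double.MAX_VALUE, Double.MAX_VALUE};
        }
        // Trust grows with more tags and shrinks with distance
        // (the 0.05 constant is an illustrative guess).
        double xy = 0.05 * avgTagDistanceMeters * avgTagDistanceMeters / tagCount;
        // MegaTag2 rotation already comes from the Limelight 4's gyro,
        // so keep the heading std dev huge and let the robot gyro dominate.
        double theta = 9999.0;
        return new double[] {xy, theta};
    }

    public static void main(String[] args) {
        double[] near = forFrame(1.5, 2); // two tags, 1.5 m away
        double[] far = forFrame(4.0, 1);  // one tag, 4 m away
        // Closer view with more tags should be trusted more (smaller std dev).
        System.out.println(near[0] < far[0]);
    }
}
```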
AprilTag field layout
The simulation and localization code both reference the 2026 REBUILT field layout, loaded by name from the WPILib resource bundle. Using k2026RebuiltWelded ensures the tag positions in simulation and on the real field match exactly.
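A minimal loading sketch, assuming the WPILib AprilTagFields enum exposes the 2026 layout under the k2026RebuiltWelded name the docs reference:

```java
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;

public class FieldLayoutConfig {
    // One shared layout so simulation and localization agree on tag poses.
    public static final AprilTagFieldLayout LAYOUT =
        AprilTagFieldLayout.loadField(AprilTagFields.k2026RebuiltWelded);
}
```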
PhotonVision camera simulation
VisionSystem.java creates a VisionSystemSim instance and registers simulated cameras with calibrated noise properties, allowing you to run vision logic in simulation without physical hardware.
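The registration might look roughly like the following sketch, built from PhotonVision's simulation API (the camera name, resolution, FOV, noise values, and mounting transform below are all illustrative guesses, not values from VisionSystem.java):

```java
import org.photonvision.PhotonCamera;
import org.photonvision.simulation.PhotonCameraSim;
import org.photonvision.simulation.SimCameraProperties;
import org.photonvision.simulation.VisionSystemSim;
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class VisionSimSetup {
    public static VisionSystemSim create() {
        VisionSystemSim visionSim = new VisionSystemSim("main");
        // Give the sim the same tag layout the real robot localizes against.
        visionSim.addAprilTags(
            AprilTagFieldLayout.loadField(AprilTagFields.k2026RebuiltWelded));

        // Calibrated noise model (all values are illustrative).
        SimCameraProperties props = new SimCameraProperties();
        props.setCalibration(960, 720, Rotation2d.fromDegrees(90)); // res + diagonal FOV
        props.setCalibError(0.35, 0.10); // avg / std dev pixel error
        props.setFPS(25);
        props.setAvgLatencyMs(35);
        props.setLatencyStdDevMs(5);

        PhotonCamera camera = new PhotonCamera("frontCam"); // hypothetical name
        PhotonCameraSim cameraSim = new PhotonCameraSim(camera, props);

        // Hypothetical mounting transform: 0.3 m forward, 0.2 m up, no rotation.
        Transform3d robotToCamera =
            new Transform3d(new Translation3d(0.3, 0.0, 0.2), new Rotation3d());
        visionSim.addCamera(cameraSim, robotToCamera);
        return visionSim;
    }
}
```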
Enable the PhotonCameraSim lines in VisionSystem.java to activate simulated cameras when testing vision pipelines off-robot.
Pose alignment types
Tag area alignment
Coarse alignment using the detected area and position of an AprilTag in the camera frame. Useful for driving directly toward a known target without a full pose estimate.
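A hypothetical proportional controller for this mode (the gains and target area are illustrative, not from the robot code): tx, the tag's horizontal offset in degrees, steers the robot to center the tag, while ta, the tag's area as a percentage of the image, acts as a rough distance proxy:

```java
// Hypothetical tag-area alignment: drive until the tag fills a target
// fraction of the image, steering to keep it centered horizontally.
public class TagAreaAlign {
    static final double STEER_KP = 0.02;   // output per degree of tx (guess)
    static final double DRIVE_KP = 0.15;   // output per percent of area error (guess)
    static final double TARGET_AREA = 4.0; // percent of image when close enough

    /** Returns {forward, turn} commands in [-1, 1] from Limelight tx/ta. */
    public static double[] compute(double txDegrees, double taPercent) {
        double turn = clamp(STEER_KP * txDegrees);
        double forward = clamp(DRIVE_KP * (TARGET_AREA - taPercent));
        return new double[] {forward, turn};
    }

    static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }

    public static void main(String[] args) {
        // Tag right of center and still far away: drive forward, turn right.
        double[] cmd = compute(10.0, 1.0);
        System.out.println(cmd[0] > 0 && cmd[1] > 0);
    }
}
```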
MegaTag pose alignment
Full field-relative pose correction from the Limelight MegaTag2 estimate. This is the primary mechanism used in VisionSystem and feeds addVisionMeasurement on the swerve subsystem.
QuestNav (planned)
Swerve and odometry
How the swerve subsystem maintains and exposes the robot pose estimator.
Simulation
How PhotonVision camera simulation integrates with the broader RobotSim setup.
