AprilTag (fiducial) tracking lets you compute your robot’s absolute position on the FRC field. Limelight detects the 3D pose of one or more AprilTags and solves for where the robot must be to have seen them from that angle and distance. There are two approaches to AprilTag-based localization:
  • MegaTag1 — multi-tag pose estimation using only vision data. Works with any number of tags but can suffer from pose ambiguity when only one tag is visible.
  • MegaTag2 — gyro-fused pose estimation. Uses your robot’s gyroscope to resolve pose ambiguity. Recommended for most teams. See the MegaTag2 guide for full setup details.

Getting pose estimates

MegaTag1 uses only vision data to compute the robot pose. It is more reliable when two or more tags are visible simultaneously.
// Blue alliance frame (recommended)
LimelightHelpers.PoseEstimate mt1 =
    LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");

// Red alliance frame
LimelightHelpers.PoseEstimate mt1Red =
    LimelightHelpers.getBotPoseEstimate_wpiRed("limelight");

The PoseEstimate object

Both methods return a PoseEstimate with the following fields:
| Field | Type | Description |
|---|---|---|
| pose | Pose2d | Robot pose in field space |
| timestampSeconds | double | Timestamp adjusted for latency (use with addVisionMeasurement) |
| latency | double | Total pipeline + capture latency in milliseconds |
| tagCount | int | Number of tags used in this estimate |
| tagSpan | double | Distance between the outermost detected tags in meters |
| avgTagDist | double | Average distance to all detected tags in meters |
| avgTagArea | double | Average tag area as a percentage of the image |
| rawFiducials | RawFiducial[] | Per-tag details, including ambiguity |
| isMegaTag2 | boolean | Whether this estimate came from MegaTag2 |
Always use the WPILib Blue Alliance coordinate frame (the _wpiBlue methods). The Blue Alliance origin is the same regardless of which alliance you are on, which simplifies path planning and odometry. Whichever frame you choose, pass poses in that same frame to addVisionMeasurement().

Filtering pose estimates

Raw pose estimates should be filtered before being fed into the pose estimator. At minimum, check that the estimate is non-null, used at least one tag, and was taken at reasonable range:
LimelightHelpers.PoseEstimate mt1 =
    LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");

if (mt1 != null
        && mt1.tagCount >= 1
        && mt1.avgTagDist < 4.0   // meters
        && mt1.tagSpan > 0) {
    poseEstimator.addVisionMeasurement(
        mt1.pose,
        mt1.timestampSeconds
    );
}
| Condition | Recommended threshold | Reason |
|---|---|---|
| tagCount >= 1 | At least 1 tag | No tags means no valid estimate |
| avgTagDist < 4.0 | 4 meters | Accuracy drops sharply at long range |
| tagSpan > 0 | Any positive value | Guards against degenerate single-tag geometry |
When only one tag is visible, MegaTag1 can produce two equally valid but physically different poses (pose ambiguity). Check rawFiducials[i].ambiguity for each tag. If ambiguity > 0.1, the estimate is unreliable. Use MegaTag2 instead, which uses the gyro to eliminate this ambiguity.
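The single-tag ambiguity check described above can be sketched as plain logic. This is an illustrative helper, not part of LimelightHelpers: it works on a bare array of ambiguity values so the decision rule is easy to test, whereas in robot code you would read `mt1.rawFiducials[i].ambiguity` directly.

```java
// Sketch of the ambiguity gate for MegaTag1 estimates. Multi-tag solves
// are well-constrained; a single-tag solve is accepted only when its
// ambiguity ratio is at or below 0.1, per the guidance above.
public class AmbiguityFilter {
    public static final double MAX_AMBIGUITY = 0.1;

    /**
     * Returns true if an estimate built from tags with these ambiguity
     * values should be trusted.
     */
    public static boolean acceptEstimate(double[] ambiguities) {
        if (ambiguities.length == 0) {
            return false;                        // no tags, no estimate
        }
        if (ambiguities.length >= 2) {
            return true;                         // multiple tags resolve ambiguity
        }
        return ambiguities[0] <= MAX_AMBIGUITY;  // single tag: check its ambiguity
    }
}
```

Call this before `addVisionMeasurement` alongside the distance and tag-count checks shown earlier.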

Integrating with SwerveDrivePoseEstimator

1. Declare the pose estimator

SwerveDrivePoseEstimator poseEstimator = new SwerveDrivePoseEstimator(
    kinematics,
    gyro.getRotation2d(),
    modulePositions,
    new Pose2d()
);
2. Update odometry every loop

// In robotPeriodic() or your drive subsystem's periodic()
poseEstimator.update(
    gyro.getRotation2d(),
    modulePositions
);
3. Read and validate the Limelight estimate

LimelightHelpers.PoseEstimate mt1 =
    LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");

boolean validEstimate = mt1 != null
    && mt1.tagCount >= 1
    && mt1.avgTagDist < 4.0
    && mt1.tagSpan > 0;
4. Fuse vision into the estimator

if (validEstimate) {
    // Standard deviations: tighter = trust vision more
    // [x meters, y meters, heading radians]
    Matrix<N3, N1> stdDevs = VecBuilder.fill(0.5, 0.5, 999);

    poseEstimator.addVisionMeasurement(
        mt1.pose,
        mt1.timestampSeconds,
        stdDevs
    );
}
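Many teams go one step further and scale the standard deviations with measurement quality instead of using fixed values. The heuristic below is a common pattern, not something Limelight prescribes, and the baseline numbers are illustrative assumptions you would tune on your own robot: grow X/Y uncertainty with the square of average tag distance and shrink it as more tags are seen.

```java
// Heuristic vision standard deviations: trust close, multi-tag estimates
// the most. Baseline of 0.5 m at 1 m range with 1 tag is an assumption,
// not a tuned value. Heading std dev is kept huge (999) so MegaTag1's
// ambiguous heading never fights the gyro.
public class VisionStdDevs {
    /** Returns {x meters, y meters, heading radians}. */
    public static double[] compute(int tagCount, double avgTagDist) {
        double base = 0.5;  // illustrative baseline at 1 m, 1 tag
        double xy = base * (avgTagDist * avgTagDist) / Math.max(tagCount, 1);
        return new double[] { xy, xy, 999.0 };
    }
}
```

In robot code you would feed the result into the estimator with `VecBuilder.fill(stdDevs[0], stdDevs[1], stdDevs[2])` before calling `addVisionMeasurement`.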

Per-tag results

Via JSON (getLatestResults)

For detailed per-tag data including full 3D pose transforms, parse the JSON output:
LimelightHelpers.LimelightResults results =
    LimelightHelpers.getLatestResults("limelight");

for (LimelightHelpers.LimelightTarget_Fiducial tag : results.targets_Fiducials) {
    System.out.println("Tag ID: " + (int) tag.fiducialID);
    System.out.println("Family: " + tag.fiducialFamily);
    System.out.println("TX: " + tag.tx + "  TY: " + tag.ty);

    // 3D transforms
    Pose3d robotInField = tag.getRobotPose_FieldSpace();
    Pose3d camInTarget  = tag.getCameraPose_TargetSpace();
    Pose3d targetInCam  = tag.getTargetPose_CameraSpace();
    Pose3d targetInRobot = tag.getTargetPose_RobotSpace();
}
Each LimelightTarget_Fiducial exposes:
| Field / Method | Description |
|---|---|
| fiducialID | Tag ID (e.g., 1–22 for the FRC 2025 field) |
| fiducialFamily | Tag family string (e.g., "36h11") |
| tx, ty | Angular offset from the crosshair in degrees |
| ta | Target area as a percentage of the image |
| getCameraPose_TargetSpace() | Camera pose relative to the tag |
| getRobotPose_FieldSpace() | Robot pose in field coordinates |
| getTargetPose_CameraSpace() | Tag pose relative to the camera |
| getTargetPose_RobotSpace() | Tag pose relative to the robot |

Via NetworkTables (getRawFiducials)

For lower latency (no JSON parse), read raw fiducial data directly from NetworkTables:
LimelightHelpers.RawFiducial[] fiducials =
    LimelightHelpers.getRawFiducials("limelight");

for (LimelightHelpers.RawFiducial f : fiducials) {
    System.out.printf("ID %d  ambiguity=%.3f  distToRobot=%.2fm%n",
        f.id, f.ambiguity, f.distToRobot);
}
Each RawFiducial contains:
| Field | Description |
|---|---|
| id | AprilTag ID |
| txnc | Horizontal offset from the principal pixel in degrees |
| tync | Vertical offset from the principal pixel in degrees |
| ta | Target area as a percentage of the image |
| distToCamera | Distance from the camera to the tag in meters |
| distToRobot | Distance from the robot origin to the tag in meters |
| ambiguity | Pose ambiguity ratio (0 = unambiguous, >0.1 = unreliable) |
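A typical use of these fields is picking the single most trustworthy tag, for example as an aiming target. The sketch below uses a minimal stand-in class for `LimelightHelpers.RawFiducial` so the selection logic is self-contained and testable; in robot code you would iterate the array returned by `getRawFiducials(...)` instead.

```java
// Select the closest tag whose pose solve is unambiguous. The Fiducial
// class here only mirrors the RawFiducial fields this logic needs.
public class BestTagPicker {
    public static class Fiducial {
        public final int id;
        public final double ambiguity;
        public final double distToRobot;
        public Fiducial(int id, double ambiguity, double distToRobot) {
            this.id = id;
            this.ambiguity = ambiguity;
            this.distToRobot = distToRobot;
        }
    }

    /** Returns the closest tag with ambiguity at most 0.1, or null if none qualify. */
    public static Fiducial pick(Fiducial[] tags) {
        Fiducial best = null;
        for (Fiducial f : tags) {
            if (f.ambiguity > 0.1) continue;  // skip unreliable solves
            if (best == null || f.distToRobot < best.distToRobot) {
                best = f;
            }
        }
        return best;
    }
}
```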

Filtering which tags are used

To restrict localization to specific tag IDs (for example, only the tags near the scoring area):
// Only use tags 1, 2, 3, and 4
LimelightHelpers.SetFiducialIDFiltersOverride("limelight", new int[]{1, 2, 3, 4});
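Because the relevant tags usually differ by alliance, teams often compute the ID array instead of hard-coding it. The helper below is hypothetical and the ID groups in it are placeholders, not real field-layout assignments; substitute the tag IDs for your game's scoring area.

```java
// Hypothetical helper that returns the tag-ID filter for the current
// alliance. The specific IDs are placeholders for illustration only.
public class TagFilter {
    public static int[] scoringTags(boolean isRedAlliance) {
        return isRedAlliance
            ? new int[] {5, 6, 7, 8}   // placeholder red-side IDs
            : new int[] {1, 2, 3, 4};  // placeholder blue-side IDs
    }
}
```

The result would then be passed straight through: `LimelightHelpers.SetFiducialIDFiltersOverride("limelight", TagFilter.scoringTags(onRedAlliance));`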

3D targeting offset

To offset the target point used for 3D pose calculations:
// Offset the target point 0.1m forward, 0m side, 0.2m up
LimelightHelpers.setFiducial3DOffset("limelight", 0.1, 0.0, 0.2);
