LimelightResults is the deserialized form of the Limelight’s full JSON output. It is returned by LimelightHelpers.getLatestResults(limelightName), which fetches the json NetworkTables entry and parses it with Jackson.
Top-level fields
- error: Non-empty if JSON parsing failed or the NT entry was empty. Always check this field before using any other data. Example values: "lljson error: empty json", "lljson error: <Jackson message>".
- pID: Active pipeline index at the time the frame was processed.
- tl: Pipeline processing latency in milliseconds.
- cl: Image capture latency in milliseconds.
- latency_jsonParse: Time to parse the JSON string in the library, in milliseconds. Set by getLatestResults() after deserialization — not part of the Limelight JSON payload.
- ts: Limelight-side publish timestamp.
- ts_rio: RoboRIO FPGA timestamp at the moment NetworkTables captured the entry.
- ts_nt: NetworkTables timestamp in microseconds.
- ts_sys: System timestamp.
- ts_us: Microsecond timestamp.
- v: Whether the pipeline produced valid results for this frame (encoded as a number: 0 or 1).
- pTYPE: Active pipeline type string, e.g. "retro", "apriltag", "neural_detector".
Primary target values
- tx: Horizontal offset from the crosshair to the primary target in degrees.
- ty: Vertical offset from the crosshair to the primary target in degrees.
- txnc: Horizontal offset from the camera’s principal point (no crosshair correction) in degrees.
- tync: Vertical offset from the camera’s principal point in degrees.
- ta: Primary target area as a percentage of the image (0–100).
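The tx value above is what most teams feed directly into a proportional aiming loop. A minimal sketch of that pattern in plain Java, with no library dependency; the gain and deadband constants are illustrative, not from the library:

```java
public class AimDemo {
    static final double KP = 0.03;           // illustrative proportional gain
    static final double DEADBAND_DEG = 1.0;  // stop correcting within 1 degree

    /** Turn a horizontal offset in degrees into a turn command in [-1, 1]. */
    static double aimCommand(double txDegrees) {
        if (Math.abs(txDegrees) < DEADBAND_DEG) {
            return 0.0; // close enough; avoid oscillating around zero
        }
        // Negative sign: a target to the right (positive tx) needs a left turn.
        return Math.max(-1.0, Math.min(1.0, -KP * txDegrees));
    }

    public static void main(String[] args) {
        System.out.println(aimCommand(10.0)); // target to the right -> negative command
        System.out.println(aimCommand(0.5));  // inside deadband -> 0.0
    }
}
```

On a real robot the return value would be passed to the drivetrain's rotation input each loop iteration.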
Robot pose arrays
Each pose array has 6 elements: [x, y, z, roll, pitch, yaw]. Translation is in meters; angles are in degrees. Use the WPILib Blue variants for new code.
- botpose: Robot pose in the field-origin coordinate system.
- botpose_wpiblue: Robot pose in the WPILib Blue alliance coordinate system (origin at the blue alliance wall). Preferred for pose estimator integration.
- botpose_wpired: Robot pose in the WPILib Red alliance coordinate system.
- botpose_tagcount: Number of AprilTags used to compute the MegaTag1 pose.
- botpose_span: Distance in meters between the two outermost contributing tags.
- botpose_avgdist: Average distance in meters from the camera to contributing tags.
- botpose_avgarea: Average tag area (% of image) across all contributing tags.
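A minimal sketch of how a 6-element pose array maps onto 2D pose components (plain Java, no WPILib dependency; the getBotPose2d_* helpers listed under "Pose helper methods" do the equivalent and return real Pose2d objects):

```java
public class PoseArrayDemo {
    /**
     * Unpack a 6-element Limelight pose array [x, y, z, roll, pitch, yaw]
     * into the 2D components a Pose2d would be built from:
     * x (meters), y (meters), yaw (radians).
     */
    static double[] toPose2dComponents(double[] pose) {
        return new double[] { pose[0], pose[1], Math.toRadians(pose[5]) };
    }

    public static void main(String[] args) {
        // Robot at (1.5 m, 2.0 m), facing the opposite direction (yaw 180 deg).
        double[] botpose = {1.5, 2.0, 0.3, 0.0, 0.0, 180.0};
        double[] p2d = toPose2dComponents(botpose);
        System.out.println(p2d[0] + ", " + p2d[1] + ", " + p2d[2]);
    }
}
```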
MegaTag2 pose arrays
- botpose_orb: MegaTag2 robot pose in the field-origin coordinate system.
- botpose_orb_wpiblue: MegaTag2 robot pose in the WPILib Blue alliance coordinate system.
- botpose_orb_wpired: MegaTag2 robot pose in the WPILib Red alliance coordinate system.
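The tag-count and distance statistics described above are commonly used to decide whether a vision pose is trustworthy enough to fuse into a pose estimator. A sketch of one such acceptance filter; the thresholds are illustrative assumptions, not values from the library:

```java
public class PoseFilterDemo {
    /**
     * Decide whether a vision pose should be fed to the pose estimator,
     * based on how many tags contributed and how far away they were.
     * All thresholds are illustrative; tune them for your robot.
     */
    static boolean acceptPose(int tagCount, double avgDistMeters, double avgAreaPct) {
        if (tagCount == 0) {
            return false;                 // no tags -> no pose at all
        }
        if (tagCount == 1 && avgDistMeters > 3.0) {
            return false;                 // a single distant tag is noisy
        }
        return avgAreaPct > 0.05;         // reject tiny, far-away detections
    }

    public static void main(String[] args) {
        System.out.println(acceptPose(2, 4.0, 0.5)); // prints true
        System.out.println(acceptPose(1, 5.0, 0.5)); // prints false
    }
}
```

A passing pose would then be handed to the estimator along with a timestamp corrected for the tl and cl latencies.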
Pose helper methods
LimelightResults exposes convenience methods to convert the raw arrays to WPILib geometry objects:
| Method | Returns | Source array |
|---|---|---|
| getBotPose2d() | Pose2d | botpose |
| getBotPose2d_wpiBlue() | Pose2d | botpose_wpiblue |
| getBotPose2d_wpiRed() | Pose2d | botpose_wpired |
| getBotPose3d() | Pose3d | botpose |
| getBotPose3d_wpiBlue() | Pose3d | botpose_wpiblue |
| getBotPose3d_wpiRed() | Pose3d | botpose_wpired |
Target arrays
- targets_Retro: Color/retroreflective pipeline targets. Each entry includes ta, tx, ty, tx_pixels, ty_pixels, tx_nocrosshair, ty_nocrosshair, and pose methods (getCameraPose_TargetSpace, getRobotPose_FieldSpace, etc.).
- targets_Fiducials: AprilTag pipeline targets. See LimelightTarget_Fiducial below.
- targets_Classifier: Neural classifier pipeline results. Each entry includes className (String), classID (double), confidence (double), zone (double), tx, ty, tx_pixels, ty_pixels.
- targets_Detector: Neural detector pipeline results. Each entry includes className (String), classID (double), confidence (double), ta, tx, ty, tx_pixels, ty_pixels, tx_nocrosshair, ty_nocrosshair.
- targets_Barcode: Barcode pipeline results. See LimelightTarget_Barcode below.
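A common pattern is to scan a target array for the entry with the largest area (ta), since the biggest target is usually the closest. A plain-Java sketch of that selection, using bare area values in place of the library's target objects:

```java
public class LargestTargetDemo {
    /**
     * Return the index of the entry with the largest area,
     * or -1 if the array is empty.
     */
    static int largestTargetIndex(double[] areas) {
        int best = -1;
        double bestArea = 0.0;
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > bestArea) {
                bestArea = areas[i];
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three detections with areas in % of image; the middle one is largest.
        double[] ta = {0.4, 2.1, 1.3};
        System.out.println(largestTargetIndex(ta)); // prints 1
    }
}
```

With the real library, the same loop would iterate over one of the target arrays above and compare each entry's ta field.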
Hardware and diagnostic fields
- Limelight hardware statistics for the current frame.
- imuResults: IMU data from the JSON payload. After parsing, getLatestResults() calls imuResults.parseDataArray() internally, which populates robotYaw, roll, pitch, rawYaw, gyroX/Y/Z, and accelX/Y/Z from the raw data array. For direct NT access without JSON, see getIMUData.
- Capture rewind buffer statistics for the current frame.
- PythonOut: Output values written by a Python snap script running on the Limelight.
LimelightTarget_Fiducial fields
- fID: AprilTag ID.
- fam: Tag family string, e.g. "tag36h11".
- ta: Tag area as a percentage of the image.
- tx: Horizontal offset from the crosshair to the tag center in degrees.
- ty: Vertical offset from the crosshair to the tag center in degrees.
- txp: Horizontal position of the tag center in pixels.
- typ: Vertical position of the tag center in pixels.
- tx_nocross: Horizontal offset from the camera’s principal point in degrees.
- ty_nocross: Vertical offset from the camera’s principal point in degrees.
LimelightTarget_Fiducial also exposes the same set of pose methods as LimelightTarget_Retro:
getCameraPose_TargetSpace(), getRobotPose_FieldSpace(), getRobotPose_TargetSpace(),
getTargetPose_CameraSpace(), getTargetPose_RobotSpace(), and their 2D counterparts.
All return Pose3d / Pose2d respectively.
LimelightTarget_Barcode fields
- fam: Barcode symbology, e.g. "QR", "DataMatrix", "Code128".
- data: Decoded text content of the barcode.
- tx: Horizontal offset from the crosshair to the barcode center in degrees.
- ty: Vertical offset from the crosshair to the barcode center in degrees.
- txp: Horizontal pixel position of the barcode center.
- typ: Vertical pixel position of the barcode center.
- ta: Barcode area as a percentage of the image.
- pts: Corner point coordinates of the barcode bounding polygon. Each element is a 2-element array [x, y] in pixel coordinates.
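The pts corner array can be reduced to a single center point by averaging the corners, which is useful when you want one pixel coordinate for the barcode rather than the whole polygon. A self-contained sketch (plain Java; the sample coordinates are made up):

```java
public class BarcodeCornersDemo {
    /**
     * Average the corner points of a pts-style polygon into a
     * center point, returned as a 2-element array [x, y] in pixels.
     */
    static double[] center(double[][] pts) {
        double sx = 0.0;
        double sy = 0.0;
        for (double[] p : pts) {
            sx += p[0];
            sy += p[1];
        }
        return new double[] { sx / pts.length, sy / pts.length };
    }

    public static void main(String[] args) {
        // An axis-aligned rectangle of corners, as pts might report for a QR code.
        double[][] pts = { {100, 50}, {200, 50}, {200, 150}, {100, 150} };
        double[] c = center(pts);
        System.out.println(c[0] + ", " + c[1]); // prints 150.0, 100.0
    }
}
```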