double[] array from a dedicated NetworkTables entry, splits it into fixed-width records, and returns a typed array. This makes them suitable for use in tight robot loops where latency and garbage collection matter.
RawFiducial
Returned by LimelightHelpers.getRawFiducials(limelightName). Reads the rawfiducials NT entry, which stores 7 double values per detected AprilTag in a flat array.
Fields
AprilTag ID. Cast from the first element of each 7-value block.
Horizontal offset from the camera’s principal point to the tag center in degrees. Positive is right.
Vertical offset from the camera’s principal point to the tag center in degrees. Positive is up.
Tag area as a percentage of the image (0–100).
3D distance from the camera lens to the tag center in meters.
3D distance from the robot origin to the tag center in meters.
Pose solve ambiguity ratio. Values near 0 indicate an unambiguous solve; values approaching 1 indicate two near-equal solutions. Reject estimates where this exceeds ~0.1 when only one tag is visible.
Array layout
Each tag occupies 7 consecutive elements.
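The splitting logic can be sketched as follows. The record type, its field names, and the sample array here are illustrative stand-ins; verify field order against your vendored LimelightHelpers.java.

```java
// Sketch: split a flat rawfiducials-style array into 7-value records.
// The real helper reads the double[] from NetworkTables; here we parse a
// plain array. Assumed field order: id, txnc, tync, ta, distToCamera,
// distToRobot, ambiguity.
public class RawFiducialParser {
    public record Fiducial(int id, double txnc, double tync, double ta,
                           double distToCamera, double distToRobot,
                           double ambiguity) {}

    public static Fiducial[] parse(double[] raw) {
        final int VALS = 7;                      // values per detected tag
        if (raw == null || raw.length % VALS != 0) {
            return new Fiducial[0];              // malformed entry -> empty
        }
        Fiducial[] out = new Fiducial[raw.length / VALS];
        for (int i = 0; i < out.length; i++) {
            int b = i * VALS;                    // start of this tag's block
            out[i] = new Fiducial((int) raw[b], raw[b + 1], raw[b + 2],
                                  raw[b + 3], raw[b + 4], raw[b + 5],
                                  raw[b + 6]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Two fabricated tags: id 7 (low ambiguity) and id 3 (high ambiguity).
        double[] raw = {7, 1.5, -0.8, 2.1, 3.2, 3.5, 0.05,
                        3, -4.0, 2.2, 0.9, 5.1, 5.4, 0.30};
        for (Fiducial f : parse(raw)) {
            System.out.println("tag " + f.id() + " ambiguity " + f.ambiguity());
        }
    }
}
```

In single-tag scenarios, a typical follow-up is to drop records whose ambiguity exceeds ~0.1 before fusing a pose estimate.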
RawDetection
Returned by LimelightHelpers.getRawDetections(limelightName). Reads the rawdetections NT entry, which stores 12 double values per neural detector result in a flat array.
Fields
Neural network class index. Cast from the first element of each 12-value block.
Horizontal offset from the camera’s principal point to the detection center in degrees.
Vertical offset from the camera’s principal point to the detection center in degrees.
Detection bounding-box area as a percentage of the image (0–100).
X coordinate of bounding box corner 0 in normalized image space.
Y coordinate of bounding box corner 0 in normalized image space.
X coordinate of bounding box corner 1.
Y coordinate of bounding box corner 1.
X coordinate of bounding box corner 2.
Y coordinate of bounding box corner 2.
X coordinate of bounding box corner 3.
Y coordinate of bounding box corner 3.
Array layout
Each detection occupies 12 consecutive elements.
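The same record-splitting pattern applies with a 12-value stride. The record below is a local stand-in (the real LimelightHelpers.RawDetection may name its fields differently) and the sample array is fabricated:

```java
// Sketch: split a flat rawdetections-style array into 12-value records.
// Assumed order: classId, txnc, tync, ta, then corner 0..3 as (x, y) pairs
// in normalized image space.
public class RawDetectionParser {
    public record Detection(int classId, double txnc, double tync, double ta,
                            double c0x, double c0y, double c1x, double c1y,
                            double c2x, double c2y, double c3x, double c3y) {}

    public static Detection[] parse(double[] raw) {
        final int VALS = 12;                     // values per detection
        if (raw == null || raw.length % VALS != 0) {
            return new Detection[0];             // malformed entry -> empty
        }
        Detection[] out = new Detection[raw.length / VALS];
        for (int i = 0; i < out.length; i++) {
            int b = i * VALS;                    // start of this block
            out[i] = new Detection((int) raw[b], raw[b + 1], raw[b + 2],
                                   raw[b + 3], raw[b + 4], raw[b + 5],
                                   raw[b + 6], raw[b + 7], raw[b + 8],
                                   raw[b + 9], raw[b + 10], raw[b + 11]);
        }
        return out;
    }

    public static void main(String[] args) {
        // One fabricated detection: class 2, centered, 4% area, unit box.
        double[] raw = {2, 0.0, 0.0, 4.0, 0, 0, 1, 0, 1, 1, 0, 1};
        System.out.println("class " + parse(raw)[0].classId());
    }
}
```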
RawTarget
Returned by LimelightHelpers.getRawTargets(limelightName). Reads the rawtargets NT entry, which stores 3 double values per ungrouped contour result. Returns up to 3 contours.
Fields
Horizontal offset from the camera’s principal point to the contour center, in degrees.
Vertical offset from the camera’s principal point to the contour center, in degrees.
Contour area as a percentage of the image (0–100).
Array layout
Each contour occupies 3 consecutive elements.
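All three raw getters share one "flat array into fixed-width records" pattern, so rather than repeating the 3-wide case, here is a generic splitter sketch. The Target record and sample values are illustrative stand-ins, not the library's own types:

```java
import java.util.function.Function;
import java.util.function.IntFunction;

// Generic sketch of the fixed-width splitting shared by getRawFiducials
// (width 7), getRawDetections (width 12), and getRawTargets (width 3).
public class FixedWidthSplitter {
    /** Stand-in for a RawTarget-like record: txnc, tync, ta. */
    public record Target(double txnc, double tync, double ta) {}

    /** Splits raw into blocks of `width`, mapping each block with `make`. */
    public static <T> T[] split(double[] raw, int width,
                                Function<double[], T> make,
                                IntFunction<T[]> newArray) {
        if (raw == null || width <= 0 || raw.length % width != 0) {
            return newArray.apply(0);            // malformed entry -> empty
        }
        T[] out = newArray.apply(raw.length / width);
        for (int i = 0; i < out.length; i++) {
            double[] block = new double[width];
            System.arraycopy(raw, i * width, block, 0, width);
            out[i] = make.apply(block);
        }
        return out;
    }

    public static void main(String[] args) {
        // Two fabricated contours, 3 values each: txnc, tync, ta.
        double[] raw = {1.0, 2.0, 0.5, -3.0, 0.4, 1.2};
        Target[] targets = split(raw, 3,
                b -> new Target(b[0], b[1], b[2]), Target[]::new);
        System.out.println(targets.length + " contours");
    }
}
```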
IMUData
Returned by LimelightHelpers.getIMUData(limelightName). Reads the imu NT entry (a 10-element double[]) and populates an IMUData object. Returns an all-zeros object if the entry is absent or shorter than 10 elements.
Fields
Robot yaw fused by the Limelight’s localization algorithm, in degrees. This is the yaw value Limelight uses internally, which may differ from the raw IMU yaw (index 3) depending on IMU mode. Array index 0.
IMU roll angle in degrees. Array index 1.
IMU pitch angle in degrees. Array index 2.
Raw IMU yaw angle in degrees. Array index 3.
Gyroscope X-axis angular rate in degrees per second. Array index 4.
Gyroscope Y-axis angular rate in degrees per second. Array index 5.
Gyroscope Z-axis angular rate in degrees per second. Array index 6.
Accelerometer X-axis reading in g. Array index 7.
Accelerometer Y-axis reading in g. Array index 8.
Accelerometer Z-axis reading in g. Array index 9.
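The index-to-field mapping and the all-zeros fallback described above can be sketched like this; the Imu record is a local stand-in for LimelightHelpers.IMUData, whose actual field names may differ:

```java
// Sketch of getIMUData's defensive behavior: map the 10-element imu NT
// entry onto named fields (indices 0-9), falling back to an all-zeros
// object when the entry is absent or shorter than 10 elements.
public class ImuParser {
    public record Imu(double robotYaw, double roll, double pitch, double yaw,
                      double gyroX, double gyroY, double gyroZ,
                      double accelX, double accelY, double accelZ) {}

    public static Imu parse(double[] d) {
        if (d == null || d.length < 10) {
            // Entry missing or too short: all zeros, matching the docs.
            return new Imu(0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        }
        return new Imu(d[0], d[1], d[2], d[3], d[4],
                       d[5], d[6], d[7], d[8], d[9]);
    }

    public static void main(String[] args) {
        // Fabricated sample: fused yaw 45 deg, raw yaw 44.5 deg, 1 g on Z.
        double[] sample = {45.0, 1.0, -2.0, 44.5, 0, 0, 10.0, 0, 0, 1.0};
        System.out.println("fused yaw: " + parse(sample).robotYaw());
        System.out.println("missing-entry yaw: " + parse(null).robotYaw());
    }
}
```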