The SensorSim Service renders synthetic camera images and LiDAR point clouds for autonomous vehicle simulation. It supports multiple camera projection models and realistic sensor effects such as rolling shutter, lens distortion, and ego-vehicle masking.

Service Definition

```protobuf
service SensorsimService {
    rpc render_rgb (RGBRenderRequest) returns (RGBRenderReturn);
    rpc render_lidar (LidarRenderRequest) returns (LidarRenderReturn);
    rpc render_aggregated (AggregatedRenderRequest) returns (AggregatedRenderReturn);
    rpc get_version (common.Empty) returns (common.VersionId);
    rpc get_available_scenes (common.Empty) returns (common.AvailableScenesReturn);
    rpc get_available_cameras (AvailableCamerasRequest) returns (AvailableCamerasReturn);
    rpc shut_down (common.Empty) returns (common.Empty);
    rpc get_available_trajectories (AvailableTrajectoriesRequest)
        returns (AvailableTrajectoriesReturn);
    rpc get_available_ego_masks (common.Empty) returns (AvailableEgoMasksReturn);
}
```
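
Most of the query RPCs take common.Empty, so a running server can be probed with almost no setup. A minimal sketch, assuming the common messages are generated into a common_pb2 module alongside sensorsim_pb2 (adjust the import to match your build):

```python
import grpc
from alpasim_grpc.v0 import common_pb2, sensorsim_pb2_grpc

# Port 50053 matches the Usage Example below; it is deployment-specific.
channel = grpc.insecure_channel("localhost:50053")
stub = sensorsim_pb2_grpc.SensorsimServiceStub(channel)

# Probe the server: version and scene inventory.
print(stub.get_version(common_pb2.Empty()))
print(stub.get_available_scenes(common_pb2.Empty()))

# Ask the server to exit once a simulation run is finished.
# stub.shut_down(common_pb2.Empty())
```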

Camera Rendering

render_rgb

Render a single RGB camera image.
Request (RGBRenderRequest):

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| scene_id | string | yes | Scene identifier to render |
| resolution_h | uint32 | yes | Image height in pixels |
| resolution_w | uint32 | yes | Image width in pixels |
| camera_intrinsics | CameraSpec | yes | Camera model and parameters (see Camera Models section) |
| frame_start_us | uint64 | yes | Frame capture start time (microseconds) |
| frame_end_us | uint64 | yes | Frame capture end time (for rolling shutter simulation) |
| sensor_pose | PosePair | yes | Camera pose at frame start and end |
| dynamic_objects | DynamicObject[] | no | Background vehicles and actors |
| image_format | ImageFormat | yes | Output image encoding: PNG (lossless compression), JPEG (lossy compression), or RGB_UINT8_PLANAR (raw RGB data) |
| image_quality | float | no | JPEG quality, 0.0 to 1.0 (higher is better) |
| insert_ego_mask | bool | no | Whether to mask out the ego vehicle (hood, mirrors) |
| ego_mask_id | EgoMaskId | no | Ego mask identifier (required if insert_ego_mask is true) |

Response (RGBRenderReturn):

| Field | Type | Description |
| --- | --- | --- |
| image_bytes | bytes | Encoded image data, in the format specified in the request |
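
With image_format=RGB_UINT8_PLANAR, image_bytes is raw pixel data rather than an encoded file. A decoding sketch with NumPy, assuming a planar channel layout of shape (3, H, W) as the name suggests (verify against your build):

```python
import numpy as np

# rgb_return: an RGBRenderReturn from render_rgb with RGB_UINT8_PLANAR output.
# h and w must match resolution_h / resolution_w from the request.
h, w = 720, 1280
planes = np.frombuffer(rgb_return.image_bytes, dtype=np.uint8).reshape(3, h, w)
image = np.transpose(planes, (1, 2, 0))  # interleaved (H, W, 3) for display or saving
```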

LiDAR Rendering

render_lidar

Render a LiDAR point cloud.
Request (LidarRenderRequest):

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| scene_id | string | yes | Scene identifier |
| lidar_config | LidarSpec | yes | LiDAR model and parameters |
| frame_start_us | uint64 | yes | Scan start time (microseconds) |
| frame_end_us | uint64 | yes | Scan end time (for rotating LiDAR) |
| sensor_pose | PosePair | yes | LiDAR pose at scan start and end |
| dynamic_objects | DynamicObject[] | no | Background vehicles to include in the scan |

Response (LidarRenderReturn):

| Field | Type | Description |
| --- | --- | --- |
| point_xyzs | float[] | Point coordinates in end-of-spin LiDAR space [x1, y1, z1, x2, y2, z2, …] |
| point_intensities | float[] | Point intensities (range 0.0 to 1.0) |
| num_points | uint32 | Total number of points |
| point_xyzs_buffer | bytes | Binary buffer of point coordinates (alternative to point_xyzs) |
| point_intensities_buffer | bytes | Binary buffer of intensities (alternative to point_intensities) |
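
A sketch of turning a response into an N×3 array; the assumption that the buffer fields hold little-endian float32 is not stated above, so verify it against your build:

```python
import numpy as np

# scan: a LidarRenderReturn from render_lidar.
if scan.point_xyzs:  # repeated-float variant
    xyz = np.asarray(scan.point_xyzs, dtype=np.float32).reshape(-1, 3)
else:                # binary-buffer variant; float32 layout is an assumption
    xyz = np.frombuffer(scan.point_xyzs_buffer, dtype=np.float32).reshape(-1, 3)

assert xyz.shape[0] == scan.num_points  # one (x, y, z) triple per point
```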

Batch Rendering

render_aggregated

Render multiple cameras and LiDARs in a single request for efficiency.
Request (AggregatedRenderRequest):

| Field | Type | Description |
| --- | --- | --- |
| rgb_requests | RGBRenderRequest[] | List of camera render requests |
| lidar_requests | LidarRenderRequest[] | List of LiDAR render requests |

Response (AggregatedRenderReturn):

| Field | Type | Description |
| --- | --- | --- |
| rgb_returns | RGBRenderReturn[] | Rendered camera images (same order as requests) |
| lidar_returns | LidarRenderReturn[] | Rendered LiDAR scans (same order as requests) |
| driver_data | bytes | Optional arbitrary data for advanced driver integration |
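
A sketch of batching one camera and one LiDAR render; rgb_request and lidar_request are built exactly as for the single-sensor RPCs (see the Usage Example below):

```python
# Batch both sensors into one round trip.
agg_request = sensorsim_pb2.AggregatedRenderRequest(
    rgb_requests=[rgb_request],
    lidar_requests=[lidar_request],
)
agg_response = sensorsim_stub.render_aggregated(agg_request)

# Returns are ordered like the requests.
image_bytes = agg_response.rgb_returns[0].image_bytes
num_points = agg_response.lidar_returns[0].num_points
```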

Camera Models

SensorSim supports multiple camera projection models:

OpenCV Pinhole

```protobuf
message OpenCVPinholeCameraParam {
    double principal_point_x = 1;
    double principal_point_y = 2;
    double focal_length_x = 3;
    double focal_length_y = 4;
    repeated double radial_coeffs = 5;      // [k1, k2, k3, k4, k5, k6]
    repeated double tangential_coeffs = 6;  // [p1, p2]
    repeated double thin_prism_coeffs = 7;  // [s1, s2, s3, s4]
}
```
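
The field names mirror OpenCV's standard distortion model, so calibration output should map over directly. A sketch, assuming cv2.calibrateCamera-style output with an 8-element distCoeffs vector in OpenCV's (k1, k2, p1, p2, k3, k4, k5, k6) ordering and that the message is generated into sensorsim_pb2:

```python
from alpasim_grpc.v0 import sensorsim_pb2

def pinhole_from_opencv(camera_matrix, dist_coeffs):
    """Map an OpenCV calibration onto OpenCVPinholeCameraParam."""
    k1, k2, p1, p2, k3, k4, k5, k6 = dist_coeffs  # OpenCV 8-coefficient ordering
    return sensorsim_pb2.OpenCVPinholeCameraParam(
        principal_point_x=camera_matrix[0][2],  # cx
        principal_point_y=camera_matrix[1][2],  # cy
        focal_length_x=camera_matrix[0][0],     # fx
        focal_length_y=camera_matrix[1][1],     # fy
        radial_coeffs=[k1, k2, k3, k4, k5, k6],
        tangential_coeffs=[p1, p2],             # thin_prism_coeffs omitted here
    )
```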

OpenCV Fisheye

For wide field-of-view cameras (>120°).
```protobuf
message OpenCVFisheyeCameraParam {
    double principal_point_x = 1;
    double principal_point_y = 2;
    double focal_length_x = 3;
    double focal_length_y = 4;
    repeated double radial_coeffs = 5;  // [k1, k2, k3, k4]
    double max_angle = 6;
}
```
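
The four radial coefficients match OpenCV's fisheye (Kannala-Brandt) model, where a ray at angle θ from the optical axis maps to θ_d = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸). A forward-projection sketch for a camera-space point, under that assumption:

```python
import math

def project_fisheye(x, y, z, cam):
    """Project a 3D camera-space point with the OpenCV fisheye model."""
    r = math.hypot(x, y)
    theta = math.atan2(r, z)  # angle from the optical axis; cam.max_angle bounds it
    k1, k2, k3, k4 = cam.radial_coeffs
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    scale = theta_d / r if r > 1e-9 else 0.0
    u = cam.focal_length_x * x * scale + cam.principal_point_x
    v = cam.focal_length_y * y * scale + cam.principal_point_y
    return u, v
```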

F-Theta

Polynomial fisheye model with forward/backward mappings.
```protobuf
message FthetaCameraParam {
    double principal_point_x = 1;
    double principal_point_y = 2;
    enum PolynomialType {
        PIXELDIST_TO_ANGLE = 1;
        ANGLE_TO_PIXELDIST = 2;
    }
    PolynomialType reference_poly = 3;
    repeated double pixeldist_to_angle_poly = 4;
    repeated double angle_to_pixeldist_poly = 5;
    double max_angle = 6;
    LinearCde linear_cde = 7;
}
```
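
Both polynomials relate the angle θ from the optical axis to the radial pixel distance from the principal point; reference_poly records which direction was fit directly. A sketch of evaluating the forward mapping, assuming coefficients are ordered from the constant term upward (verify the convention against your build):

```python
def angle_to_pixeldist(cam, theta):
    """Evaluate r(theta) = c0 + c1*theta + c2*theta^2 + ... (order assumed)."""
    r, power = 0.0, 1.0
    for c in cam.angle_to_pixeldist_poly:
        r += c * power
        power *= theta
    return r  # radial pixel distance from (principal_point_x, principal_point_y)
```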

Query Methods

get_available_cameras

Retrieve camera configurations for a scene.
Request (AvailableCamerasRequest):

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| scene_id | string | yes | Scene identifier |

Response (AvailableCamerasReturn):

| Field | Type | Description |
| --- | --- | --- |
| available_cameras | AvailableCamera[] | Camera configurations available in the scene |

get_available_ego_masks

Retrieve available ego vehicle masks.
Response (AvailableEgoMasksReturn):

| Field | Type | Description |
| --- | --- | --- |
| ego_mask_metadata | EgoMaskMetadata[] | List of ego masks with camera and rig configuration IDs |
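
A sketch of listing the masks; EgoMaskMetadata's exact fields are not documented above, so printing whole messages sidesteps that (common_pb2 is an assumed module name for common.Empty):

```python
from alpasim_grpc.v0 import common_pb2

masks = sensorsim_stub.get_available_ego_masks(common_pb2.Empty())
for meta in masks.ego_mask_metadata:
    print(meta)  # inspect the camera and rig configuration IDs for each mask
```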

Usage Example

```python
from alpasim_grpc.v0 import sensorsim_pb2, sensorsim_pb2_grpc
import grpc

# Connect to sensorsim
channel = grpc.insecure_channel('localhost:50053')
sensorsim_stub = sensorsim_pb2_grpc.SensorsimServiceStub(channel)

# Get available cameras for the scene
available = sensorsim_stub.get_available_cameras(
    sensorsim_pb2.AvailableCamerasRequest(scene_id="scene_001")
)
print(f"Found {len(available.available_cameras)} cameras")

# camera_pose and traffic_objects are assumed to be built elsewhere
# (e.g. from a trajectory source and a traffic model).

# Render an RGB image
request = sensorsim_pb2.RGBRenderRequest(
    scene_id="scene_001",
    resolution_h=720,
    resolution_w=1280,
    camera_intrinsics=available.available_cameras[0].intrinsics,
    frame_start_us=1000000,
    frame_end_us=1033000,  # 33 ms frame window for shutter simulation
    sensor_pose=sensorsim_pb2.PosePair(
        start_pose=camera_pose,
        end_pose=camera_pose  # identical poses: camera is static this frame
    ),
    dynamic_objects=traffic_objects,
    image_format=sensorsim_pb2.JPEG,
    image_quality=0.95,
    insert_ego_mask=True,
    ego_mask_id=sensorsim_pb2.EgoMaskId(
        camera_logical_id="camera_front_wide_120fov",
        rig_config_id="hyperion_8.0"
    )
)

response = sensorsim_stub.render_rgb(request)

# Save the encoded JPEG
with open("rendered.jpg", "wb") as f:
    f.write(response.image_bytes)
```

Shutter Types

SensorSim supports realistic shutter simulation:
  • GLOBAL: Instantaneous capture (all pixels at once)
  • ROLLING_TOP_TO_BOTTOM: Row-by-row capture from top
  • ROLLING_LEFT_TO_RIGHT: Column-by-column from left
  • ROLLING_BOTTOM_TO_TOP: Row-by-row from bottom
  • ROLLING_RIGHT_TO_LEFT: Column-by-column from right
Rolling-shutter modes interpolate the sensor pose between start_pose and end_pose over the capture interval from frame_start_us to frame_end_us, so each row or column is rendered from the pose at its own capture time.
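
A conceptual sketch of the per-row timing for ROLLING_TOP_TO_BOTTOM; the service performs this interpolation internally, and linear timing across rows is an assumption, so this is only for reasoning about the output:

```python
def row_capture_time_us(row, resolution_h, frame_start_us, frame_end_us):
    """Capture time of an image row under a top-to-bottom rolling shutter."""
    alpha = row / max(resolution_h - 1, 1)  # 0.0 at the top row, 1.0 at the bottom
    return frame_start_us + alpha * (frame_end_us - frame_start_us)
```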
