The SensorSim Service renders synthetic camera images and LiDAR point clouds for autonomous vehicle simulation. It supports multiple camera models and realistic sensor effects.
Service Definition
service SensorsimService {
  rpc render_rgb (RGBRenderRequest) returns (RGBRenderReturn);
  rpc render_lidar (LidarRenderRequest) returns (LidarRenderReturn);
  rpc render_aggregated (AggregatedRenderRequest) returns (AggregatedRenderReturn);
  rpc get_version (common.Empty) returns (common.VersionId);
  rpc get_available_scenes (common.Empty) returns (common.AvailableScenesReturn);
  rpc get_available_cameras (AvailableCamerasRequest) returns (AvailableCamerasReturn);
  rpc shut_down (common.Empty) returns (common.Empty);
  rpc get_available_trajectories (AvailableTrajectoriesRequest)
      returns (AvailableTrajectoriesReturn);
  rpc get_available_ego_masks (common.Empty) returns (AvailableEgoMasksReturn);
}
Camera Rendering
render_rgb
Render a single RGB camera image.

Request fields:
scene_id: Scene identifier to render
camera_intrinsics: Camera model and parameters (see Camera Models section)
frame_start_us: Frame capture start time (microseconds)
frame_end_us: Frame capture end time (for rolling shutter simulation)
sensor_pose: Camera poses at frame_start_us and at frame_end_us (the end pose is used for rolling shutter)
dynamic_objects: Background vehicles and actors, with object poses at frame start and end
image_format: Output image encoding:
  PNG: Lossless compression
  JPEG: Lossy compression
  RGB_UINT8_PLANAR: Raw RGB data
image_quality: JPEG quality (0.0 to 1.0, higher = better quality)
insert_ego_mask: Whether to mask out the ego vehicle (hood/mirrors)
ego_mask_id: Ego mask identifier (required if insert_ego_mask is true), composed of:
  camera_logical_id: Camera ID (e.g., "camera_front_wide_120fov")
  rig_config_id: Rig configuration (e.g., "hyperion_8.0")

Response:
image_bytes: Encoded image data (format specified in request)
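The byte layout of RGB_UINT8_PLANAR is not spelled out above; a common convention for planar data is three contiguous channel planes of height × width bytes each. A minimal NumPy sketch under that assumption (the zero-filled buffer stands in for response.image_bytes):

```python
import numpy as np

H, W = 720, 1280
raw = bytes(3 * H * W)  # stand-in for response.image_bytes

# Interpret the buffer as three planes (R, G, B), then interleave to
# height x width x channel order for use with common image libraries.
planar = np.frombuffer(raw, dtype=np.uint8).reshape(3, H, W)
image = np.ascontiguousarray(planar.transpose(1, 2, 0))
```

If the service packs the planes differently, only the reshape/transpose step changes.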
LiDAR Rendering
render_lidar
Render a LiDAR point cloud.

Request fields:
LiDAR model:
  PANDAR128: Hesai Pandar128
  AT128: Hesai AT128
Scan start time (microseconds)
Scan end time (for rotating LiDAR)
LiDAR pose at scan start and end
Background vehicles to include in the scan

Response fields:
point_xyzs: Point coordinates in end-of-spin LiDAR space [x1, y1, z1, x2, y2, z2, …]
point_intensities: Point intensities (range 0.0 to 1.0)
Binary buffer of point coordinates (alternative to point_xyzs)
Binary buffer of intensities (alternative to point_intensities)
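Since point_xyzs is a flat [x1, y1, z1, x2, y2, z2, …] list, consumers typically reshape it into an N×3 array before use. A small NumPy sketch (the sample coordinates are made up):

```python
import numpy as np

# Hypothetical flat coordinate list as it would arrive in point_xyzs.
point_xyzs = [1.0, 2.0, 0.5, -3.0, 0.0, 0.2]

# Reshape to one row per point, then compute per-point range from the sensor.
points = np.asarray(point_xyzs, dtype=np.float64).reshape(-1, 3)
ranges = np.linalg.norm(points, axis=1)
```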
Batch Rendering
render_aggregated
Render multiple cameras and LiDARs in a single request for efficiency.

Request fields:
List of camera render requests
List of LiDAR render requests

Response fields:
Rendered camera images (in the same order as the requests)
Rendered LiDAR scans (in the same order as the requests)
Optional arbitrary data for advanced driver integration
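Because the aggregated response preserves request order, results can be re-associated with their sensors by index. A trivial sketch (the IDs and byte strings are placeholders, not real payloads):

```python
# One entry per camera render request, in the order they were submitted.
camera_ids = ["camera_front_wide_120fov", "camera_rear"]
rendered = [b"jpeg-0", b"jpeg-1"]  # stand-ins for the returned image bytes

# Index i of the response list corresponds to index i of the request list.
images_by_camera = dict(zip(camera_ids, rendered))
```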
Camera Models
SensorSim supports multiple camera projection models:
OpenCV Pinhole
message OpenCVPinholeCameraParam {
  double principal_point_x = 1;
  double principal_point_y = 2;
  double focal_length_x = 3;
  double focal_length_y = 4;
  repeated double radial_coeffs = 5;     // [k1, k2, k3, k4, k5, k6]
  repeated double tangential_coeffs = 6; // [p1, p2]
  repeated double thin_prism_coeffs = 7; // [s1, s2, s3, s4]
}
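These parameters follow OpenCV's rational pinhole model with tangential and thin-prism terms. For reference, a compact sketch of how a camera-space point maps to pixel coordinates under the standard OpenCV math (not SensorSim's internal code):

```python
def project_pinhole(xyz, fx, fy, cx, cy, k, p, s=(0.0, 0.0, 0.0, 0.0)):
    """Project a camera-space point with OpenCV's rational pinhole model.

    k = (k1..k6) radial, p = (p1, p2) tangential, s = (s1..s4) thin-prism.
    """
    x, y = xyz[0] / xyz[2], xyz[1] / xyz[2]   # normalize by depth
    r2 = x * x + y * y
    radial = (1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3) / \
             (1 + k[3] * r2 + k[4] * r2**2 + k[5] * r2**3)
    x_d = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x) \
          + s[0] * r2 + s[1] * r2**2
    y_d = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y \
          + s[2] * r2 + s[3] * r2**2
    return fx * x_d + cx, fy * y_d + cy
```

With all coefficients zero this reduces to the ideal pinhole projection, so a point on the optical axis lands exactly on the principal point.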
OpenCV Fisheye
For wide field-of-view cameras (>120°).
message OpenCVFisheyeCameraParam {
  double principal_point_x = 1;
  double principal_point_y = 2;
  double focal_length_x = 3;
  double focal_length_y = 4;
  repeated double radial_coeffs = 5; // [k1, k2, k3, k4]
  double max_angle = 6;
}
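These are the coefficients of OpenCV's equidistant fisheye model. A sketch of the forward projection; max_angle would bound the angle from the optical axis that the model considers valid:

```python
import math

def project_fisheye(xyz, fx, fy, cx, cy, k):
    """OpenCV fisheye (equidistant) projection sketch; k = (k1, k2, k3, k4)."""
    x, y, z = xyz
    r = math.hypot(x, y)
    theta = math.atan2(r, z)  # angle from the optical axis
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 0 else 1.0  # on-axis points project to center
    return fx * x * scale + cx, fy * y * scale + cy
```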
F-Theta
Polynomial fisheye model with forward/backward mappings.
message FthetaCameraParam {
  double principal_point_x = 1;
  double principal_point_y = 2;
  enum PolynomialType {
    PIXELDIST_TO_ANGLE = 1;
    ANGLE_TO_PIXELDIST = 2;
  }
  PolynomialType reference_poly = 3;
  repeated double pixeldist_to_angle_poly = 4;
  repeated double angle_to_pixeldist_poly = 5;
  double max_angle = 6;
  LinearCde linear_cde = 7;
}
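The two polynomials map between pixel distance from the principal point and the angle from the optical axis, and reference_poly marks which direction was fitted directly. A sketch of evaluating one mapping, assuming ascending coefficient order (c0 + c1·x + c2·x² + …), which is an assumption here, not a documented guarantee; the sample coefficients describe a hypothetical near-linear f-theta lens:

```python
def eval_poly(coeffs, x):
    # Ascending-order polynomial: c0 + c1*x + c2*x**2 + ...
    return sum(c * x**i for i, c in enumerate(coeffs))

# Hypothetical angle_to_pixeldist_poly: pixel distance of roughly 500 * angle.
angle_to_pixeldist = [0.0, 500.0, -20.0]
pixeldist = eval_poly(angle_to_pixeldist, 0.2)  # angle in radians
```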
Query Methods
get_available_cameras
Retrieve camera configurations for a scene. Each returned camera includes:
Camera model and parameters (intrinsics)
Camera extrinsics (rig-to-camera transform)
get_available_ego_masks
Retrieve available ego vehicle masks.
List of ego masks, each identified by its camera and rig configuration IDs
Usage Example
from alpasim_grpc.v0 import sensorsim_pb2, sensorsim_pb2_grpc
import grpc

# Connect to sensorsim
channel = grpc.insecure_channel('localhost:50053')
sensorsim_stub = sensorsim_pb2_grpc.SensorsimServiceStub(channel)

# Get available cameras for the scene
available = sensorsim_stub.get_available_cameras(
    sensorsim_pb2.AvailableCamerasRequest(scene_id="scene_001")
)
print(f"Found {len(available.available_cameras)} cameras")

# Render an RGB image. camera_pose and traffic_objects are assumed to be
# populated elsewhere (e.g. from the ego trajectory and traffic state).
request = sensorsim_pb2.RGBRenderRequest(
    scene_id="scene_001",
    resolution_h=720,
    resolution_w=1280,
    camera_intrinsics=available.available_cameras[0].intrinsics,
    frame_start_us=1000000,
    frame_end_us=1033000,
    sensor_pose=sensorsim_pb2.PosePair(
        start_pose=camera_pose,
        end_pose=camera_pose,
    ),
    dynamic_objects=traffic_objects,
    image_format=sensorsim_pb2.JPEG,
    image_quality=0.95,
    insert_ego_mask=True,
    ego_mask_id=sensorsim_pb2.EgoMaskId(
        camera_logical_id="camera_front_wide_120fov",
        rig_config_id="hyperion_8.0",
    ),
)
response = sensorsim_stub.render_rgb(request)

# Save the encoded image
with open("rendered.jpg", "wb") as f:
    f.write(response.image_bytes)
Shutter Types
SensorSim supports realistic shutter simulation:
GLOBAL: Instantaneous capture (all pixels at once)
ROLLING_TOP_TO_BOTTOM: Row-by-row capture from top
ROLLING_LEFT_TO_RIGHT: Column-by-column from left
ROLLING_BOTTOM_TO_TOP: Row-by-row from bottom
ROLLING_RIGHT_TO_LEFT: Column-by-column from right
Rolling shutter modes interpolate the camera pose between start_pose and end_pose across the capture interval (frame_start_us to frame_end_us).
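For example, ROLLING_TOP_TO_BOTTOM can be pictured as assigning each row a capture time inside [frame_start_us, frame_end_us] and evaluating the interpolated pose at that time. A sketch of the timing, assuming the readout is spread uniformly over the frame interval (SensorSim's exact timing model may differ):

```python
def row_capture_times(frame_start_us, frame_end_us, height):
    # Uniform readout: row 0 at frame_start_us, the last row at frame_end_us.
    span = frame_end_us - frame_start_us
    return [frame_start_us + span * row / (height - 1) for row in range(height)]

times = row_capture_times(1_000_000, 1_033_000, 720)
```

The pose used for a given row would then be the interpolation of start_pose and end_pose at that row's capture time.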