## Overview

3D camera capabilities:

- **Depth perception** - Measure distance to objects
- **RGB + Depth** - Color images aligned with depth
- **Stereo vision** - Calculate depth from twin cameras
- **IMU data** - Accelerometer and gyroscope (D435i only)
- **Point clouds** - 3D reconstruction
- **Obstacle detection** - Identify and measure obstacles
## Hardware Comparison

### Intel RealSense D435i

Specifications:

- Depth range: 0.3-10 meters
- RGB resolution: 424x240 @ 60fps (configurable up to 1920x1080)
- Depth resolution: 848x480 @ 60fps
- FOV: 87° × 58° (depth), 69° × 42° (RGB)
- IMU: Yes (D435i only)
- Interface: USB 3.0
- Cost: ~$200-250

Pros:

- Excellent depth accuracy
- IMU for motion tracking
- Well-supported SDK
- Good documentation

Cons:

- Higher power consumption
- Larger form factor
- More expensive
### OAK-D (OpenCV AI Kit)

Specifications:

- Depth range: 0.2-10 meters
- RGB resolution: 640x480 (up to 4K)
- Depth resolution: 640x480
- FOV: 71° × 55° (stereo)
- Processing: Intel Movidius VPU onboard
- Interface: USB 3.0
- Cost: ~$150-200

Pros:

- Onboard AI processing
- Lower host CPU usage
- Compact design
- Good value

Cons:

- More complex API
- Less mature ecosystem
- No IMU
## Intel RealSense D435/D435i

### Installation

#### Install librealsense
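On x86 Linux hosts, the Python bindings are typically available as prebuilt wheels (a sketch; see the librealsense docs for other platforms):

```shell
# Install the librealsense Python bindings from PyPI (prebuilt x86 wheels)
pip install pyrealsense2
```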
For Jetson Nano, build librealsense from source.

### Configuration
Configure in `myconfig.py`: see donkeycar/templates/cfg_simulator.py:271-276.
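A typical `myconfig.py` fragment might look like this (verify the variable names against the template lines referenced above):

```python
# myconfig.py - RealSense D435/D435i settings (check names against the cfg template)
CAMERA_TYPE = "D435"          # select the RealSense camera part
IMAGE_W = 160                 # output image width after resize
IMAGE_H = 120                 # output image height after resize
REALSENSE_D435_RGB = True     # capture color frames
REALSENSE_D435_DEPTH = True   # capture depth frames
REALSENSE_D435_IMU = False    # capture IMU data (D435i only)
REALSENSE_D435_ID = None      # serial number, or None for the first camera found
```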
### RealSense435i Part

See donkeycar/parts/realsense435i.py:33-120.
### Native Resolution

The part always captures at native resolution and resizes: see donkeycar/parts/realsense435i.py:29-31.
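The resize step can be illustrated with plain numpy index sampling (a nearest-neighbor sketch, not the part's actual implementation):

```python
import numpy as np

def downsample(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize by index sampling (illustrative only)."""
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return image[rows[:, None], cols]

# e.g. native 848x480 depth down to 160x120
native = np.zeros((480, 848), dtype=np.uint16)
small = downsample(native, 120, 160)
print(small.shape)  # (120, 160)
```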
### Depth Alignment

Depth frames are aligned to RGB: see donkeycar/parts/realsense435i.py:81-99.
### IMU Data (D435i only)

Capture accelerometer and gyroscope readings: see donkeycar/parts/realsense435i.py:59-69.
### Multiple Cameras

Select a specific camera by its device serial number.

### Testing

Run the self-test: see donkeycar/parts/realsense435i.py:224-317.
## OAK-D (OpenCV AI Kit)

### Installation
Install the DepthAI SDK from PyPI with `pip install depthai`.

#### Linux USB Permissions
See donkeycar/parts/oak_d.py:10-13.
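The udev rule from the Luxonis documentation (vendor ID 03e7 is the Movidius VPU) is typically:

```shell
# Allow non-root USB access to the OAK-D
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
# Unplug and replug the camera after applying the rule
```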
### Configuration

See donkeycar/templates/cfg_simulator.py:277-279.
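An illustrative `myconfig.py` fragment (the variable names here are hypothetical; use the names defined in the referenced template lines):

```python
# myconfig.py - OAK-D settings (hypothetical names; check the cfg template)
CAMERA_TYPE = "OAKD"   # hypothetical: select the OAK-D camera part
IMAGE_W = 160          # output image width
IMAGE_H = 120          # output image height
```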
### OakD Part

See donkeycar/parts/oak_d.py:34-86.
### Native Resolution

See donkeycar/parts/oak_d.py:30-31.
### Device Selection

List and select devices: see donkeycar/parts/oak_d.py:89-117.
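Device selection can be sketched with a small helper fed by DepthAI's device list; `getAllAvailableDevices` and `getMxId` are part of the DepthAI v2 Python API, but treat the wiring here as a sketch:

```python
def pick_device(device_ids, wanted=None):
    """Return the wanted device ID if present, else the first one found."""
    if not device_ids:
        raise RuntimeError("no OAK-D devices found")
    if wanted is None:
        return device_ids[0]
    if wanted in device_ids:
        return wanted
    raise RuntimeError(f"device {wanted!r} not among {device_ids}")

# With real hardware (DepthAI v2 API; treat as a sketch):
#   import depthai as dai
#   ids = [info.getMxId() for info in dai.Device.getAllAvailableDevices()]
#   device_id = pick_device(ids, wanted=None)

print(pick_device(["14442C10D13EABCE00"]))  # prints 14442C10D13EABCE00
```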
### Pipeline Configuration

OAK-D uses a pipeline architecture: see donkeycar/parts/oak_d.py:119-173.
### Testing

Run the self-test: see donkeycar/parts/oak_d.py:260-381.
## Using Depth Data

### Depth Image Format

Depth is stored as a uint16 array: see donkeycar/parts/realsense435i.py:158.
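Before using the values, convert to meters and mask invalid pixels; a sketch (the 1 mm depth unit is the RealSense default; verify your device's depth scale):

```python
import numpy as np

DEPTH_SCALE = 0.001  # meters per unit; RealSense default is 1 mm (verify per device)

def depth_to_meters(depth: np.ndarray) -> np.ndarray:
    """Convert uint16 depth to float32 meters; invalid (0) pixels become NaN."""
    meters = depth.astype(np.float32) * DEPTH_SCALE
    meters[depth == 0] = np.nan  # 0 means "no reading"
    return meters

raw = np.array([[0, 500], [1000, 2500]], dtype=np.uint16)
print(depth_to_meters(raw))
```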
### Obstacle Detection
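A common approach is to threshold a central window of the depth image; a minimal numpy sketch (function name and threshold are illustrative, assuming uint16 depth in millimeters):

```python
import numpy as np

def obstacle_ahead(depth_mm: np.ndarray, min_dist_mm: int = 500,
                   window: float = 0.33) -> bool:
    """True if any valid pixel in the central window is closer than min_dist_mm."""
    h, w = depth_mm.shape
    y0, y1 = int(h * window), int(h * (1 - window))
    x0, x1 = int(w * window), int(w * (1 - window))
    center = depth_mm[y0:y1, x0:x1]
    valid = center[center > 0]              # 0 means "no reading"; ignore it
    return valid.size > 0 and bool(valid.min() < min_dist_mm)

clear = np.full((120, 160), 2000, dtype=np.uint16)  # everything 2 m away
print(obstacle_ahead(clear))                        # False
clear[60, 80] = 300                                 # something 30 cm ahead
print(obstacle_ahead(clear))                        # True
```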
Detect obstacles in front of the vehicle by thresholding nearby depth readings (filter out zero-valued, invalid pixels first).

### Depth Visualization

Convert depth to a color image for display: see donkeycar/parts/realsense435i.py:290-292 (test code).
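A quick way to eyeball depth without OpenCV is to normalize it to 8-bit grayscale (the referenced test code uses a proper color map; this sketch only shows the normalization):

```python
import numpy as np

def depth_to_gray(depth_mm: np.ndarray, max_mm: int = 4000) -> np.ndarray:
    """Map depth to uint8 grayscale: near = bright, far/invalid = dark."""
    clipped = np.clip(depth_mm.astype(np.float32), 0, max_mm)
    gray = (255 * (1.0 - clipped / max_mm)).astype(np.uint8)
    gray[depth_mm == 0] = 0  # render invalid pixels as black
    return gray

d = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)
print(depth_to_gray(d))  # [[  0 191] [127   0]]
```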
### Point Cloud Generation
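Back-projection through the pinhole camera model gives one 3D point per valid pixel; a numpy sketch with illustrative intrinsics (real values come from the camera calibration):

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Return an (N, 3) array of XYZ points in meters for valid pixels."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - cx) * z / fx              # pinhole back-projection
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]           # drop invalid (zero-depth) pixels

depth = np.zeros((4, 4), dtype=np.float32)
depth[2, 3] = 2.0                       # one valid pixel, 2 m away
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts)  # one point: [0.02, 0.0, 2.0]
```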
Convert valid depth pixels to 3D points using the camera intrinsics.

## Recording Depth Data
Add the depth array to the tub alongside the RGB image.

## Training with Depth
### Stacked Input
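Stacking appends depth as a fourth channel so a single network sees both; a numpy sketch (the normalization constants are illustrative):

```python
import numpy as np

def stack_rgb_depth(rgb: np.ndarray, depth_mm: np.ndarray,
                    max_mm: int = 4000) -> np.ndarray:
    """Return an (H, W, 4) float32 array: RGB in [0,1] plus normalized depth."""
    rgb_f = rgb.astype(np.float32) / 255.0
    depth_f = np.clip(depth_mm.astype(np.float32) / max_mm, 0.0, 1.0)
    return np.concatenate([rgb_f, depth_f[..., None]], axis=-1)

rgb = np.zeros((120, 160, 3), dtype=np.uint8)
depth = np.full((120, 160), 2000, dtype=np.uint16)
x = stack_rgb_depth(rgb, depth)
print(x.shape, x[0, 0, 3])  # (120, 160, 4) 0.5
```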
Combine RGB and depth into a single multi-channel input array.

### Dual-Branch Model
Process RGB and depth in separate model branches and merge their features.

## Performance Optimization
### Resolution

Use a lower resolution for faster processing.

### Selective Capture
### Frame Decimation
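Decimation is just a counter around the expensive step; an illustrative sketch (not donkeycar's API):

```python
class Decimator:
    """Forward every Nth frame; return the last processed result otherwise."""

    def __init__(self, n: int, process):
        self.n = n
        self.process = process
        self.count = 0
        self.last = None

    def run(self, frame):
        if self.count % self.n == 0:      # process only every Nth frame
            self.last = self.process(frame)
        self.count += 1
        return self.last

dec = Decimator(3, process=lambda f: f * 10)
print([dec.run(f) for f in range(6)])  # [0, 0, 0, 30, 30, 30]
```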
Process every Nth frame to reduce CPU load.

## Troubleshooting
### RealSense Not Detected

### OAK-D Not Found

### Poor Depth Quality
- Avoid direct sunlight (interferes with IR)
- Ensure adequate lighting for RGB
- Keep lenses clean
- Avoid reflective/transparent surfaces
- Check depth range (too close or too far)
### High CPU Usage

- Lower the resolution
- Disable unused streams
- Use `threaded=True`
- Process depth at a lower frequency
## Best Practices

- **Mount rigidly** - Vibration affects stereo calibration
- **Test depth range** - Verify min/max distances for your use case
- **Use threading** - Don't block the vehicle loop
- **Filter invalid depth** - Check for zero values
- **Align streams** - Use built-in alignment
- **Calibrate** - Factory calibration is usually sufficient
- **Consider lighting** - IR works in low light, RGB needs illumination
## Next Steps
- Implement obstacle avoidance with depth
- Combine with lidar for 360° perception
- Use IMU for kinematics integration
- Stream depth via telemetry
- Test in simulator first
