donkey makemovie command creates a video (MP4) from recorded tub data. It can optionally overlay model predictions, user inputs, and saliency maps to visualize autonomous driving behavior.
Usage
Options
Path to the tub directory containing recorded data to convert into a movie.
Output filename for the generated movie. Default is
tub_movie.mp4.Location of config file to use. Default is
./config.py.Path to a trained model file. If provided, the video will show the model’s predictions alongside user inputs.
Model type to load (e.g.,
linear, categorical). Required if --model is specified and the type cannot be inferred.Overlay a saliency map showing which parts of the image most influence the model’s decisions. Requires
--model to be specified.Index of the first frame to process. Use this to skip the beginning of a tub.
Index of the last frame to process. Default is -1 (process until the end). Use this to limit the video length.
Scale factor to enlarge the output video frames. Default is 2x. Higher values create larger videos but increase processing time and file size.
Show user input (steering/throttle) overlaid on the video. Use
--draw-user-input=false to disable.What Gets Created
The command creates a video file showing:- Camera images from the tub, scaled by the
--scalefactor - User inputs (steering and throttle values) if
--draw-user-inputis enabled - Model predictions (if
--modelis specified) showing predicted steering and throttle - Saliency map (if
--salientis enabled) highlighting image regions affecting model decisions
Examples
Create basic movie from tub
tub_movie.mp4 with user inputs overlaid.
Specify custom output filename
Create movie with model predictions
Create movie with saliency map
Process only part of the tub
Create larger video
Model predictions without user input overlay
Specify model type explicitly
Output Example
While processing, you’ll see progress output:Video Overlays
When displaying data on the video, the overlays typically show:User Input Display
- Steering: Value from -1.0 (full left) to 1.0 (full right)
- Throttle: Value from -1.0 (full reverse) to 1.0 (full forward)
- Color-coded bars or numerical values
Model Prediction Display
- Predicted Steering: Model’s steering output
- Predicted Throttle: Model’s throttle output
- Shown alongside user inputs for comparison
Saliency Map
- Heatmap overlay on the camera image
- Red/yellow areas indicate regions strongly influencing the model
- Blue/green areas have less influence
- Helps understand what the model is “looking at”
Use Cases
Model Evaluation
Visualize how well your model’s predictions match your driving:Debugging
Identify where the model makes mistakes:Data Review
Review recorded data to identify bad frames:Presentations
Create demo videos for sharing:Training Analysis
Compare multiple models on the same data:Performance Considerations
Processing Time
- Depends on: number of frames, scale factor, model complexity, saliency computation
- Typical rate: 10-30 frames per second
- Large tubs may take several minutes
File Size
- Depends on: resolution (scale factor), duration, compression
- Typical: 1-5 MB per minute at 2x scale
- 4x scale will produce significantly larger files
Reducing Processing Time
- Use
--startand--endto process fewer frames - Reduce
--scalefactor - Disable
--salient(saliency computation is expensive) - Use a machine with better CPU/GPU
Troubleshooting
”No such file or directory” error
- Verify tub path exists and is correct
- Use absolute paths if relative paths fail
- Check that tub contains valid data (manifest.json)
Model loading errors
- Ensure model path is correct
- Verify model file is not corrupted
- Specify
--typeif model type cannot be inferred - Check that config matches model requirements
Video codec errors
- Install OpenCV with video support:
pip install opencv-python - On Linux, may need:
sudo apt-get install libavcodec-extra - Try different output formats:
.mp4,.avi
Saliency map not showing
- Requires TensorFlow/Keras model
- Not supported for all model types
- May require additional dependencies
Memory errors
- Process fewer frames using
--startand--end - Reduce
--scalefactor - Close other applications
- Use a machine with more RAM
Next Steps
After creating movies:- Analyze model behavior: Look for patterns in prediction errors
- Identify problem areas: Note where predictions diverge from user input
- Collect targeted data: Record more data for problematic scenarios
- Compare models: Create movies for different model versions
- Share results: Use videos for documentation or presentations
- Use
donkey tubplotfor quantitative prediction analysis - Use
donkey tubhistto visualize data distributions
