The `donkey cnnactivations` command visualizes what each convolutional layer in your trained model “sees” when processing an image. This is useful for understanding and debugging your neural network.
## Usage

Run the command with an input image, a trained model, and your config file, e.g. `donkey cnnactivations --image <image_path> --model <model_path> --config <config_path>` (run `donkey cnnactivations --help` to confirm the exact flag names in your version).
## Arguments

- **image** - Path to the input image file to analyze. Can be an image from your tub data or any test image.
- **model** - Path to your trained Keras model file (`.h5` or `.keras` format).
- **config** - Path to your `config.py` file for image preprocessing settings.
## Description
This debugging tool extracts and visualizes the feature maps (activations) from each Conv2D layer in your CNN. It shows you:

- What patterns each layer detects (edges, textures, shapes, etc.)
- How information transforms as it flows through the network
- Which layers are learning useful features vs. noise
- Potential issues with model architecture or training
## Example Usage

For example: `donkey cnnactivations --image my_image.jpg --model models/mypilot.h5 --config ./config.py` (adjust paths to your setup).
## How It Works
1. **Loads your model** - Opens the trained Keras model file
2. **Processes the image** - Preprocesses the image using your config settings
3. **Extracts activations** - Runs the image through each Conv2D layer
4. **Visualizes feature maps** - Displays the activation patterns in a grid
5. **Shows layer info** - Prints the shape of each layer's output
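The steps above can be sketched in Keras. This is a hedged approximation, not the actual Donkeycar source: `get_conv_activations` is a hypothetical helper, and the tiny untrained demo model stands in for a real pilot loaded with `tf.keras.models.load_model(...)`.

```python
import numpy as np
import tensorflow as tf

def get_conv_activations(model, image):
    """Return {layer_name: feature maps} for every Conv2D layer in the model."""
    convs = [l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D)]
    # A second model that reuses the trained weights but outputs every
    # Conv2D layer's feature maps in a single forward pass.
    activation_model = tf.keras.Model(inputs=model.input,
                                      outputs=[l.output for l in convs])
    batch = np.expand_dims(image, axis=0)  # the model expects a batch dimension
    activations = activation_model.predict(batch, verbose=0)
    return dict(zip([l.name for l in convs], activations))

# Demo on a tiny untrained model (stand-in for a trained pilot):
demo = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(120, 160, 3), name='img_in'),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation='relu'),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation='relu'),
])
acts = get_conv_activations(demo, np.random.rand(120, 160, 3).astype('float32'))
for name, a in acts.items():
    print(name, a.shape)  # each is (1, height, width, num_filters)
```

Each returned array is one layer's grid of feature maps; the tool then plots every channel of each array as a small image.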
## Understanding the Output
The command opens matplotlib windows showing:

### Early Layers
- Detect low-level features: edges, colors, gradients
- Activations look similar to the input image
- Example: Vertical edges of lane lines, horizontal track edges
### Middle Layers
- Detect mid-level features: corners, textures, patterns
- Activations become more abstract
- Example: Curved track sections, road surface texture
### Deep Layers
- Detect high-level features: complex shapes and semantic concepts
- Activations are very abstract, less recognizable
- Example: “This is a left turn” or “This is a straight section”
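The early-layer behavior described above can be reproduced by hand: a fixed Sobel-style kernel (a toy stand-in for a learned first-layer filter) responds strongly exactly where a lane line creates a vertical edge. Plain NumPy, no trained model needed:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation, what a Conv2D layer computes per channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy grayscale frame: dark road (0.0) with a bright vertical lane line at column 4.
frame = np.zeros((8, 8))
frame[:, 4] = 1.0

# Sobel-style vertical-edge kernel, similar to filters early layers learn.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

activation = np.abs(conv2d_valid(frame, kernel))
print(activation)  # non-zero only in the columns bordering the lane line
```

The resulting feature map is zero on the flat road and peaks along both sides of the line, which is exactly the kind of pattern you should see in healthy early-layer activations.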
## Use Cases
### 1. Model Debugging
Check if your model is learning the right features:

- Early layers not detecting lane lines → Need better data or augmentation
- All layers showing noise → Model hasn’t trained properly
- Some layers inactive (all black) → Dead neurons, reduce learning rate
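The "dead neurons" symptom can be quantified with a small helper (hypothetical, not part of Donkeycar): count the feature maps in a layer's activation that never fire.

```python
import numpy as np

def dead_fraction(activation, eps=1e-6):
    """Fraction of feature maps in an (H, W, C) activation that are ~zero everywhere."""
    per_map_max = np.abs(activation).reshape(-1, activation.shape[-1]).max(axis=0)
    return float(np.mean(per_map_max < eps))

# Example: a layer with 8 feature maps, 3 of them completely silent.
act = np.random.rand(30, 40, 8)
act[:, :, [1, 4, 6]] = 0.0
print(dead_fraction(act))  # 0.375 -> a sizable dead fraction suggests training issues
```

A fraction near zero is healthy; a large fraction across many images is a sign of dead ReLU units, which lowering the learning rate often helps.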
### 2. Architecture Validation
Compare how different model architectures respond to the same input image.

### 3. Understanding Predictions

See which parts of the image the model focuses on for specific predictions.

## Interpreting Activations
Healthy activations should show:
- Clear, distinct patterns (not random noise)
- Progressive abstraction from early to late layers
- Activation of multiple feature maps (not just a few)
- Responses aligned with important image features (track, obstacles)
## Requirements
This command requires:

- **TensorFlow/Keras** - Model must be in Keras format
- **matplotlib** - For visualization (installed with `donkeycar[pc]`)
- **GUI display** - Cannot run headless; needs X11 or another display
## Troubleshooting
### `ImportError: No module named tensorflow`

Install TensorFlow with `pip install tensorflow`, or install the PC extras with `pip install donkeycar[pc]`.
### No display available
This command requires a graphical display. Options:
- Run on your local machine (not SSH)
- Use X11 forwarding: `ssh -X user@host`
- Use VNC to access the car with a GUI
- Export the `DISPLAY` variable if using a remote X server
### Model layer not found

If you get `KeyError: 'img_in'`:

- Your model must have an input layer named `img_in`
- This is standard for Donkeycar models
- If using a custom model, ensure input layer naming matches
### All activations are black
If all activations appear black or empty:
- Check image path is correct and image loads
- Verify image dimensions match model input (default 160x120x3)
- Ensure model is trained (not random weights)
- Try a different image from your training data
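The dimension check above can be automated with a small guard (a hypothetical helper; note that the default 160x120 camera resolution corresponds to an array shape of height 120, width 160, 3 channels):

```python
import numpy as np

def check_input_shape(image_array, expected=(120, 160, 3)):
    """Fail fast if an image won't fit the model's input tensor."""
    if image_array.shape != expected:
        raise ValueError(
            f"image shape {image_array.shape} does not match model input "
            f"{expected}; resize or crop before visualizing")
    return True

# Loading with PIL, np.asarray(Image.open(path)) yields (height, width, channels):
check_input_shape(np.zeros((120, 160, 3)))  # passes silently
```

Running this before the visualization makes a silent black-activation failure into an explicit error message.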
## Advanced Usage
### Comparing Multiple Images
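One way to batch the visualization is a small driver script. This is a hedged sketch: `build_commands` is a hypothetical helper, and the `--image`/`--model`/`--config` flag names are assumed (confirm with `donkey cnnactivations --help`).

```python
from pathlib import Path

def build_commands(image_dir, model_path, config_path='./config.py'):
    """Build one `donkey cnnactivations` invocation per .jpg image in image_dir."""
    return [
        ['donkey', 'cnnactivations',
         '--image', str(img), '--model', model_path, '--config', config_path]
        for img in sorted(Path(image_dir).glob('*.jpg'))
    ]

# Then run each one, e.g.:
#   import subprocess
#   for cmd in build_commands('data/tub_1', 'models/mypilot.h5'):
#       subprocess.run(cmd, check=True)   # opens one set of windows per image
```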
Running the visualization over a folder of images this way lets you compare activations across frames side by side.

### Saving Activation Plots
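To save activation plots instead of displaying them, swap `plt.show()` for `fig.savefig(...)` and use a non-interactive backend. A minimal sketch of the plotting side (the random feature maps stand in for one layer's real activations; adapt the idea inside the actual plotting loop):

```python
import matplotlib
matplotlib.use('Agg')            # non-interactive backend: no display required
import matplotlib.pyplot as plt
import numpy as np

feature_maps = np.random.rand(16, 28, 28)   # stand-in for one layer's activations
cols = 4
rows = int(np.ceil(len(feature_maps) / cols))
fig, axes = plt.subplots(rows, cols, figsize=(8, 8))
for ax, fmap in zip(axes.flat, feature_maps):
    ax.imshow(fmap, cmap='viridis')
    ax.axis('off')
fig.savefig('activations_layer1.png', dpi=150)   # instead of plt.show()
plt.close(fig)
```

With the `Agg` backend this also sidesteps the "No display available" problem, since nothing is ever drawn to a screen.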
Modify the plotting code in the tool's source to save figures instead of displaying them.

## Related Tools
- `donkey train` - Train the models you're analyzing
- `donkey tubplot` - Plot model predictions vs. ground truth
- `donkey makemovie` - Create videos with saliency maps
## Source Code

Implemented in `donkeycar/management/base.py`, lines 366-435.