Conversion Function
convert_from_keras_model()
The primary function for converting Keras models to hls4ml.
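A sketch of the call signature (keyword defaults abbreviated; check the API reference of your installed version). `model` and `config` are assumed to be defined elsewhere:

```python
import hls4ml

hls_model = hls4ml.converters.convert_from_keras_model(
    model,                       # the Keras model to convert
    output_dir='my-hls-test',
    project_name='myproject',
    backend='Vivado',
    hls_config=config,
    io_type='io_parallel',       # forwarded to the backend configuration
    part='xcvu9p-flgb2104-2-i',
    clock_period=5,
)
```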
Parameters
- model - Keras model instance to convert. Can be a Sequential or Functional model.
- output_dir - Output directory for the generated HLS project.
- project_name - Name of the HLS project.
- backend - Backend to use for HLS synthesis. Options: 'Vivado', 'Vitis', 'Quartus', 'Catapult'.
- io_type - I/O implementation type. Options: 'io_parallel', 'io_stream'.
- hls_config - Configuration dictionary for HLS conversion. Should include:
  - Model - Dictionary with Precision and ReuseFactor
  - LayerName - Per-layer configuration (optional)
  - LayerType - Per-layer-type configuration (optional)
- part - Target FPGA part number (e.g., 'xcvu9p-flgb2104-2-i').
- clock_period - Clock period in nanoseconds.
- An option to enable model-wise precision propagation with fixed-point types only.
- An option to allow fallback to Keras v2 handlers for unsupported layers (Keras v3 only).
Complete Example
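A minimal end-to-end sketch; the model, part number, and precision values are illustrative, not prescriptive:

```python
import hls4ml
from tensorflow.keras import layers, models

# Build (or load) a Keras model
model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(64, activation='relu', name='fc1'),
    layers.Dense(10, activation='softmax', name='output'),
])

# Auto-generate a baseline configuration
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 1

# Convert to an HLS project and compile the C simulation library
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='my-hls-prj',
    project_name='myproject',
    backend='Vivado',
    io_type='io_parallel',
    part='xcvu9p-flgb2104-2-i',
    clock_period=5,
)
hls_model.compile()
```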
Supported Layers
The Keras frontend supports a comprehensive set of layer types.
Core Layers
- Dense - Fully connected layers
- BinaryDense - Binary quantized dense layers
- TernaryDense - Ternary quantized dense layers
- Activation - Standard activation functions
- InputLayer - Input specification
- Dropout - Training-only layer (skipped during conversion)
Activation Layers
- ReLU - Rectified Linear Unit
- LeakyReLU - Leaky ReLU with configurable slope
- ELU - Exponential Linear Unit
- PReLU - Parametric ReLU
- ThresholdedReLU - Thresholded activation
- Softmax - Softmax activation
- Sigmoid - Sigmoid activation
- Tanh - Hyperbolic tangent
- HardActivation - Hard sigmoid
Convolutional Layers
- Conv1D - 1D convolution
- Conv2D - 2D convolution
- DepthwiseConv1D - Depthwise 1D convolution
- DepthwiseConv2D - Depthwise 2D convolution
- SeparableConv1D - Separable 1D convolution
- SeparableConv2D - Separable 2D convolution
Pooling Layers
- MaxPooling1D / MaxPooling2D - Max pooling
- AveragePooling1D / AveragePooling2D - Average pooling
- GlobalAveragePooling1D / GlobalAveragePooling2D - Global pooling
- GlobalMaxPooling1D / GlobalMaxPooling2D - Global max pooling
Normalization Layers
- BatchNormalization - Batch normalization
- LayerNormalization - Layer normalization (Keras v3)
Recurrent Layers
- SimpleRNN - Simple recurrent neural network
- LSTM - Long Short-Term Memory
- GRU - Gated Recurrent Unit
- Bidirectional - Bidirectional wrapper for RNNs
Merge Layers
- Add - Element-wise addition
- Subtract - Element-wise subtraction
- Multiply - Element-wise multiplication
- Average - Element-wise averaging
- Maximum - Element-wise maximum
- Minimum - Element-wise minimum
- Concatenate - Tensor concatenation
- Dot - Dot product
Reshape Layers
- Flatten - Flatten input
- Reshape - Arbitrary reshaping
- Permute - Dimension permutation
- ZeroPadding1D / ZeroPadding2D - Zero padding
- UpSampling1D / UpSampling2D - Upsampling
- RepeatVector - Vector repetition
QKeras Support
hls4ml has extensive support for QKeras quantized layers:
- QDense - Quantized dense layers
- QConv1D / QConv2D - Quantized convolutions
- QActivation - Quantized activations
- QBatchNormalization - Quantized batch normalization
Framework-Specific Configuration
Data Format
Keras uses the `channels_last` data format by default (e.g., `(batch, height, width, channels)`). hls4ml automatically handles this format.
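A minimal sketch of a `channels_last` model that converts without any manual transposition:

```python
from tensorflow.keras import layers, models

# channels_last: inputs are laid out as (batch, height, width, channels)
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),   # 32x32 RGB image, channels last
    layers.Conv2D(16, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# No transposition is needed before conversion; hls4ml reads the
# data format from the model itself.
```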
Layer Name Configuration
Configure specific layers by name, as in the sketch below:
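A sketch using `config_from_keras_model` with name granularity; `fc1` is a hypothetical layer name from your model:

```python
import hls4ml

# 'name' granularity exposes a per-layer section in the config
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Override precision and reuse for one named layer
config['LayerName']['fc1']['Precision'] = 'ap_fixed<18,8>'
config['LayerName']['fc1']['ReuseFactor'] = 2
```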
Weights and Biases
Weights and biases are extracted from the Keras model automatically during conversion; nothing needs to be exported by hand.
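As a sanity check, you can inspect the tensors hls4ml will read, using standard Keras calls:

```python
# Print the shape of every weight tensor hls4ml will extract
for layer in model.layers:
    for i, w in enumerate(layer.get_weights()):
        print(f'{layer.name}[{i}]: shape={w.shape}')
```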
Troubleshooting
Unsupported Layer Type Error
If you encounter an unsupported layer error:
- Check whether the layer appears in the supported layers list above
- For Keras v3, enable the fallback to Keras v2 handlers at conversion time
- Replace unsupported layers with supported alternatives (see the sketch after this list)
- Implement a custom layer handler (see Advanced Usage)
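A minimal sketch of the replacement approach, using only built-in Keras layers; the Lambda example is hypothetical:

```python
from tensorflow.keras import layers, models

# Unsupported: arbitrary Python in a Lambda layer, e.g.
#   layers.Lambda(lambda x: tf.nn.relu(x))
# Supported alternative: the equivalent built-in layer
model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(32),
    layers.Activation('relu'),   # converts cleanly in place of the Lambda
])
```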
Precision Mismatch Between Keras and HLS
To improve accuracy:
- Increase the model-wide precision (see the sketch after this list)
- Use per-layer precision for sensitive layers
- Enable bit-exact mode for fixed-point precision propagation
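A sketch of the first two adjustments; the layer name `output_softmax` and the bit widths are illustrative:

```python
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Model-wide: more total and integer bits than the common ap_fixed<16,6>
config['Model']['Precision'] = 'ap_fixed<24,8>'

# Per-layer: give a sensitive layer even wider precision
config['LayerName']['output_softmax']['Precision'] = 'ap_fixed<32,12>'
```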
Model Loading Issues with H5 Files
Models can be loaded back with the standard Keras APIs before conversion, as sketched below.
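For models saved with `model.save()` (file names here are placeholders):

```python
from tensorflow.keras.models import load_model

model = load_model('model.h5')
```

For separate architecture and weights:

```python
from tensorflow.keras.models import model_from_json

with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')
```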
QKeras Quantization Issues
When working with QKeras models:
- Ensure QKeras is installed: `pip install qkeras`
- QKeras quantizers are automatically detected and converted
- Check the quantizer configuration of each layer (see the sketch after this list)
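A sketch for inspecting quantizer settings, assuming `model` is a QKeras model; the exact config keys vary by layer type:

```python
# Quantizer settings are serialized into each layer's config
for layer in model.layers:
    cfg = layer.get_config()
    for key in ('kernel_quantizer', 'bias_quantizer'):
        if key in cfg:
            print(layer.name, key, cfg[key])
```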
Keras v3 Compatibility
For Keras v3 (TensorFlow 2.16+):
- hls4ml automatically detects Keras v3
- Some layers may require the fallback to Keras v2 handlers
- Check your installed version with `import keras; print(keras.__version__)`
Advanced Usage
Custom Layer Handlers
Register custom handlers for unsupported layers, as sketched below:
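A minimal sketch assuming the `register_keras_layer_handler` entry point and the Keras v2 handler signature; `MyCustomLayer` and its mapping onto an Activation layer are hypothetical:

```python
from hls4ml.converters import register_keras_layer_handler

def parse_my_custom_layer(keras_layer, input_names, input_shapes, data_reader):
    # Map the Keras layer config onto an hls4ml layer dictionary
    layer = {
        'class_name': 'Activation',            # reuse an existing hls4ml layer type
        'activation': 'relu',
        'name': keras_layer['config']['name'],
    }
    output_shape = input_shapes[0]             # shape is unchanged in this example
    return layer, output_shape

register_keras_layer_handler('MyCustomLayer', parse_my_custom_layer)
```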
Using config_from_keras_model()
Generate an optimized configuration automatically:
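For example:

```python
import hls4ml

# Granularity can be 'model', 'type', or 'name'; 'name' exposes per-layer keys
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Model']['ReuseFactor'] = 4

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='my-hls-prj'
)
```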
Source Code Reference
The Keras converter implementation can be found at:
- hls4ml/converters/keras_v2_to_hls.py:348 - Main conversion function
- hls4ml/converters/keras/ - Layer-specific handlers
- hls4ml/converters/__init__.py:169 - API entry point
Next Steps
- Configuration - Learn about advanced configuration options
- Optimization - Optimize your model for FPGA deployment
- Backends - Explore different FPGA backend options
- API Reference - Complete API documentation
