Overview
FlowMatching is a diffusion-style generative model based on flow matching. It uses a continuous normalizing flow to transform noise into data samples by following a learned vector field.
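To make the idea in the overview concrete, here is a small numeric sketch (not part of this class) of how integrating a vector field dx/dt = v(x, t) transports samples from one distribution to another. The field v is hand-written here; in FlowMatching it would be a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 2))        # noise samples at t=0
target = np.array([5.0, 5.0])

def v(x, t):
    # Vector field of the straight-line path (1 - t) * x0 + t * target;
    # its velocity is (target - x) / (1 - t), capped for numerical safety.
    return (target - x) / max(1.0 - t, 1e-3)

T = 100
for i in range(T):                        # Euler integration from t=0 to t=1
    t = i / T
    x = x + (1.0 / T) * v(x, t)

print(np.round(x.mean(axis=0), 2))        # mean transported close to [5., 5.]
```

After integration the cloud of noise samples has been carried to the target point, which is the same mechanism FlowMatching uses to carry noise to the data distribution.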
Class Definition
Constructor
The integration method used during inference. Currently only “euler” (Euler integration) is supported.
The number of inference steps to use when sampling. More steps generally produce higher quality samples but take longer.
The dimension of the input tensor. Inherited from BaseDiffusion. Can be a single integer or a sequence of integers defining the shape.
Example
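The example body appears to be missing here. Below is a minimal sketch of constructing the class, using a stand-in definition so the snippet runs on its own; the parameter names (x_dims, num_inference_steps, int_method) are taken from the attribute references elsewhere on this page, while the defaults and validation are assumptions.

```python
from dataclasses import dataclass
from typing import Sequence, Union

@dataclass
class FlowMatching:
    """Stand-in mirroring the documented constructor parameters."""
    x_dims: Union[int, Sequence[int]]   # shape of each sample (from BaseDiffusion)
    num_inference_steps: int = 50       # Euler steps used when sampling (assumed default)
    int_method: str = "euler"           # only "euler" is currently supported

    def __post_init__(self):
        if self.int_method != "euler":
            raise ValueError("only 'euler' integration is supported")

# Construct a sampler over 28x28 single-channel images:
fm = FlowMatching(x_dims=(1, 28, 28), num_inference_steps=100)
```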
Methods
sample
The number of samples to generate in parallel.
The denoising step function that predicts the vector field. Should take keyword arguments x (current state) and t (timestep) and return the predicted vector field. See StepFn Protocol for details.
The PyTorch device to use for sampling (e.g., “cpu”, “cuda”).
Whether to return the outputs from all intermediate sampling steps.
The number of inference steps to use. If provided, this overrides self.num_inference_steps for this sampling call.
The integration method to use. If provided, this overrides self.int_method for this sampling call.
Returns
If return_all_steps=False: Returns the final sampled tensor with shape [B, *x_dims], where B is the batch size.
If return_all_steps=True: Returns a tuple of:
- All sampled tensors with shape [B, T+1, *x_dims], where T is the number of inference steps (includes the initial noise)
- The time steps with shape [T+1], ranging from 0.0 to 1.0
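To make the two return shapes concrete, here is a small numpy illustration with assumed values (B = 4 samples, x_dims = (3, 8, 8), T = 50 steps); the arrays are placeholders standing in for actual sampler output.

```python
import numpy as np

B, x_dims, T = 4, (3, 8, 8), 50

final = np.zeros((B, *x_dims))             # shape when return_all_steps=False
all_steps = np.zeros((B, T + 1, *x_dims))  # first tuple item when return_all_steps=True
ts = np.linspace(0.0, 1.0, T + 1)          # second tuple item: the time steps

print(final.shape)      # (4, 3, 8, 8)
print(all_steps.shape)  # (4, 51, 3, 8, 8)
print(ts[0], ts[-1])    # 0.0 1.0
```

Note that the step axis has length T+1, not T, because the initial noise is included.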
Flow Matching Sampling Process
Flow matching generates samples by:
- Initialize: Start with random noise x ~ N(0, I) at time t=0
- Integrate: Use Euler integration to follow the learned vector field from t=0 to t=1:
  - At each step: x = x + dt * v(x, t), where v is the predicted vector field from step_fn
  - Time steps are linearly spaced: [0.0, 1/T, 2/T, ..., 1.0]
- Output: Return the final state x at t=1, which should resemble the target distribution
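The process above can be sketched as a minimal Euler integrator. This is a self-contained numpy version with a trivial hand-written vector field standing in for a trained step_fn; the function name and signature are illustrative, not the actual API.

```python
import numpy as np

def euler_sample(step_fn, batch_size, x_dims, num_inference_steps,
                 return_all_steps=False, seed=0):
    """Euler-integrate a vector field from t=0 (noise) to t=1 (sample)."""
    T = num_inference_steps
    ts = np.linspace(0.0, 1.0, T + 1)               # [0, 1/T, ..., 1.0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((batch_size, *x_dims))  # x ~ N(0, I) at t=0
    steps = [x]
    for i in range(T):
        dt = ts[i + 1] - ts[i]
        x = x + dt * step_fn(x=x, t=ts[i])          # x = x + dt * v(x, t)
        steps.append(x)
    if return_all_steps:
        return np.stack(steps, axis=1), ts          # [B, T+1, *x_dims], [T+1]
    return x                                        # [B, *x_dims]

# A trivial "vector field" standing in for a trained model:
step_fn = lambda x, t: -x
out, ts = euler_sample(step_fn, batch_size=2, x_dims=(3,),
                       num_inference_steps=10, return_all_steps=True)
print(out.shape, ts.shape)  # (2, 11, 3) (11,)
```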
Usage Example
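The code for this section appears to have been lost. Below is a hedged reconstruction of the typical pattern: wrap a trained network in a function matching the StepFn Protocol, then pass it to sample. The "model" here is a fixed linear map so the snippet is self-contained; the commented sample call is an assumption about the API, not its verified signature.

```python
import numpy as np

W = np.array([[0.0, -1.0], [1.0, 0.0]])  # stand-in "network" weights

def step_fn(*, x, t):
    """Predict the vector field at state x and time t (keyword-only,
    per the StepFn Protocol described above)."""
    return x @ W.T                       # a real model would also condition on t

# With a constructed FlowMatching instance (see the Constructor section),
# sampling would look roughly like:
#   samples = fm.sample(step_fn=step_fn, num_inference_steps=100)

x = np.ones((16, 2))
print(step_fn(x=x, t=0.0).shape)         # (16, 2): one vector per sample
```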
Advanced Usage
Adaptive Step Sizes
You can dynamically adjust the number of inference steps based on quality requirements:
Conditional Generation
Extend the step function to support conditional generation:
See Also
- BaseDiffusion - Base class documentation