Quick Start
Train your first model in minutes with a working MNIST example.
System Requirements
Supported GPUs, drivers, and platforms for Vulkan and Metal backends.
Concepts
Learn how computation graphs, e-graph optimization, and autodiff work together.
API Reference
Full reference for Graph, Session, Trainer, and all neural network layers.
Why Meganeura?
Portable
Runs on Linux, Windows, macOS, iOS, and Android via Vulkan and Metal — no CUDA required.
Zero compile time
No JIT warmup. The execution plan is compiled once at graph build time and runs instantly.
E-graph optimized
Equality saturation via egglog automatically fuses operation patterns such as SwiGLU, MatMul+Add, and RmsNorm into single kernels.
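To make the fusion idea concrete, here is a minimal sketch of a MatMul+Add rewrite on an expression tree. This is a single greedy rewrite in plain Python, not Meganeura's actual egglog rules or full equality saturation, and all node names are hypothetical:

```python
def fuse(node):
    """Recursively rewrite ("add", ("matmul", a, b), c)
    into a single fused node ("fused_matmul_add", a, b, c)."""
    if not isinstance(node, tuple):
        return node  # leaf: a tensor name
    node = tuple(fuse(child) for child in node)
    if (node[0] == "add"
            and isinstance(node[1], tuple)
            and node[1][0] == "matmul"):
        _, (_, a, b), c = node
        return ("fused_matmul_add", a, b, c)
    return node

expr = ("add", ("matmul", "x", "w"), "bias")
print(fuse(expr))  # ('fused_matmul_add', 'x', 'w', 'bias')
```

An e-graph engine differs from this sketch in that it keeps all equivalent forms simultaneously and picks the cheapest one at extraction time, rather than committing to the first rewrite that matches.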
Get started
Build a training session
Meganeura runs autodiff, e-graph optimization, and GPU compilation automatically.
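A pseudocode sketch of that flow, assuming the Graph, Session, and Trainer types named in the API reference above; the exact constructors and method names are not shown on this page and are guessed for illustration:

```
# Hypothetical flow -- Graph, Session, and Trainer are real types
# per the API reference, but these signatures are assumptions.
graph   = Graph()                    # declare tensors and ops
loss    = graph.build_model(inputs)  # define the forward pass
session = Session(graph)             # autodiff + e-graph opt + GPU compile happen here
trainer = Trainer(session)
for batch in data:
    trainer.step(batch)              # each step runs the precompiled execution plan
```

See the Quick Start for the complete, working MNIST example.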
Built-in model support
Meganeura ships with pre-built graph definitions for popular architectures. Load HuggingFace weights directly and run inference immediately.
SmolLM2
Decoder-only language model with GQA, RoPE, and SwiGLU FFN.
SmolVLM2
Vision-language model with cross-attention between image and text tokens.
Stable Diffusion UNet
Diffusion model UNet with Conv2d, GroupNorm, and cross-attention.
HuggingFace Integration
Load safetensors weights from any HuggingFace Hub repository.
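Meganeura's loader itself is not shown on this page, but as background, the safetensors container it reads is a simple, well-documented format: 8 bytes of little-endian u64 header length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor data. A minimal stdlib-only sketch of reading that header (file name and tensor name here are made up for the demo):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file.

    Per the safetensors format: the first 8 bytes are a little-endian
    u64 giving the JSON header's byte length; the JSON maps tensor
    names to dtype, shape, and [start, end) offsets into the data
    section that follows the header.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Write a tiny one-tensor file by hand to demonstrate the layout.
data = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)  # four float32 values
header = json.dumps({
    "w": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, len(data)]}
}).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(header)) + header + data)

print(read_safetensors_header("demo.safetensors")["w"]["shape"])  # [2, 2]
```

In practice the official `safetensors` and `huggingface_hub` Python packages handle downloading and parsing; the sketch above only illustrates why the format can be memory-mapped and loaded without deserialization overhead.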