Getting started
This guide will help you set up your development environment and run your first local AI application with Liquid Foundation Models (LFMs).
What you need
Before you begin, ensure your system meets these requirements:
System requirements
- Operating System: Linux, macOS, or Windows (WSL recommended)
- RAM: Minimum 8GB (16GB or more recommended for larger models)
- Storage: At least 5GB free space for models and dependencies
- Python: Version 3.8 or higher
Model size determines memory requirements. Smaller models like LFM2-350M can run on 4GB RAM, while larger models like LFM2-24B require 32GB or more.
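As a rough rule of thumb, a quantized model's weights occupy about (parameter count x bits per weight) / 8 bytes on disk, with runtime use adding roughly 20-30% on top for activations and KV cache. A quick sketch of that arithmetic (the overhead figure is an approximation):

```shell
# Rough memory estimate for a quantized model:
# file size in MB ~= parameters (millions) * bits per weight / 8.
# Runtime use adds roughly 20-30% for activations and KV cache.
estimate_mb() {
  params_m=$1  # parameter count in millions
  bits=$2      # bits per weight (4 for Q4 quantization, 8 for Q8, 16 for FP16)
  echo $(( params_m * bits / 8 ))
}
estimate_mb 350 8     # LFM2-350M at 8-bit: prints 350
estimate_mb 24000 4   # a 24B-parameter model at 4-bit: prints 12000
```

This is why a 4-bit quantization of a given model needs roughly half the memory of an 8-bit one.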
Common tools
Depending on which examples you want to run, you may need:
- llama.cpp: For efficient CPU and GPU inference
- uv: Fast Python package installer and environment manager
- Git: For cloning the repository
- Docker: For containerized examples (optional)
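A quick way to see which of these are already on your PATH (the binary names below are the usual ones; llama.cpp installs several binaries, `llama-cli` among them):

```shell
# Report which common tools are already installed.
for tool in git python3 uv docker llama-cli; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: missing"
  fi
done
```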
Clone the repository
Install common dependencies
Many examples use uv for fast dependency management. Install it using the official installer from astral.sh.
Run your first example
Let’s run the Audio Transcription CLI to see LFMs in action:
Download the model
Follow the instructions in the example’s README to download the LFM2-Audio-1.5B model in GGUF format.
Model download sizes vary from hundreds of MB to several GB. Ensure you have a stable internet connection and sufficient storage.
Explore more examples
Now that you’ve run your first example, explore more use cases:
Invoice parser
Extract structured data from invoices using vision models
Flight search assistant
Build an AI agent with tool calling capabilities
WebGPU demos
Run models entirely in your browser without installation
Mobile apps
Deploy models on iOS and Android devices
Understanding model formats
LFMs are available in different formats depending on your deployment target:
- GGUF: Optimized format for llama.cpp, ideal for CPU/GPU inference
- ONNX: Cross-platform format for mobile and web deployment
- Hugging Face: Standard format for Python-based inference and fine-tuning
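For instance, a GGUF model runs directly with llama.cpp's `llama-cli`. The model filename below is illustrative, and the guard keeps the command from failing when llama.cpp or the model is absent:

```shell
# Run a GGUF model with llama.cpp, if it is installed.
# The model path is illustrative.
MODEL=./models/LFM2-350M-Q4_K_M.gguf
if command -v llama-cli >/dev/null 2>&1 && [ -f "$MODEL" ]; then
  llama-cli -m "$MODEL" -p "Say hello in one sentence." -n 32
else
  echo "skipping: install llama.cpp and download a GGUF model first"
fi
```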
Common workflows
For local desktop applications
- Choose an example from the examples/ directory
- Follow the example’s README for setup
- Download the appropriate GGUF model
- Run the application with your local model
For mobile deployment
- Set up the LEAP Edge SDK
- Choose an Android (Kotlin) or iOS (Swift) example
- Follow the platform-specific setup instructions
- Build and deploy to your device
For fine-tuning
- Navigate to finetuning/notebooks/
- Open a notebook in Google Colab
- Prepare your dataset in the required format
- Run the notebook to fine-tune your model
- Export and deploy your custom model
Next steps
Explore local AI apps
Check out the Local AI Apps section for production-ready examples with detailed documentation.
Learn about mobile deployment
Dive into Mobile Deployment to build native iOS and Android applications.
Customize models
Visit the Fine-Tuning section to learn how to train models on your own data.
Join the community
Connect with other developers on Discord to share projects and get help.
Get help
If you encounter issues:
- Check the example’s README for troubleshooting tips
- Search existing issues on GitHub
- Ask questions in the #help channel on Discord
- Review the Liquid AI Documentation for API references