
Introduction

Welcome to the LFM Cookbook — your comprehensive resource for building on-device AI applications with Liquid Foundation Models (LFMs). This cookbook provides examples, tutorials, and applications to help you leverage our open-weight models and the LEAP SDK across laptops, mobile devices, and edge computing platforms.

What is the LFM Cookbook?

The LFM Cookbook is a collection of practical, ready-to-run examples and comprehensive guides that demonstrate how to build local AI applications without relying on cloud infrastructure. Whether you’re developing for desktop, mobile, or embedded systems, you’ll find production-ready code and patterns to accelerate your development.

Key features

Local AI apps

Ready-to-run applications with agentic workflows and real-time inference running entirely on local devices

Mobile deployment

Native iOS and Android examples using the LEAP Edge SDK for seamless on-device model deployment

Fine-tuning guides

Colab notebooks for supervised fine-tuning, reinforcement learning, and continued pre-training

Vision and audio

Work with multimodal models including vision-language and audio models for rich applications

Edge devices

Deploy efficient models on resource-constrained devices with optimized inference

Tool calling

Build agentic applications with function calling and structured output generation
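At its core, tool calling works by having the model emit a structured (usually JSON) description of a function to invoke, which your application parses and dispatches to real code. The sketch below illustrates that dispatch step in plain Python; the tool name, JSON shape, and helper functions are illustrative assumptions, not the LEAP SDK API.

```python
import json

# Hypothetical tool: a stand-in for a real API call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Registry mapping tool names the model may emit to Python functions.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call such as
    {"name": "get_weather", "arguments": {"city": "Boston"}}
    and invoke the matching registered function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Boston"}}'))
# -> Sunny in Boston
```

In a real application, the model's output would be validated against each tool's schema before dispatch, and the function's return value would be fed back to the model as the next turn of the conversation.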

What you’ll find in this cookbook

Local AI apps

Discover production-ready applications that showcase the power of on-device AI:
  • Invoice Parser: Extract structured data from invoice images using LFM2-VL-3B
  • Audio Transcription CLI: Real-time speech-to-text with LFM2-Audio-1.5B
  • Flight Search Assistant: Book plane tickets using LFM2.5-1.2B-Thinking with tool calling
  • Audio Car Cockpit: Voice-controlled car cockpit combining audio and tool models
  • WebGPU Demos: Run models entirely in your browser for audio and vision tasks
  • LocalCowork: On-device AI agent for file operations, security scanning, and OCR

Mobile deployment

Native examples for iOS (Swift) and Android (Kotlin) that make small language model deployment as easy as calling a cloud API:
  • Chat applications with streaming and persistent history
  • Audio input/output for voice interactions
  • Structured output generation for recipes, slogans, and more
  • Vision-language model integration
  • AI agent functionality with the Koog framework

Fine-tuning

Comprehensive notebooks covering multiple fine-tuning approaches:
  • Supervised Fine-Tuning (SFT): Customize models with your data using Unsloth or TRL
  • Reinforcement Learning: Train reasoning models with GRPO for verifiable tasks
  • Continued Pre-Training: Adapt models to specific languages or domains
  • Vision-Language Models: Fine-tune VLMs on custom image-text datasets

Community projects

Explore real-world projects built by the community:
  • Image classification on edge devices
  • Chess games with small language models
  • Offline translation cameras
  • Meeting intelligence and document Q&A
  • Photo triage agents and literature review tools

Get started

Getting started

Set up your environment and run your first example

Local AI apps

Explore ready-to-run applications

Mobile deployment

Deploy models on iOS and Android

Fine-tuning

Customize models with your own data

Join the community

Discord

Join our community for support, live events, and discussions

GitHub

Contribute examples and explore the source code

Additional resources

This cookbook is constantly evolving with new examples and tutorials. Check back regularly for updates, and consider contributing your own projects to the community!
