The Liquid AI community has built an incredible collection of open-source projects showcasing LFMs in real-world applications. From image classification to translation, fine-tuning examples to browser-based deployments, these projects demonstrate the versatility and power of running AI models locally.

Image classification and vision

Image classification on edge

End-to-end tutorial covering fine-tuning and deployment for fast, accurate image classification using local VLMs

Food images fine-tuning

Fine-tune LFM models on food image datasets for specialized recognition tasks

Photo triage agent

Private photo library cleanup using LFM vision model to organize and categorize images

TranslatorLens

Offline translation camera for real-time text translation using vision models

Translation and language

LFM2-KoEn-Tuning

LFM2 1.2B fine-tuned for Korean-English translation, with performance optimizations

LFM-2.5 JP on Web

LFM2.5 1.2B parameter Japanese language model running locally in the browser with WebGPU, using Transformers.js and ONNX Runtime Web

Fine-tuning examples

Chess game with small LMs

End-to-end tutorial covering fine-tuning and deployment to build a chess game using Small Language Models

SFT + DPO fine-tuning

Teaching a 1.2B model to be a grumpy Italian chef using SFT and DPO fine-tuning with Unsloth

LFM2.5 mobile actions

LoRA fine-tuned LFM2.5-1.2B that translates natural language into Android OS function calls for on-device mobile action recognition
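To illustrate the pattern behind projects like this, here is a toy sketch of how an app can map a model's structured output to device actions: the fine-tuned model is assumed to emit a JSON function call, which the app parses and dispatches. The action names, argument schema, and dispatch table below are hypothetical, not the project's actual code.

```python
# Toy dispatcher for model-emitted function calls. The action names
# ("set_alarm", "send_sms") and their signatures are illustrative
# stand-ins for whatever OS functions the real app exposes.
import json

ACTIONS = {
    "set_alarm": lambda hour, minute: f"alarm set for {hour:02d}:{minute:02d}",
    "send_sms": lambda to, body: f"sms to {to}: {body}",
}

def dispatch(model_output: str) -> str:
    """Parse the model's JSON function call and invoke the matching action."""
    call = json.loads(model_output)
    return ACTIONS[call["name"]](**call["arguments"])

# What the fine-tuned model might emit for "wake me at 7:30":
result = dispatch('{"name": "set_alarm", "arguments": {"hour": 7, "minute": 30}}')
print(result)  # alarm set for 07:30
```

The fine-tuning step teaches the model to emit only calls that validate against this schema; everything after parsing is ordinary application code.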

Document and text processing

Private doc Q&A

On-device document Q&A with RAG and voice input for private information retrieval

Private summarizer

100% local text summarization with multi-language support for privacy-focused workflows

LFM-Scholar

Automated literature review agent for finding and citing papers in research workflows

Meeting intelligence CLI

CLI tool for meeting transcription and analysis with AI-powered insights

Browser and mobile deployments

LFM-2.5 Thinking on Web

LFM2.5 1.2B parameter reasoning language model running locally in the browser with WebGPU, using Transformers.js and ONNX Runtime Web

Chat with LEAP SDK

LEAP SDK integration for React Native to build mobile chat applications

Advanced architectures

Tiny-MoA

Mixture of Agents on CPU with LFM2.5 Brain (1.2B) for enhanced reasoning capabilities
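The Mixture-of-Agents control flow can be sketched independently of any particular model: several "proposer" agents each draft an answer, and an aggregator synthesizes the drafts into a final response. The stub lambdas below stand in for calls to a local model; the real project wires in actual LFM2.5 inference.

```python
# Toy Mixture-of-Agents loop. Proposers and the aggregator are
# illustrative stubs for local model calls.
def mixture_of_agents(prompt, proposers, aggregator):
    drafts = [agent(prompt) for agent in proposers]          # layer 1: independent drafts
    combined = prompt + "\n\nDrafts:\n" + "\n".join(drafts)  # collect drafts for synthesis
    return aggregator(combined)                              # layer 2: aggregate

proposers = [
    lambda p: "Draft A: the answer is 4.",
    lambda p: "Draft B: I also get 4.",
]
aggregator = lambda p: "Final: 4 (both drafts agree)."
answer = mixture_of_agents("What is 2 + 2?", proposers, aggregator)
print(answer)  # Final: 4 (both drafts agree).
```

Because each layer is just a batch of independent model calls, the whole loop runs comfortably on CPU with a small model like a 1.2B LFM.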

Share your project

Built something amazing with LFM? We’d love to feature it here! Check out our contributing guide to learn how to submit your project to the community showcase.

Join the community

Connect with other builders on Discord to share ideas, get help, and collaborate on projects
