
## What you’ll learn

In this example, you will learn how to:

- Set up local AI inference using llama.cpp to run Liquid models entirely on your machine, without requiring cloud services or API keys
- Build a file-monitoring system that automatically processes new files dropped into a directory
- Extract structured output from images using LFM2.5-VL-1.6B, a small vision-language model
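The file-monitoring idea can be sketched as a simple polling loop. The snippet below is a minimal sketch using only the standard library; the function names are hypothetical, and the actual application may well use OS-level file notifications instead of polling.

```python
import time
from pathlib import Path


def scan_new_files(directory, seen):
    """Return files in `directory` not yet in `seen`, and mark them as seen."""
    new = [p for p in sorted(Path(directory).iterdir()) if p.name not in seen]
    seen.update(p.name for p in new)
    return new


def watch(directory, handler, poll_interval=1.0):
    """Continuously poll `directory`, calling `handler` on each newly arrived file."""
    seen = set()
    while True:
        for path in scan_new_files(directory, seen):
            handler(path)  # e.g. run the vision model on an invoice image
        time.sleep(poll_interval)
```

Polling keeps the sketch portable; a production watcher would typically use inotify (Linux) or FSEvents (macOS) via a library such as watchdog.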
## Prerequisites

You will need:

- llama.cpp to serve the language models locally
- uv to manage Python dependencies and run the application efficiently

Installation instructions are available for macOS, Linux, and Windows.
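Once llama.cpp's `llama-server` is running, it exposes an OpenAI-compatible chat-completions endpoint that the application can call. The sketch below builds such a request for an invoice image; the endpoint path and default port 8080 are llama-server conventions, and the prompt text is illustrative, not the application's actual prompt.

```python
import base64
from pathlib import Path


def build_invoice_request(image_path,
                          prompt="Extract utility, amount, and currency as JSON."):
    """Build an OpenAI-style chat-completions payload with an inline image.

    The image is embedded as a base64 data URL in an `image_url` content part,
    which llama-server's multimodal endpoint accepts.
    """
    data = base64.b64encode(Path(image_path).read_bytes()).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{data}"}},
                ],
            }
        ],
    }

# The payload would then be POSTed to http://localhost:8080/v1/chat/completions
```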
## How to run it

### Watch mode

Run it as a background service that continuously monitors a directory and automatically parses invoice images as they land in the folder.

### Process mode

Process specific files or folders, then exit.

If you have make installed, you can run the application with the provided make targets.
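The two modes above could be exposed through a command-line interface along these lines. This is a sketch only: the subcommand and argument names are hypothetical and not the application's actual interface.

```python
import argparse


def build_parser():
    """CLI sketch with the two run modes: watch a directory, or process paths once."""
    parser = argparse.ArgumentParser(prog="invoice-parser")
    sub = parser.add_subparsers(dest="mode", required=True)

    watch = sub.add_parser("watch", help="monitor a directory for new invoices")
    watch.add_argument("directory", help="directory to watch")

    process = sub.add_parser("process",
                             help="parse the given files or folders, then exit")
    process.add_argument("paths", nargs="+", help="files or directories to parse")
    return parser
```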
## Results

You can run the tool with a sample of images under `invoices/`:
| File | Utility | Amount | Currency |
|---|---|---|---|
| water_australia.png | water | 68.46 | AUD |
| Sample-electric-Bill-2023.jpg | electricity | 28.32 | USD |
| castlewater1.png | water | 436.55 | GBP |
| british_gas.png | electricity | 81.31 | GBP |
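The columns in the table above correspond to a simple record per invoice. A minimal sketch of such a schema, and of parsing the model's JSON reply into it, is shown below; the field names are taken from the table, while the exact JSON shape of the model's response is an assumption.

```python
import json
from dataclasses import dataclass


@dataclass
class InvoiceRecord:
    """One extracted invoice, mirroring the result table's columns."""
    file: str
    utility: str
    amount: float
    currency: str


def parse_model_output(file_name, raw_json):
    """Parse the model's JSON reply into an InvoiceRecord."""
    data = json.loads(raw_json)
    return InvoiceRecord(
        file=file_name,
        utility=data["utility"],
        amount=float(data["amount"]),
        currency=data["currency"],
    )
```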