A local AI agent that reads meeting transcripts and turns them into structured outputs: action items, a markdown summary, and a follow-up email. Runs entirely on your hardware. Your personal or private data is not shared with any model provider.

What it does

Given a meeting transcript, the agent:
  1. Reads the transcript
  2. Identifies every action item (owner, due date, description)
  3. Looks up each owner in the local team directory
  4. Creates a task record for each action item (data/tasks.json)
  5. Saves a structured markdown summary (data/summaries/)
  6. Drafts and “sends” a follow-up email (data/sent_emails.log)
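Each action item extracted in step 2 ends up as one record in data/tasks.json. A sketch of what such a record might look like — the field names here are illustrative assumptions, not the agent's guaranteed schema:

```python
import json

# Hypothetical task record mirroring the action-item fields above
# (owner, due date, description). The exact schema the agent writes
# may differ; this is only an illustrative sketch.
task = {
    "description": "Send the Q3 roadmap to the design team",
    "owner": "Priya",
    "due_date": "2025-07-18",
    "source": "sample_transcript.txt",
}

# data/tasks.json would then hold a list of such records
print(json.dumps([task], indent=2))
```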

Setup

1. Install llama.cpp (provides llama-server):

   brew install llama.cpp

2. Clone the repository:

   git clone https://github.com/Liquid4All/cookbook.git
   cd cookbook/examples/meeting-intelligence-agent

3. Install Python dependencies:

   uv sync

Usage

Interactive mode

uv run mia --model LiquidAI/LFM2-24B-A2B-GGUF:Q4_0
> Process the meeting transcript in data/sample_transcript.txt

Non-interactive mode

uv run mia --model LiquidAI/LFM2-24B-A2B-GGUF:Q4_0 -p "Process data/sample_transcript.txt and save the summary as sprint-42.md"

With an already-running llama-server

# Start the server (once)
llama-server \
  --port 8080 \
  --ctx-size 32768 \
  --n-gpu-layers 99 \
  --flash-attn on \
  --jinja \
  --temp 0.1 \
  --top-k 50 \
  --repeat-penalty 1.05 \
  -hf LiquidAI/LFM2-24B-A2B-GGUF:Q4_0

# Then run the agent (server is reused across runs)
uv run mia
> Process the meeting transcript in data/sample_transcript.txt
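llama-server exposes an OpenAI-compatible endpoint at /v1/chat/completions, which is why the agent can reuse a server that is already running. A minimal sketch of the request body a client would POST to http://localhost:8080/v1/chat/completions — the "local" model name matches MIA_LOCAL_MODEL's default, and the temperature mirrors the --temp flag above:

```python
import json

# Request body for llama-server's OpenAI-compatible chat endpoint.
# Any OpenAI-compatible client pointed at http://localhost:8080/v1
# can send this; nothing here is specific to this agent.
payload = {
    "model": "local",
    "temperature": 0.1,
    "messages": [
        {
            "role": "user",
            "content": "Process the meeting transcript in data/sample_transcript.txt",
        },
    ],
}

print(json.dumps(payload, indent=2))
```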

Benchmark

10-task suite covering easy → hard agentic scenarios, tested against LiquidAI/LFM2-24B-A2B-GGUF:Q4_0:
| # | Difficulty | Task | Pass | Time |
|---|------------|------|------|------|
| 1 | easy | Read transcript and list attendees | ✓ | 6.7s |
| 2 | easy | Look up one team member | ✓ | 5.1s |
| 3 | easy | Create one explicit task | ✓ | 6.1s |
| 4 | medium | Look up three team members | ✓ | 26.7s |
| 5 | medium | Create three tasks from a given list | ✓ | 22.4s |
| 6 | medium | Read transcript and save a structured summary | ✓ | 19.0s |
| 7 | hard | Full pipeline: tasks + summary + email | ✓ | 80.5s |
| 8 | hard | Detect and flag unassigned action item | ✓ | 103.6s |
| 9 | hard | Default due dates for items without explicit deadlines | ✓ | 47.8s |
| 10 | hard | Full pipeline: custom filename and targeted email recipients | ✓ | 51.7s |
Score: 10/10

Run the benchmark

uv run benchmark/run.py --model LiquidAI/LFM2-24B-A2B-GGUF:Q4_0

Demo outputs

After running the agent on data/sample_transcript.txt:
cat data/tasks.json          # structured task records
cat data/summaries/*.md      # markdown meeting summary
cat data/sent_emails.log     # follow-up email log
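The task records can also be inspected programmatically, e.g. to group action items by owner. The sample records below stand in for the contents of data/tasks.json; their field names are assumptions about the schema, so adjust to what your run actually produced:

```python
import json
from collections import defaultdict

# Sample records standing in for data/tasks.json; field names are
# assumptions, not the agent's guaranteed schema.
tasks = [
    {"owner": "Priya", "description": "Send the Q3 roadmap"},
    {"owner": "Marcus", "description": "File the infra ticket"},
    {"owner": "Priya", "description": "Book the retro room"},
]

# Group action-item descriptions under each owner
by_owner = defaultdict(list)
for task in tasks:
    by_owner[task["owner"]].append(task["description"])

for owner, items in sorted(by_owner.items()):
    print(f"{owner}: {len(items)} task(s)")
```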

Configuration

| Environment variable | Default | Description |
|----------------------|---------|-------------|
| MIA_LOCAL_BASE_URL | http://localhost:8080/v1 | llama.cpp server URL |
| MIA_LOCAL_MODEL | local | Model name or HuggingFace path |
| MIA_LOCAL_CTX_SIZE | 32768 | Context window size |
| MIA_LOCAL_GPU_LAYERS | 99 | GPU layers to offload (0 = CPU) |
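For reference, a client could resolve these settings by reading each variable and falling back to the documented default. This is only a sketch of that pattern, not the agent's actual configuration code:

```python
import os

# Resolve each MIA_LOCAL_* variable, falling back to the defaults
# listed in the table above.
config = {
    "base_url": os.environ.get("MIA_LOCAL_BASE_URL", "http://localhost:8080/v1"),
    "model": os.environ.get("MIA_LOCAL_MODEL", "local"),
    "ctx_size": int(os.environ.get("MIA_LOCAL_CTX_SIZE", "32768")),
    "gpu_layers": int(os.environ.get("MIA_LOCAL_GPU_LAYERS", "99")),
}

print(config)
```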

Source code

View the complete source code on GitHub.
