## What it does
Given a meeting transcript, the agent:

- Reads the transcript
- Identifies every action item (owner, due date, description)
- Looks up each owner in the local team directory
- Creates a task record for each action item (`data/tasks.json`)
- Saves a structured markdown summary (`data/summaries/`)
- Drafts and “sends” a follow-up email (`data/sent_emails.log`)
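The task records land in `data/tasks.json` as plain JSON. As a rough sketch of that step (the field names and append-style layout here are assumptions, not the agent's actual schema), one action item might be recorded like this:

```python
import json
from pathlib import Path

# Hypothetical task record; owner/due date/description are the three
# fields the agent extracts, but the exact key names are an assumption.
task = {
    "owner": "Alice",
    "due_date": "2025-07-04",
    "description": "Send the Q3 roadmap to the design team",
}

tasks_file = Path("data/tasks.json")
tasks_file.parent.mkdir(parents=True, exist_ok=True)

# Append to the existing task list, creating the file on first run.
tasks = json.loads(tasks_file.read_text()) if tasks_file.exists() else []
tasks.append(task)
tasks_file.write_text(json.dumps(tasks, indent=2))
```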
## Setup
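The original setup commands are not preserved here. A typical flow for a llama.cpp-backed Python agent looks roughly like the following (the `requirements.txt` file and the model-download step are assumptions about this repo; the model name comes from the Benchmark section):

```shell
# Create an isolated environment and install dependencies
# (requirements.txt is assumed, not confirmed by this README).
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Fetch the Q4_0 quant of the benchmark model.
huggingface-cli download LiquidAI/LFM2-24B-A2B-GGUF --include "*Q4_0*"
```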
## Usage
### Interactive mode
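Launching with no arguments drops you into an interactive session. The entry-point name below is a guess, not taken from the repo:

```shell
python main.py
```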
### Non-interactive mode
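A single prompt can be passed on the command line for scripted runs. Both the script name and the `--prompt` flag here are assumptions:

```shell
python main.py --prompt "Process data/sample_transcript.txt and create tasks"
```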
### With an already-running llama-server
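If llama-server is already up, point the agent at it via `MIA_LOCAL_BASE_URL` (documented under Configuration). The entry-point name is a guess; the server flags are standard llama.cpp options:

```shell
# Start llama-server separately (or reuse an existing instance)...
llama-server -m model.gguf --ctx-size 32768 --port 8080 &

# ...then direct the agent at its OpenAI-compatible endpoint.
MIA_LOCAL_BASE_URL=http://localhost:8080/v1 python main.py
```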
## Benchmark
A 10-task suite covering easy → hard agentic scenarios, tested against `LiquidAI/LFM2-24B-A2B-GGUF:Q4_0`:
| # | Difficulty | Task | Pass | Time |
|---|---|---|---|---|
| 1 | easy | Read transcript and list attendees | ✓ | 6.7s |
| 2 | easy | Look up one team member | ✓ | 5.1s |
| 3 | easy | Create one explicit task | ✓ | 6.1s |
| 4 | medium | Look up three team members | ✓ | 26.7s |
| 5 | medium | Create three tasks from a given list | ✓ | 22.4s |
| 6 | medium | Read transcript and save a structured summary | ✓ | 19.0s |
| 7 | hard | Full pipeline: tasks + summary + email | ✓ | 80.5s |
| 8 | hard | Detect and flag unassigned action item | ✓ | 103.6s |
| 9 | hard | Default due dates for items without explicit deadlines | ✓ | 47.8s |
| 10 | hard | Full pipeline: custom filename and targeted email recipients | ✓ | 51.7s |
### Run the benchmark
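The original invocation is not preserved; a benchmark suite in this layout is typically driven by a single script (the name `benchmark.py` is an assumption):

```shell
python benchmark.py
```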
## Demo outputs
After running the agent on `data/sample_transcript.txt`:
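The agent leaves its artifacts in the paths listed under "What it does"; the resulting layout is roughly:

```
data/
├── tasks.json          # one record per action item
├── summaries/          # structured markdown summary
└── sent_emails.log     # drafted follow-up email
```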
## Configuration
| Environment variable | Default | Description |
|---|---|---|
| `MIA_LOCAL_BASE_URL` | `http://localhost:8080/v1` | llama.cpp server URL |
| `MIA_LOCAL_MODEL` | `local` | Model name or HuggingFace path |
| `MIA_LOCAL_CTX_SIZE` | `32768` | Context window size |
| `MIA_LOCAL_GPU_LAYERS` | `99` | GPU layers to offload (0 = CPU) |
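Each variable can be overridden in the shell before launching the agent; for example, to run against a remote llama.cpp server on CPU with a smaller context (the host address is illustrative):

```shell
export MIA_LOCAL_BASE_URL=http://192.168.1.50:8080/v1
export MIA_LOCAL_GPU_LAYERS=0
export MIA_LOCAL_CTX_SIZE=16384
```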