Search Your Knowledge Base Locally
QMD combines BM25 full-text search, vector semantic search, and LLM reranking—all running on-device with GGUF models. Index your markdown notes, documentation, and knowledge bases with zero external dependencies.
Quick Start
Get QMD running in minutes with these simple steps
Generate embeddings
Downloaded GGUF models are cached under ~/.cache/qmd/models/
Explore by topic
Learn about QMD’s core features and capabilities
Collections
Search modes
Query syntax
Context management
AI agents
MCP server
Key features
Everything you need for powerful local search
Hybrid search pipeline
Combines BM25 full-text search with vector semantic search, then applies LLM reranking for optimal results
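One common way to merge a lexical and a semantic ranking before reranking is reciprocal rank fusion (RRF). This is an illustrative sketch of that idea, not QMD's actual internals; the docids and the constant k=60 are assumptions.

```python
# Illustrative RRF fusion of two rankings (not QMD's implementation).
def rrf_fuse(ranked_lists, k=60):
    """Each list is docids ordered best-first; each contributes 1/(k+rank)."""
    scores = {}
    for ranking in ranked_lists:
        for rank, docid in enumerate(ranking, start=1):
            scores[docid] = scores.get(docid, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["a1f3", "b2c9", "d4e7"]     # BM25 full-text ranking
vector_hits = ["b2c9", "c3d1", "a1f3"]   # vector semantic ranking
candidates = rrf_fuse([bm25_hits, vector_hits])
# Docids present in both lists float to the top; an LLM reranker
# would then re-score the top candidates to produce the final order.
```

Documents that both retrievers agree on rank highest, so the reranker only has to order a small, high-quality candidate set.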
100% local execution
All models run on-device via node-llama-cpp. No API keys, no external dependencies, no data leaving your machine
Smart chunking
Markdown-aware boundary detection keeps sections intact. 900 tokens per chunk with 15% overlap
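The 900-token window with 15% overlap can be sketched as a sliding window over a token stream. This is a minimal sketch only: integers stand in for real tokenizer tokens, and it omits the markdown-aware boundary detection described above.

```python
# Minimal sliding-window chunker: 900 tokens per chunk, 15% overlap.
def chunk(tokens, size=900, overlap_frac=0.15):
    step = int(size * (1 - overlap_frac))  # advance 765 tokens per chunk
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

chunks = chunk(list(range(2000)))
# Adjacent chunks share 135 tokens (15% of 900), so a sentence that
# straddles a boundary still appears whole in at least one chunk.
```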
Document IDs (docid)
Every document gets a short hash ID for quick reference. Use them in search results and retrieval commands
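A short hash ID can be derived by truncating a cryptographic digest of some stable document key. The hash input (the file path) and the 8-character length below are assumptions for illustration, not QMD's actual scheme.

```python
import hashlib

# Hypothetical docid: truncated SHA-256 of the document's path.
def docid(path: str, length: int = 8) -> str:
    return hashlib.sha256(path.encode("utf-8")).hexdigest()[:length]

short_id = docid("notes/search.md")  # same path always yields the same id
```

A stable ID means search results and later retrieval commands can refer to the same document without repeating its full path.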
Query document format
Write multi-line queries with typed sub-queries (lex, vec, hyde) for precise control over search strategy
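A parser for such a query document might look like the sketch below. The concrete "type: text" line syntax is an assumption for illustration; only the three sub-query types (lex for BM25, vec for embeddings, hyde for hypothetical-document expansion) come from the description above.

```python
# Hypothetical parser for a typed multi-line query document.
def parse_query_doc(text):
    subqueries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        kind, _, body = line.partition(":")
        if kind.strip() in {"lex", "vec", "hyde"} and body.strip():
            subqueries.append((kind.strip(), body.strip()))
    return subqueries

doc = """
lex: error handling middleware
vec: how requests flow through the pipeline
hyde: Middleware wraps each handler and catches thrown errors.
"""
subqs = parse_query_doc(doc)  # three typed sub-queries
```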
Multiple output formats
Export results as JSON, CSV, Markdown, XML, or plain file lists for integration with AI agents and scripts
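Emitting one result set in several formats is straightforward with standard-library serializers. The field names below are illustrative, not QMD's actual output schema.

```python
import csv
import io
import json

results = [
    {"docid": "a1f3", "path": "notes/search.md", "score": 0.92},
    {"docid": "b2c9", "path": "docs/index.md", "score": 0.87},
]

as_json = json.dumps(results, indent=2)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["docid", "path", "score"])
writer.writeheader()
writer.writerows(results)
as_csv = buf.getvalue()

files_only = "\n".join(r["path"] for r in results)  # plain file list
```

A plain file list pipes cleanly into shell tools, while JSON suits AI agents that consume structured output.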
Ready to get started?
Install QMD and start searching your knowledge base in minutes
View Installation Guide