Documentation Index
Fetch the complete documentation index at: https://mintlify.com/Neumenon/glyph/llms.txt
Use this file to discover all available pages before exploring further.
What is GLYPH?
GLYPH is a token-efficient serialization format designed specifically for AI agents. It reduces token usage by 40-60% compared to JSON while maintaining human readability and full JSON interoperability.

Why tokens matter: Every token consumes your LLM context window and costs money. With GLYPH, you can fit more data in prompts, reduce costs, and enable longer conversations.
Quick Example
Written in GLYPH, the same record sheds most of JSON’s quoting and punctuation.

Why GLYPH?
JSON wastes tokens on redundant syntax. Every `"`, `:`, and `,` consumes your context window. GLYPH eliminates the waste while remaining human-readable.
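To make that concrete, here is a toy encoder that applies the quick-reference rules listed further down this page (`=` instead of `:`, bare strings, `t`/`f` booleans, `∅` for null). It is a sketch of the idea, not the real GLYPH codec, and the newline field delimiter is an assumption:

```python
import json

def toyglyph(obj):
    """Toy GLYPH-like encoder for flat dicts, applying the quick-reference
    rules: '=' instead of ':', bare strings, t/f booleans, and the null
    symbol. NOT the real GLYPH codec; the newline field delimiter is an
    assumption made for this sketch."""
    def render(v):
        if v is True:
            return "t"
        if v is False:
            return "f"
        if v is None:
            return "\u2205"  # the ∅ null from the quick reference
        return str(v)
    return "\n".join(f"{k}={render(v)}" for k, v in obj.items())

msg = {"role": "user", "content": "Summarize the report", "cached": False}
as_json = json.dumps(msg)
as_glyph = toyglyph(msg)
print(as_json)
print(as_glyph)
print(f"{len(as_json)} chars vs {len(as_glyph)} chars")
```

Even on a three-field message, dropping the quotes, braces, and separators removes a noticeable fraction of the characters, which is where the token savings come from.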
1. Massive Token Savings
40-60% fewer tokens than JSON on real-world data:

| Data Type | JSON Tokens | GLYPH Tokens | Savings |
|---|---|---|---|
| LLM message | 10 | 6 | 40% |
| Tool call | 26 | 15 | 42% |
| Conversation (25 msgs) | 264 | 134 | 49% |
| Search results (25 rows) | 456 | 220 | 52% |
| Tool results (50 items) | 562 | 214 | 62% |
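The biggest savings in the table come from homogeneous lists, which GLYPH's auto-tabular mode collapses by emitting the shared keys once. A toy Python sketch of that effect, using character counts as a rough stand-in for tokens (the table syntax shown is illustrative, not real GLYPH):

```python
import json

# 25 homogeneous rows, like a batch of search results.
rows = [{"id": i, "score": round(0.9 - i * 0.01, 2), "title": f"doc{i}"}
        for i in range(25)]

# JSON repeats every key (plus quotes, colons, braces) in every row.
json_chars = len(json.dumps(rows))

# Toy auto-tabular rendering (illustrative, not real GLYPH syntax):
# the shared keys appear once as a header, then one value row per item.
header = " ".join(rows[0])
body = "\n".join(" ".join(str(v) for v in r.values()) for r in rows)
tabular_chars = len(header) + 1 + len(body)

print(json_chars, "chars as JSON vs", tabular_chars, "as a table")
```

The per-row cost falls from keys-plus-values to values only, so the saving grows with row count, matching the pattern in the table above.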
2. Streaming Validation
Detect errors as tokens stream, not after generation completes.

Real impact: Catch bad tool names, missing params, and constraint violations as they appear. Save tokens and time, and reduce failures.
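A minimal sketch of the streaming idea in Python. The `stream_validate` helper, the newline-delimited `k=v` wire form, and the tool registry are all hypothetical, invented for this illustration; the real GLYPH validator will differ:

```python
# Sketch of streaming validation (hypothetical, not the real GLYPH API):
# each field is checked the moment its line completes, instead of
# waiting for the full payload to finish generating.
KNOWN_TOOLS = {"search", "fetch_page", "summarize"}

def stream_validate(chunks):
    """Yield (field, value) per completed line; raise on a bad tool name
    as soon as that line finishes, before later tokens are read."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            field, _, value = line.partition("=")
            if field == "tool" and value not in KNOWN_TOOLS:
                raise ValueError(f"unknown tool {value!r}")
            yield field, value

# The bad tool name is caught after the first line, so the remaining
# (expensive) tokens never need to be generated or parsed.
chunks = ["tool=sea", "rch_web\n", "query=glyph format\n"]
try:
    for field, value in stream_validate(chunks):
        print("ok:", field, value)
except ValueError as e:
    print("rejected early:", e)
```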
3. Auto-Tabular Mode
Homogeneous lists compress to tables automatically: the shared keys are emitted once as a header, and each item becomes a row.

4. State Fingerprinting
SHA-256 hashing prevents concurrent modification conflicts: an agent can detect that shared state changed underneath it before writing.

5. JSON Interoperability
Drop-in replacement with bidirectional conversion: any JSON document converts to GLYPH and back.

Token Savings Breakdown
GLYPH achieves token savings through multiple techniques at once: `=` instead of quoted keys and colons, bare strings, single-character `t`/`f` booleans, and auto-tabular headers for homogeneous lists.

Example: Tool Definition
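As a sketch of the comparison, here is a small tool definition rendered with a hypothetical `toyglyph` function that applies GLYPH-style rules (quoted keys and colons become `=`, strings go bare, nesting by indentation); this is illustrative, not the real codec:

```python
import json

# A small tool definition, as it might appear in a system prompt.
tool = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "params": {"city": "string", "units": "string"},
}

def toyglyph(obj, indent=0):
    """Toy GLYPH-like rendering (illustrative, not the real codec):
    quoted keys and ': ' become '=', strings go bare, and nested
    objects are expressed by indentation."""
    pad = " " * indent
    lines = []
    for k, v in obj.items():
        if isinstance(v, dict):
            lines.append(f"{pad}{k}")
            lines.append(toyglyph(v, indent + 2))
        else:
            lines.append(f"{pad}{k}={v}")
    return "\n".join(lines)

print(json.dumps(tool, indent=2))
print(toyglyph(tool))
print(len(json.dumps(tool)), "chars vs", len(toyglyph(tool)))
```

Every removed quote, brace, and separator is a token the tool definition no longer spends in the system prompt.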
Key Features
Format Basics
Format Quick Reference
= not : · bare strings · t/f bools · ∅ null
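Those four rules are enough to write a toy decoder. `toyparse` below is hypothetical and illustrative only, assuming newline-delimited `k=v` fields; it is not the real GLYPH grammar:

```python
def toyparse(text):
    """Toy decoder for the quick-reference rules (illustrative only):
    k=v lines, bare strings, t/f booleans, and ∅ for null. Integers
    are recovered when the value looks numeric."""
    SPECIAL = {"t": True, "f": False, "\u2205": None}
    out = {}
    for line in text.splitlines():
        k, _, v = line.partition("=")
        if v in SPECIAL:
            out[k] = SPECIAL[v]
        elif v.lstrip("-").isdigit():
            out[k] = int(v)
        else:
            out[k] = v  # bare string
    return out

doc = "name=ada\nretries=3\nactive=t\nnickname=\u2205"
print(toyparse(doc))  # {'name': 'ada', 'retries': 3, 'active': True, 'nickname': None}
```

Note the deliberate simplification: a bare string that happens to be `t` would be read as a boolean here, which a real grammar must disambiguate.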
Human Readable
Unlike binary formats (Protocol Buffers, MessagePack), GLYPH remains readable as plain text.

Streaming Compatible
Validate structure as it’s being generated.

When to Use GLYPH
✅ Use GLYPH
- LLMs reading structured data: Tool responses, state, batch data
- Streaming validation needed: Real-time error detection
- Token budgets are tight: System prompts, conversation history
- Multi-agent systems: State management and message passing
- Large datasets: Search results, embeddings, logs
⚠️ Use JSON
- LLMs generating output: They’re trained to emit JSON
- Existing JSON-only integrations: External APIs that require JSON
- Browser/web contexts: Native JSON support in JavaScript
💡 Best Practice: Hybrid Approach
LLMs generate JSON (what they know) → serialize to GLYPH for storage/transmission.

Use Cases
Tool Calling
Define tools in GLYPH (40% fewer tokens in system prompts) and validate during streaming.

Agent State
Store conversation history with 49% fewer tokens.

Batch Data
Auto-tabular mode for embeddings, search results, logs.

Performance
Codec Speed (Go implementation):
- Canonicalization: 2M+ ops/sec
- Parsing: 1.5M+ ops/sec
- Fingerprinting: 500K+ ops/sec
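The fingerprinting figure refers to feature 4 above: a SHA-256 hash over the canonical encoding. A minimal Python sketch of the concept, using a key-sorted JSON rendering as a stand-in for GLYPH canonicalization (an assumption for illustration):

```python
import hashlib
import json

def fingerprint(state):
    """Concept sketch (not the real GLYPH codec): hash a canonical,
    key-sorted encoding so the same logical state always yields the
    same fingerprint, regardless of key order."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

state = {"task": "triage", "step": 3}
fp = fingerprint(state)

# An agent re-reads shared state before writing; a changed fingerprint
# means another agent modified it, so the write must be retried.
assert fingerprint({"step": 3, "task": "triage"}) == fp  # key order irrelevant
assert fingerprint({"task": "triage", "step": 4}) != fp  # real change detected
print(fp)
```

Canonicalization is the expensive step here, which is why the codec benchmarks above track it separately from hashing.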
Next Steps
Quickstart
Get working code in 5 minutes
Installation
Install for Python, Go, JavaScript, Rust, or C
API Reference
Language-specific API documentation
Agent Patterns
LLM integration recipes
Community
- GitHub: github.com/Neumenon/glyph
- Issues: Report bugs or request features
- Discussions: Ask questions