## Configuration
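The original configuration snippet is not shown here. As a minimal sketch, assuming the official SDK reads its API key from an environment variable (the variable name `VOYAGE_API_KEY` is an assumption):

```shell
# Assumed: the SDK picks up the key from this environment variable.
export VOYAGE_API_KEY="your-api-key"
```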
## Basic Usage
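A minimal sketch of an embedding call, assuming the official `voyageai` Python SDK (`pip install voyageai`); the SDK is imported inside the function so the sketch loads even without the package installed:

```python
def embed_texts(texts, model="voyage-3"):
    """Embed a list of texts and return one vector per text (a sketch)."""
    import voyageai  # assumed official SDK: pip install voyageai
    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    result = vo.embed(texts, model=model)
    return result.embeddings  # list of float vectors, one per input text

# vectors = embed_texts(["The quick brown fox", "jumps over the lazy dog"])
```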
VoyageAI specializes in generating high-quality embeddings for semantic search and retrieval.

## Provider-Specific Options
### Input Type
By default, VoyageAI generates general-purpose vectors. You can, however, tailor the vectors to the task they are intended for: search (“query”) or retrieval (“document”).

#### For Search / Querying
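A sketch of a query-side embedding, assuming the official `voyageai` Python SDK (imported lazily inside the function):

```python
def embed_query(text, model="voyage-3"):
    """Embed a search query; input_type='query' tailors the vector for search."""
    import voyageai  # assumed official SDK
    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    return vo.embed([text], model=model, input_type="query").embeddings[0]
```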
#### For Document Retrieval
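The corpus-side counterpart, again a sketch assuming the official `voyageai` Python SDK:

```python
def embed_documents(docs, model="voyage-3"):
    """Embed corpus texts; input_type='document' tailors vectors for retrieval."""
    import voyageai  # assumed official SDK
    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    return vo.embed(docs, model=model, input_type="document").embeddings
```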
### Truncation
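A sketch of the stricter behavior described in this section, assuming the official `voyageai` Python SDK's `truncation` flag:

```python
def embed_strict(texts, model="voyage-3"):
    """Raise an error for over-length inputs instead of silently truncating."""
    import voyageai  # assumed official SDK
    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    # truncation=False: error out on inputs over the context length
    return vo.embed(texts, model=model, truncation=False).embeddings
```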
By default, VoyageAI truncates inputs that exceed the context length. You can force it to throw an error instead by setting `truncation` to `false`.

## Use Cases
### Semantic Search
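A sketch of the search step: the similarity ranking below is plain Python, while the commented lines show where the (assumed) `voyageai` SDK would supply the vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_by_similarity(query_vec, doc_vecs):
    """Return document indices sorted from most to least similar to the query."""
    scores = [(cosine_similarity(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# With the (assumed) voyageai SDK, the vectors would come from, e.g.:
#   vo = voyageai.Client()
#   query_vec = vo.embed([query], model="voyage-3", input_type="query").embeddings[0]
#   doc_vecs  = vo.embed(docs, model="voyage-3", input_type="document").embeddings
```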
VoyageAI embeddings excel at semantic search applications.

### Batch Processing
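A sketch of batched embedding, assuming the official `voyageai` Python SDK and an assumed per-request limit of 128 texts:

```python
def batches(texts, size=128):
    """Split a corpus into request-sized batches (128 is an assumed limit)."""
    for start in range(0, len(texts), size):
        yield texts[start:start + size]

def embed_corpus(texts, model="voyage-3"):
    """Embed a large corpus with one request per batch, not one per text."""
    import voyageai  # assumed official SDK
    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    vectors = []
    for batch in batches(texts):
        vectors.extend(vo.embed(batch, model=model, input_type="document").embeddings)
    return vectors
```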
Process multiple texts efficiently in a single request.

### RAG (Retrieval-Augmented Generation)
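A sketch of the retrieval-then-generate flow: the prompt assembly below is plain Python, and the comments mark where the (assumed) `voyageai` SDK and a separate text-generation model would plug in:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble retrieved context and the user question into a generation prompt."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Sketch of the surrounding flow with the (assumed) voyageai SDK:
#   1. embed the question with input_type="query"
#   2. rank pre-embedded document chunks by similarity to the question vector
#   3. prompt = build_rag_prompt(question, top_chunks)
#   4. send the prompt to a text-generation model (VoyageAI does not generate text)
```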
Use VoyageAI embeddings for RAG applications.

## Available Models
### Voyage 3 Series

- `voyage-3` - Latest and most capable model
- `voyage-3-lite` - Faster and more cost-effective

### Voyage 2 Series

- `voyage-2` - Previous generation
- `voyage-2-lite` - Lighter version

### Specialized Models

- `voyage-code-2` - Optimized for code search
- `voyage-law-2` - Optimized for legal documents
- `voyage-finance-2` - Optimized for financial documents
## Best Practices
### Use Appropriate Input Types
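As a sketch of the pairing rule, with a `voyageai`-style client passed in by the caller (the client API is an assumption): embed queries as `query` and corpus texts as `document` so the two sides line up:

```python
def embed_for_search(vo, query, docs, model="voyage-3"):
    """Pair the input types: the query side as 'query', the corpus side as 'document'."""
    query_vec = vo.embed([query], model=model, input_type="query").embeddings[0]
    doc_vecs = vo.embed(docs, model=model, input_type="document").embeddings
    return query_vec, doc_vecs
```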
### Batch When Possible
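A sketch of the difference, with a `voyageai`-style client passed in (the client API is an assumption): one call for the whole list rather than one call per text:

```python
def embed_in_one_request(vo, texts, model="voyage-3"):
    """Single API call for the whole list.

    Anti-pattern to avoid:
        [vo.embed([t], model=model).embeddings[0] for t in texts]  # one call per text
    """
    return vo.embed(texts, model=model).embeddings
```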
Process multiple texts in a single request for better performance.

### Choose the Right Model
- Use `voyage-3` for the highest quality
- Use `voyage-3-lite` for speed and cost efficiency
- Use specialized models (code, law, finance) for domain-specific tasks
## Features
- ✅ High-quality embeddings
- ✅ Task-specific optimization (query vs document)
- ✅ Batch processing
- ✅ Multiple specialized models
- ✅ Customizable truncation behavior
- ❌ Text generation (not supported)
- ❌ Image processing (not supported)
## Performance Characteristics

- `voyage-3`: Best quality, higher latency
- `voyage-3-lite`: Good quality, lower latency
- Specialized models: Optimized for specific domains