Perplexica’s API uses standard HTTP status codes to indicate the success or failure of requests. This guide covers the error codes you may encounter and how to handle them effectively.
HTTP status codes
The API uses the following HTTP status codes:
| Status Code | Description |
|---|---|
| 200 | Success - Request completed successfully |
| 400 | Bad Request - Invalid or missing required parameters |
| 500 | Internal Server Error - An error occurred on the server |
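Because only these three status codes are used, a client can branch on them directly. A minimal sketch (the retry guidance in the comments is an assumption based on the table above, not part of the API contract):

```javascript
// Map a Perplexica HTTP status code to a handling strategy.
// Categories follow the status code table; anything else is unexpected.
function classifyStatus(status) {
  if (status === 200) return 'success';
  if (status === 400) return 'client-error'; // fix the request, do not retry
  if (status === 500) return 'server-error'; // may be transient, safe to retry
  return 'unexpected';
}
```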
When an error occurs, the API returns a JSON object with error details:
```json
{
  "message": "Error description"
}
```
For validation errors, the response may include additional details:
```json
{
  "message": "Invalid request body",
  "error": [
    {
      "path": "chatModel.providerId",
      "message": "Chat model provider id must be provided"
    },
    {
      "path": "query",
      "message": "Message content is required"
    }
  ]
}
```
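The `error` array can be flattened into human-readable strings before logging or display. A small helper, assuming the response shape shown above:

```javascript
// Flatten a validation error response into "path: message" strings.
// Falls back to the top-level message when no detail array is present.
function formatValidationErrors(body) {
  if (!Array.isArray(body.error)) return [body.message];
  return body.error.map(err => `${err.path}: ${err.message}`);
}
```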
400 Bad Request errors
Missing required fields
Returned when required parameters are missing from the request:
```json
{
  "message": "Missing sources or query"
}
```
Common causes:

- Missing `sources` array in request body
- Missing `query` string in request body
- Empty `query` string
For example, this request body triggers the error because both fields are absent:

```json
{
  "chatModel": { ... },
  "embeddingModel": { ... }
  // Missing sources and query
}
```
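A corrected request includes both fields. The `sources` and `query` values below are illustrative, and the elided model settings are covered by the provider example at the end of this guide:

```json
{
  "chatModel": { ... },
  "embeddingModel": { ... },
  "sources": ["web"],
  "query": "What is Perplexica"
}
```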
Validation errors
Returned when request parameters fail validation:
```json
{
  "message": "Invalid request body",
  "error": [
    {
      "path": "optimizationMode",
      "message": "Optimization mode must be one of: speed, balanced, quality"
    }
  ]
}
```
Common validation errors:

- `chatModel.providerId` - Chat model provider id must be provided
- `chatModel.key` - Chat model key must be provided
- `embeddingModel.providerId` - Embedding model provider id must be provided
- `embeddingModel.key` - Embedding model key must be provided
- `optimizationMode` - Must be one of: speed, balanced, quality
- `messageId` - Message ID is required (for the /api/chat endpoint)
- `chatId` - Chat ID is required (for the /api/chat endpoint)
Missing provider fields
Returned when creating a provider without required fields:
```json
{
  "message": "Missing required fields."
}
```

This occurs when `type`, `name`, or `config` is missing from a provider creation request.
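As a sketch, a provider creation body that supplies all three fields might look like the following; the exact `type` and `config` values depend on the provider being registered, and the ones below are illustrative assumptions:

```json
{
  "type": "openai",
  "name": "OpenAI",
  "config": {
    "apiKey": "sk-..."
  }
}
```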
500 Internal Server errors
General server error
Returned when an unexpected error occurs during processing:
```json
{
  "message": "An error has occurred."
}
```
Common causes:
- Model loading failure
- Database connection issues
- Search processing errors
Endpoint-specific errors
Different endpoints may return more specific error messages:
```json
// Search API
{
  "message": "Search error",
  "error": { ... }
}

// Image search
{
  "message": "An error occurred while searching images"
}

// Suggestions
{
  "message": "An error occurred while generating suggestions"
}

// Chat processing
{
  "message": "An error occurred while processing chat request"
}
```
Streaming errors
When using streaming mode, errors are sent as part of the stream:
```json
{"type":"error","data":{ ... }}
```
After an error message, the stream will close. Your client should handle this message type:
```javascript
switch (message.type) {
  case 'error':
    console.error('Stream error:', message.data);
    // Handle error and clean up
    break;
  // ... other cases
}
```
When an error occurs in a streaming response, the connection will be closed immediately after sending the error message. Always implement proper error handling in your stream consumer.
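Since streamed messages arrive as JSON, a consumer can detect the error type before dispatching. A sketch, assuming one JSON object per line as in the example above:

```javascript
// Parse one line from the stream and flag whether it is an error
// message, which signals that the stream is about to close.
function parseStreamMessage(line) {
  const message = JSON.parse(line);
  return {
    message,
    isError: message.type === 'error'
  };
}
```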
Best practices
Validate before sending
Validate your request data before sending it to the API:
```javascript
function validateSearchRequest(data) {
  if (!data.sources || data.sources.length === 0) {
    throw new Error('At least one source is required');
  }

  if (!data.query || data.query.trim() === '') {
    throw new Error('Query cannot be empty');
  }

  if (!data.chatModel || !data.chatModel.providerId || !data.chatModel.key) {
    throw new Error('Valid chat model is required');
  }

  if (!data.embeddingModel || !data.embeddingModel.providerId || !data.embeddingModel.key) {
    throw new Error('Valid embedding model is required');
  }

  const validModes = ['speed', 'balanced', 'quality'];
  if (data.optimizationMode && !validModes.includes(data.optimizationMode)) {
    throw new Error('Invalid optimization mode');
  }

  return true;
}
```
Handle errors gracefully
Implement proper error handling in your application:
```javascript
try {
  const response = await fetch('http://localhost:3000/api/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(requestData)
  });

  if (!response.ok) {
    const error = await response.json();

    if (response.status === 400) {
      // Handle validation errors
      console.error('Validation error:', error.message);
      if (error.error) {
        error.error.forEach(err => {
          console.error(`  ${err.path}: ${err.message}`);
        });
      }
    } else if (response.status === 500) {
      // Handle server errors
      console.error('Server error:', error.message);
      // Implement retry logic or fallback
    }

    throw new Error(error.message);
  }

  const data = await response.json();
  return data;
} catch (error) {
  console.error('Request failed:', error);
  // Implement user-facing error handling
  throw error;
}
```
Retry logic
Implement retry logic for transient errors:
```javascript
async function searchWithRetry(requestData, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch('http://localhost:3000/api/search', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(requestData)
    });

    // Only retry 500s; a 400 will fail the same way every time,
    // so it is surfaced immediately below
    if (response.status === 500 && i < maxRetries - 1) {
      // Wait before retrying (exponential backoff: 1s, 2s, 4s, ...)
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, i) * 1000)
      );
      continue;
    }

    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.message);
    }

    return await response.json();
  }
}
```
Always implement exponential backoff when retrying failed requests to avoid overwhelming the server.
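The backoff delay can be factored into a small helper. Adding random jitter (an addition not shown in the example above) helps avoid many clients retrying in lockstep:

```javascript
// Delay in milliseconds before retry attempt `i` (0-based):
// a 1s base doubled each attempt, plus up to `jitterMs` of random jitter.
function backoffDelay(i, baseMs = 1000, jitterMs = 250) {
  return Math.pow(2, i) * baseMs + Math.random() * jitterMs;
}
```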
To avoid provider-related errors, always fetch available providers before making search requests:
```javascript
// Get available providers and models
const providersResponse = await fetch('http://localhost:3000/api/providers');
const { providers } = await providersResponse.json();

// Use actual provider IDs and model keys
const openai = providers.find(p => p.name === 'OpenAI');
if (!openai) throw new Error('OpenAI provider is not configured');

const requestData = {
  chatModel: {
    providerId: openai.id,
    key: 'gpt-4o-mini'
  },
  embeddingModel: {
    providerId: openai.id,
    key: 'text-embedding-3-large'
  },
  sources: ['web'],
  query: 'What is Perplexica'
};
```
See the Search API documentation for more details on the `/api/providers` endpoint.