Connection errors
Ollama connection errors
If you’re encountering an Ollama connection error, it’s likely due to the backend being unable to connect to Ollama’s API.
Update API URL based on OS
The correct URL varies by operating system:
Adjust the port number if you’re using a different port than the default 11434.
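As a rough sketch (assuming Perplexica runs in Docker and Ollama runs on the host with the default port), the API URL typically looks like this:

```
# Windows and macOS (Docker Desktop): the special host.docker.internal
# name resolves to the host machine
http://host.docker.internal:11434

# Linux: use the host's LAN IP, or start the container with
# --add-host=host.docker.internal:host-gateway to get the same name
http://<your-host-ip>:11434
```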
Linux users: Expose Ollama to network
On Linux, you need to configure Ollama to listen on all network interfaces. Change the port number if using a different one. For more information, see the Ollama documentation.
- Edit the Ollama service file:
- Add the following line in the [Service] section:
- Reload systemd and restart Ollama:
- Ensure port 11434 is not blocked by your firewall:
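The steps above, sketched for a systemd-based distribution (the firewall command assumes ufw; adapt it to your firewall):

```shell
# 1. Edit the Ollama service file (opens an override in your editor)
sudo systemctl edit ollama.service

# 2. In the [Service] section, add:
#    Environment="OLLAMA_HOST=0.0.0.0"

# 3. Reload systemd and restart Ollama
sudo systemctl daemon-reload
sudo systemctl restart ollama

# 4. Ensure port 11434 is not blocked by your firewall (ufw example)
sudo ufw allow 11434/tcp
```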
Lemonade connection errors
If you’re encountering a Lemonade connection error, the backend cannot connect to Lemonade’s API.
Local OpenAI-compatible servers
If Perplexica tells you that you haven’t configured any chat model providers, check the following:
Server binding address
Ensure your server is running on 0.0.0.0 (not 127.0.0.1) and on the same port you specified in the API URL. Many local LLM servers default to localhost only, which won’t be accessible from inside a Docker container.
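For example, with llama.cpp’s server (used here as a stand-in; flag names vary between servers and versions):

```shell
# Listen on all interfaces so Docker containers can reach the server
llama-server -m model.gguf --host 0.0.0.0 --port 8080

# Then verify from another machine or container (not via 127.0.0.1):
curl http://<your-host-ip>:8080/v1/models
```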
Model name
Verify you’ve specified the correct model name as loaded by your local LLM server. The model name must exactly match what your server expects.
API key
Even if your local server doesn’t require an API key, Perplexica’s form validation requires something in the API key field. Enter any non-empty value if your server doesn’t use authentication.
SearxNG issues
SearxNG not responding
If searches are failing or timing out:
Check SearxNG status
For Docker deployments with bundled SearxNG, check that the SearxNG container is running and healthy.
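A quick check, assuming the bundled SearxNG container has “searxng” in its name (adjust to match your compose setup):

```shell
# List running containers whose name contains "searxng"
docker ps --filter "name=searxng"
# A healthy container shows STATUS as "Up ..." in the output
```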
Verify SearxNG accessibility
Test the SearxNG endpoint directly. You should receive HTML content from SearxNG.
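For instance, assuming SearxNG listens on port 8080 on the same host:

```shell
# Request a search page directly from SearxNG
curl -s "http://localhost:8080/search?q=test"
# Expect HTML in the response; a refused connection or timeout points
# to a networking or configuration problem
```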
Check JSON format
Ensure JSON format is enabled in SearxNG settings. This is required for Perplexica to parse search results.
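In SearxNG this is controlled by the formats list in settings.yml; json must be present:

```yaml
# settings.yml (SearxNG)
search:
  formats:
    - html
    - json
```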
External SearxNG connection
When using the slim image with your own SearxNG, point Perplexica’s SearxNG API URL at your instance’s address and make sure JSON output is enabled there as well.
Docker issues
Port already in use
If you see an error about port 3000 or 8080 already being in use, either stop the conflicting service or map Perplexica to a different host port (for example, -p 3001:3000).
Volume permission issues
If you encounter permission errors with the data volume, make sure the mounted volume is writable by the container’s user.
Container won’t start
Check the container logs for specific errors. Common causes include:
- Missing environment variables
- Port conflicts
- Volume mount problems
- Insufficient resources (RAM/CPU)
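To pull the logs (assuming the container is named perplexica; substitute your container name):

```shell
# Tail the most recent log output from the Perplexica container
docker logs --tail 100 perplexica
```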
Application issues
Searches return no results
Check SearxNG connection
Verify that SearxNG is running and accessible. Check the Perplexica logs for connection errors.
Verify LLM provider
Ensure your AI provider (OpenAI, Anthropic, Ollama, etc.) is properly configured with valid API keys and model names.
Test with different search modes
Try switching between Speed, Balanced, and Quality modes to see if the issue is mode-specific.
Slow search responses
- Check your internet connection: Web searches require good connectivity
- Try Speed mode: Quality mode is more thorough but slower
- Local LLM performance: If using Ollama, ensure your hardware can handle the model
- SearxNG response time: External SearxNG instances may be slower
File upload failures
Verify file size
Check if the file is within acceptable limits (exact limits depend on configuration).
Settings not persisting
If your settings reset after a container restart, make sure you started the container with the data volume mounted: the -v perplexica-data:/home/perplexica/data flag is essential for persistence.
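A sketch of a run command with the volume mounted (the image name and port mapping are placeholders; match them to your deployment):

```shell
# Named volume keeps settings and data across container restarts
docker run -d \
  -p 3000:3000 \
  -v perplexica-data:/home/perplexica/data \
  <perplexica-image>
```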
Performance issues
High memory usage
Perplexica uses AI models, which can be memory-intensive:
- Minimum: 1GB RAM
- Recommended: 2GB+ RAM
- With local LLMs (Ollama): 4GB+ RAM depending on model size
High CPU usage
- Normal during active searches with local LLMs
- Consider using cloud API providers (OpenAI, Anthropic) to reduce local CPU load
- Ensure your Docker resource limits are appropriate
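If you need to cap or raise the container’s resources explicitly, Docker’s standard flags apply (the values here are illustrative):

```shell
# Limit the container to 2 CPUs and 4 GB of RAM
docker run -d --cpus=2 --memory=4g \
  -p 3000:3000 \
  <perplexica-image>
```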
Getting help
If you’re still experiencing issues:
GitHub Issues
Report bugs or search existing issues
Discord Community
Get help from the community and developers
Documentation
Review the full documentation
API Documentation
API reference for developers