This guide covers common issues you might encounter when deploying or running Perplexica, along with their solutions.

Connection errors

Ollama connection errors

If you’re encountering an Ollama connection error, it’s likely due to the backend being unable to connect to Ollama’s API.
1. Verify API URL

Check that your Ollama API URL is correctly set in the Perplexica settings menu.
2. Update API URL based on OS

The correct URL varies by operating system. On macOS and Windows (Docker Desktop), use:
http://host.docker.internal:11434
Adjust the port number if you're using a different port than the default 11434.
3. Linux users: Expose Ollama to the network

On Linux, you need to configure Ollama to listen on all network interfaces:
  1. Edit the Ollama service file:
sudo nano /etc/systemd/system/ollama.service
  2. Add the following line in the [Service] section:
Environment="OLLAMA_HOST=0.0.0.0:11434"
Change the port number if using a different one.
  3. Reload systemd and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
  4. Ensure port 11434 is not blocked by your firewall:
sudo ufw allow 11434
For more information, see the Ollama documentation.
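As an alternative to editing the unit file directly, a systemd drop-in override keeps the setting across package upgrades. A minimal sketch (the drop-in path follows the standard systemd convention; `sudo systemctl edit ollama` creates it for you):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, run sudo systemctl daemon-reload and sudo systemctl restart ollama as above.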

Lemonade connection errors

If you’re encountering a Lemonade connection error, the backend cannot connect to Lemonade’s API.
1. Check Lemonade API URL

Ensure the API URL is correctly configured in the Perplexica settings menu.
2. Update API URL based on OS

On macOS and Windows (Docker Desktop), use:
http://host.docker.internal:8000
Adjust the port number if using a different one than the default 8000.
3. Verify Lemonade is running

Ensure your Lemonade server is:
  • Running and accessible on the configured port (default: 8000)
  • Configured to accept connections from all interfaces (0.0.0.0), not just localhost (127.0.0.1)
  • Not blocked by firewall on the specified port
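A quick way to probe the first and third points from the Docker host is a plain curl against the configured port. A sketch, assuming the default port 8000 (adjust if yours differs); any HTTP response, even an error page, means the port is reachable:

```shell
# Probe the Lemonade port from the Docker host.
if curl -s --max-time 3 -o /dev/null http://127.0.0.1:8000; then
  echo "port 8000 is reachable"
else
  echo "nothing answered on port 8000 (server down or firewalled)"
fi
```

Note that this only confirms reachability from the host; for Perplexica running inside the container, the server additionally needs to be bound to 0.0.0.0 rather than 127.0.0.1 (the second point above).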

Local OpenAI-compatible servers

If Perplexica tells you that you haven't configured any chat model providers, check the following:
  • Ensure your server is listening on 0.0.0.0 (not 127.0.0.1) and on the same port you specified in the API URL. Many local LLM servers default to localhost only, which isn't accessible from inside a Docker container.
  • Verify you've specified the model name exactly as loaded by your local LLM server. The model name must match what your server expects.
  • Even if your local server doesn't require an API key, Perplexica's form validation requires something in the API key field. Enter any non-empty value if your server doesn't use authentication.
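To confirm the exact model name, most OpenAI-compatible servers expose a /v1/models endpoint you can query directly. A sketch, assuming port 1234 (LM Studio's default; substitute your server's port):

```shell
# Lists model IDs as the server reports them; the chat model name
# configured in Perplexica must match one of these exactly.
curl -s --max-time 3 http://localhost:1234/v1/models || echo "server not reachable on port 1234"
```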

SearxNG issues

SearxNG not responding

If searches are failing or timing out:
1. Check SearxNG status

For Docker deployments with bundled SearxNG, check if it’s running:
docker logs perplexica | grep SearXNG
You should see:
Starting SearXNG...
SearXNG started successfully
2. Verify SearxNG accessibility

Test the SearxNG endpoint directly:
curl http://localhost:8080
You should receive HTML content from SearxNG.
3. Check JSON format

Ensure JSON format is enabled in SearxNG settings. This is required for Perplexica to parse search results.
4. Verify Wolfram Alpha

Confirm Wolfram Alpha search engine is enabled in SearxNG for mathematical queries and calculations.
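If the JSON format is disabled, enabling it is a small change to SearxNG's settings.yml (the file's location depends on your deployment; the key below is SearXNG's standard search.formats option). Restart SearxNG after editing:

```yaml
search:
  formats:
    - html
    - json
```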

External SearxNG connection

When using the slim image with your own SearxNG:
Make sure you’ve set the SEARXNG_API_URL environment variable when starting the container, and that your SearxNG instance is accessible from the Docker container.
docker run -d -p 3000:3000 \
  -e SEARXNG_API_URL=http://your-searxng-url:8080 \
  -v perplexica-data:/home/perplexica/data \
  --name perplexica \
  itzcrazykns1337/perplexica:slim-latest
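If you prefer Compose, the same configuration can be sketched as a docker-compose.yml (the service name is an assumption, and your-searxng-url must be reachable from the container's network):

```yaml
services:
  perplexica:
    image: itzcrazykns1337/perplexica:slim-latest
    ports:
      - "3000:3000"
    environment:
      - SEARXNG_API_URL=http://your-searxng-url:8080
    volumes:
      - perplexica-data:/home/perplexica/data

volumes:
  perplexica-data:
```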

Docker issues

Port already in use

If you see an error about port 3000 or 8080 already being in use, identify the conflicting process:
sudo lsof -i :3000
Stop that process, or map Perplexica to a different host port (for example -p 3001:3000) when starting the container.

Volume permission issues

If you encounter permission errors with the data volume:
# Check volume permissions
docker volume inspect perplexica-data

# If needed, recreate the volume
# WARNING: removing the volume deletes stored settings and history
docker volume rm perplexica-data
docker volume create perplexica-data

Container won’t start

Check the container logs for specific errors:
docker logs perplexica
Common issues:
  • Missing environment variables
  • Port conflicts
  • Volume mount problems
  • Insufficient resources (RAM/CPU)

Application issues

Searches return no results

  • Verify that SearxNG is running and accessible, and check the Perplexica logs for connection errors.
  • Ensure your AI provider (OpenAI, Anthropic, Ollama, etc.) is properly configured with valid API keys and model names.
  • Try switching between Speed, Balanced, and Quality modes to see if the issue is mode-specific.

Slow search responses

  • Check your internet connection: Web searches require good connectivity
  • Try Speed mode: Quality mode is more thorough but slower
  • Local LLM performance: If using Ollama, ensure your hardware can handle the model
  • SearxNG response time: External SearxNG instances may be slower

File upload failures

1. Check volume mount

Ensure the uploads directory is properly mounted:
docker exec perplexica ls -la /home/perplexica/uploads
2. Verify file size

Check if the file is within acceptable limits (exact limits depend on configuration).
3. Check file format

Perplexica supports PDFs, text files, and images. Other formats may not be supported.

Settings not persisting

If your settings reset after container restart:
# Ensure you're using a named volume
docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest

# NOT a bind mount to a non-existent directory
The -v perplexica-data:/home/perplexica/data flag is essential for persistence.

Performance issues

High memory usage

Perplexica uses AI models which can be memory-intensive:
  • Minimum: 1GB RAM
  • Recommended: 2GB+ RAM
  • With local LLMs (Ollama): 4GB+ RAM depending on model size

High CPU usage

  • Normal during active searches with local LLMs
  • Consider using cloud API providers (OpenAI, Anthropic) to reduce local CPU load
  • Ensure your Docker resource limits are appropriate

Getting help

If you’re still experiencing issues:

GitHub Issues

Report bugs or search existing issues

Discord Community

Get help from the community and developers

Documentation

Review the full documentation

API Documentation

API reference for developers

Diagnostic information

When reporting issues, include:
# Perplexica version
cat package.json | grep version

# Docker version
docker --version

# Container logs (last 50 lines)
docker logs --tail 50 perplexica

# Container status
docker ps -a | grep perplexica

# System information
uname -a
