Prerequisites
Before you begin, ensure you have:
- Docker installed and running on your system
- A compatible AI provider (Ollama for local models, or API keys for OpenAI/Claude/Groq)
Don’t have Docker? Download it from docker.com.
Installation
Run Perplexica with Docker
Pull and start the Perplexica container with a single command. This command:
- Downloads the latest Perplexica image (includes bundled SearxNG)
- Runs the container in detached mode (-d)
- Maps port 3000 to your local machine (-p 3000:3000)
- Creates a persistent volume for your data (-v perplexica-data:/home/perplexica/data)
- Names the container perplexica for easy management
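Putting the flags above together, the invocation looks roughly like this. The image name is an assumption for illustration; substitute the image published in the project's README if it differs:

```
# Image name below is an assumption — check the project README for the published image
docker run -d \
  -p 3000:3000 \
  -v perplexica-data:/home/perplexica/data \
  --name perplexica \
  itzcrazykns1337/perplexica:latest
```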
The image includes both Perplexica and SearxNG, so no additional setup is required. The volume flag creates persistent storage for your data and uploaded files.
Access the web interface
Once the container is running, open your browser and navigate to http://localhost:3000. You should see the Perplexica setup screen.
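If the page does not load, a quick way to confirm the container is answering is to request the headers; any successful HTTP status line means the app is up:

```
# Should print a status line such as "HTTP/1.1 200 OK"
curl -I http://localhost:3000
```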
Configure your AI provider
On the setup screen, configure your preferred AI provider:
- Ollama (Local)
- OpenAI
- Anthropic Claude
- Groq
Using Ollama for local LLMs:
- Ensure Ollama is running on your system
- Set the API URL based on your OS:
  - Windows/Mac: http://host.docker.internal:11434
  - Linux: http://<your-private-ip>:11434
- Enter the model name (e.g., llama2, mistral)
- Put any value in the API key field (required even if not used)
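To sanity-check the connection, you can first confirm Ollama answers on the host, then test the container-side URL. The second line assumes curl is present inside the Perplexica image:

```
# From the host: Ollama returns a JSON list of installed models if it is running
curl http://localhost:11434/api/tags

# From inside the container (assumes curl is available in the image);
# on Linux, substitute http://<your-private-ip>:11434
docker exec perplexica curl http://host.docker.internal:11434/api/tags
```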
Perform your first search
Once configured, you’re ready to search!
- Enter a query in the search box
- Choose your search mode:
  - Speed Mode: Quick answers for simple queries
  - Balanced Mode: Best for everyday searches
  - Quality Mode: Deep research with comprehensive results
- Select your source:
  - Web search
  - Discussions (forums, Reddit, etc.)
  - Academic papers
- Press Enter or click Search
What’s next?
Explore search modes
Learn how to use Speed, Balanced, and Quality modes effectively for different types of queries.
Upload files
Ask questions about your PDFs, documents, and images by uploading them directly.
Use widgets
Get instant answers for weather, calculations, stock prices, and more with built-in widgets.
Search history
Access your previous searches and continue your research where you left off.
Using your own SearxNG instance
If you already have SearxNG running, you can use the slim version of Perplexica. Replace http://your-searxng-url:8080 with your actual SearxNG URL when starting the container, then configure your AI provider settings at http://localhost:3000.
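As a sketch, the slim image might be started as follows. Both the :slim tag and the SEARXNG_API_URL variable name are assumptions here; verify the exact names in the project's installation guide:

```
# Image tag and environment variable name are assumptions — verify in the docs
docker run -d \
  -p 3000:3000 \
  -e SEARXNG_API_URL=http://your-searxng-url:8080 \
  -v perplexica-data:/home/perplexica/data \
  --name perplexica \
  itzcrazykns1337/perplexica:slim
```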
Troubleshooting
Container won't start
Check if port 3000 is already in use. If needed, map a different host port, then access Perplexica at http://localhost:8080.
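One way to inspect the port and remap it (lsof is one option; ss -ltnp also works on Linux; the <image> placeholder stands for whatever image name you used originally):

```
# See whether another process already owns port 3000
lsof -i :3000

# Map host port 8080 to the container's port 3000 instead
docker run -d -p 8080:3000 -v perplexica-data:/home/perplexica/data --name perplexica <image>
```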
Ollama connection errors
- Verify Ollama is running: ollama list
- Check the API URL matches your OS:
  - Windows/Mac: http://host.docker.internal:11434
  - Linux: http://<your-private-ip>:11434
- For Linux, ensure Ollama is exposed to the network (see Step 3)
- Verify the port (11434) is not blocked by your firewall
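On Linux, exposing Ollama to the network is typically done by binding it to all interfaces with the OLLAMA_HOST environment variable (a common approach; consult Ollama's documentation for your service manager):

```
# Bind Ollama to all interfaces so containers on the same machine can reach it
OLLAMA_HOST=0.0.0.0 ollama serve

# For systemd installs, set the variable on the service instead:
#   sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
# then: sudo systemctl restart ollama
```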
No search results
- Verify your AI provider configuration is correct
- Check that you have API credits (for cloud providers)
- Ensure the SearxNG service is running (bundled with Docker image)
- Check Docker logs: docker logs perplexica
API key errors
For local OpenAI-API-compliant servers:
- Server must run on 0.0.0.0 (not 127.0.0.1)
- Verify the correct model name is loaded
- Put something in the API key field (required even if not used)
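You can confirm such a server is reachable and see exactly which model names it exposes via the standard /v1/models endpoint of the OpenAI-compatible API. The port 8000 below is only an example:

```
# Returns a JSON list of loaded models; port 8000 is an example — use your server's port
curl http://localhost:8000/v1/models
```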
Need more help?
For detailed installation options, configuration, and advanced setup, see the installation guide. Join our community:
- Discord: discord.gg/EFwsmQDgAu
- GitHub Issues: github.com/ItzCrazyKns/Perplexica/issues