This guide will help you get Perplexica up and running quickly using Docker, the recommended installation method.

Prerequisites

Before you begin, ensure you have:
  • Docker installed and running on your system
  • A compatible AI provider (Ollama for local models, or API keys for OpenAI/Claude/Groq)
Don’t have Docker? Download it from docker.com. The Docker installation includes both Perplexica and SearxNG, so no additional setup is required.

Installation

Step 1: Run Perplexica with Docker

Pull and start the Perplexica container with a single command:
docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
This command:
  • Downloads the latest Perplexica image (includes bundled SearxNG)
  • Runs the container in detached mode (-d)
  • Maps port 3000 to your local machine (-p 3000:3000)
  • Creates a persistent volume for your data (-v perplexica-data:/home/perplexica/data)
  • Names the container perplexica for easy management
The volume flag (-v) creates persistent storage for your data and uploaded files, so they survive container restarts and upgrades.
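Before opening the browser, you can confirm the container came up cleanly. A minimal check, assuming the container name perplexica used above:

```shell
# Check whether the perplexica container is running (assumes Docker is installed)
if docker ps --filter "name=perplexica" --format '{{.Status}}' 2>/dev/null | grep -q '^Up'; then
  status_msg="perplexica container is running"
else
  status_msg="perplexica container is not running"
fi
echo "$status_msg"
```

If the container is not running, docker logs perplexica usually shows why.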

Step 2: Access the web interface

Once the container is running, open your browser and navigate to:
http://localhost:3000
You should see the Perplexica setup screen.

Step 3: Configure your AI provider

On the setup screen, configure your preferred AI provider:
Using Ollama for local LLMs:
  1. Ensure Ollama is running on your system
  2. Set the API URL based on your OS:
    • Windows/Mac: http://host.docker.internal:11434
    • Linux: http://<your-private-ip>:11434
  3. Enter the model name (e.g., llama2, mistral)
  4. Enter any value in the API key field (the field is required even if the key is not used)
Linux users: You need to expose Ollama to the network. Add Environment="OLLAMA_HOST=0.0.0.0:11434" under the [Service] section of /etc/systemd/system/ollama.service, then run:
systemctl daemon-reload
systemctl restart ollama
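For reference, the relevant part of the unit file then looks like this (a sketch; the exact unit path can differ by distribution):

```ini
# /etc/systemd/system/ollama.service — add under the [Service] section
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```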

Step 4: Perform your first search

Once configured, you’re ready to search!
  1. Enter a query in the search box
  2. Choose your search mode:
    • Speed Mode: Quick answers for simple queries
    • Balanced Mode: Best for everyday searches
    • Quality Mode: Deep research with comprehensive results
  3. Select your source:
    • Web search
    • Discussions (forums, Reddit, etc.)
    • Academic papers
  4. Press Enter or click Search
Perplexica will analyze your query, search relevant sources, and provide an AI-generated answer with citations.

What’s next?

Explore search modes

Learn how to use Speed, Balanced, and Quality modes effectively for different types of queries.

Upload files

Ask questions about your PDFs, documents, and images by uploading them directly.

Use widgets

Get instant answers for weather, calculations, stock prices, and more with built-in widgets.

Search history

Access your previous searches and continue your research where you left off.

Using your own SearxNG instance

If you already have SearxNG running, you can use the slim version of Perplexica:
docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:slim-latest
Make sure your SearxNG instance has:
  • JSON format enabled in the settings
  • Wolfram Alpha search engine enabled
Replace http://your-searxng-url:8080 with your actual SearxNG URL, then configure your AI provider settings at http://localhost:3000.
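For reference, JSON output is enabled in SearxNG's settings.yml by adding json to the allowed result formats (a sketch; the location of settings.yml depends on how you deployed SearxNG):

```yaml
# settings.yml — allow the JSON API that Perplexica queries
search:
  formats:
    - html
    - json
```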

Troubleshooting

Port 3000 already in use

Check whether another container is already using port 3000:
docker ps
If needed, use a different port:
docker run -d -p 8080:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
Then access Perplexica at http://localhost:8080
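If you are unsure whether anything else is occupying the port, a quick check before picking an alternative (a sketch; lsof may need to be installed on your system):

```shell
# Report whether anything is listening on TCP port 3000
if lsof -iTCP:3000 -sTCP:LISTEN >/dev/null 2>&1; then
  port_msg="port 3000 is in use"
else
  port_msg="port 3000 appears free"
fi
echo "$port_msg"
```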

Ollama connection issues

  1. Verify Ollama is running: ollama list
  2. Check that the API URL matches your OS:
    • Windows/Mac: http://host.docker.internal:11434
    • Linux: http://<your-private-ip>:11434
  3. On Linux, ensure Ollama is exposed to the network (see Step 3)
  4. Verify that port 11434 is not blocked by your firewall
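To separate network problems from configuration problems, you can probe the Ollama API directly using its /api/tags endpoint (a minimal sketch; substitute the host or IP you configured for localhost):

```shell
# Check whether Ollama answers on its default port (requires curl)
check_ollama() {
  if curl -fsS --max-time 5 "http://$1:11434/api/tags" >/dev/null 2>&1; then
    echo "Ollama reachable at $1"
  else
    echo "Ollama NOT reachable at $1"
  fi
}
result=$(check_ollama localhost)
echo "$result"
```

If this fails from the host but ollama list works, the issue is almost always the OLLAMA_HOST binding or a firewall rule.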

Searches fail or return no answer

  1. Verify that your AI provider configuration is correct
  2. Check that you have API credits (for cloud providers)
  3. Ensure the SearxNG service is running (it is bundled with the Docker image)
  4. Check the Docker logs: docker logs perplexica
For local OpenAI-API-compatible servers:
  1. The server must listen on 0.0.0.0 (not 127.0.0.1) so the container can reach it
  2. Verify that the correct model name is loaded
  3. Enter any value in the API key field (required even if unused)

Need more help?

For detailed installation options, configuration, and advanced setup, see the installation guide.
