Perplexica can be installed in two ways: with Docker or manually without Docker. This guide covers both methods in detail. Docker is the recommended approach: it simplifies setup, manages dependencies automatically, and includes a bundled SearxNG instance.

Quick start with Docker

The simplest way to run Perplexica is with a single Docker command:
docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
This command:
  • Pulls the latest Perplexica image with bundled SearxNG
  • Creates a persistent volume for your data and uploaded files
  • Exposes the application on http://localhost:3000
The image includes both Perplexica and SearxNG, so no additional configuration is required. Simply open http://localhost:3000 and configure your AI provider settings in the setup screen.

Using your own SearxNG instance

If you already have SearxNG running, use the slim version:
docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:slim-latest
Important SearxNG requirements:
  • JSON format must be enabled in settings
  • Wolfram Alpha search engine must be enabled
Replace http://your-searxng-url:8080 with your actual SearxNG URL, then configure your AI provider settings at http://localhost:3000.
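In SearxNG's settings.yml, the JSON requirement corresponds to adding json to the search formats. A minimal fragment (your file will contain many other keys):

```yaml
# settings.yml (fragment): allow JSON responses alongside HTML
search:
  formats:
    - html
    - json
```

The Wolfram Alpha engine is enabled in the engines section of the same file by setting disabled: false on its entry (commonly named wolframalpha; check your settings.yml for the exact name).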

Building from source with Docker

For more control or development purposes, you can build Perplexica from source:
1. Clone the repository

git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
2. Build the Docker image

docker build -t perplexica .
The Dockerfile uses a multi-stage build process:
  • Builder stage: Installs dependencies and builds the Next.js application
  • Production stage: Sets up SearxNG and the runtime environment
3. Run the container

docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica perplexica
4. Configure and start using

Access Perplexica at http://localhost:3000 and configure your settings in the setup screen.
Once the image is built and the container created, you can start and stop Perplexica from Docker Desktop or the Docker CLI without repeating these steps.

Using Docker Compose

For a more declarative approach, use Docker Compose:
1. Clone the repository

git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
2. Start with Docker Compose

docker compose up -d
Older Docker installations that ship the standalone binary use docker-compose up -d instead.
The docker-compose.yaml configuration:
services:
  perplexica:
    image: itzcrazykns1337/perplexica:latest
    build:
      context: .
    ports:
      - '3000:3000'
    volumes:
      - data:/home/perplexica/data
    restart: unless-stopped

volumes:
  data:
    name: 'perplexica-data'
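If you run your own SearxNG instance, the same compose file can be adapted to the slim image by pointing it at your instance (a sketch; the environment variable matches the docker run example above, and http://your-searxng-url:8080 is a placeholder):

```yaml
services:
  perplexica:
    image: itzcrazykns1337/perplexica:slim-latest
    ports:
      - '3000:3000'
    environment:
      - SEARXNG_API_URL=http://your-searxng-url:8080
    volumes:
      - data:/home/perplexica/data
    restart: unless-stopped

volumes:
  data:
    name: 'perplexica-data'
```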
3. Access the application

Open http://localhost:3000 in your browser and complete the setup.

Manual installation (without Docker)

For users who prefer not to use Docker or need more control over the installation:
Manual installation requires more setup and maintenance. Docker is recommended for most users.
1. Install and configure SearxNG

  1. Install SearxNG following the official documentation
  2. Enable JSON format in SearxNG settings
  3. Enable the Wolfram Alpha search engine
  4. Note your SearxNG URL (e.g., http://localhost:8080)
2. Clone the repository

git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
3. Install dependencies

Perplexica requires Node.js 24.5.0 or later. Install dependencies using npm:
npm i
Key dependencies include:
  • Next.js 16.0.7: React framework for the web interface
  • Ollama 0.6.3: Local LLM integration
  • OpenAI 6.9.0: Cloud AI provider support
  • Drizzle ORM: Database management with SQLite
  • Transformers: Hugging Face model support
4. Build the application

npm run build
This compiles the Next.js application and prepares it for production.
5. Start the application

npm run start
The application will start on http://localhost:3000.
6. Configure settings

Open http://localhost:3000 in your browser and configure:
  • AI provider (Ollama, OpenAI, Claude, Groq, etc.)
  • API keys and endpoints
  • SearxNG URL
  • Other preferences
For development, use npm run dev to start the development server with hot reloading.
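For an unattended manual install, a process supervisor keeps npm run start running across reboots. A hypothetical systemd unit (the path /opt/Perplexica, the perplexica user, and the npm location are placeholders; adjust for your system):

```ini
# /etc/systemd/system/perplexica.service
[Unit]
Description=Perplexica
After=network.target

[Service]
WorkingDirectory=/opt/Perplexica
ExecStart=/usr/bin/npm run start
Restart=on-failure
User=perplexica
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now perplexica.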

AI provider configuration

Perplexica supports multiple AI providers. Configure them in the setup screen after installation.

Ollama (local LLMs)

For privacy-focused users who want to run models locally:
  1. Install Ollama from ollama.ai
  2. Pull your desired model:
    ollama pull llama2
    
  3. In Perplexica settings:
    • API URL: http://host.docker.internal:11434
    • Model: llama2 (or your chosen model)
    • API Key: Any value (required but not validated)

OpenAI

  1. Get an API key from platform.openai.com
  2. In Perplexica settings:
    • API Key: Your OpenAI API key
    • Model: gpt-4, gpt-3.5-turbo, or other available models
    • API URL: https://api.openai.com/v1

Anthropic Claude

  1. Get an API key from console.anthropic.com
  2. In Perplexica settings:
    • API Key: Your Anthropic API key
    • Model: claude-3-opus, claude-3-sonnet, claude-3-haiku
    • Configure the endpoint in settings

Groq

  1. Get an API key from console.groq.com
  2. In Perplexica settings:
    • API Key: Your Groq API key
    • Model: Available Groq models
    • Configure the endpoint in settings

Google Gemini

  1. Get an API key from ai.google.dev
  2. In Perplexica settings:
    • API Key: Your Google API key
    • Model: gemini-pro or other available models
    • Configure the endpoint in settings

Local OpenAI-compatible servers

For custom LLM servers that implement the OpenAI API:
  1. Ensure your server runs on 0.0.0.0 (not 127.0.0.1)
  2. Note the port and model name
  3. In Perplexica settings:
    • API URL: Your server URL (e.g., http://localhost:8000)
    • Model: The exact model name loaded by your server
    • API Key: Any value (required even if your server doesn’t validate it)

Troubleshooting

Ollama connection errors

Symptoms: “Failed to connect to Ollama” error
Solutions:
  1. Verify Ollama is running:
    ollama list
    
  2. Check the API URL matches your OS:
    • Windows/Mac: http://host.docker.internal:11434
    • Linux: http://<your-private-ip>:11434
  3. For Linux, ensure Ollama is exposed to the network:
    # Edit /etc/systemd/system/ollama.service
    Environment="OLLAMA_HOST=0.0.0.0:11434"
    
    # Reload and restart
    systemctl daemon-reload
    systemctl restart ollama
    
  4. Check firewall settings: port 11434 must be accessible
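The OS-dependent URL choice above can be sketched as a small shell helper (hypothetical; hostname -I is Linux-specific and picks the host's first IP address):

```shell
# Pick the Ollama API URL a Perplexica container can reach, based on host OS.
case "$(uname -s)" in
  Linux)
    # Containers cannot reach the host's 127.0.0.1; use the host's LAN IP instead.
    ip="$(hostname -I 2>/dev/null | awk '{print $1}')"
    OLLAMA_URL="http://${ip:-<your-private-ip>}:11434"
    ;;
  *)
    # Docker Desktop on Windows/macOS provides a host alias for containers.
    OLLAMA_URL="http://host.docker.internal:11434"
    ;;
esac
echo "$OLLAMA_URL"
```

Paste the printed URL into the API URL field in Perplexica's settings.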
Symptoms: “Model not found” error
Solutions:
  1. List available models:
    ollama list
    
  2. Pull the model if missing:
    ollama pull llama2
    
  3. Ensure the model name in Perplexica settings matches exactly

Local OpenAI-compatible server errors

Symptoms: Perplexica says no providers are configured
Solutions:
  1. Server must run on 0.0.0.0, not 127.0.0.1
  2. Verify the port matches your API URL
  3. Confirm the exact model name loaded by your server
  4. Put any value in the API key field (cannot be empty)

Lemonade connection errors

Symptoms: “Failed to connect to Lemonade” error
Solutions:
  1. Check your Lemonade API URL in settings
  2. Set the correct URL based on your OS:
    • Windows/Mac: http://host.docker.internal:8000
    • Linux: http://<your-private-ip>:8000
  3. Ensure Lemonade server is running and accessible
  4. Verify Lemonade accepts connections from all interfaces (0.0.0.0), not just localhost
  5. Check that port 8000 (or your custom port) is not blocked

Docker issues

Symptoms: “Port 3000 is already allocated” error
Solutions:
  1. Check what’s using port 3000:
    docker ps
    (docker ps only lists containers; for non-Docker processes, lsof -i :3000 also works on Linux/macOS.)
    
  2. Use a different port:
    docker run -d -p 8080:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
    
    Then access at http://localhost:8080
Symptoms: Container exits immediately
Solutions:
  1. Check logs:
    docker logs perplexica
    
  2. Verify Docker has enough resources (memory, disk space)
  3. Try removing and recreating:
    docker rm perplexica
    docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
    

Advanced configuration

Using as a search engine

Add Perplexica as a custom search engine in your browser:
  1. Open your browser’s settings
  2. Navigate to ‘Search Engines’
  3. Add a new site search:
    • URL: http://localhost:3000/?q=%s
    • (Replace localhost:3000 with your domain if hosted remotely)
  4. Set a keyword (e.g., perplexica or px)
Now you can search directly from your browser’s address bar!
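The %s template works like any search-engine URL: the browser substitutes your percent-encoded query into that slot. A quick sketch of the substitution (spaces only; a browser performs full percent-encoding):

```shell
q="how does perplexica work"
# Replace spaces with %20 and drop the encoded query into the template.
encoded="$(printf '%s' "$q" | sed 's/ /%20/g')"
printf 'http://localhost:3000/?q=%s\n' "$encoded"
```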

Exposing to network

Perplexica runs on Next.js and works on your local network by default. For external access:
  1. Port forwarding: Configure your router to forward port 3000
  2. Reverse proxy: Use Nginx or Caddy for HTTPS and custom domains
  3. Cloud deployment: See the one-click deployment options below
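As an example of the reverse-proxy option, a minimal Nginx server block (the domain is a placeholder, and TLS is omitted; the Upgrade headers matter if your deployment uses streaming or WebSocket connections):

```nginx
server {
    listen 80;
    server_name perplexica.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Keep long-lived streaming/WebSocket connections working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Add HTTPS with certbot or your certificate tooling of choice.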

One-click deployment

Several cloud providers offer one-click deployment templates for Perplexica; see the project repository for the current list.

Next steps

API integration

Integrate Perplexica’s search engine into your applications

Architecture

Learn how Perplexica works under the hood

Contributing

Contribute to Perplexica’s development

Community

Join the Discord community for help and discussions
