Docker installation (recommended)
Using Docker is the recommended approach as it simplifies setup, manages dependencies automatically, and includes a bundled SearxNG instance.
Quick start with Docker
The simplest way to run Perplexica is with a single Docker command. This command:
- Pulls the latest Perplexica image with bundled SearxNG
- Creates a persistent volume for your data and uploaded files
- Exposes the application on http://localhost:3000
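A sketch of that command (the image name and volume path are assumptions based on common Perplexica setups; check the project's README for the exact values):

```shell
docker run -d \
  --name perplexica \
  -p 3000:3000 \
  -v perplexica-data:/app/data \
  itzcrazykns1337/perplexica:latest   # image name is an assumption
```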
The image includes both Perplexica and SearxNG, so no additional configuration is required. Simply open http://localhost:3000 and configure your AI provider settings in the setup screen.
Using your own SearxNG instance
If you already have SearxNG running, use the slim image instead. Replace http://your-searxng-url:8080 with your actual SearxNG URL, then configure your AI provider settings at http://localhost:3000.
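A sketch of the slim-image command (the image tag and environment variable name are assumptions; consult the project's README for the exact ones):

```shell
docker run -d \
  --name perplexica \
  -p 3000:3000 \
  -e SEARXNG_API_URL=http://your-searxng-url:8080 \
  itzcrazykns1337/perplexica:latest-slim   # tag and env var are assumptions
```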
Building from source with Docker
For more control or development purposes, you can build Perplexica from source.
Build the Docker image
The multi-stage build has two stages:
- Builder stage: Installs dependencies and builds the Next.js application
- Production stage: Sets up SearxNG and the runtime environment
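The build step might look like this (repository URL and tag name are assumptions):

```shell
git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
docker build -t perplexica .   # tag name is arbitrary
```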
Configure and start
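Running the locally built image might look like this (container name, tag, and volume are assumptions carried over from the build step above):

```shell
docker run -d \
  --name perplexica \
  -p 3000:3000 \
  -v perplexica-data:/app/data \
  perplexica
```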
Access Perplexica at http://localhost:3000 and configure your settings in the setup screen.
After the image is built, you can start Perplexica directly from Docker Desktop without opening a terminal, or from the Docker CLI.
Using Docker Compose
For a more declarative approach, use Docker Compose.
Access the application
Open http://localhost:3000 in your browser and complete the setup.
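The Compose setup might use a file like this (image name and volume are assumptions; prefer the project's published compose file if one exists):

```yaml
services:
  perplexica:
    image: itzcrazykns1337/perplexica:latest   # image name is an assumption
    ports:
      - "3000:3000"
    volumes:
      - perplexica-data:/app/data

volumes:
  perplexica-data:
```

Start it with docker compose up -d.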
Manual installation (without Docker)
For users who prefer not to use Docker or need more control over the installation:
Install and configure SearxNG
- Install SearxNG following the official documentation
- Enable JSON format in SearxNG settings
- Enable the Wolfram Alpha search engine
- Note your SearxNG URL (e.g., http://localhost:8080)
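Enabling the JSON format typically means adding it to the formats list in SearxNG's settings.yml (key names taken from SearxNG's documented settings; verify against your version):

```yaml
search:
  formats:
    - html
    - json   # required so Perplexica can query SearxNG programmatically
```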
Install dependencies
Perplexica requires Node.js 24.5.0 or later. Install dependencies using npm.
Key dependencies include:
- Next.js 16.0.7: React framework for the web interface
- Ollama 0.6.3: Local LLM integration
- OpenAI 6.9.0: Cloud AI provider support
- Drizzle ORM: Database management with SQLite
- Transformers: Hugging Face model support
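The clone-and-install step might look like this (repository URL is an assumption):

```shell
git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
npm install
```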
Start the application
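Assuming the standard Next.js scripts are defined in package.json:

```shell
npm run build   # compile the Next.js application
npm start       # serve it on http://localhost:3000
```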
Configure settings
Open http://localhost:3000 in your browser and configure:
- AI provider (Ollama, OpenAI, Claude, Groq, etc.)
- API keys and endpoints
- SearxNG URL
- Other preferences
For development, use npm run dev to start the development server with hot reloading.
AI provider configuration
Perplexica supports multiple AI providers. Configure them in the setup screen after installation.
Ollama (local LLMs)
For privacy-focused users who want to run models locally (Windows, Mac, or Linux):
- Install Ollama from ollama.ai
- Pull your desired model
- In Perplexica settings:
  - API URL: http://host.docker.internal:11434
  - Model: llama2 (or your chosen model)
  - API Key: Any value (required but not validated)
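Pulling a model uses the Ollama CLI, for example:

```shell
ollama pull llama2   # downloads the model; substitute your chosen model name
```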
OpenAI
- Get an API key from platform.openai.com
- In Perplexica settings:
  - API Key: Your OpenAI API key
  - Model: gpt-4, gpt-3.5-turbo, or other available models
  - API URL: https://api.openai.com/v1
Anthropic Claude
- Get an API key from console.anthropic.com
- In Perplexica settings:
  - API Key: Your Anthropic API key
  - Model: claude-3-opus, claude-3-sonnet, or claude-3-haiku
  - Configure the endpoint in settings
Groq
- Get an API key from console.groq.com
- In Perplexica settings:
  - API Key: Your Groq API key
  - Model: Available Groq models
  - Configure the endpoint in settings
Google Gemini
- Get an API key from ai.google.dev
- In Perplexica settings:
  - API Key: Your Google API key
  - Model: gemini-pro or other available models
  - Configure the endpoint in settings
Local OpenAI-compatible servers
For custom LLM servers that implement the OpenAI API:
- Ensure your server runs on 0.0.0.0 (not 127.0.0.1)
- Note the port and model name
- In Perplexica settings:
  - API URL: Your server URL (e.g., http://localhost:8000)
  - Model: The exact model name loaded by your server
  - API Key: Any value (required even if your server doesn’t validate it)
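One way to confirm the server is reachable and to read the exact model name is the standard OpenAI-compatible models endpoint (assuming your server implements it):

```shell
curl http://localhost:8000/v1/models   # lists model IDs; use one verbatim in Perplexica
```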
Troubleshooting
Ollama connection errors
Connection refused
Symptoms: “Failed to connect to Ollama” error
Solutions:
- Verify Ollama is running:
- Check the API URL matches your OS:
  - Windows/Mac: http://host.docker.internal:11434
  - Linux: http://<your-private-ip>:11434
- For Linux, ensure Ollama is exposed to the network
- Check firewall settings: port 11434 must be accessible
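The first and third checks can be done from a terminal; OLLAMA_HOST is Ollama's documented way to bind to all interfaces:

```shell
ollama list                       # verify Ollama is running and responds
OLLAMA_HOST=0.0.0.0 ollama serve  # expose Ollama beyond localhost (Linux)
```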
Model not found
Symptoms: “Model not found” error
Solutions:
- List available models:
- Pull the model if missing:
- Ensure the model name in Perplexica settings matches exactly
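The first two steps use the Ollama CLI:

```shell
ollama list          # show models already downloaded
ollama pull llama2   # fetch the missing model (substitute your model name)
```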
Local OpenAI-compatible server errors
No chat model providers configured
Symptoms: Perplexica says no providers are configured
Solutions:
- Server must run on 0.0.0.0, not 127.0.0.1
- Verify the port matches your API URL
- Confirm the exact model name loaded by your server
- Put any value in the API key field (cannot be empty)
Lemonade connection errors
Cannot connect to Lemonade
Symptoms: “Failed to connect to Lemonade” error
Solutions:
- Check your Lemonade API URL in settings
- Set the correct URL based on your OS:
  - Windows/Mac: http://host.docker.internal:8000
  - Linux: http://<your-private-ip>:8000
- Ensure the Lemonade server is running and accessible
- Verify Lemonade accepts connections from all interfaces (0.0.0.0), not just localhost
- Check that port 8000 (or your custom port) is not blocked
Docker issues
Port already in use
Symptoms: “Port 3000 is already allocated” error
Solutions:
- Check what’s using port 3000
- Use a different port, then access at http://localhost:8080
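Both steps from a terminal (container name and image are assumptions):

```shell
lsof -i :3000                       # identify the process holding port 3000
docker run -d --name perplexica \
  -p 8080:3000 \
  itzcrazykns1337/perplexica:latest # host port 8080 maps to container port 3000
```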
Container won't start
Symptoms: Container exits immediately
Solutions:
- Check logs:
- Verify Docker has enough resources (memory, disk space)
- Try removing and recreating:
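The log and recreate steps, assuming the container is named perplexica and the image name matches your setup:

```shell
docker logs perplexica              # inspect why the container exited
docker rm -f perplexica             # remove the broken container
docker run -d --name perplexica -p 3000:3000 \
  itzcrazykns1337/perplexica:latest # image name is an assumption
```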
Advanced configuration
Using as a search engine
Add Perplexica as a custom search engine in your browser:
- Open your browser’s settings
- Navigate to ‘Search Engines’
- Add a new site search:
  - URL: http://localhost:3000/?q=%s (replace localhost:3000 with your domain if hosted remotely)
- Set a keyword (e.g., perplexica or px)
Exposing to network
Perplexica runs on Next.js and works on your local network by default. For external access:
- Port forwarding: Configure your router to forward port 3000
- Reverse proxy: Use Nginx or Caddy for HTTPS and custom domains
- Cloud deployment: See the one-click deployment options below
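For the reverse-proxy option, a minimal Caddyfile sketch (the domain is a placeholder; Caddy provisions HTTPS automatically for real domains):

```
perplexica.example.com {
    reverse_proxy localhost:3000
}
```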
One-click deployment
Deploy Perplexica to the cloud with these providers:
- Sealos: Deploy to Sealos
- RepoCloud: Deploy to RepoCloud
- ClawCloud: Run on ClawCloud
- Hostinger: Deploy on Hostinger
Next steps
API integration
Integrate Perplexica’s search engine into your applications
Architecture
Learn how Perplexica works under the hood
Contributing
Contribute to Perplexica’s development
Community
Join the Discord community for help and discussions