Supported providers
Perplexica supports the following LLM providers:
- Ollama - Local LLM server for running models on your own hardware
- OpenAI - GPT models including GPT-4, GPT-4o, and GPT-3.5 Turbo
- Anthropic - Claude models from Anthropic
- Gemini - Google’s Gemini models
- Groq - Fast LLM inference service
- LM Studio - Local LLM server with OpenAI-compatible API
- Lemonade - Self-hosted LLM server
- Transformers - Browser-based embedding models (no chat support)
Configuring providers
Providers are configured during the initial setup screen when you first launch Perplexica. You can also add or modify providers later through the settings UI.
Ollama
Run local LLM models on your own hardware using Ollama.
Base URL: The base URL for your Ollama server
Default: http://localhost:11434
Docker users:
- Windows/Mac: http://host.docker.internal:11434
- Linux: http://<your-host-ip>:11434
Troubleshooting Ollama connection
If you’re encountering connection errors:
- Verify the API URL is correct in settings
- Use the correct URL format for your operating system (see above)
- Linux users: Expose Ollama to the network by adding Environment="OLLAMA_HOST=0.0.0.0:11434" to /etc/systemd/system/ollama.service, then reload systemd and restart the Ollama service
- Ensure port 11434 is not blocked by your firewall
Environment variable: OLLAMA_BASE_URL
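The Linux steps above amount to a small systemd drop-in. A minimal sketch, assuming the standard Ollama systemd install (the drop-in path is one common convention; editing the unit file directly works too):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Makes Ollama listen on all interfaces instead of only localhost.
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

Apply it with `sudo systemctl daemon-reload` followed by `sudo systemctl restart ollama`, then confirm the server is reachable from the Perplexica host on port 11434.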
OpenAI
Use OpenAI’s GPT models or OpenAI-compatible APIs.
API Key: Your OpenAI API key
Base URL: The base URL for the OpenAI API
Default: https://api.openai.com/v1
For OpenAI-compatible servers, use your custom URL.
Environment variables: OPENAI_API_KEY, OPENAI_BASE_URL
- GPT-3.5 Turbo
- GPT-4, GPT-4 Turbo, GPT-4o, GPT-4o Mini
- GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano
- GPT-5 series (Nano, Mini, Pro, 5.1, 5.2, 5.2 Pro)
- o1, o3, o3 Mini, o4 Mini
- Text Embedding 3 Small/Large (embeddings)
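An "OpenAI-compatible" server is simply one that serves the same REST routes under a different base URL, so clients only need the base URL swapped. A minimal sketch of that idea (the helper name is illustrative, not Perplexica's actual code):

```typescript
// Illustrative only: the same client logic targets OpenAI or a compatible
// server purely by changing the configured base URL.
function chatCompletionsUrl(baseUrl: string): string {
  // Strip trailing slashes, then append the standard chat-completions route.
  return `${baseUrl.replace(/\/+$/, "")}/chat/completions`;
}

chatCompletionsUrl("https://api.openai.com/v1"); // → https://api.openai.com/v1/chat/completions
chatCompletionsUrl("http://localhost:1234/v1");  // → http://localhost:1234/v1/chat/completions
```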
Anthropic
Use Claude models from Anthropic.
API Key: Your Anthropic API key
Environment variable: ANTHROPIC_API_KEY
Anthropic models are fetched dynamically from the API. The provider automatically retrieves available Claude models when configured.
Gemini
Use Google’s Gemini models.
API Key: Your Google AI API key for Gemini
Environment variable: GEMINI_API_KEY
Gemini supports both chat and embedding models. Available models are fetched automatically from the Gemini API.
Groq
Fast LLM inference with Groq.
API Key: Your Groq API key
Environment variable: GROQ_API_KEY
Groq only supports chat models, not embeddings. Available models are fetched from the Groq API.
LM Studio
Local LLM server with an OpenAI-compatible API.
Base URL: The base URL for your LM Studio server
Default: http://localhost:1234
The /v1 suffix is added automatically if not present.
Environment variable: LM_STUDIO_BASE_URL
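The automatic /v1 handling can be sketched as a small normalizer (illustrative, not Perplexica's actual implementation):

```typescript
// Illustrative sketch of the described behavior: append /v1 only when missing.
function normalizeLmStudioUrl(baseUrl: string): string {
  const trimmed = baseUrl.replace(/\/+$/, "");
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

normalizeLmStudioUrl("http://localhost:1234");    // → http://localhost:1234/v1
normalizeLmStudioUrl("http://localhost:1234/v1"); // → http://localhost:1234/v1
```

Either form of the URL therefore works in settings; the suffix is never doubled.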
Lemonade
Self-hosted LLM server.
Base URL: The base URL for your Lemonade server
Example: https://api.lemonade.ai/v1
API Key: Your Lemonade API key (optional)
Environment variables: LEMONADE_BASE_URL, LEMONADE_API_KEY (optional)
Troubleshooting Lemonade connection
If you’re encountering connection errors:
- Verify the API URL in settings
- Use the correct URL format for your OS:
  - Windows/Mac (Docker): http://host.docker.internal:8000
  - Linux (Docker): http://<your-host-ip>:8000
- Ensure Lemonade server is running and accessible
- Verify Lemonade accepts connections from all interfaces (0.0.0.0)
- Check that port 8000 is not blocked by firewall
Transformers
Browser-based embedding models using Transformers.js. Transformers requires no configuration and provides embedding models only (no chat support).
- all-MiniLM-L6-v2 (Xenova/all-MiniLM-L6-v2)
- mxbai-embed-large-v1 (mixedbread-ai/mxbai-embed-large-v1)
- nomic-embed-text-v1 (Xenova/nomic-embed-text-v1)
Adding custom models
You can add custom models to any configured provider:
Add model
Click “Add Custom Model” and enter:
- Model Name: Display name for the model
- Model Key: The actual model identifier used by the API
- Type: Chat or Embedding
Managing providers
Add a new provider
You can add multiple instances of the same provider type (e.g., two different OpenAI configurations):
- Open settings and navigate to the providers section
- Click “Add Provider”
- Select the provider type
- Configure the required parameters
- Give the provider a descriptive name
- Save the configuration
Update provider settings
To modify an existing provider:
- Navigate to settings
- Find the provider you want to update
- Click “Edit”
- Update the configuration parameters
- Save your changes
Remove a provider
To remove a provider:
- Navigate to settings
- Find the provider you want to remove
- Click “Delete” or “Remove”
- Confirm the deletion
Environment variable configuration
Instead of configuring providers through the UI, you can set them using environment variables. When environment variables are detected, providers are automatically configured on startup.
Example Docker configuration
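A minimal Docker Compose sketch, assuming you run Perplexica via Compose and using the environment variable names listed in the sections above; the service name, image, and port mapping are illustrative and should be adjusted to your setup:

```yaml
# docker-compose.yaml (illustrative; replace image/port with your Perplexica setup)
services:
  perplexica:
    image: perplexica # your Perplexica image or build
    ports:
      - "3000:3000" # adjust to the port your deployment exposes
    environment:
      # Configure only the providers you use; each is picked up on startup.
      - OPENAI_API_KEY=sk-... # your OpenAI key
      - OLLAMA_BASE_URL=http://host.docker.internal:11434 # Windows/Mac Docker
      - GROQ_API_KEY=... # your Groq key
```

On startup, any provider whose variables are set is configured automatically, so no UI setup is needed for those providers.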