Perplexica offers one-click deployment options on several cloud platforms. These deployments handle all configuration automatically, getting you up and running in minutes.

Available platforms

Sealos

Deploy Perplexica on the Sealos cloud platform. Features:
  • Automatic container orchestration
  • Built-in SearxNG instance
  • Persistent storage for your data
  • Easy scaling options

RepoCloud

Deploy on RepoCloud with automated setup. Features:
  • Managed Docker hosting
  • Automatic updates available
  • Monitoring and logging
  • Custom domain support

ClawCloud

Run Perplexica on ClawCloud infrastructure. Features:
  • Fast deployment from templates
  • Pre-configured environment
  • Reliable uptime

Hostinger VPS

Deploy on a Hostinger VPS with Docker. Features:
  • VPS with full control
  • Docker Compose deployment
  • Scalable resources
  • Root access for customization
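On a VPS deployment like Hostinger, a Docker Compose setup along these lines is typical. This is a minimal sketch: the image name, environment variable, and volume paths are illustrative assumptions, not the official compose file — check the Perplexica repository for the canonical `docker-compose.yaml`.

```yaml
# Illustrative sketch only -- consult the official Perplexica
# docker-compose.yaml for the real service definitions.
services:
  searxng:
    image: searxng/searxng            # bundled metasearch backend
    volumes:
      - ./searxng:/etc/searxng

  perplexica:
    image: itzcrazykns1337/perplexica # assumed image name
    ports:
      - "3000:3000"                   # web UI
    environment:
      - SEARXNG_API_URL=http://searxng:8080  # assumed variable name
    volumes:
      - ./data:/home/perplexica/data  # persistent settings and uploads
    depends_on:
      - searxng
```

The persistent volumes are what preserve your settings and uploads across container recreation, as described below.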

Post-deployment configuration

After deploying to any platform:
1. Access the web interface

Navigate to the URL provided by your cloud platform (usually shown after deployment completes).
2. Complete initial setup

Configure your AI provider settings:
  • Add API keys for OpenAI, Anthropic, Groq, or other providers
  • Or configure Ollama for local LLM usage
  • Select your preferred models
3. Test the deployment

Run a test search to ensure everything is working correctly. The bundled SearxNG instance should be automatically configured.
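From a terminal, a quick smoke test might look like the following. The URL is a placeholder for whatever your platform assigned; a `200` from the root page is the main signal that the web UI is up.

```bash
# Replace with the URL your platform assigned after deployment
INSTANCE_URL="https://your-instance-url.com"

# The web UI should answer with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" "$INSTANCE_URL/"
```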

Platform-specific considerations

Perplexica runs on Next.js and handles all API requests internally, so it works out of the box without extra network configuration. Most cloud platforms handle port mapping automatically and provide you with a public URL.
All platforms use persistent volumes to store:
  • Your application settings and API keys
  • Search history
  • Uploaded files
  • Database records
Your data is preserved across restarts and updates.
Most platforms support custom domain configuration. Check your platform’s documentation for specific instructions on:
  • Adding a custom domain
  • Configuring SSL/TLS certificates
  • DNS settings
Cloud platforms may have different resource allocations. Perplexica requires:
  • Minimum 1GB RAM (2GB recommended)
  • 2GB storage for the application
  • Additional storage for user data and uploads

Using Perplexica on the cloud

Once deployed to a cloud platform, you can:

Access from anywhere

Your Perplexica instance is accessible from any device with internet access. The cloud platform provides a stable URL.

Set as default search engine

Add Perplexica as a search engine in your browser:
  1. Open your browser settings
  2. Navigate to the ‘Search Engines’ section
  3. Add a new site search with URL: https://your-instance-url.com/?q=%s
  4. Save and use Perplexica directly from your browser’s address bar
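The `%s` in that URL is a placeholder the browser fills in with your URL-encoded query. As a rough sketch of what the address bar does behind the scenes (the template and query here are just examples):

```bash
template="https://your-instance-url.com/?q=%s"
query="open source AI search"

# Minimal URL-encoding (spaces only); real browsers encode all reserved characters
encoded=$(printf '%s' "$query" | sed 's/ /%20/g')

# Substitute the encoded query into the template, as the browser would
url="${template/\%s/$encoded}"
echo "$url"
```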

Share with your team

Cloud deployments make it easy to share Perplexica with others by simply sharing the URL.
Authentication is not yet implemented in Perplexica. If you deploy on a public cloud platform, anyone with the URL can access your instance. Consider using platform-level access controls or firewall rules if you need to restrict access.

Updating cloud deployments

Update procedures vary by platform:
  • Sealos: Use the platform’s update mechanism to pull the latest image
  • RepoCloud: Enable automatic updates or manually trigger updates from the dashboard
  • ClawCloud: Redeploy from the template to get the latest version
  • Hostinger: SSH into your VPS and pull the latest Docker image
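On the Hostinger VPS, for example, an update boils down to pulling the new image and recreating the containers. This sketch assumes a Docker Compose setup in `~/perplexica`; adjust the host and path to your install.

```bash
ssh user@your-vps-ip    # hypothetical host
cd ~/perplexica         # assumed install directory
docker compose pull     # fetch the latest images
docker compose up -d    # recreate containers; named volumes keep your data
```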
See the updating guide for detailed instructions.

Cost considerations

While Perplexica itself is free and open-source, cloud hosting incurs costs:
  • Platform fees: Each cloud provider has different pricing models
  • Resource usage: Costs scale with CPU, RAM, and storage
  • API costs: Your chosen LLM provider (OpenAI, Anthropic, etc.) may charge per API call
  • Bandwidth: Some platforms charge for data transfer
Using local LLMs via Ollama on a cloud VPS can help reduce API costs, but requires more powerful server resources.
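For instance, Ollama can run as a sidecar container on the same VPS. This is a sketch: the model name is only an example, and larger models need correspondingly more RAM.

```bash
# Run the official Ollama image, keeping downloaded models on a named volume
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# Pull a model to serve locally (example model; pick one that fits your RAM)
docker exec ollama ollama pull llama3

# Then point Perplexica's Ollama endpoint at http://<vps-ip>:11434 in its settings
```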

Next steps

Configuration

Configure AI providers and settings

Troubleshooting

Resolve common deployment issues
