Requirements: Docker Engine and Docker Compose 2.24.0+ must be installed. For standard mode, 10 GB of Docker memory and 32 GB of free disk space are recommended. A lite mode is available for lower-resource machines (4 GB RAM, 16 GB disk).
## One-line install
The fastest way to deploy Onyx is with the install script. It handles Docker setup, downloads configuration files, and starts all services. The script will:

- Verify Docker and Docker Compose are installed (and install them on Linux if needed)
- Check system resources
- Ask which deployment mode you want (lite or standard)
- Download `docker-compose.yml` and `env.template` from the Onyx GitHub repository
- Generate a secure `USER_AUTH_SECRET`
- Pull Docker images and start all containers
- Wait for the service to become healthy and print the access URL
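The installer's URL is not reproduced here, but the steps above can be sketched in plain shell. This is an illustrative outline only, not the real script; the download URLs are deliberately elided:

```shell
#!/bin/sh
# Illustrative outline of what the Onyx installer automates; not the real script.
set -eu

# 1. Preconditions: Docker and Docker Compose v2 must be on PATH.
command -v docker >/dev/null 2>&1 || echo "warning: Docker not found; install it first" >&2

# 2. Fetch docker-compose.yml and env.template from the Onyx GitHub repository
#    (URLs omitted here; the real script knows them).

# 3. Generate a secure USER_AUTH_SECRET (32 random bytes, hex-encoded).
echo "USER_AUTH_SECRET=$(openssl rand -hex 32)" >> .env

# 4. Pull images and start containers (docker compose up -d), then poll the
#    access URL until the stack reports healthy.
```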
## Install script options
## Manual Docker Compose setup
If you prefer to manage the deployment yourself, follow these steps.

### Create your .env file from the template

Copy the environment template and open it for editing. The most important variables to configure are covered in the environment variable reference below.
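As a sketch, assuming `env.template` sits in the current directory alongside `docker-compose.yml`:

```shell
cp env.template .env
# Append a high-entropy USER_AUTH_SECRET; 32 hex-encoded random bytes is one
# reasonable choice (the exact strength requirement is an assumption here):
echo "USER_AUTH_SECRET=$(openssl rand -hex 32)" >> .env
# Then review the remaining variables, e.g.:  ${EDITOR:-nano} .env
```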
### Start all services
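The original command is not reproduced in this copy of the guide; with Docker Compose v2 the standard invocation, run from the directory containing `docker-compose.yml`, is:

```shell
docker compose up -d    # pulls images on first run, then starts all containers
docker compose ps       # confirm the containers are coming up
```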
The stack defined in `docker-compose.yml` includes the following services:

| Service | Image | Role |
|---|---|---|
| api_server | onyxdotapp/onyx-backend | FastAPI backend, port 8080 (internal) |
| background | onyxdotapp/onyx-backend | Celery workers (supervisord) |
| web_server | onyxdotapp/onyx-web-server | Next.js frontend |
| nginx | nginx:1.25.5-alpine | Reverse proxy, ports 80 and 3000 |
| relational_db | postgres:15.2-alpine | PostgreSQL database |
| index | vespaengine/vespa:8.609.39 | Vespa vector database |
| opensearch | opensearchproject/opensearch:3.4.0 | Full-text search index |
| cache | redis:7.4-alpine | Redis for task queues and caching |
| inference_model_server | onyxdotapp/onyx-model-server | Embedding and inference model server |
| indexing_model_server | onyxdotapp/onyx-model-server | Dedicated model server for indexing |
| minio | minio/minio | S3-compatible object storage (file uploads) |
| code-interpreter | onyxdotapp/code-interpreter | Sandboxed code execution |
The `api_server` container runs database migrations automatically on startup (`alembic upgrade head`) before serving traffic.

### Wait for services to become healthy
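The guide's original monitoring snippet is not reproduced here; these standard Compose commands are common choices for watching startup:

```shell
docker compose ps                   # the STATUS column shows "healthy" per service
docker compose logs -f api_server   # follow backend logs during first boot
```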
Every service has a health check configured. The `api_server` health check polls http://localhost:8080/health every 30 seconds. Full startup, including Vespa schema deployment and model loading, typically takes 2–5 minutes.

### Open Onyx in your browser
Once all containers are healthy, open http://localhost or http://localhost:3000. Both map to port 80 on the nginx container; port 3000 is a convenience alias.
In development mode (using `docker-compose.dev.yml`), the API server is also exposed directly at http://localhost:8080.

### Create your admin account
On first visit, Onyx will direct you to create an account:

- Navigate to http://localhost/auth/signup
- Register with your email and a password
- The first user to register automatically receives admin privileges
The admin panel is available at http://localhost/admin. From there you can:

- Configure LLM providers (OpenAI, Anthropic, Ollama, etc.)
- Add connectors to your knowledge sources
- Manage users and roles
- Create custom Agents
## Deployment modes
### Standard mode (recommended)
Full deployment with all services running: Vespa vector database, Redis, model servers, and all Celery background workers. Enables connectors, RAG search, Deep Research, and hybrid retrieval.

Minimum resources: 10 GB Docker memory, 32 GB free disk.
### Lite mode
A minimal deployment without Vespa, Redis, or model servers. Suitable for low-resource environments or quick evaluation.

Lite mode disables: connectors, RAG search, and embedding-based retrieval.

Lite mode still supports: LLM chat, tools, user file uploads, Projects, Agent knowledge, code interpreter, and image generation.

Minimum resources: 4 GB Docker memory, 16 GB free disk.
### Onyx Craft (optional)
Enables AI-powered web app building within Onyx. Requires the `craft-latest` image tag and `ENABLE_CRAFT=true`. Craft is incompatible with lite mode.

## Environment variable reference
The `.env` file (copied from `env.template`) controls all runtime configuration. For the full list of variables and their defaults, see `env.template` in the repository.
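A quick way to list which variables your `.env` actually sets, ignoring comments and blank lines (plain POSIX tools, nothing Onyx-specific):

```shell
grep -Ev '^[[:space:]]*(#|$)' .env | sort
```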
## Next steps
- **Docker deployment guide**: Production hardening, SSL/TLS setup, and Nginx configuration.
- **Architecture**: Learn how the services, workers, and data flow fit together.
- **LLM providers**: Connect Onyx to OpenAI, Anthropic, Ollama, and others.
- **Connectors**: Index documents from Slack, Confluence, GitHub, and 40+ sources.
