## Requirements
### Software
- Docker 20.10 or later
- Docker Compose v2 (the `docker compose` plugin, not the standalone `docker-compose` binary)
### Recommended hardware
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 16 GB | 32 GB |
| Disk | 50 GB | 200 GB (SSD) |
## Quickstart
The guided install script is the fastest way to get Onyx running. It downloads the Compose files, prompts for basic settings, and starts all services. Configuration files are kept in an `onyx_data/` directory; application data (chats, users, indexed documents) is stored in named Docker volumes managed by Docker itself.
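Assuming `install.sh` has already been downloaded from the Onyx repository (the download URL is not shown in this section), a first run is simply:

```shell
chmod +x install.sh
./install.sh   # interactive: prompts for basic settings, then starts all services
```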
Manage the deployment after install:
| Action | Command |
|---|---|
| Shut down without data loss | `./install.sh --shutdown` |
| Delete all data | `./install.sh --delete-data` |
| Upgrade to latest | `./install.sh --shutdown`, then re-run `./install.sh` |
## Manual setup
Use these steps if you prefer to manage the Compose files directly.

1. **Edit `.env`**

   Open `.env` and configure the values relevant to your deployment. The most important variables are near the top of the file.

2. **Start all services**
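With `.env` configured, the standard Compose invocation brings everything up (add `-f` overlay flags as needed for the variants described later in this page):

```shell
docker compose up -d   # pull (if needed), create, and start all services
docker compose ps      # verify the containers are up and healthy
```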
3. **Open Onyx**

   Navigate to http://localhost:3000. The nginx reverse proxy also listens on port 80.
## Services
The following services are defined in `docker-compose.yml`:
| Service | Image | Role |
|---|---|---|
| `api_server` | `onyxdotapp/onyx-backend:latest` | FastAPI backend; runs Alembic migrations on startup |
| `background` | `onyxdotapp/onyx-backend:latest` | Celery workers (document fetching, indexing, pruning) |
| `web_server` | `onyxdotapp/onyx-web-server:latest` | Next.js frontend |
| `inference_model_server` | `onyxdotapp/onyx-model-server:latest` | Serves embedding/re-rank models for search |
| `indexing_model_server` | `onyxdotapp/onyx-model-server:latest` | Dedicated model server for the indexing pipeline |
| `relational_db` | `postgres:15.2-alpine` | Primary relational database |
| `index` | `vespaengine/vespa:8.609.39` | Vector and keyword search engine |
| `opensearch` | `opensearchproject/opensearch:3.4.0` | Full-text (keyword) search index |
| `cache` | `redis:7.4-alpine` | Celery broker and application cache |
| `minio` | `minio/minio:RELEASE.2025-07-23T15-54-02Z-cpuv1` | S3-compatible file store (profile: `s3-filestore`) |
| `nginx` | `nginx:1.25.5-alpine` | Reverse proxy; exposes ports 80 and 3000 |
| `code-interpreter` | `onyxdotapp/code-interpreter:latest` | Sandboxed Python execution for Onyx Craft |
`minio` only starts when `COMPOSE_PROFILES=s3-filestore` is set in your `.env`. To use PostgreSQL for file storage instead, set `FILE_STORE_BACKEND=postgres` and remove `s3-filestore` from `COMPOSE_PROFILES`; this eliminates the MinIO dependency.

### Common commands
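Day-to-day management uses the usual Compose subcommands; for example (standard `docker compose` usage, not Onyx-specific):

```shell
docker compose logs -f api_server    # tail one service's logs
docker compose restart background    # restart the Celery workers
docker compose down                  # stop and remove containers (named volumes are kept)
```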
## Deployment variants
### Lite mode
`docker-compose.onyx-lite.yml` is a Compose overlay that disables the resource-heavy services:

- Vespa (`index`) and both model servers are moved to the `vectordb`/`inference` profiles.
- Redis (`cache`) is moved to the `redis` profile; PostgreSQL handles caching instead.
- OpenSearch is moved to the `opensearch` profile.
- MinIO is moved to the `s3-filestore` profile; PostgreSQL handles file storage instead.
- The `background` Celery worker is moved to the `background` profile; the API server handles background tasks directly via FastAPI `BackgroundTasks`.
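Compose overlays are applied with multiple `-f` flags, base file first, so a lite-mode start might look like this (standard Compose overlay usage):

```shell
docker compose -f docker-compose.yml -f docker-compose.onyx-lite.yml up -d
```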
### Production setup
`docker-compose.prod.yml` adds TLS termination via Let’s Encrypt. Before using it:
1. **Configure your domain**

   Set `DOMAIN` in `.env.nginx` to your fully qualified domain name (e.g., `onyx.example.com`). Ensure DNS is pointing to the host’s public IP.

2. **Harden the environment file**

   Change the default Postgres credentials, set a strong `USER_AUTH_SECRET`, and set `AUTH_TYPE` to a production method such as `oidc`.

3. **Remove internal port exposures**

   In production, only nginx should be reachable from outside. The ports for `api_server`, `relational_db`, `index`, `cache`, and `minio` are commented out in `docker-compose.yml` by default; keep them that way.

## Upgrading

Onyx follows SemVer and maintains backwards compatibility across minor versions. Set `IMAGE_TAG` in `.env` before pulling.
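A typical upgrade sequence under these conventions (the version tag below is a placeholder, not a real release):

```shell
# Pin the desired release in .env (v1.2.3 is illustrative)
sed -i 's/^IMAGE_TAG=.*/IMAGE_TAG=v1.2.3/' .env
docker compose pull    # fetch the new images
docker compose up -d   # recreate containers; migrations run on api_server startup
```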
## GPU support
The model servers support NVIDIA GPUs via the `nvidia-container-toolkit`. To enable them, uncomment the `deploy.resources.reservations` block in the `inference_model_server` and `indexing_model_server` service definitions in `docker-compose.yml`:
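Uncommented, such a block typically follows the standard Compose GPU device-reservation syntax (the exact contents of the shipped file may differ):

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia          # requires nvidia-container-toolkit on the host
          count: all              # or an integer to reserve specific GPUs
          capabilities: [gpu]
```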
