Quick Start
Deploy Onyx and start chatting in minutes with a single command.
Architecture
Understand how Onyx’s backend, search, and AI layers fit together.
Core Features
Explore Chat, Agents, RAG, Deep Research, and more.
Connectors
Connect Onyx to Slack, Confluence, GitHub, Google Drive, and 40+ more sources.
What Onyx can do
Chat with any LLM
Works with OpenAI, Anthropic, Gemini, Ollama, vLLM, and any LiteLLM-compatible provider.
RAG over your data
Hybrid search plus knowledge graph retrieval across millions of indexed documents.
Custom Agents
Build AI Agents with custom instructions, knowledge bases, and tool access.
Deep Research
Multi-step agentic research that synthesizes answers from across your knowledge base.
MCP & Actions
Give agents the ability to act on external systems via the Model Context Protocol (MCP).
Enterprise Security
SSO (OIDC/SAML/OAuth2), RBAC, document-level permissions, and encrypted credentials.
Get started
Deploy Onyx
Run one command to spin up the full stack with Docker Compose. See the Quick Start guide.
Connect your knowledge sources
Add connectors to Slack, Confluence, Google Drive, GitHub, and more from the Admin panel. See Connectors.
Configure your LLM
Point Onyx at your preferred LLM provider — cloud or self-hosted. See LLM Providers.
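As a rough sketch, the first step looks something like the following. The repository URL, directory layout, and compose file name here are assumptions for illustration; follow the Quick Start guide for the exact, current command.

```shell
# Hypothetical quick-start sketch: the repo URL, path, and compose file
# name are assumptions -- see the Quick Start guide for the exact command.
git clone https://github.com/onyx-dot-app/onyx.git
cd onyx/deployment/docker_compose

# Bring up the full stack in the background with Docker Compose.
docker compose up -d
```

Once the containers are healthy, open the web UI and use the Admin panel to add connectors and select an LLM provider, as in steps 2 and 3.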
Deploy anywhere
Docker Compose
The fastest way to get Onyx running locally or on a single server.
Kubernetes
Production-grade deployment with Helm charts for large teams.
Terraform
Infrastructure-as-code deployment for teams already using Terraform.
Cloud Providers
Step-by-step guides for AWS EKS, Azure, GCP, and more.
