## Checking service health

Start here whenever something isn't working. Each backend service also writes a debug log to `backend/log/<service>_debug.log`.
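A quick first pass over the stack might look like this, assuming the default Docker Compose deployment (`api_server` is used as an example service name; substitute the service you are debugging):

```shell
# List all services with their state and health status
docker compose ps

# Tail one service's debug log (api_server is an example; use any service name)
tail -n 100 backend/log/api_server_debug.log
```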
## Common issues
### Connector not syncing / stuck in 'in progress'
- Go to Admin → Connectors and check the connector’s sync status and last error message.
- Check the background worker logs:
- Verify the credentials are still valid — re-authenticate if the token expired.
- If the connector is stuck for more than 30 minutes, restart the background service:
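With the default Docker Compose setup, the log check and restart steps above might look like this (the compose service name `background` is an assumption based on the service names this doc references):

```shell
# Follow the background worker logs to see what the sync is doing
docker compose logs -f --tail=100 background

# Restart the background service if the sync has been stuck for a while
docker compose restart background
```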
### Chat returns an error or no response
- Go to Admin → LLM Providers and confirm your API key is saved and the model is active.
- Test the LLM connection directly from the admin panel using the Test button.
- Check `api_server` logs for error details:
- If using a self-hosted model (Ollama, vLLM), confirm the model server is reachable from within the Docker network.
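A sketch of the last two checks above. The Ollama URL and port are examples, not your actual configuration, and this assumes `curl` is available inside the `api_server` container:

```shell
# Scan recent api_server logs for errors
docker compose logs --tail=200 api_server | grep -iE 'error|exception'

# From inside the Docker network, verify the self-hosted model server responds
# (replace the URL with your configured Ollama/vLLM address)
docker compose exec api_server curl -sf http://host.docker.internal:11434/api/tags
```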
### Search returns no results
- Confirm at least one connector has completed a sync and indexed documents.
- Check Vespa health:
- Check the `index` container logs for indexing errors:
- Verify the embedding model server is running:
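The checks above might look like this with the default deployment. Port 19071 is Vespa's standard config server port, but verify it matches your setup:

```shell
# Vespa health endpoint (config server port 19071 is the Vespa default)
curl -s http://localhost:19071/state/v1/health

# Look for indexing errors in the index container
docker compose logs --tail=200 index | grep -i error

# Confirm the embedding model server container is up
docker compose ps | grep -i model
```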
### Authentication / login not working
- For OAuth (Google, GitHub), confirm the callback URL in your OAuth app settings matches your Onyx deployment URL exactly.
- For OIDC/SAML, check that `AUTH_TYPE`, `OAUTH_CLIENT_ID`, and related env vars are set in `.env`.
- Check `api_server` logs for auth-related errors on login attempts.
- For invite-only deployments, confirm the user's email was invited before they tried to sign up.
### Container keeps restarting
Check the exit code and last log lines:

Common causes:
- Postgres not ready: `api_server` and `background` wait for Postgres, but may time out if the DB is slow to start. Restart the affected service.
- Missing env var: A required variable like `AUTH_SECRET` or `POSTGRES_PASSWORD` is not set in `.env`.
- Port conflict: Another process is using port 80, 8080, or 5432. Check with `ss -tulnp`.
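The exit-code and log check above can be done per service; a sketch using `api_server` as the example:

```shell
# Exit code, and whether the kernel OOM-killed the container
docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' \
  "$(docker compose ps -q api_server)"

# Last log lines before the restart
docker compose logs --tail=50 api_server
```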
### Out of memory / OOM kills
Vespa and the embedding model server are the most memory-intensive services.
- Increase Docker Desktop’s memory limit (Mac/Windows) or the host’s available RAM.
- Use Lite mode to reduce resource usage:
- Reduce `VESPA_SEARCHER_THREADS` in `.env` (default: 2 per node).
- Use a smaller embedding model via Admin → Embeddings.
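To confirm that memory pressure is actually the cause before tuning, you can check which containers the kernel has OOM-killed:

```shell
# Report the OOMKilled flag for every container, running or exited
docker ps -aq | xargs docker inspect --format '{{.Name}} OOMKilled={{.State.OOMKilled}}'

# On a Linux host, look for kernel OOM-killer messages
sudo dmesg -T | grep -i 'killed process'
```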
### Postgres connection errors
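Assuming the default Compose deployment (the Postgres service name `relational_db` and user `postgres` are assumptions; adjust to your setup), connecting directly and counting active connections looks like:

```shell
# Open a psql shell inside the Postgres container
docker compose exec relational_db psql -U postgres

# Count active connections against the pool limit
docker compose exec relational_db psql -U postgres \
  -c "SELECT count(*) FROM pg_stat_activity;"
```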
To debug, connect to Postgres directly and check the number of active connections. If connections are exhausted, increase `POSTGRES_POOL_SIZE` in `.env` or restart the API server.

## Getting support
- **GitHub Issues**: report bugs or search existing issues.
- **Discord Community**: ask questions and get help from the community.
