

Docker is the easiest way to deploy DeepWiki Open in a reproducible environment. The project ships a multi-stage Dockerfile that builds the Python backend and Next.js frontend into a single image, and a docker-compose.yml that wires up environment variables, port mappings, and persistent volume mounts for you.
Repository data, embeddings, and generated wiki cache are stored in ~/.adalflow on your host machine. This directory is mounted into the container so your data survives container restarts and upgrades.

Deployment options

Docker Compose is the recommended method for most deployments. It reads your .env file automatically and configures logging, health checks, and memory limits.

1. Clone the repository:
git clone https://github.com/AsyncFuncAI/deepwiki-open.git
cd deepwiki-open

Then create a .env file in the project root with your API keys:
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key

# Optional providers
OPENROUTER_API_KEY=your_openrouter_api_key
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
AZURE_OPENAI_VERSION=your_azure_openai_version
OLLAMA_HOST=your_ollama_host

# Optional: embedder type (openai | google | ollama)
DEEPWIKI_EMBEDDER_TYPE=google

# Optional: logging
LOG_LEVEL=INFO
LOG_FILE_PATH=api/logs/application.log
2. Start the stack:
docker-compose up
To run in the background:
docker-compose up -d
Open http://localhost:3000 once the container is healthy.
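When running detached, you can follow the container's output with Docker Compose's log command (the service name `deepwiki` comes from the docker-compose.yml below):

```shell
# Stream logs from the deepwiki service; Ctrl-C stops following
docker-compose logs -f deepwiki
```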
Full docker-compose.yml reference:
services:
  deepwiki:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "${PORT:-8001}:${PORT:-8001}"  # API port
      - "3000:3000"  # Next.js port
    env_file:
      - .env
    environment:
      - PORT=${PORT:-8001}
      - NODE_ENV=production
      - SERVER_BASE_URL=http://localhost:${PORT:-8001}
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
      - LOG_FILE_PATH=${LOG_FILE_PATH:-api/logs/application.log}
    volumes:
      - ~/.adalflow:/root/.adalflow      # Persist repository and embedding data
      - ./api/logs:/app/api/logs          # Persist log files across container restarts
    mem_limit: 6g
    mem_reservation: 2g
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:${PORT:-8001}/health"]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 30s
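The `${PORT:-8001}` entries in the file use Compose's variable substitution, which follows shell default-value expansion: if PORT is unset or empty, 8001 is used. A quick shell illustration:

```shell
# Compose substitutes ${PORT:-8001} using shell default-value rules
unset PORT
echo "${PORT:-8001}"   # PORT unset -> prints the default, 8001

PORT=9000
echo "${PORT:-8001}"   # PORT set -> prints 9000
```

Setting PORT in your .env file therefore changes both the published API port and the SERVER_BASE_URL in one place.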

Volume mounts explained

The two volume mounts used in both docker-compose.yml and the docker run examples serve distinct purposes:
| Host path | Container path | Contents |
| --- | --- | --- |
| ~/.adalflow | /root/.adalflow | Cloned repos (repos/), vector indexes (databases/), and wiki cache (wikicache/) |
| ./api/logs | /app/api/logs | Application log files |
Without the ~/.adalflow mount, every time the container is recreated you lose the cloned repositories and vector indexes, triggering a full re-clone and re-index of every repository you have processed. The logs mount is optional but useful for auditing and debugging production deployments.
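If you prefer plain docker run over Compose, the same mounts and ports can be passed on the command line. A sketch, assuming you build the image locally and the `deepwiki-open` tag (the tag name is an assumption, not something the project mandates):

```shell
# Build the image from the repository's Dockerfile (tag name is an assumption)
docker build -t deepwiki-open .

# Run detached with the same env file, ports, and volume mounts
# as docker-compose.yml (API on the default port 8001)
docker run -d \
  --env-file .env \
  -p 8001:8001 \
  -p 3000:3000 \
  -v ~/.adalflow:/root/.adalflow \
  -v "$(pwd)/api/logs:/app/api/logs" \
  deepwiki-open
```

Note that this bypasses the memory limits and health check that Compose configures for you.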

Health check

The docker-compose.yml configures an HTTP health check against the backend API:
curl -f http://localhost:8001/health
The container is marked healthy after the first successful check. The start_period gives the backend a 30-second grace window during which failed checks do not count against it; after that, Docker probes every 60 seconds with a 10-second timeout and marks the container unhealthy after 3 consecutive failures.
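You can query the health status Docker has recorded for the running container. The container name below is an assumption (Compose derives it from the project and service names); run `docker ps` to find yours:

```shell
# Prints starting, healthy, or unhealthy for the named container
docker inspect --format '{{.State.Health.Status}}' deepwiki-deepwiki-1
```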
