Docker is the fastest way to run DeepWiki Open. A single image bundles the Python API server and the Next.js frontend together, so you only need to supply your API keys and run one command. All cloned repositories, embeddings, and cached wiki content are stored in ~/.adalflow on your host machine, so your data persists across container restarts.
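The "one command" flow can be sketched with a plain docker run invocation. The image name and tag below are assumptions for illustration; check the repository's Dockerfile or published packages for the real ones, and prefer the Docker Compose setup described next.

```shell
# Hypothetical single-container run; image name, tag, and ports are
# assumptions mirroring the compose setup described below.
docker run -d --name deepwiki \
  -p 8001:8001 -p 3000:3000 \
  -e GOOGLE_API_KEY=your_google_api_key \
  -e OPENAI_API_KEY=your_openai_api_key \
  -v ~/.adalflow:/root/.adalflow \
  deepwiki-open:latest
```

The -v ~/.adalflow:/root/.adalflow mount is what carries your data across container restarts; without it, every run starts from scratch.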
Docker Compose is the recommended approach. It reads your API keys from a .env file and handles port mapping, volume mounts, and health checks automatically.
1. Clone the repository

git clone https://github.com/AsyncFuncAI/deepwiki-open.git
cd deepwiki-open
2. Create your .env file

Create a .env file in the project root with your API keys:
.env
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
# Optional: Use Google AI embeddings instead of OpenAI
DEEPWIKI_EMBEDDER_TYPE=google
# Optional: OpenRouter models
OPENROUTER_API_KEY=your_openrouter_api_key
# Optional: Azure OpenAI models
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
AZURE_OPENAI_VERSION=your_azure_openai_version
# Optional: External Ollama server
OLLAMA_HOST=your_ollama_host
3. Start the stack

docker-compose up
Docker Compose will build the image (first run only), start the container, and expose the API on port 8001 and the frontend on port 3000. Open http://localhost:3000 once the health check passes.
The docker-compose.yml pre-configures the ~/.adalflow volume mount for data persistence and binds ./api/logs so log files survive restarts.
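As a quick smoke test after docker-compose up, you can poll both ports from another terminal. The /health path on the API is an assumption here; adjust it to whatever endpoint your compose health check actually defines.

```shell
# Wait until the frontend answers, then probe the API.
# The /health path is an assumption; check the health check definition
# in docker-compose.yml for the endpoint your build actually uses.
until curl -sf http://localhost:3000 >/dev/null; do sleep 2; done
curl -sf http://localhost:8001/health && echo "API is up"
```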

The ~/.adalflow volume

The -v ~/.adalflow:/root/.adalflow mount is what makes your data persist between container runs. DeepWiki writes three categories of data to this path:
Path                     Contents
~/.adalflow/repos/       Cloned repository source code
~/.adalflow/databases/   Embeddings and vector indexes
~/.adalflow/wikicache/   Cached generated wiki pages
Without this volume mount the container starts fresh on every run — all previously generated wikis and their embeddings are lost when the container stops.
Embeddings are model-specific. If you switch DEEPWIKI_EMBEDDER_TYPE from openai to google (or to ollama), the existing vectors in ~/.adalflow/databases/ are incompatible with the new model and must be regenerated. Delete the databases directory or re-generate wikis for any repository you want to keep using.
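The cleanup step can be sketched as follows. It uses a scratch directory as a stand-in for ~/.adalflow so the commands are safe to dry-run, and removes only the vector indexes while leaving cloned repos and cached wikis in place:

```shell
# Stand-in for ~/.adalflow so this is safe to dry-run; point ADALFLOW_DIR
# at the real path when you actually want to reset the embedding store.
ADALFLOW_DIR="$(mktemp -d)"
mkdir -p "$ADALFLOW_DIR/repos" "$ADALFLOW_DIR/databases" "$ADALFLOW_DIR/wikicache"

# Drop only the model-specific vector indexes; repos/ and wikicache/ survive,
# and DeepWiki rebuilds embeddings with the new embedder on the next generation.
rm -rf "$ADALFLOW_DIR/databases"
ls "$ADALFLOW_DIR"
```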

Next steps

Environment variables reference

Full list of all supported environment variables and their defaults.

Use Ollama locally

Run DeepWiki entirely offline with a local Ollama model for generation and embeddings.
