
This guide covers the most common problems encountered when running DeepWiki Open, along with step-by-step solutions. If a problem is not listed here, check the API server logs first — they are the fastest way to identify the root cause. Logs are written to api/logs/application.log by default and can be tailed while the server is running.
# Follow API server logs in real time
tail -f api/logs/application.log

API Key Issues

DeepWiki could not find one or more required API keys in the environment.
Causes and fixes:
  1. The .env file is missing or is not in the project root directory. Create it there:
    # Project root (same level as package.json and api/)
    GOOGLE_API_KEY=your_google_api_key
    OPENAI_API_KEY=your_openai_api_key
    
  2. The .env file exists but the API server was started before it was populated. Stop the server, verify the file, and restart:
    python -m api.main
    
  3. When running with Docker, the env file is not mounted. Either use --env-file or mount the file explicitly:
    docker run --env-file .env -p 8001:8001 -p 3000:3000 \
      ghcr.io/asyncfuncai/deepwiki-open:latest
    
Required keys by embedder type:
  • DEEPWIKI_EMBEDDER_TYPE=openai (default): OPENAI_API_KEY
  • DEEPWIKI_EMBEDDER_TYPE=google: GOOGLE_API_KEY
  • DEEPWIKI_EMBEDDER_TYPE=ollama: no key required
  • DEEPWIKI_EMBEDDER_TYPE=bedrock: AWS credentials
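As a quick sanity check, the mapping above can be expressed as a small script. This is an illustrative helper, not part of DeepWiki itself; the key names come from the list above, and the AWS variable names are the standard credential pair.

```python
import os

# Required environment variables per embedder type (from the list above).
# "bedrock" uses the standard AWS credential variables rather than one key.
REQUIRED_KEYS = {
    "openai": ["OPENAI_API_KEY"],
    "google": ["GOOGLE_API_KEY"],
    "ollama": [],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
}

def missing_keys(env):
    """Return the required keys absent from env for the selected embedder type."""
    embedder = env.get("DEEPWIKI_EMBEDDER_TYPE", "openai")  # openai is the default
    return [k for k in REQUIRED_KEYS.get(embedder, []) if not env.get(k)]

print(missing_keys(dict(os.environ)))
```

Run it in the same shell (or container) as the API server; an empty list means the required keys are visible to the process.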
The API key was found but the provider rejected it.
Fixes:
  • Copy the key directly from the provider’s console — no extra spaces, newlines, or quotation marks in the .env value:
    # Correct
    GOOGLE_API_KEY=AIzaSyABC123...
    
    # Incorrect (extra quotes)
    GOOGLE_API_KEY="AIzaSyABC123..."
    
  • Confirm the key is active and has not been revoked. Regenerate it if needed from the provider's console.
  • Verify the key has permission for the specific model being used (some keys are scoped to specific projects or tiers).
OpenRouter returned an error, usually an authentication failure or a credit exhaustion issue.
Fixes:
  1. Confirm OPENROUTER_API_KEY is set correctly in .env.
  2. Check your OpenRouter credit balance at openrouter.ai. The free tier has per-day limits.
  3. Verify the model string you selected exists on OpenRouter (e.g. openai/gpt-4o, not gpt-4o).
  4. If you recently generated a new key, restart the API server so the new value is picked up from .env.
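Fix 3 can be checked mechanically: OpenRouter model IDs are always provider-prefixed. A tiny illustrative validator (not a DeepWiki function):

```python
def is_openrouter_model_id(model):
    """OpenRouter model IDs are provider-prefixed, e.g. 'openai/gpt-4o'."""
    provider, sep, name = model.partition("/")
    return sep == "/" and bool(provider) and bool(name)

print(is_openrouter_model_id("openai/gpt-4o"))  # True
print(is_openrouter_model_id("gpt-4o"))         # False: missing provider prefix
```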
Azure OpenAI requires three environment variables; if any of them is missing or wrong, you will get an authentication error.
Fixes:
  1. Verify all three variables are set:
    AZURE_OPENAI_API_KEY=your_api_key
    AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
    AZURE_OPENAI_VERSION=2024-02-01
    
  2. Confirm the endpoint URL ends with a trailing / and matches exactly what is shown in the Azure Portal under your OpenAI resource.
  3. Confirm the API version string (AZURE_OPENAI_VERSION) matches a version supported by your deployed model. Check the Azure OpenAI documentation for current version strings.
  4. Verify the model you are using (e.g. gpt-4o) has been deployed in your Azure OpenAI resource. Azure requires explicit model deployment — having an API key alone is not enough.
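Checks 1 and 2 above can be bundled into a small self-audit script. check_azure_env is a hypothetical helper for illustration; the variable names are the ones DeepWiki expects.

```python
AZURE_VARS = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_VERSION")

def check_azure_env(env):
    """Return a list of configuration problems (empty list = looks OK)."""
    problems = [f"{v} is not set" for v in AZURE_VARS if not env.get(v)]
    endpoint = env.get("AZURE_OPENAI_ENDPOINT", "")
    if endpoint and not endpoint.endswith("/"):
        problems.append("AZURE_OPENAI_ENDPOINT should end with a trailing '/'")
    return problems
```

Note that no local check can verify step 4 (model deployment); only the Azure Portal can confirm that.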

Connection Problems

The frontend cannot reach the backend API server.
Fixes:
  1. Confirm the API server is running. You should see output like Uvicorn running on http://0.0.0.0:8001 in the terminal:
    python -m api.main
    
  2. Confirm the server is listening on port 8001 (the default). If you changed the port with the PORT environment variable, update SERVER_BASE_URL to match:
    PORT=9001
    SERVER_BASE_URL=http://localhost:9001
    
  3. If using Docker, confirm port 8001 is mapped in the run command:
    docker run -p 8001:8001 -p 3000:3000 ghcr.io/asyncfuncai/deepwiki-open:latest
    
  4. Check whether a firewall or security group is blocking port 8001 if the frontend and backend are on different machines.
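The port consistency in fix 2 is easy to get wrong after changing PORT. A hypothetical helper that checks whether SERVER_BASE_URL points at the port the API listens on:

```python
from urllib.parse import urlparse

def urls_match(server_base_url, port):
    """True if server_base_url targets the given port (scheme default otherwise)."""
    parsed = urlparse(server_base_url)
    default = 443 if parsed.scheme == "https" else 80
    return (parsed.port or default) == int(port)

print(urls_match("http://localhost:9001", "9001"))  # True
print(urls_match("http://localhost:8001", "9001"))  # False: mismatch
```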
The browser is blocking the API request due to a cross-origin policy violation.
About CORS in DeepWiki: The API is already configured to allow all origins (allow_origins=["*"]), so a genuine CORS error from DeepWiki itself is unusual.
Common causes and fixes:
  1. A reverse proxy (nginx, Caddy, etc.) in front of the API is stripping or overriding the CORS headers. Check your proxy configuration and ensure it forwards the Access-Control-Allow-Origin response header from the upstream.
  2. The frontend’s SERVER_BASE_URL environment variable is pointing to the wrong address. For local development both should be http://localhost:8001:
    # In the frontend .env.local or .env
    NEXT_PUBLIC_SERVER_BASE_URL=http://localhost:8001
    
  3. The API server crashed after startup (check its terminal). A crashed server returns no headers at all, which the browser reports as a CORS error. Restart the server and check the logs.
The Next.js frontend started successfully but wiki generation never begins.
Fixes:
  1. Open the browser developer tools (F12) → Console and Network tabs. Look for failed requests to localhost:8001.
  2. Confirm both servers are running simultaneously — they are separate processes:
    # Terminal 1: API server
    python -m api.main
    
    # Terminal 2: Frontend
    npm run dev
    
  3. Check whether the WebSocket connection to ws://localhost:8001/ws/chat is being established. In the Network tab, filter by “WS” to see WebSocket frames.
  4. Verify NEXT_PUBLIC_SERVER_BASE_URL (or SERVER_BASE_URL) points to the correct API address. The frontend reads this at build time for Next.js static rendering — if you changed it, rebuild:
    npm run build && npm run start
    

Wiki Generation Issues

Generation fails or times out on very large repositories.
Fixes:
  1. Test with a small repository first to confirm the setup is working correctly.
  2. Use the excluded_dirs and excluded_files filters to reduce the number of files indexed. These can be sent per-request via the WebSocket message or set globally in api/config/repo.json.
  3. Choose a faster or higher-context model. For large repositories, Gemini 2.5 Flash or GPT-4o are better suited than smaller models with limited context windows.
  4. Break the repository into logical subsets by using the included_dirs filter to focus generation on one area at a time.
  5. Check the repository.max_size_mb limit in api/config/repo.json. The default is 50000 MB (50 GB). If your repo reports an oversize error, check whether the repository itself is unusually large or whether the limit has been lowered in your configuration.
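As an illustration of fix 2, a per-request payload with filters might look like the sketch below. The excluded_dirs and excluded_files field names come from the text above, but the full message schema is version-dependent; treat the other values as placeholders.

```python
import json

payload = {
    "repo_url": "https://github.com/owner/repo",  # placeholder repository
    "excluded_dirs": ["node_modules", "dist", "vendor"],
    "excluded_files": ["*.lock", "*.min.js"],
}
print(json.dumps(payload, indent=2))
```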
DeepWiki could not parse the URL or identifier you entered.
Fix: use a full HTTPS URL in one of these formats:
https://github.com/owner/repo
https://gitlab.com/owner/repo
https://bitbucket.org/owner/repo
Do not use SSH URLs (git@github.com:...), short owner/repo slugs without the host, or URLs with trailing slashes or query strings.
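The accepted and rejected forms above can be captured in one regular expression. This validator is illustrative, not DeepWiki's actual parser:

```python
import re

# Accepts only the HTTPS forms listed above; rejects SSH URLs, bare
# owner/repo slugs, trailing slashes, and query strings.
REPO_URL = re.compile(
    r"^https://(github\.com|gitlab\.com|bitbucket\.org)/[^/\s]+/[^/\s?#]+$"
)

def is_valid_repo_url(url):
    return bool(REPO_URL.match(url))

print(is_valid_repo_url("https://github.com/owner/repo"))  # True
print(is_valid_repo_url("git@github.com:owner/repo.git"))  # False
```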
DeepWiki could not clone or read the private repository.
Fixes:
  1. Click "+ Add access tokens" in the UI and enter a valid personal access token (PAT) with repo scope (GitHub) or read_repository scope (GitLab).
  2. Confirm the token has not expired. GitHub fine-grained tokens and GitLab tokens both have configurable expiry dates.
  3. Verify the token belongs to an account that has at least read access to the repository.
  4. For Bitbucket, use an App Password rather than a personal account password.
  5. If the API server is behind a firewall or outbound traffic is restricted, it may not be able to reach GitHub/GitLab/Bitbucket. Confirm the server has outbound HTTPS access to the VCS host.
A diagram in the generated wiki does not render and shows a syntax error or a broken frame.
About diagram errors: DeepWiki automatically attempts to repair malformed Mermaid syntax before rendering. Most errors are handled without user intervention.
If the auto-fix does not work:
  1. Refresh the page — the repair logic runs on each render attempt.
  2. Regenerate the specific wiki page. The LLM occasionally produces subtly invalid Mermaid syntax that varies between runs.
  3. If diagrams consistently fail for a repository, try a different model provider. Some models produce more consistent Mermaid output than others.
  4. Check the browser console for the specific Mermaid parse error message to identify which construct is failing.

Embedding Issues

After changing DEEPWIKI_EMBEDDER_TYPE, previously indexed repositories return low-quality or irrelevant results.
Cause: Different embedding models produce vectors in different spaces. Embeddings generated by text-embedding-3-small (OpenAI) cannot be compared to embeddings generated by gemini-embedding-001 (Google); they are numerically incompatible.
Fix: Delete the cached embeddings for the affected repository and allow DeepWiki to re-index it with the new embedder. Embeddings are stored under ~/.adalflow/databases/:
# Remove all cached databases (will re-index on next generation)
rm -rf ~/.adalflow/databases/
To preserve other repositories, identify the specific database directory for the affected repo and remove only that subdirectory.
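A toy example of why mixed embeddings fail. The dimensions below are illustrative only; the point is that vectors from different models cannot be compared, and even same-dimension vectors from different models occupy unrelated spaces:

```python
import math

def cosine(a, b):
    """Cosine similarity; fails fast when vector dimensions differ."""
    if len(a) != len(b):
        raise ValueError("embeddings come from incompatible models")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

old_vec = [0.1] * 1536   # e.g. a cached OpenAI embedding (size illustrative)
new_vec = [0.1] * 3072   # e.g. a new Google embedding (size illustrative)
try:
    cosine(old_vec, new_vec)
except ValueError as err:
    print(err)  # this mismatch is why the cache must be deleted and re-indexed
```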
Wiki generation fails with a connection error when DEEPWIKI_EMBEDDER_TYPE=ollama.
Fixes:
  1. Confirm Ollama is running:
    ollama serve
    
  2. Confirm the nomic-embed-text model is pulled:
    ollama pull nomic-embed-text
    
  3. If Ollama is running on a different machine or a non-default port, set OLLAMA_HOST:
    export OLLAMA_HOST=http://192.168.1.100:11434
    
  4. Verify the Ollama server is reachable from the machine running DeepWiki:
    curl http://localhost:11434/api/tags
    
    A successful response lists the available models. If this fails, Ollama is not running or the host/port is wrong.

Common Solutions

The most effective first step for any issue is restarting both the API server and the frontend in the correct order:
# Stop both processes (Ctrl+C in each terminal), then:

# Terminal 1 — restart API server
python -m api.main

# Terminal 2 — restart frontend (wait for API to be ready first)
npm run dev
When using Docker Compose:
docker-compose down && docker-compose up
Open the browser console (F12 → Console) and look for:
  • Red error messages from JavaScript
  • Failed network requests (switch to the Network tab and filter by failed/errored requests)
  • WebSocket connection errors (filter by “WS”)
Copy the full error message when reporting issues — it identifies exactly which request failed and what error was returned.
Enable debug logging to get more detail from the API server:
export LOG_LEVEL=DEBUG
python -m api.main
Or in Docker Compose, add LOG_LEVEL=DEBUG to your .env file and restart. Debug logs include model request details, embedding batch sizes, cache read/write attempts, and file traversal decisions.
Log files are written to api/logs/application.log by default. You can redirect them to a different path with LOG_FILE_PATH (the path must be within api/logs/ for security reasons):
export LOG_FILE_PATH=api/logs/debug.log
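The path restriction can be illustrated with a small check. This mirrors the constraint described above; DeepWiki's actual validation code may differ:

```python
import os

LOG_DIR = os.path.abspath("api/logs")

def is_allowed_log_path(path):
    """Reject paths that resolve outside api/logs/ (e.g. via '..')."""
    resolved = os.path.abspath(path)
    return os.path.commonpath([resolved, LOG_DIR]) == LOG_DIR

print(is_allowed_log_path("api/logs/debug.log"))         # True
print(is_allowed_log_path("api/logs/../../etc/passwd"))  # False
```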
If a wiki is displaying stale or incorrect content, delete the server-side cache and regenerate:
# Via the API (no auth mode)
curl -X DELETE \
  "http://localhost:8001/api/wiki_cache?owner=<owner>&repo=<repo>&repo_type=github&language=en"

# Or delete the cache file directly
rm ~/.adalflow/wikicache/deepwiki_cache_github_<owner>_<repo>_en.json
After deleting the cache, reload the DeepWiki UI and click Generate Wiki again.
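For scripting bulk cache cleanup, the file naming pattern above can be reconstructed. wiki_cache_path is a hypothetical helper; the naming scheme is taken from the file shown in the deletion command:

```python
import os

def wiki_cache_path(repo_type, owner, repo, language):
    """Build the cache file path following the naming pattern shown above."""
    name = f"deepwiki_cache_{repo_type}_{owner}_{repo}_{language}.json"
    return os.path.join(os.path.expanduser("~"), ".adalflow", "wikicache", name)

print(wiki_cache_path("github", "octocat", "hello-world", "en"))
```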

Getting help

If the steps above do not resolve your issue, the following resources are available:
  • GitHub Issues: Search existing issues or open a new one at github.com/AsyncFuncAI/deepwiki-open/issues. Include the error message, the API server log output, and the steps to reproduce.
  • Discord: Join the community at the Discord server linked in the README for real-time help from maintainers and other users.
When filing an issue, include:
  1. The full error message (from the browser console and/or API logs)
  2. Your operating system and Python version
  3. Which provider and model you are using
  4. Whether you are running locally or with Docker
  5. The DEEPWIKI_EMBEDDER_TYPE setting in use
