This guide covers the most common problems encountered when running DeepWiki Open, along with step-by-step solutions. If a problem is not listed here, check the API server logs first — they are the fastest way to identify the root cause. Logs are written to `api/logs/application.log` by default and can be tailed while the server is running.
API Key Issues
"Missing environment variables" error on startup

Common causes and fixes:

- The `.env` file is missing or is not in the project root directory. Create it there.
- The `.env` file exists but the API server was started before it was populated. Stop the server, verify the file, and restart.
- When running with Docker, the env file is not mounted. Either use `--env-file` or mount the file explicitly.

Which key is required depends on the embedder type:

- `DEEPWIKI_EMBEDDER_TYPE=openai` (default): `OPENAI_API_KEY`
- `DEEPWIKI_EMBEDDER_TYPE=google`: `GOOGLE_API_KEY`
- `DEEPWIKI_EMBEDDER_TYPE=ollama`: no key required
- `DEEPWIKI_EMBEDDER_TYPE=bedrock`: AWS credentials
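A minimal `.env` sketch, assuming the default OpenAI embedder. The key values are placeholders; set only the entries your `DEEPWIKI_EMBEDDER_TYPE` requires.

```shell
# Create .env in the project root. OPENAI_API_KEY matches the default
# embedder; swap in GOOGLE_API_KEY etc. per the mapping above.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-your-key-here
GOOGLE_API_KEY=your-google-key
EOF
grep -c '=' .env   # quick sanity check that the file has entries — prints 2
```

With Docker, pass the same file to the container, e.g. `docker run --env-file .env ...`, so the variables are visible inside it.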
"API key not valid" error

- Copy the key directly from the provider’s console — no extra spaces, newlines, or quotation marks in the `.env` value.
- Confirm the key is active and has not been revoked. Regenerate it if needed from:
  - Google: Google AI Studio
  - OpenAI: OpenAI Platform
- Verify the key has permission for the specific model being used (some keys are scoped to specific projects or tiers).
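One way to test a key outside DeepWiki is to call the provider directly. A hedged sketch using OpenAI's public `/v1/models` endpoint (Google keys can be checked analogously against the Generative Language API):

```shell
# HTTP 200 means the key is accepted; 401 means it is invalid or revoked;
# 000 means no connection could be made at all.
code=$(curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models || true)
echo "HTTP $code"
```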
"OpenRouter API error"

- Confirm `OPENROUTER_API_KEY` is set correctly in `.env`.
- Check your OpenRouter credit balance at openrouter.ai. The free tier has per-day limits.
- Verify the model string you selected exists on OpenRouter (e.g. `openai/gpt-4o`, not `gpt-4o`).
- If you recently generated a new key, restart the API server so the new value is picked up from `.env`.
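To confirm the exact model identifier, OpenRouter's model-list endpoint is public. A quick sketch:

```shell
# Print a few model IDs; confirm yours appears verbatim, provider prefix
# included (e.g. "openai/gpt-4o").
ids=$(curl -s https://openrouter.ai/api/v1/models \
  | grep -o '"id":[^,]*' | head -n 5 || true)
echo "${ids:-could not reach openrouter.ai}"
```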
"Azure OpenAI API error"

- Verify all three Azure variables are set: the API key, the endpoint URL, and the API version.
- Confirm the endpoint URL ends with a trailing `/` and matches exactly what is shown in the Azure Portal under your OpenAI resource.
- Confirm the API version string (`AZURE_OPENAI_VERSION`) matches a version supported by your deployed model. Check the Azure OpenAI documentation for current version strings.
- Verify the model you are using (e.g. `gpt-4o`) has been deployed in your Azure OpenAI resource. Azure requires explicit model deployment — having an API key alone is not enough.
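A sketch of the relevant `.env` entries. `AZURE_OPENAI_VERSION` is named above; the key and endpoint variable names and the version value are assumptions to verify against your DeepWiki version:

```
AZURE_OPENAI_API_KEY=<key from the Azure Portal>
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/
AZURE_OPENAI_VERSION=2024-02-15-preview
```

Note the trailing slash on the endpoint, and that the version must be one your deployed model supports.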
Connection Problems
"Cannot connect to API server"

- Confirm the API server is running. You should see output like `Uvicorn running on http://0.0.0.0:8001` in the terminal.
- Confirm the server is listening on port 8001 (the default). If you changed the port with the `PORT` environment variable, update `SERVER_BASE_URL` to match.
- If using Docker, confirm port 8001 is mapped in the run command (e.g. `-p 8001:8001`).
- Check whether a firewall or security group is blocking port 8001 if the frontend and backend are on different machines.
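Two quick checks from the machine running the frontend, sketched with standard tools (`ss` may be `netstat` or `lsof` on your system):

```shell
# Is anything listening on the API port?
listening=$(ss -ltn 2>/dev/null | grep ':8001' || true)
echo "${listening:-nothing listening on 8001}"

# Any HTTP status at all (even 404) proves the server process is up;
# "000" means no connection was made.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8001/ || true
```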
CORS error in the browser console

The API server sends permissive CORS headers by default (`allow_origins=["*"]`), so a genuine CORS error from DeepWiki itself is unusual. Common causes and fixes:

- A reverse proxy (nginx, Caddy, etc.) in front of the API is stripping or overriding the CORS headers. Check your proxy configuration and ensure it forwards the `Access-Control-Allow-Origin` response header from the upstream.
- The frontend’s `SERVER_BASE_URL` environment variable is pointing to the wrong address. For local development both should be `http://localhost:8001`.
- The API server crashed after startup (check its terminal). A crashed server returns no headers at all, which the browser reports as a CORS error. Restart the server and check the logs.
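To see which headers actually reach the browser, send a request with an `Origin` header and inspect the response. A sketch:

```shell
# If Access-Control-Allow-Origin is absent here, the proxy or server is at
# fault, not the browser.
headers=$(curl -s -D - -o /dev/null \
  -H "Origin: http://localhost:3000" \
  http://localhost:8001/ || true)
echo "$headers" | grep -i 'access-control' || echo "no CORS headers in response"
```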
Frontend not reaching the backend (blank page or infinite spinner)

- Open the browser developer tools (F12) → Console and Network tabs. Look for failed requests to `localhost:8001`.
- Confirm both servers are running simultaneously — they are separate processes.
- Check whether the WebSocket connection to `ws://localhost:8001/ws/chat` is being established. In the Network tab, filter by “WS” to see WebSocket frames.
- Verify `NEXT_PUBLIC_SERVER_BASE_URL` (or `SERVER_BASE_URL`) points to the correct API address. The frontend reads this at build time for Next.js static rendering — if you changed it, rebuild.
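Because `NEXT_PUBLIC_*` values are inlined at build time, a changed URL only takes effect after a rebuild. The usual Next.js commands, assuming the standard scripts in `package.json`:

```shell
npm run build   # bakes the new NEXT_PUBLIC_SERVER_BASE_URL into the bundle
npm run start   # serves the rebuilt frontend
```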
Wiki Generation Issues
"Error generating wiki" for large repositories

- Test with a small repository first to confirm the setup is working correctly.
- Use the `excluded_dirs` and `excluded_files` filters to reduce the number of files indexed. These can be sent per-request via the WebSocket message or set globally in `api/config/repo.json`.
- Choose a faster or higher-context model. For large repositories, Gemini 2.5 Flash or GPT-4o are better suited than smaller models with limited context windows.
- Break the repository into logical subsets by using the `included_dirs` filter to focus generation on one area at a time.
- Check the `repository.max_size_mb` limit in `api/config/repo.json`. The default is `50000` MB (50 GB) — if your repo reports an oversize error, check whether the repo itself is unusually large.
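A hedged sketch of what the global settings in `api/config/repo.json` might look like. `repository.max_size_mb` appears above; the filter key placement and values are assumptions to check against your installed version:

```json
{
  "repository": {
    "max_size_mb": 50000
  },
  "excluded_dirs": ["node_modules", "dist", "vendor"],
  "excluded_files": ["*.min.js", "package-lock.json"]
}
```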
"Invalid repository format" error

DeepWiki expects a full HTTPS repository URL (e.g. `https://github.com/owner/repo`). Common triggers for this error are SSH-style URLs (`git@github.com:...`), short owner/repo slugs without the host, or URLs with trailing slashes or query strings.
"Could not fetch repository structure" for private repos
- Click "+ Add access tokens" in the UI and enter a valid personal access token (PAT) with `repo` scope (GitHub) or `read_repository` scope (GitLab).
- Confirm the token has not expired. GitHub fine-grained tokens and GitLab tokens both have configurable expiry dates.
- Verify the token belongs to an account that has at least read access to the repository.
- For Bitbucket, use an App Password rather than a personal account password.
- If the API server is behind a firewall or outbound traffic is restricted, it may not be able to reach GitHub/GitLab/Bitbucket. Confirm the server has outbound HTTPS access to the VCS host.
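To test a GitHub PAT independently of DeepWiki (GitLab and Bitbucket have analogous endpoints), a sketch against the public GitHub API; `OWNER/REPO` is a placeholder:

```shell
# 200 = token can read the repo; 404 = no access (GitHub hides private
# repos behind 404 rather than 403).
status=$(curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO || true)
echo "HTTP $status"
```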
Mermaid diagram rendering error
- Refresh the page — the repair logic runs on each render attempt.
- Regenerate the specific wiki page. The LLM occasionally produces subtly invalid Mermaid syntax that varies between runs.
- If diagrams consistently fail for a repository, try a different model provider. Some models produce more consistent Mermaid output than others.
- Check the browser console for the specific Mermaid parse error message to identify which construct is failing.
Embedding Issues
Embeddings are wrong or retrieval quality is poor after switching embedder
Symptom: after switching `DEEPWIKI_EMBEDDER_TYPE`, previously indexed repositories return low-quality or irrelevant results.

Cause: different embedding models produce vectors in different spaces. Embeddings generated by `text-embedding-3-small` (OpenAI) cannot be compared to embeddings generated by `gemini-embedding-001` (Google) — they are numerically incompatible.

Fix: delete the cached embeddings for the affected repository and allow DeepWiki to re-index it with the new embedder. Embeddings are stored under `~/.adalflow/databases/`.
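Deleting the cache forces a clean re-index. The per-repository directory naming below is an assumption; list the directory first and remove only the entry for the affected repository:

```shell
# See what cached databases exist:
ls ~/.adalflow/databases/ 2>/dev/null || echo "no embedding cache yet"

# Replace owner_repo with the entry shown by ls above:
rm -rf ~/.adalflow/databases/owner_repo
```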
Ollama embedder fails to connect
This error occurs when running with `DEEPWIKI_EMBEDDER_TYPE=ollama`. Fixes:

- Confirm Ollama is running.
- Confirm the `nomic-embed-text` model is pulled.
- If Ollama is running on a different machine or a non-default port, set `OLLAMA_HOST`.
- Verify the Ollama server is reachable from the machine running DeepWiki. A successful response lists the available models. If this fails, Ollama is not running or the host/port is wrong.
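The checks above, sketched with the standard Ollama CLI and HTTP API (default port 11434; this assumes `OLLAMA_HOST`, if set, holds a full URL):

```shell
# 1) Daemon up and embedding model pulled?
ollama list 2>/dev/null | grep nomic-embed-text || echo "nomic-embed-text not available"
# Pull it if missing:  ollama pull nomic-embed-text

# 2) Reachable from the DeepWiki machine? A JSON model list means success.
tags=$(curl -s "${OLLAMA_HOST:-http://localhost:11434}/api/tags" || true)
echo "${tags:-no response from Ollama}"
```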
Common Solutions
Restart both servers
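Many transient issues clear after a full restart of both processes. A typical sequence, assuming the standard project layout (the backend entry point and npm scripts may differ in your checkout):

```shell
# Stop both processes (Ctrl+C in each terminal), then:

# Terminal 1 — API server:
python -m api.main

# Terminal 2 — frontend:
npm run dev
```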
Check browser developer tools
Open the developer tools (F12) and look for:

- Red error messages from JavaScript
- Failed network requests (switch to the Network tab and filter by failed/errored requests)
- WebSocket connection errors (filter by “WS”)
Increase log verbosity
Add `LOG_LEVEL=DEBUG` to your `.env` file and restart. Debug logs include model request details, embedding batch sizes, cache read/write attempts, and file traversal decisions.

Log files are written to `api/logs/application.log` by default. You can redirect them to a different path with `LOG_FILE_PATH` (the path must be within `api/logs/` for security reasons).
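The two settings together in `.env` (the redirected filename is illustrative):

```
LOG_LEVEL=DEBUG
LOG_FILE_PATH=api/logs/debug.log
```

Then tail the active log while reproducing the problem, e.g. `tail -f api/logs/application.log`.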
Clear the wiki cache
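Stale cached wikis can mask fixes you have already made. A sketch, assuming the wiki cache lives alongside the embedding databases under `~/.adalflow/` (verify the path before deleting):

```shell
# Cached wiki pages are regenerated on the next request after removal.
rm -rf ~/.adalflow/wikicache/
```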
Getting help
If the steps above do not resolve your issue, the following resources are available:

- GitHub Issues: search existing issues or open a new one at github.com/AsyncFuncAI/deepwiki-open/issues. Include the error message, the API server log output, and the steps to reproduce.
- Discord: join the community at the Discord server linked in the README for real-time help from maintainers and other users.
Whichever channel you use, include:

- The full error message (from the browser console and/or API logs)
- Your operating system and Python version
- Which provider and model you are using
- Whether you are running locally or with Docker
- The `DEEPWIKI_EMBEDDER_TYPE` setting in use