Overview
Documentation generation is an asynchronous process managed by BullMQ. After creating a project, you can monitor its progress using the job status endpoints or real-time SSE streaming.
Generation Process
When you create a project, it goes through these stages:
- Queued - Added to the generation queue
- Scanning - Repository files are being scanned
- Analyzing - Code structure is being analyzed
- Generating - Documentation is being generated
- Ready - Documentation complete and available
- Failed - Generation encountered an error
Get Job Status
Retrieve the current status of a generation job. Takes the job ID returned when creating a project.
The response includes:
- Job identifier
- Current job state:
  - `waiting` - In queue
  - `active` - Currently processing
  - `completed` - Successfully finished
  - `failed` - Job failed
  - `delayed` - Delayed for retry
- Progress percentage (0-100) or progress object
- Error message if the job failed
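Given these semantics, a client can poll the status endpoint until the job reaches a terminal state. A minimal Python sketch, assuming the response shape above; the HTTP call is abstracted behind a `fetch_status` callable since the exact endpoint path is not shown in this section:

```python
import time

# Terminal BullMQ states after which polling should stop.
TERMINAL_STATES = {"completed", "failed"}

def poll_job(fetch_status, job_id, interval=2.0, max_attempts=30):
    """Poll a job-status endpoint until the job reaches a terminal state.

    `fetch_status` is any callable returning a dict such as
    {"state": "active", "progress": 40}; in practice it would wrap an
    HTTP GET against the job status endpoint.
    """
    for _ in range(max_attempts):
        status = fetch_status(job_id)
        if status["state"] in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not reach a terminal state")
```

For long-running generations, SSE streaming (below in this page) avoids polling entirely; polling remains useful for simple scripts and CI checks.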
Stream Generation Progress (SSE)
Receive real-time updates using Server-Sent Events. Takes the project's unique identifier.
Event: status
Emitted when project status changes.
Event: log
Emitted for generation logs and progress messages.
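Both event types arrive in the standard SSE wire format (`event:` and `data:` lines, with events separated by blank lines). Any SSE client library can consume them; as an illustration only, a minimal parser for that format:

```python
def parse_sse(raw: str):
    """Parse a raw Server-Sent Events payload into (event, data) pairs.

    Events are separated by blank lines; each event carries `event:` and
    `data:` fields, matching the `status` and `log` events described above.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data)))
    return events
```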
Cancel Generation
Request cancellation of an active generation job. Takes the project's unique identifier.
Success message: “Cancellation requested.”
Cancellation is a request and may not take effect immediately. The generation process will stop at the next checkpoint.
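The checkpoint behaviour described above can be sketched as follows; `stages` and `is_cancelled` are hypothetical names used for illustration, not part of the API:

```python
def run_generation(stages, is_cancelled):
    """Run generation stages, honouring cancellation at each checkpoint.

    `is_cancelled` is a callable checked between stages, mirroring the
    documented behaviour: a cancel request takes effect at the next
    checkpoint, never mid-stage.
    """
    completed = []
    for stage in stages:
        if is_cancelled():
            return {"status": "cancelled", "completed": completed}
        stage()  # e.g. scan, analyze, generate
        completed.append(stage.__name__)
    return {"status": "ready", "completed": completed}
```

This is why a stage that is already running always finishes before the job stops: the cancel flag is only consulted at stage boundaries.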
Smart Caching
WhatDoc uses commit-based caching to optimize regeneration:
- When creating a project, the latest commit SHA is fetched from GitHub
- If the repository hasn’t changed (same commit), regeneration may use cached data
- This significantly speeds up documentation updates for unchanged repositories
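The caching rule reduces to a commit comparison; a one-line sketch of that decision (function and parameter names are illustrative):

```python
def should_regenerate(stored_commit, latest_commit):
    """Commit-based cache check: regenerate only when the repository's
    latest commit differs from the one used for the last generation.
    A missing stored commit (first generation) always regenerates."""
    return stored_commit is None or stored_commit != latest_commit
```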
The `commitHash` field in the project object stores the commit SHA used for generation.
Queue Priority
Generation jobs are prioritized based on plan tier:
- Pro users: Priority 1 (highest)
- Free users: Priority 10 (lower)
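In BullMQ, lower priority numbers run first, so the tier mapping can be expressed as a small lookup; the fallback for unknown plans is an assumption, not documented behaviour:

```python
# Plan-tier to BullMQ priority mapping (lower number = higher priority).
PLAN_PRIORITY = {"pro": 1, "free": 10}

def priority_for_plan(plan: str) -> int:
    # Unknown plans fall back to the lowest (free) priority -- an
    # assumption for this sketch, not documented behaviour.
    return PLAN_PRIORITY.get(plan, 10)
```

On the server side, such a value would be passed as the `priority` option when the job is added to the queue.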
Bring Your Own Key (BYOK)
To use your own LLM API key, include these headers when creating a project:
- Your LLM provider API key (must be at least 20 characters)
- Target model identifier:
  - For Gemini: `gemini-2.5-flash-lite`, `gemini-pro`, etc.
  - For OpenAI: `gpt-4`, `gpt-3.5-turbo`, etc.
Using your own API key allows you to:
- Use preferred models not available on shared infrastructure
- Have more control over generation costs
- Bypass potential rate limits on shared keys
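A sketch of assembling these headers in a client. The header names used here are placeholders, since the real names are not preserved in this section; only the documented 20-character minimum for the key is enforced:

```python
def byok_headers(api_key: str, model: str) -> dict:
    """Build BYOK request headers for project creation.

    The header names below are placeholders for illustration; substitute
    the real header names from the project-creation endpoint reference.
    """
    if len(api_key) < 20:
        # Mirrors the documented rule: key must be at least 20 characters.
        raise ValueError("LLM API key must be at least 20 characters")
    return {"X-LLM-API-Key": api_key, "X-LLM-Model": model}
```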
Generation Errors
Common generation errors:

Rate Limit Exceeded

Plan Limit Reached

Repository Access Error
Returned if the repository is private and GitHub isn’t connected.

Invalid Repository
Check that the repository is specified as owner/repo and that the repository exists.
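When a job fails, its error message can be matched against these categories to suggest a remediation. A hypothetical sketch: the matching substrings and the advice strings are assumptions, not exact API output:

```python
# Map documented failure categories to suggested remediations. The
# matched substrings are assumptions; compare against the error message
# returned by the job status endpoint.
REMEDIATIONS = {
    "rate limit": "Wait and retry, or supply your own API key (BYOK).",
    "plan limit": "Upgrade your plan or remove unused projects.",
    "repository access": "Connect GitHub to grant access to private repositories.",
    "invalid repository": "Check the owner/repo identifier and that the repository exists.",
}

def suggest_fix(error_message: str) -> str:
    reason = error_message.lower()
    for key, advice in REMEDIATIONS.items():
        if key in reason:
            return advice
    return "Inspect the error message returned by the job status endpoint."
```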
