# Basic usage

## Command relationships

The diagram below shows how PDD commands relate to each other. Entry points at the top feed into the `sync` workflow, which orchestrates the full development cycle.
Key concepts:

- Entry points — `pdd connect` (web UI), direct CLI, or the GitHub App.
- Start — `pdd generate <url>` scaffolds architecture, prompts, and `.pddrc` from a PRD GitHub issue.
- Core loop — `pdd sync` runs the full auto-deps → generate → example → crash → verify → test → fix → update cycle for each module.
- Health check — `pdd checkup <url>` identifies what needs attention next.
- Defect path — `test <url>` or `bug <url>` surfaces failing tests → `fix <url>` resolves them.
- Feature path — `change <url>` implements the feature → `sync <url>` re-runs sync across affected modules.
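The lifecycle above can be sketched as a command sequence (the `<url>` placeholders stand for GitHub issue URLs; the ordering is illustrative, not mandatory):

```shell
pdd generate <url>   # scaffold architecture, prompts, and .pddrc from a PRD issue
pdd sync             # run the auto-deps → generate → example → crash → verify → test → fix → update cycle
pdd checkup <url>    # health check: what needs attention next?
pdd bug <url>        # defect path: surface failing tests ...
pdd fix <url>        # ... then resolve them
pdd change <url>     # feature path: implement the feature, then re-run sync
```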
## Global options

These options are accepted by every PDD command. Place them between `pdd` and the subcommand name.
Skip all interactive prompts, including file overwrite confirmations and API key requests. Use in CI/automation pipelines.
Set the AI model strength (0.0–1.0).

- `0.0` — cheapest available model
- `0.5` — default base model
- `1.0` — most powerful model (highest ELO rating)
Controls reasoning token allocation for models that support it (0.0–1.0).

- For models with token limits (e.g., 64k), `1.0` uses the maximum available tokens.
- For models with discrete effort levels, `1.0` corresponds to the highest effort.
- Values between 0.0 and 1.0 scale proportionally.
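Under the proportional-scaling rule, a mid-range setting on a 64k-token model gets roughly half the budget. A quick sketch (PDD's exact rounding behavior is not specified here):

```shell
# Proportional reasoning-token budget for a 64k-token model.
# Shell arithmetic is integer-only, so a setting of 0.5 is written as 5/10.
max_tokens=65536
effort_tenths=5   # i.e., a setting of 0.5
echo $(( max_tokens * effort_tenths / 10 ))   # prints 32768
```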
Output randomness. Higher values increase diversity but reduce determinism. Lower values produce focused, repeatable outputs.
Increase output verbosity. Includes token counts and context window usage for each LLM call.
Minimal output. Suppresses all non-error messages.
Path to a CSV file for cost tracking. Records timestamp, model, command, cost (USD), input files, and output files for each operation. Alternatively, set `PDD_OUTPUT_COST_PATH` as an environment variable for a persistent default.

Review and optionally exclude few-shot examples before command execution. For each candidate example, you can accept, exclude, or skip it.
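A persistent default can be set in your shell profile (the path below is just an example):

```shell
# Default location for PDD cost CSVs (example path, not a PDD requirement):
export PDD_OUTPUT_COST_PATH="$HOME/.pdd/costs.csv"
```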
Run locally instead of in the cloud. Requires API keys for at least one supported LLM provider (OpenAI, Anthropic, Google, etc.).
Capture a debug bundle for this run so it can be replayed and analyzed later. PDD records the full command, logs, prompts, generated code, and key metadata. The bundle path is printed at the end of the run. Attach the bundle when filing a bug report with `pdd report-core`.

Override automatic context detection and use the named context from `.pddrc`.

List all available contexts defined in the nearest `.pddrc` file, then exit. No commands or auto-update checks run.

## Context selection
PDD reads the nearest `.pddrc` file (searching upward from the current directory) and selects a context automatically based on the current directory path.

- `--list-contexts` prints available context names and exits immediately (status 0).
- `--context CONTEXT_NAME` is validated early; an unknown name causes a `UsageError` (exit code 2).
- Configuration precedence: CLI options > `.pddrc` context > environment variables > built-in defaults.
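The precedence order can be illustrated with plain shell parameter expansion (the setting name and values here are hypothetical, not real PDD configuration keys):

```shell
# CLI option > .pddrc context > environment variable > built-in default.
resolve_strength() {
  echo "${cli_strength:-${context_strength:-${env_strength:-0.5}}}"
}

env_strength=0.3
resolve_strength    # prints 0.3: environment beats the built-in default
context_strength=0.7
resolve_strength    # prints 0.7: context beats the environment
cli_strength=1.0
resolve_strength    # prints 1.0: the CLI option always wins
```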
## Cost tracking

Enable cost tracking on any command. Each operation appends one row with the following columns:

| Column | Description |
|---|---|
| `timestamp` | Date and time of the execution |
| `model` | Model used |
| `command` | PDD command executed |
| `cost` | Estimated cost in USD |
| `input_files` | Input files involved |
| `output_files` | Output files generated or modified |
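Because the log is plain CSV, totals are easy to compute with standard tools. A sketch with made-up sample rows:

```shell
# Sample cost log (rows and model names are illustrative only):
cat > /tmp/pdd_costs.csv <<'EOF'
timestamp,model,command,cost,input_files,output_files
2025-01-01T10:00:00,gpt-4o,generate,0.12,prd.md,module.py
2025-01-01T10:05:00,gpt-4o,sync,0.30,module.py,test_module.py
EOF

# Sum the cost column (column 4), skipping the header row:
awk -F, 'NR > 1 { total += $4 } END { printf "%.2f\n", total }' /tmp/pdd_costs.csv   # prints 0.42
```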
## Cloud vs local execution

By default, all commands run in cloud mode using GitHub SSO for authentication. Cloud mode provides:

- No local API key management
- Access to powerful models
- Shared community examples and improvements
- Automatic updates and cost optimization
To run locally instead, pass `--local` and set the appropriate API key:
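For example (the variable names follow each provider's usual convention, as consumed by LiteLLM; one key is enough):

```shell
export OPENAI_API_KEY="sk-..."         # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."  # Anthropic
export GEMINI_API_KEY="..."            # Google
```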
PDD’s local mode uses LiteLLM for model interaction. See the model configuration documentation for details on the `llm_model.csv` configuration file.