
Basic usage

pdd [GLOBAL OPTIONS] COMMAND [OPTIONS] [ARGS]...
Every PDD command accepts a set of global options that control model selection, verbosity, and runtime behavior. Pass them before the subcommand name.

Command relationships

The diagram below shows how PDD commands relate to each other. Entry points at the top feed into the sync workflow, which orchestrates the full development cycle. Key concepts:
  • Entry points: pdd connect (web UI), direct CLI, or the GitHub App.
  • Start: pdd generate <url> scaffolds architecture, prompts, and .pddrc from a PRD GitHub issue.
  • Core loop: pdd sync runs the full auto-deps → generate → example → crash → verify → test → fix → update cycle for each module.
  • Health check: pdd checkup <url> identifies what needs attention next.
  • Defect path: test <url> or bug <url> surfaces failing tests → fix <url> resolves them.
  • Feature path: change <url> implements the feature → sync <url> re-runs sync across affected modules.

Global options

These options are accepted by every PDD command. Place them between pdd and the subcommand name.
--force
flag
Skip all interactive prompts, including file overwrite confirmations and API key requests. Use in CI/automation pipelines.
--strength
float
default: 0.5
Set the AI model strength (0.0–1.0).
  • 0.0 — cheapest available model
  • 0.5 — default base model
  • 1.0 — most powerful model (highest ELO rating)
--time
float
default: 0.25
Controls reasoning token allocation for models that support it (0.0–1.0).
  • For models with token limits (e.g., 64k), 1.0 uses the maximum available tokens.
  • For models with discrete effort levels, 1.0 corresponds to the highest effort.
  • Values between 0.0 and 1.0 scale proportionally.
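As a rough sketch of that proportional scaling (the 64k cap and linear mapping below are illustrative assumptions, not PDD internals):

```shell
# Hypothetical illustration: map --time 0.25 onto a 64k reasoning-token budget.
awk 'BEGIN { time = 0.25; max_tokens = 65536; printf "%d\n", time * max_tokens }'
# prints 16384
```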
--temperature
float
default: 0.0
Output randomness. Higher values increase diversity but reduce determinism. Lower values produce focused, repeatable outputs.
--verbose
flag
Increase output verbosity. Includes token counts and context window usage for each LLM call.
--quiet
flag
Minimal output. Suppresses all non-error messages.
--output-cost
string
Path to a CSV file for cost tracking. Records the timestamp, model, command, cost (USD), input files, and output files for each operation. Alternatively, set the PDD_OUTPUT_COST_PATH environment variable for a persistent default.
--review-examples
flag
Review and optionally exclude few-shot examples before command execution. For each candidate example, you can accept, exclude, or skip it.
--local
flag
Run locally instead of in the cloud. Requires API keys for at least one supported LLM provider (OpenAI, Anthropic, Google, etc.).
--core-dump
flag
Capture a debug bundle for this run so it can be replayed and analyzed later. PDD records the full command, logs, prompts, generated code, and key metadata. The bundle path is printed at the end of the run.
pdd --core-dump sync factorial_calculator
pdd --core-dump crash prompts/calc_python.prompt src/calc.py examples/run_calc.py crash_errors.log
Attach the bundle when filing a bug report with pdd report-core.
--context
string
Override automatic context detection and use the named context from .pddrc.
pdd --context backend sync calculator
--list-contexts
flag
List all available contexts defined in the nearest .pddrc file, then exit. No commands or auto-update checks run.

Context selection

PDD reads the nearest .pddrc file (searching upward from the current directory) and selects a context automatically based on the current directory path.
  • --list-contexts prints available context names and exits immediately (status 0).
  • --context CONTEXT_NAME is validated early; an unknown name causes a UsageError (exit code 2).
  • Configuration precedence: CLI options > .pddrc context > environment variables > built-in defaults.
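That precedence chain amounts to a first-non-empty lookup. A hypothetical shell illustration (not PDD's actual implementation; the numeric values are made up):

```shell
# Resolve a setting by precedence: CLI option > .pddrc context > env var > default.
resolve_setting() {
  for value in "$@"; do
    if [ -n "$value" ]; then
      echo "$value"
      return 0
    fi
  done
}

# No CLI option was passed, so the .pddrc context value (0.7) wins
# over the environment variable (0.3) and the built-in default (0.5):
resolve_setting "" "0.7" "0.3" "0.5"
# prints 0.7
```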

Cost tracking

Enable cost tracking on any command:
pdd --output-cost costs.csv sync factorial_calculator
The CSV records:
  • timestamp: Date and time of the execution
  • model: Model used
  • command: PDD command executed
  • cost: Estimated cost in USD
  • input_files: Input files involved
  • output_files: Output files generated or modified
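Because the output is plain CSV, it is easy to post-process. For example, totaling spend across runs with awk (the sample rows below are fabricated for illustration):

```shell
# Total the cost column of a PDD cost-tracking CSV (hypothetical sample data).
cat > costs.csv <<'EOF'
timestamp,model,command,cost,input_files,output_files
2024-05-01T12:00:00,gpt-4o,generate,0.0123,calc_python.prompt,calc.py
2024-05-01T12:05:00,gpt-4o,sync,0.0456,calc_python.prompt,calc.py
EOF

# Skip the header row, sum column 4 (cost), print the total in USD.
awk -F, 'NR > 1 { total += $4 } END { printf "%.4f USD\n", total }' costs.csv
# prints 0.0579 USD
```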

Cloud vs local execution

By default, all commands run in cloud mode using GitHub SSO for authentication. Cloud mode provides:
  • No local API key management
  • Access to powerful models
  • Shared community examples and improvements
  • Automatic updates and cost optimization
To run locally, pass --local and set the appropriate API key:
export OPENAI_API_KEY=your_key_here
pdd --local generate my_module_python.prompt
PDD’s local mode uses LiteLLM for model interaction. See the model configuration documentation for details on the llm_model.csv configuration file.

Getting help

# List all commands
pdd --help

# Help for a specific command
pdd sync --help
pdd generate --help
