Web interface
Easiest. Launch a browser UI with one command.
Issue-driven CLI
Point PDD at a GitHub issue and let it implement automatically.
Hello example
Learn PDD fundamentals with a step-by-step walkthrough.
Complete installation before proceeding. Run pdd --version to confirm PDD is installed.

Option 1: Web interface
The web interface is the recommended way to start. It gives you a visual, browser-based dashboard for running commands, browsing files, and accessing your session remotely via PDD Cloud.

Run setup
Launch the web interface
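One plausible invocation is sketched below; the exact subcommand for launching the UI is not shown in this section, so treat the command name as an assumption and check pdd --help for your version:

```shell
# Hypothetical launch command -- verify the subcommand with `pdd --help`.
# Opens the browser dashboard on localhost:9876.
pdd web

# Add --local-only to keep the session off PDD Cloud (no remote access).
pdd web --local-only
```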
The dashboard opens at localhost:9876, where you can:
- Run commands visually (pdd change, pdd bug, pdd fix, pdd sync)
- Browse and edit prompts, code, and tests in your project
- Access your session from any browser via PDD Cloud (use --local-only to disable remote access)
Option 2: Issue-driven CLI
Use pdd change and pdd bug to implement GitHub issues directly from the command line. PDD runs a multi-step agentic workflow and opens a draft PR automatically.
Prerequisites
Usage
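A sketch of the issue-driven flow. The issue-reference argument format below is an assumption (the repository and issue numbers are placeholders); check pdd change --help for the exact syntax:

```shell
# Implement a feature request described in a GitHub issue.
pdd change https://github.com/your-org/your-repo/issues/42

# Fix a reported bug the same way.
pdd bug https://github.com/your-org/your-repo/issues/43
```

In both cases PDD runs its multi-step agentic workflow and opens a draft PR when it finishes.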
Option 3: Hello example
This walkthrough takes you through a manual prompt workflow from scratch. It's the best way to understand how PDD turns a .prompt file into running code.
What you’ll build
A Python script generated entirely from a prompt file that prints hello to the terminal.
Steps
Generate code from the prompt
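A minimal sketch of this step. The prompt filename and the --force behavior come from the explanation below; the generate subcommand name and flag placement are assumptions, so verify them against pdd --help:

```shell
# hello_python.prompt might contain a single instruction such as:
#   Write a Python script that prints "hello" to the terminal.

# Generate hello.py from the prompt, overwriting without confirmation.
pdd --force generate hello_python.prompt
```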
The --force flag skips interactive confirmation prompts, allowing overwrites without input. PDD reads hello_python.prompt and generates hello.py.

What just happened
PDD read the prompt file hello_python.prompt and used a language model to generate hello.py. The prompt file, not the generated code, is the source of truth. When requirements change, you update the prompt and regenerate.
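For a prompt this simple, the generated hello.py would look something like the following (illustrative only; actual model output varies from run to run):

```python
# hello.py, as PDD might generate it from hello_python.prompt
def main() -> str:
    message = "hello"
    print(message)
    return message


if __name__ == "__main__":
    main()
```

Running python hello.py prints hello to the terminal.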
To run the full automated workflow (dependencies, generation, examples, tests, fixes) for any module, use pdd sync:
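For example, something like the following; the module-name argument is an assumption based on the prompt filename, so confirm it with pdd sync --help:

```shell
# Full automated loop for the hello module: resolve dependencies,
# generate code, create an example, write tests, and fix failures.
pdd sync hello
```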
Global options
These flags work with any PDD command:

| Flag | Description |
|---|---|
| --force | Skip all interactive prompts. Use in CI/CD or when you want unattended execution. |
| --strength FLOAT | Set model strength from 0.0 (cheapest) to 1.0 (highest ELO). Default is 0.5. |
| --local | Run locally using your own API keys instead of PDD Cloud. |
| --verbose | Show detailed output, including token counts and context window usage for each LLM call. |
| --time FLOAT | Scale reasoning token allocation for models that support it. Range 0.0–1.0; default is 0.25. |
| --temperature FLOAT | Set model temperature. Default is 0.0 (deterministic). |
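These flags compose. A sketch of an unattended, higher-effort run (the sync target name is illustrative, and flag placement before the subcommand is an assumption):

```shell
# Unattended run with a stronger model, a larger reasoning budget,
# and detailed per-call logging.
pdd --force --strength 0.8 --time 0.5 --verbose sync hello
```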
Next steps
Issue-driven development
Full tutorial on implementing GitHub issues with
pdd change and pdd bug

Prompt workflow guide
Learn how to write prompt files and use
pdd sync for the full development cycle

Core concepts
Understand how PDD treats prompts as the source of truth
Command reference
Every command, flag, and option documented