Quickstart Guide
Get started with Stanzo by running your first live debate fact-check session. This guide walks you through authentication, creating a debate, and reviewing verified claims.

You’ll need a GitHub account to sign in. Stanzo uses GitHub OAuth for authentication via Convex Auth.
Sign in with GitHub
Navigate to the Stanzo application and click Sign in with GitHub. You’ll be redirected to GitHub to authorize the application. After authorization, you’ll be redirected back to Stanzo with an active session. Your GitHub profile information (name, email, avatar) will be stored in the `users` table.

Create your first debate
Click Get Started or navigate to `/debates/new` to create a new debate session. You’ll see a form with two fields:

- Speaker A: Name of the first speaker (default: “Speaker A”)
- Speaker B: Name of the second speaker (default: “Speaker B”)
When you click Start Debate, Stanzo creates a new debate record in Convex, establishes a WebSocket connection to Deepgram for live transcription, and begins capturing audio from your microphone.
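To make the transcription handshake concrete, here is a minimal sketch of how a client might build the Deepgram live-transcription WebSocket URL. The query parameters (`model`, `diarize`, `interim_results`) follow Deepgram’s documented live API options, but the exact set and values Stanzo passes are an assumption, and `buildDeepgramUrl` is a hypothetical helper, not Stanzo’s actual code.

```typescript
// Hypothetical helper: assemble a Deepgram live-transcription WebSocket URL.
// The parameter names mirror Deepgram's live API query options; which ones
// Stanzo actually sends is an assumption.
function buildDeepgramUrl(opts: {
  model: string;
  diarize: boolean;
  interimResults: boolean;
}): string {
  const params = new URLSearchParams({
    model: opts.model,
    diarize: String(opts.diarize),
    interim_results: String(opts.interimResults),
  });
  return `wss://api.deepgram.com/v1/listen?${params.toString()}`;
}

// Settings a live debate session might use:
const url = buildDeepgramUrl({ model: "nova-3", diarize: true, interimResults: true });
// url === "wss://api.deepgram.com/v1/listen?model=nova-3&diarize=true&interim_results=true"
```

In the real app, the browser would open this socket (authenticated with a Deepgram API key) and stream microphone audio into it while transcript events flow back.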
Understand the live debate interface
Once your debate starts, you’ll see a two-panel layout. The transcript automatically scrolls as new chunks arrive, so you can follow along in real time.
Left Panel: Live Transcript
The transcript panel displays real-time speech-to-text with:

- Speaker labels (Speaker A or Speaker B) based on diarization
- Timestamps for each utterance
- Interim results (gray text) that update as the speaker talks
- Final transcript chunks (black text) after each utterance ends
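The interim/final behavior above can be sketched as a small state update: interim results overwrite the current gray preview, while final chunks are appended permanently. The event and state shapes here are assumptions modeled on Deepgram-style live results, not Stanzo’s actual schema.

```typescript
// Assumed shapes: one in-flight interim preview plus a growing list of finals.
type TranscriptEvent = { isFinal: boolean; speaker: string; text: string };
type TranscriptState = { finals: TranscriptEvent[]; interim: TranscriptEvent | null };

function applyEvent(state: TranscriptState, event: TranscriptEvent): TranscriptState {
  if (event.isFinal) {
    // A finished utterance is appended (rendered black) and the preview clears.
    return { finals: [...state.finals, event], interim: null };
  }
  // An interim result replaces the previous preview (rendered gray).
  return { finals: state.finals, interim: event };
}
```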
Right Panel: Claims Feed
The claims sidebar shows extracted factual statements with:

- Claim count badge at the top
- Status for each claim: `pending`, `checking`, `true`, `false`, `mixed`, or `unverifiable`
- Verdict explanation once fact-checking completes
- Correction text if the claim is false or partially false
- Source citations linking to authoritative sites (e.g., bls.gov, fbi.gov)
Claims appear with a “pending” status immediately after extraction, then update to “checking” while Perplexity Sonar verifies them. Final verdicts appear within 5-10 seconds.
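The six statuses and the pending → checking → verdict lifecycle can be modeled with a simple type. The badge colors below are purely illustrative (the docs don’t specify Stanzo’s styling); only the status names and the two non-terminal states come from the text above.

```typescript
// The six claim statuses from the docs; colors are illustrative assumptions.
type ClaimStatus = "pending" | "checking" | "true" | "false" | "mixed" | "unverifiable";

const BADGE_COLOR: Record<ClaimStatus, string> = {
  pending: "gray",
  checking: "blue",
  true: "green",
  false: "red",
  mixed: "yellow",
  unverifiable: "gray",
};

// A claim moves pending -> checking -> one of four terminal verdicts.
function isTerminal(status: ClaimStatus): boolean {
  return status !== "pending" && status !== "checking";
}
```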
Top Bar: Debate Controls
The header shows:

- Back button to return to your debates list
- LIVE badge with a pulsing red dot while recording
- End button to stop the debate and save results
Test the fact-checking pipeline
To see Stanzo in action, try speaking a verifiable claim into your microphone. Example claims to test:
- “The unemployment rate in the US is currently around 4 percent”
- “There are 50 states in the United States”
- “The average temperature on Earth has increased by 1.1 degrees Celsius since pre-industrial times”
- “California has a GDP of approximately 3.9 trillion dollars”
After you speak, watch the pipeline stages fire in sequence:

- Transcription (instant): Your words appear in the transcript panel with speaker diarization
- Extraction (1-3 seconds): After a pause, the claim appears in the claims sidebar with “pending” status
- Fact-checking (5-10 seconds): Status updates to “checking”, then shows final verdict with sources
The extraction engine uses Gemini 2.0 Flash with full conversation history, so it won’t re-extract claims from previous turns and can resolve pronouns using context. See Claim Extraction for details.
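The history-aware extraction described above can be sketched as a prompt builder: prior turns are sent as context so the model doesn’t re-extract old claims and can resolve pronouns. The function name, prompt wording, and chunk shape are all assumptions for illustration; only the idea comes from the docs.

```typescript
// Hypothetical sketch of a history-aware extraction prompt.
type Chunk = { speaker: string; text: string };

function buildExtractionPrompt(history: Chunk[], newChunks: Chunk[]): string {
  const render = (chunks: Chunk[]) =>
    chunks.map((c) => `${c.speaker}: ${c.text}`).join("\n");
  return [
    "You are extracting factual claims from a live debate.",
    "Conversation so far (already processed, do NOT re-extract claims from it):",
    render(history),
    "New segment (extract claims ONLY from this part):",
    render(newChunks),
  ].join("\n\n");
}
```

Sending the full history on every call trades tokens for accuracy: the model sees who “he” or “it” refers to, and the explicit “do not re-extract” framing keeps the claims feed free of duplicates.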
End the debate and review results
When you’re done testing, click End in the top-right corner. Stanzo will:
- Stop the microphone and close the Deepgram WebSocket connection
- Mark the debate as “ended” in the database
- Redirect you to `/debates/[debateId]` to review the full session
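The teardown steps above can be sketched as one sequenced function. The function and dependency names here are hypothetical; in the real app these would be wired to the microphone stream, the Deepgram socket, and a Convex mutation.

```typescript
// Hypothetical end-debate sequence with injected dependencies.
interface EndDebateDeps {
  stopMicrophone: () => void;
  closeTranscriptionSocket: () => void;
  markDebateEnded: (debateId: string) => Promise<void>;
  redirect: (path: string) => void;
}

async function endDebate(debateId: string, deps: EndDebateDeps): Promise<void> {
  deps.stopMicrophone();                 // 1. release the mic
  deps.closeTranscriptionSocket();       // 2. close the Deepgram WebSocket
  await deps.markDebateEnded(debateId);  // 3. persist the "ended" status
  deps.redirect(`/debates/${debateId}`); // 4. go to the review page
}
```

Awaiting the status update before redirecting ensures the review page never loads a debate that still looks live.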
Review Page
The review page shows the complete debate with:

- Full transcript with all chunks and timestamps
- All claims with final verdicts, corrections, and sources
- Debate metadata (speakers, start/end times, duration)
Navigate to `/debates` to see all your past debates.

Debates are automatically ended if you close the tab or navigate away. An `onBeforeUnload` event handler ensures the debate status is updated even if you don’t click “End” manually.

What happens under the hood
The quickstart workflow triggers Stanzo’s three-stage pipeline:

1. Record: Deepgram’s `nova-3` model transcribes audio in real-time with speaker diarization. Transcript chunks stream to Convex via WebSocket and are inserted into the `transcriptChunks` table.
2. Extract: When unprocessed chunks accumulate (after utterance pauses), a Convex mutation triggers a Gemini 2.0 Flash extraction session. The AI analyzes the new segment using full conversation history and returns JSONL-formatted claims, which are inserted with `pending` status.
3. Verify: Each new claim triggers an async Convex action that calls Perplexity Sonar. The fact-checker searches authoritative sources, returns a verdict with citations, and updates the claim record. The UI reactively updates via Convex subscriptions.

See the Architecture page for a detailed explanation of the pipeline, including transcript batching, extraction sessions, and fact-check retries.
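Since the Extract stage returns JSONL (one JSON object per line), parsing it is a matter of splitting on newlines and decoding each non-empty line. The claim fields below (`text`, `speaker`) are assumptions; the docs only say the output is JSONL-formatted claims inserted with `pending` status.

```typescript
// Minimal JSONL parser for extraction output; the claim fields are assumed.
type ExtractedClaim = { text: string; speaker: string };

function parseClaimsJsonl(raw: string): ExtractedClaim[] {
  return raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0) // skip blank lines and trailing newline
    .map((line) => JSON.parse(line) as ExtractedClaim);
}
```

JSONL is a convenient format for LLM output here because each claim can be validated independently: one malformed line can be dropped or retried without discarding the rest of the batch.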
Next steps
Architecture Deep Dive
Understand how Stanzo processes audio through transcription, extraction, and fact-checking stages
Debates API
Explore Convex mutations and queries for creating, listing, and managing debates
Troubleshooting
No audio is being transcribed
Check:

- Microphone permissions are granted in your browser
- You’re using a supported browser (Chrome, Edge, Firefox)
- Your microphone is selected in system settings
Claims aren’t appearing
Check:

- You’ve paused for at least 1.5 seconds after speaking
- You’re making factual claims (not opinions or questions)
- The `GEMINI_API_KEY` environment variable is set in the Convex dashboard
Fact-checking takes too long
Check:

- The `PERPLEXITY_API_KEY` environment variable is set correctly
- You have sufficient API quota remaining
- The claim is specific and verifiable (vague claims may time out)