Prerequisites
Before you begin, ensure you have the following installed:

- Node.js 20+ and npm
- Anaconda or Miniconda (or Python 3.11+) for hand tracking
- Anthropic API key (get one here)
- AWS credentials and S3 bucket (optional, for document uploads)
- Webcam (optional, for hand tracking navigation)
You can skip AWS setup if you don’t need document upload functionality. Hand tracking is also optional—you can navigate the graph with mouse/keyboard.
Installation
Set up the backend API
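The steps below have you create a `.env` file for the backend. As a rough sketch of what it might contain: the Anthropic key and the optional AWS settings come from the prerequisites above, but the exact variable names are assumptions, so check the project's own `.env.example` (if one exists) for the real names.

```bash
# Hypothetical sprout-backend/.env — variable names are guesses
ANTHROPIC_API_KEY=your-anthropic-api-key

# Optional: only needed for document uploads
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
S3_BUCKET=your-bucket-name

# Backend port (the troubleshooting section assumes 8000)
PORT=8000
```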
Navigate to the backend directory and install dependencies:

Create a `.env` file in `sprout-backend/` with your configuration:

The backend uses SQLite for local storage. The database file will be created automatically when you run migrations.

Apply the database schema:

You should see output confirming the creation of `sprout.db`.

Set up hand tracking (optional)
The hand tracking service uses Python with OpenCV and MediaPipe to detect hand landmarks via your webcam.

Stay in the `sprout-backend/` directory and verify conda is installed:

Create and activate the conda environment:

Install the computer vision dependencies:

This installs:

- `mediapipe==0.10.14` - Hand landmark detection
- `opencv-python==4.13.0.92` - Video capture and processing
- `websockets==12.0` - WebSocket server for real-time streaming
- `numpy==2.4.2` - Numerical operations
Keep this terminal open with the `sprout-cv` environment activated—you’ll need it to run the hand tracking server.

Running Sprout
You’ll need three separate terminal sessions to run all services:

Health Check
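A quick way to confirm the API is up from a script, assuming the default backend port 8000 used elsewhere in this guide:

```python
import urllib.request


def backend_healthy(base_url: str = "http://localhost:8000") -> bool:
    """Return True if GET /api/health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False


if __name__ == "__main__":
    print("backend healthy:", backend_healthy())
```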
Verify the backend is running:

Create Your First Topic
Now that all services are running, let’s create your first learning pathway:

Open the app
Navigate to http://localhost:3000 in your browser.
Create a new topic
Click “Add Topic” or “Create Branch” and enter a learning goal:
- “Linear Algebra for Machine Learning”
- “Fauna from Darkest Peru”
- “Introduction to React Hooks”
Topics are called “branches” in the API; each branch is the root of a learning tree.
Upload documents (optional)
If you have PDFs, lecture notes, or course materials, upload them to provide context. Sprout will extract relevant sections for each concept.

The upload functionality requires AWS S3 configuration in your backend `.env` file.

Watch the agents work
The UI streams real-time progress via Server-Sent Events as agents:
- Topic Agent generates 6-10 concepts with prerequisite relationships
- Subconcept Bootstrap Agents run in parallel (max 3 concurrent) to create 8-12 subconcepts per concept
- Generate Diagnostic Agents create mixed-format assessment questions
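The progress stream itself is plain text: each Server-Sent Event is an `event:`/`data:` block terminated by a blank line. A minimal parser for that wire format (illustrative only; the event names Sprout actually emits are not documented here):

```python
def parse_sse(lines):
    """Yield (event, data) pairs from an iterable of SSE text lines."""
    event, data = "message", []
    for raw in lines:
        line = raw.rstrip("\n")
        if not line:  # a blank line terminates the current event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())


sample = ["event: progress\n", 'data: {"step": "topic-agent"}\n', "\n"]
print(list(parse_sse(sample)))  # → [('progress', '{"step": "topic-agent"}')]
```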
Set `NEXT_PUBLIC_SMALL_AGENTS=true` in the frontend `.env.local` to use a cheaper testing mode: 1-2 concepts with 2-3 subconcepts each.

Explore the 3D graph
Once generation completes, you’ll see an interactive 3D knowledge graph. Navigate using:
- Mouse drag - Rotate the graph
- Scroll - Zoom in/out
- Click nodes - Select concepts to view details
- Hand tracking - Toggle in bottom-right corner (requires Python backend running)
Enable hand tracking navigation
If you started the Python hand tracking server (`python backend.py`), click the “Hand Tracking” toggle in the bottom-right corner.

Grant camera permissions when prompted, then use natural hand movements:

- Index finger - Move the cursor
- Pinch (thumb + index) - Zoom the graph
- Open palm (hold 3s) - Enter grab mode to drag nodes
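MediaPipe reports 21 hand landmarks in normalized [0, 1] image coordinates, so gesture recognition reduces to geometry on those points. This is not Sprout's actual detector, just an illustrative pinch heuristic on the thumb-tip and index-tip positions:

```python
import math


def is_pinch(thumb_tip, index_tip, threshold=0.05):
    """True when thumb and index fingertips are close enough to count as a pinch.

    Both points are (x, y) in MediaPipe's normalized image coordinates;
    the 0.05 threshold is an arbitrary illustrative choice.
    """
    return math.hypot(thumb_tip[0] - index_tip[0],
                      thumb_tip[1] - index_tip[1]) < threshold


print(is_pinch((0.50, 0.50), (0.52, 0.51)))  # fingertips close → True
print(is_pinch((0.30, 0.40), (0.60, 0.70)))  # far apart → False
```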
Take diagnostic assessments
Click on a concept node to view diagnostic questions. These assess your current understanding with:
- Multiple choice questions - Quick comprehension checks
- Open-ended questions - Deeper reasoning evaluation
After you submit, Sprout will:

- Grade your responses via the Grade Answers Agent
- Analyze your performance against historical data
- Restructure the subconcept graph to add remediation or remove mastered content
- Validate the graph for integrity (no orphans, no broken edges)
Learn with the tutor
Dive into a subconcept to start an interactive tutoring session. The Tutor Agent will:

- Check prerequisite mastery before starting
- Break the concept into 3-6 digestible chunks
- Present worked examples and create practice exercises
- Track your mastery score and persist progress

The tutor follows one rule: guide you to the answer, don’t provide it directly.
API Endpoints
Here are the key endpoints you’ll interact with:

| Endpoint | Method | Purpose |
|---|---|---|
| `/api/health` | GET | Health check |
| `/api/branches` | GET/POST | List or create topics |
| `/api/branches/:id` | GET/PATCH/DELETE | Manage a topic |
| `/api/agents/topics/:topicNodeId/run` | POST | Generate concepts (SSE) |
| `/api/agents/concepts/:conceptNodeId/run` | POST | Run diagnostics or refinement (SSE) |
| `/api/chat/sessions/:sessionId/tutor` | POST | Interactive tutoring |
| `/api/nodes/:nodeId/documents` | POST | Upload documents |
All agent endpoints stream Server-Sent Events (SSE) for real-time progress. Check the API Reference for detailed request/response schemas.
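As an illustration of calling these endpoints from Python's standard library, here is a sketch that builds (but does not send) a create-topic request. The `{"title": ...}` body is an assumed schema, so check the API Reference for the real field names:

```python
import json
import urllib.request


def build_create_branch_request(title, base_url="http://localhost:8000"):
    """Build (without sending) a POST /api/branches request.

    The {"title": ...} body is an assumption — consult the API Reference.
    """
    return urllib.request.Request(
        f"{base_url}/api/branches",
        data=json.dumps({"title": title}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With the backend running, you would send it like this:
# with urllib.request.urlopen(build_create_branch_request("Intro to React Hooks")) as resp:
#     print(json.loads(resp.read()))
```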
Troubleshooting
Hand tracking not working
- Ensure `python backend.py` is running in the `sprout-cv` conda environment
- Verify the WebSocket connection at `ws://localhost:8765`
- Grant camera permissions in your browser
- Check the browser console for connection errors
Document uploads failing
SSE stream stops or doesn’t connect
- The frontend bypasses the Next.js proxy for SSE endpoints
- Verify `NEXT_PUBLIC_BACKEND_ORIGIN` matches your backend URL (default: `http://localhost:8000`)
- Check that CORS is enabled in the backend (it is by default in `src/index.ts`)
- Ensure the backend is running before the frontend connects
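Both frontend variables mentioned in this guide live in `.env.local`. For reference, a minimal file would look like this (values shown are the defaults named above; the directory name is a guess):

```bash
# sprout-frontend/.env.local — directory name is an assumption
NEXT_PUBLIC_BACKEND_ORIGIN=http://localhost:8000
# Optional: cheaper agent runs while testing (1-2 concepts, 2-3 subconcepts)
NEXT_PUBLIC_SMALL_AGENTS=true
```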
Database errors
Port conflicts
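A quick standard-library check for which of Sprout's default ports (3000 frontend, 8000 backend, 8765 hand tracking) are currently taken:

```python
import socket


def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0


for port in (3000, 8000, 8765):
    print(port, "in use" if port_in_use(port) else "free")
```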
If ports 3000, 8000, or 8765 are already in use:

Next Steps
Now that Sprout is running, explore:

- Architecture - Learn how the seven-agent system works
- Hand Tracking Deep Dive - Configure and customize hand gesture controls
- API Reference - Integrate Sprout into your own applications
- Agent Workflows - Understand the Observe-Reason-Act-Verify loops
- Development Scripts
You’re all set! Open http://localhost:3000 and start building your personalized knowledge graph.