
Prerequisites

Before you begin, ensure you have the following installed:
  • Node.js 20+ and npm
  • Anaconda or Miniconda (or Python 3.11+) for hand tracking
  • Anthropic API key (available from the Anthropic Console)
  • AWS credentials and S3 bucket (optional, for document uploads)
  • Webcam (optional, for hand tracking navigation)
You can skip AWS setup if you don’t need document upload functionality. Hand tracking is also optional—you can navigate the graph with mouse/keyboard.

Installation

1

Clone the repository

git clone https://github.com/izhukau/sprout.git
cd sprout
2

Set up the backend API

Navigate to the backend directory and install dependencies:
cd sprout-backend
npm install
Create a .env file in sprout-backend/ with your configuration:
# Required
ANTHROPIC_API_KEY=your_anthropic_api_key

# Optional - for document uploads
AWS_ACCESS_KEY_ID=your_aws_key
AWS_SECRET_ACCESS_KEY=your_aws_secret
AWS_REGION=us-east-1
AWS_S3_BUCKET=your-bucket-name

# Optional - customize defaults
DB_PATH=./sprout.db  # SQLite database location
PORT=8000            # Backend server port
The backend uses SQLite for local storage. The database file will be created automatically when you run migrations.
Apply the database schema:
npm run db:migrate
You should see output confirming the creation of sprout.db.
3

Set up hand tracking (optional)

The hand tracking service uses Python with OpenCV and MediaPipe to detect hand landmarks via your webcam. Stay in the sprout-backend/ directory and verify conda is installed:
conda --version
If conda is not found, install Miniconda before continuing.
Create and activate the conda environment:
conda create -n sprout-cv python=3.11 -y
conda activate sprout-cv
Install the computer vision dependencies:
pip install -r requirements.txt
This installs:
  • mediapipe==0.10.14 - Hand landmark detection
  • opencv-python==4.13.0.92 - Video capture and processing
  • websockets==12.0 - WebSocket server for real-time streaming
  • numpy==2.4.2 - Numerical operations
Keep this terminal open with the sprout-cv environment activated—you’ll need it to run the hand tracking server.
4

Set up the frontend

Open a new terminal and navigate to the frontend directory:
cd sprout-frontend
npm install
Create a .env.local file in sprout-frontend/:
# Backend API configuration
NEXT_PUBLIC_BACKEND_ORIGIN=http://localhost:8000
NEXT_PUBLIC_BACKEND_PROXY_PREFIX=/backend-api

# Optional - use cheaper Claude models for testing
NEXT_PUBLIC_SMALL_AGENTS=false
The NEXT_PUBLIC_BACKEND_PROXY_PREFIX matches the Next.js rewrite configuration. SSE endpoints bypass the proxy and connect directly to NEXT_PUBLIC_BACKEND_ORIGIN.

Running Sprout

You’ll need three separate terminal sessions to run all services.

Terminal 1 — backend API:
cd sprout-backend
npm run dev

# Expected output:
# Default user seeded.
# Sprout backend running on http://localhost:8000

Terminal 2 — hand tracking (optional), with the sprout-cv environment activated:
cd sprout-backend
conda activate sprout-cv
python backend.py

Terminal 3 — frontend:
cd sprout-frontend
npm run dev

Start the backend before the frontend to ensure API connectivity. Hand tracking can be started anytime but must be running before you toggle it in the UI.

Health Check

Verify the backend is running:
curl http://localhost:8000/api/health
Expected response:
{
  "status": "ok",
  "timestamp": "2026-02-28T12:34:56.789Z"
}
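If you script the startup (for example in CI), you can poll the health endpoint until the backend reports ready. A minimal standard-library sketch — the `wait_for_backend` helper is illustrative, not part of Sprout:

```python
import json
import time
import urllib.request

def wait_for_backend(url="http://localhost:8000/api/health", timeout=30):
    """Poll the health endpoint until it reports status 'ok' or the timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                payload = json.loads(resp.read())
                if payload.get("status") == "ok":
                    return payload
        except OSError:
            pass  # backend not up yet; keep polling
        time.sleep(1)
    raise TimeoutError(f"backend not healthy after {timeout}s")
```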

Create Your First Topic

Now that all services are running, let’s create your first learning pathway:
1

Open the app

Navigate to http://localhost:3000 in your browser.
2

Create a new topic

Click “Add Topic” or “Create Branch” and enter a learning goal:
  • “Linear Algebra for Machine Learning”
  • “Fauna from Darkest Peru”
  • “Introduction to React Hooks”
Topics are called “branches” in the API, representing the root of your learning tree.
3

Upload documents (optional)

If you have PDFs, lecture notes, or course materials, upload them to provide context. Sprout will extract relevant sections for each concept. The upload functionality requires AWS S3 configuration in your backend .env file.
4

Watch the agents work

The UI streams real-time progress via Server-Sent Events as agents:
  1. Topic Agent generates 6-10 concepts with prerequisite relationships
  2. Subconcept Bootstrap Agents run in parallel (max 3 concurrent) to create 8-12 subconcepts per concept
  3. Generate Diagnostic Agents create mixed-format assessment questions
You’ll see SSE events in the activity panel:
agent_start: {"agent":"topic"}
agent_reasoning: {"text":"Analyzing the topic scope..."}
tool_call: {"tool":"save_concept","input":{...}}
node_created: {"node":{"id":"...","title":"Vectors and Matrices"}}
edge_created: {"edge":{"sourceNodeId":"...","targetNodeId":"..."}}
agent_done: {"agent":"topic"}
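Each SSE frame carries a JSON payload on its `data:` line. If you want to consume the stream outside the UI, a minimal parser for this frame format might look like the following (the `parse_sse` helper is illustrative, not part of Sprout's codebase):

```python
import json

def parse_sse(lines):
    """Yield (event, payload) pairs from the text lines of an SSE stream.

    SSE frames are separated by blank lines; each frame carries an
    optional 'event:' field and one or more 'data:' lines.
    """
    event, data_parts = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_parts.append(line[len("data:"):].strip())
        elif line == "" and data_parts:
            yield event, json.loads("\n".join(data_parts))
            event, data_parts = None, []

# Example frame as it would appear on the wire:
frames = [
    "event: node_created\n",
    'data: {"node":{"id":"abc","title":"Vectors and Matrices"}}\n',
    "\n",
]
for event, payload in parse_sse(frames):
    print(event, payload["node"]["title"])  # node_created Vectors and Matrices
```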
Set NEXT_PUBLIC_SMALL_AGENTS=true in frontend .env.local to use cheaper testing mode: 1-2 concepts with 2-3 subconcepts each.
5

Explore the 3D graph

Once generation completes, you’ll see an interactive 3D knowledge graph. Navigate using:
  • Mouse drag - Rotate the graph
  • Scroll - Zoom in/out
  • Click nodes - Select concepts to view details
  • Hand tracking - Toggle in bottom-right corner (requires Python backend running)
6

Enable hand tracking navigation

If you started the Python hand tracking server (python backend.py), click the “Hand Tracking” toggle in the bottom-right corner. Grant camera permissions when prompted, then use natural hand movements:
  • Index finger - Move the cursor
  • Pinch (thumb + index) - Zoom the graph
  • Open palm (hold 3s) - Enter grab mode to drag nodes
The hand tracking uses MediaPipe for landmark detection with exponential moving average smoothing:
# backend.py - Hand tracking configuration
SEND_INTERVAL = 1 / 60      # 60fps stream
SMOOTH_ALPHA = 0.35          # EMA smoothing weight
PALM_HOLD_SECONDS = 3.0      # Hold duration to grab
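The EMA update itself blends each new landmark position with the previous smoothed value, weighted by SMOOTH_ALPHA. A standalone sketch of the idea (illustrative; not the exact code in backend.py):

```python
SMOOTH_ALPHA = 0.35  # weight given to the newest sample

def ema_smooth(points, alpha=SMOOTH_ALPHA):
    """Apply exponential moving average smoothing to a stream of (x, y) points."""
    smoothed = None
    out = []
    for x, y in points:
        if smoothed is None:
            smoothed = (x, y)  # first sample passes through unchanged
        else:
            smoothed = (
                alpha * x + (1 - alpha) * smoothed[0],
                alpha * y + (1 - alpha) * smoothed[1],
            )
        out.append(smoothed)
    return out

# A jittery cursor track gets pulled toward its recent history:
track = [(0.50, 0.50), (0.60, 0.48), (0.52, 0.55)]
print(ema_smooth(track))
```

A lower alpha gives steadier (but laggier) cursor motion; a higher alpha tracks the hand more tightly at the cost of jitter.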
7

Take diagnostic assessments

Click on a concept node to view diagnostic questions. These assess your current understanding with:
  • Multiple choice questions - Quick comprehension checks
  • Open-ended questions - Deeper reasoning evaluation
Submit your answers to trigger the Concept Refinement Agent, which will:
  1. Grade your responses via the Grade Answers Agent
  2. Analyze your performance against historical data
  3. Restructure the subconcept graph to add remediation or remove mastered content
  4. Validate the graph for integrity (no orphans, no broken edges)
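The validation in step 4 boils down to two graph checks: every edge endpoint must exist, and every node must be reachable from the branch root. An illustrative sketch of those checks (not Sprout's actual validator):

```python
def validate_graph(nodes, edges, root_id):
    """Check a concept graph for broken edges and orphaned nodes.

    nodes: dict of node_id -> title
    edges: list of (source_id, target_id) pairs
    root_id: the branch root; every node must be reachable from it
    """
    errors = []
    # Broken edges: endpoints that reference missing nodes.
    for src, dst in edges:
        if src not in nodes or dst not in nodes:
            errors.append(f"broken edge: {src} -> {dst}")
    # Orphans: nodes unreachable from the root.
    reachable, frontier = {root_id}, [root_id]
    while frontier:
        current = frontier.pop()
        for src, dst in edges:
            if src == current and dst in nodes and dst not in reachable:
                reachable.add(dst)
                frontier.append(dst)
    for node_id in nodes:
        if node_id not in reachable:
            errors.append(f"orphan node: {node_id}")
    return errors

nodes = {"root": "Topic", "a": "Vectors", "b": "Matrices"}
edges = [("root", "a"), ("a", "b")]
print(validate_graph(nodes, edges, "root"))  # []
```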
8

Learn with the tutor

Dive into a subconcept to start an interactive tutoring session. The Tutor Agent will:
  • Check prerequisite mastery before starting
  • Break the concept into 3-6 digestible chunks
  • Present worked examples and create practice exercises
  • Track your mastery score and persist progress
Example tutor interaction:
POST /api/chat/sessions/:sessionId/tutor
{
  "message": "What's the difference between a vector and a matrix?"
}
The tutor follows one rule: guide you to the answer, don’t provide it directly.
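From a script, the same request can be made with the Python standard library. This is a sketch only; the session ID below is a placeholder for one returned by the API:

```python
import json
import urllib.request

BASE = "http://localhost:8000"
session_id = "demo-session"  # placeholder; use a real session ID from the API

url = f"{BASE}/api/chat/sessions/{session_id}/tutor"
body = json.dumps(
    {"message": "What's the difference between a vector and a matrix?"}
).encode()

req = urllib.request.Request(
    url, data=body, headers={"Content-Type": "application/json"}
)
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
except OSError as exc:  # backend not running or session not found
    print("Request failed:", exc)
```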

API Endpoints

Here are the key endpoints you’ll interact with:
Endpoint                                   Method             Purpose
/api/health                                GET                Health check
/api/branches                              GET/POST           List or create topics
/api/branches/:id                          GET/PATCH/DELETE   Manage a topic
/api/agents/topics/:topicNodeId/run        POST               Generate concepts (SSE)
/api/agents/concepts/:conceptNodeId/run    POST               Run diagnostics or refinement (SSE)
/api/chat/sessions/:sessionId/tutor        POST               Interactive tutoring
/api/nodes/:nodeId/documents               POST               Upload documents
All agent endpoints stream Server-Sent Events (SSE) for real-time progress. Check the API Reference for detailed request/response schemas.

Troubleshooting

Hand tracking not working

  • Ensure python backend.py is running in the sprout-cv conda environment
  • Verify WebSocket connection at ws://localhost:8765
  • Grant camera permissions in your browser
  • Check the browser console for connection errors

Document uploads failing

# Verify AWS credentials are set in sprout-backend/.env
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
AWS_REGION=us-east-1
AWS_S3_BUCKET=your-bucket
Without AWS config, document upload requests will fail. This is optional for basic functionality.

SSE stream stops or doesn’t connect

  • The frontend bypasses the Next.js proxy for SSE endpoints
  • Verify NEXT_PUBLIC_BACKEND_ORIGIN matches your backend URL (default: http://localhost:8000)
  • Check that CORS is enabled in the backend (it is by default in src/index.ts)
  • Ensure the backend is running before the frontend connects

Database errors

# Reset the database (deletes all data)
cd sprout-backend
rm sprout.db
npm run db:migrate

Port conflicts

If ports 3000, 8000, or 8765 are already in use:
# Backend: Change PORT in sprout-backend/.env
PORT=8001

# Frontend: Update NEXT_PUBLIC_BACKEND_ORIGIN in sprout-frontend/.env.local
NEXT_PUBLIC_BACKEND_ORIGIN=http://localhost:8001

# Hand tracking: Edit backend.py line 170
websockets.serve(track_hands, "localhost", 8766)

Next Steps

Now that Sprout is running, explore:

Architecture

Learn how the seven-agent system works

Hand Tracking Deep Dive

Configure and customize hand gesture controls

API Reference

Integrate Sprout into your own applications

Agent Workflows

Understand the Observe-Reason-Act-Verify loops

Development Scripts

cd sprout-backend

# Development server with auto-reload
npm run dev

# Database operations
npm run db:migrate   # Apply migrations
npm run db:generate  # Generate migration files
npm run db:push      # Push schema changes

# Production build
npm run build
npm start

You’re all set! Open http://localhost:3000 and start building your personalized knowledge graph.
