
Backend Overview

The EduMate backend is built with FastAPI and provides:
  • RESTful API for document upload, chunking, and question generation
  • User authentication with JWT tokens
  • PostgreSQL integration for user and assessment data
  • Redis Queue (RQ) for background job processing
  • Integration with Qdrant, Ollama, and Gemini API
The backend runs on port 8000 using Uvicorn as the ASGI server.

Prerequisites

Before deploying the backend, ensure the following services are running:
  • PostgreSQL (port 5432)
  • Redis (port 6379)
  • Qdrant (port 6333)
  • Ollama (port 11434)
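The checks above can be automated with a small script. This is a hedged sketch (the `SERVICES` mapping and `port_open` helper are ours, not part of the codebase); it simply attempts a TCP connection to each default port on localhost:

```python
import socket

# Default ports for the services EduMate depends on
SERVICES = {
    "PostgreSQL": 5432,
    "Redis": 6379,
    "Qdrant": 6333,
    "Ollama": 11434,
}

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        state = "up" if port_open("localhost", port) else "DOWN"
        print(f"{name:11} (port {port}): {state}")
```

Adjust the hosts and ports if your services bind elsewhere.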

Python Environment Setup

1. Install Python

EduMate requires Python 3.9 or higher:
python3 --version
# Expected: Python 3.9.x or higher
If not installed:
sudo apt update
sudo apt install python3 python3-pip python3-venv
2. Create Virtual Environment

Navigate to the project directory and create a virtual environment:
cd /path/to/edumate
python3 -m venv venv
3. Activate Virtual Environment

source venv/bin/activate
Your prompt should now show (venv).

Install Dependencies

The backend dependencies are listed in requirements.txt:
pip install --upgrade pip
pip install -r requirements.txt

Key Dependencies

From requirements.txt:
requirements.txt
# FastAPI and server
fastapi==0.124.4
fastapi-cli==0.0.16
uvicorn  # ASGI server
python-multipart==0.0.22  # File uploads

# Database
psycopg2-binary==2.9.11  # PostgreSQL adapter

# Authentication
PyJWT==2.8.0
passlib==1.7.4
bcrypt==5.0.0

# AI and LangChain
langchain==1.2.0
langchain-ollama==1.0.1
langchain-qdrant==1.1.0
langchain-text-splitters==1.1.0
langchain-google-genai==4.1.2
google-genai==1.56.0

# Vector DB and embeddings
qdrant-client==1.16.2
ollama==0.6.1
openai==2.11  # For Gemini API compatibility

# Document processing
pypdf==6.4.2

# Queue
redis==7.1.0
rq==2.6.1

# Utilities
pydantic==2.12.5
numpy==2.4.0
Installation may take 5-10 minutes depending on your internet connection and system performance.

Configuration

Database Configuration

The database connection is configured in backend/database.py:
backend/database.py
SQLALCHEMY_DATABASE_URL = "postgresql://edumate_user:edumate_pass@localhost:5432/edumate"
For production, change the default password and consider using environment variables:
import os
SQLALCHEMY_DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://edumate_user:edumate_pass@localhost:5432/edumate"
)

Environment Variables

Create a .env file in the project root:
.env
# Gemini API (Required for question generation)
GEMINI_API_KEY=your_gemini_api_key_here

# Optional: Override defaults
DATABASE_URL=postgresql://edumate_user:edumate_pass@localhost:5432/edumate
REDIS_HOST=localhost
REDIS_PORT=6379
QDRANT_URL=http://localhost:6333
OLLAMA_BASE_URL=http://localhost:11434
Obtain a Gemini API key from Google AI Studio.
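One way to centralize these settings is to read them with the stdlib's os.getenv, falling back to the development defaults from this guide. A sketch (the `load_settings` function name is ours, not part of the codebase; if you want the .env file loaded automatically, python-dotenv's `load_dotenv()` is the usual choice):

```python
import os

def load_settings() -> dict:
    """Collect EduMate settings from the environment,
    with the development defaults used in this guide."""
    return {
        "GEMINI_API_KEY": os.getenv("GEMINI_API_KEY", ""),
        "DATABASE_URL": os.getenv(
            "DATABASE_URL",
            "postgresql://edumate_user:edumate_pass@localhost:5432/edumate",
        ),
        "REDIS_HOST": os.getenv("REDIS_HOST", "localhost"),
        "REDIS_PORT": int(os.getenv("REDIS_PORT", "6379")),
        "QDRANT_URL": os.getenv("QDRANT_URL", "http://localhost:6333"),
        "OLLAMA_BASE_URL": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
    }
```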

JWT Secret Key

The JWT secret is configured in backend/server.py:
backend/server.py
SECRET_KEY = "super_secret_edumate_key"  # Change this in production!
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 1440  # 24 hours
Production Security: Generate a strong random secret key:
python -c "import secrets; print(secrets.token_urlsafe(32))"
Use this value for SECRET_KEY in production.
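To see what the HS256 setting above actually involves, here is a minimal, dependency-free sketch of JWT encoding. In the app itself PyJWT does this work; the helper names here are ours. The header segment it produces is the familiar eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9 prefix seen in the login example later in this guide:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, secret: str) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                               separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"
```

Anyone who knows SECRET_KEY can forge such tokens, which is why the key must be random and private in production.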

Start Redis Queue Worker

EduMate uses Redis Queue (RQ) for background processing of document chunking and question generation.
1. Install Redis

sudo apt install redis-server
sudo systemctl start redis
sudo systemctl enable redis
2. Start RQ Worker

In a separate terminal (with venv activated), start the worker:
rq worker
Expected output (exact lines vary by RQ version):
15:30:45 Worker rq:worker:hostname.12345 started with PID 12345
15:30:45 *** Listening on default...
The worker listens on Redis (localhost:6379) for jobs from the /chunking and /chat endpoints.
The RQ configuration is in backend/client/rq_client.py:
backend/client/rq_client.py
from redis import Redis
from rq import Queue

# Default queue backed by the local Redis instance
queue = Queue(
    connection=Redis(
        host="localhost",
        port=6379,  # port must be an int, not a string
    )
)

Run the Backend Server

1. Navigate to Backend Directory

cd /path/to/edumate/backend
2. Start FastAPI Server

The entry point is backend/main.py:
backend/main.py
from .server import app
import uvicorn

def main():
    uvicorn.run(app, port=8000, host="0.0.0.0")

if __name__ == "__main__":
    main()
Run the server:
# Option 1: Run main.py directly
python -m backend.main

# Option 2: Use uvicorn directly
uvicorn backend.server:app --host 0.0.0.0 --port 8000 --reload
The --reload flag enables auto-reload during development. Remove it for production.
3. Verify Server is Running

The server should start on port 8000:
curl http://localhost:8000/
Expected response (if frontend is not built):
{
  "status": "Server is running",
  "frontend": "not built (run: cd frontend && npm run build)"
}

Test API Endpoints

Health Check

curl http://localhost:8000/

Create User Account

curl -X POST http://localhost:8000/api/signup \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Test User",
    "email": "[email protected]",
    "password": "testpassword123"
  }'
Expected response:
{
  "id": 1,
  "name": "Test User",
  "email": "[email protected]"
}

Login

curl -X POST http://localhost:8000/api/login \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "[email protected]&password=testpassword123"
Expected response:
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "bearer"
}

Upload Document

TOKEN="your_access_token_here"

curl -X POST http://localhost:8000/chunking \
  -H "Authorization: Bearer $TOKEN" \
  -F "[email protected]"
Expected response:
{
  "status": "queued",
  "job_id": "abc123...",
  "collection_name": "edu_mate_def456..."
}

Check Chunking Status

curl "http://localhost:8000/chunking/status?job_id=abc123..."
Responses:
// While processing
{"status": "started"}

// When complete
{
  "status": "chunked",
  "result": {
    "stored": true,
    "chunks": 42,
    "source": "/path/to/sample.pdf",
    "collection_name": "edu_mate_def456..."
  }
}
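A client typically polls /chunking/status until the job reports chunked. A sketch of that loop (the `fetch_status` callable is injected rather than hard-coding HTTP, so the helper is testable; in practice it would wrap a requests or curl call against the endpoint above):

```python
import time

def wait_for_chunking(fetch_status, poll_interval=2.0, max_attempts=30):
    """Poll until the chunking job reports 'chunked' and return its result.

    fetch_status: a zero-argument callable returning the parsed JSON
    from GET /chunking/status?job_id=... (injected for testability).
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status.get("status") == "chunked":
            return status.get("result")
        time.sleep(poll_interval)
    raise TimeoutError("chunking job did not finish in time")
```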

Generate Questions

curl -X POST "http://localhost:8000/chat" \
  -H "Authorization: Bearer $TOKEN" \
  -d "query=Generate MCQs on this topic" \
  -d "collection_name=edu_mate_def456..." \
  -d "blooms_requirements=5 remember, 3 understand, 4 apply, 3 analyze, 2 evaluate, 3 create"
Expected response:
{
  "status": "queued",
  "job_id": "xyz789..."
}

Database Schema Initialization

On first run, FastAPI automatically creates database tables using SQLAlchemy:
backend/server.py
from .database import engine
from . import models

# Create tables
models.Base.metadata.create_all(bind=engine)
Verify tables were created:
psql -U edumate_user -d edumate -c "\dt"
Expected output:
          List of relations
 Schema |    Name     | Type  |    Owner
--------+-------------+-------+--------------
 public | assessments | table | edumate_user
 public | users       | table | edumate_user

Production Deployment

Using Gunicorn

For production, use Gunicorn with Uvicorn workers:
# Install Gunicorn
pip install gunicorn

# Run with 4 worker processes
gunicorn backend.server:app \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:8000 \
  --access-logfile - \
  --error-logfile -
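The worker count of 4 above is a starting point. Gunicorn's documentation suggests (2 × CPU cores) + 1 as a rule of thumb, which can be derived rather than hard-coded:

```python
import multiprocessing

# Gunicorn's suggested starting point: (2 x cores) + 1
workers = multiprocessing.cpu_count() * 2 + 1
print(f"--workers {workers}")
```

Tune from there based on observed load; more workers than this rarely helps for I/O-bound FastAPI apps.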

Using Systemd Service

Create a systemd service file:
/etc/systemd/system/edumate-backend.service
[Unit]
Description=EduMate FastAPI Backend
After=network.target postgresql.service redis.service

[Service]
Type=notify
User=edumate
Group=edumate
WorkingDirectory=/path/to/edumate
Environment="PATH=/path/to/edumate/venv/bin"
ExecStart=/path/to/edumate/venv/bin/gunicorn backend.server:app \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:8000
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable edumate-backend
sudo systemctl start edumate-backend
sudo systemctl status edumate-backend

RQ Worker Service

Create a systemd service for the RQ worker:
/etc/systemd/system/edumate-worker.service
[Unit]
Description=EduMate RQ Worker
After=network.target redis.service

[Service]
Type=simple
User=edumate
Group=edumate
WorkingDirectory=/path/to/edumate
Environment="PATH=/path/to/edumate/venv/bin"
ExecStart=/path/to/edumate/venv/bin/rq worker
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable edumate-worker
sudo systemctl start edumate-worker

Troubleshooting

Port 8000 Already in Use

# Find process using port 8000
sudo lsof -i :8000

# Kill the process
sudo kill -9 <PID>

# Or use a different port
uvicorn backend.server:app --port 8001

Database Connection Error

# Check PostgreSQL is running
sudo systemctl status postgresql

# Test connection manually
psql -U edumate_user -d edumate -h localhost

# Verify connection string in backend/database.py

Redis Connection Error

# Check Redis is running
redis-cli ping

# Restart Redis
sudo systemctl restart redis

Import Errors

# Ensure virtual environment is activated
source venv/bin/activate

# Reinstall dependencies
pip install -r requirements.txt

# Run from project root, not backend/
cd /path/to/edumate
python -m backend.main

RQ Worker Not Processing Jobs

# Check worker is running
ps aux | grep "rq worker"

# Check Redis queue
redis-cli
> KEYS *
> LLEN rq:queue:default

# Restart worker
pkill -f "rq worker"
rq worker

Logging

Enable detailed logging for debugging:
backend/main.py
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

uvicorn.run(app, port=8000, host="0.0.0.0", log_level="debug")

Next Steps

With the backend running, deploy the frontend. The backend serves the built React frontend from / whenever the frontend/dist directory exists; see backend/server.py:344-351 for the static file mounting logic.
