
Prerequisites

Before you begin, ensure you have the following installed on your system:

  • Node.js: version 20 or higher
  • Bun: latest version (used for package management)
  • Python: version 3.9 or higher

Quick Start

GitaChat consists of two main components that run independently:
  • Frontend: Next.js application (port 3000)
  • Backend: FastAPI server (port 8000)

Frontend Setup

The frontend is built with Next.js, TypeScript, and uses Clerk for authentication, Supabase for query history, and TanStack Query for data fetching.
1. Navigate to the frontend directory

cd frontend
2. Install dependencies

bun install
3. Configure environment variables

Create a .env.local file in the frontend/ directory:
BACKEND_URL=http://localhost:8000
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=your_clerk_publishable_key
CLERK_SECRET_KEY=your_clerk_secret_key
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
See Environment Variables for detailed configuration.
4. Start the development server

bun run dev
The frontend will be available at http://localhost:3000

Backend Setup

The backend uses FastAPI with Python, Sentence Transformers for embeddings, and Pinecone as the vector database.
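Conceptually, the backend embeds each query as a 768-dimensional vector and ranks verses by vector similarity. The toy sketch below illustrates that ranking step with cosine similarity; the 3-dimensional vectors and verse labels are illustrative stand-ins, not real BGE-base-en-v1.5 embeddings or codebase names.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (the real service uses 768-dim BGE-base-en-v1.5 vectors
# stored in Pinecone, which performs this ranking server-side).
query_vec = [0.9, 0.1, 0.0]
verses = {
    "BG 2.47": [0.8, 0.2, 0.1],  # hypothetical verse embeddings
    "BG 6.19": [0.1, 0.9, 0.3],
}

# Pick the verse whose embedding is closest in direction to the query.
best = max(verses, key=lambda v: cosine_similarity(query_vec, verses[v]))
print(best)  # prints: BG 2.47
```

In production this comparison happens inside Pinecone's index rather than in Python, but the scoring idea is the same.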
1. Navigate to the backend directory

cd backend
2. Create a virtual environment (recommended)

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
3. Install dependencies

pip install -r requirements.txt
This will install:
  • FastAPI & Uvicorn: Web framework and ASGI server
  • Sentence Transformers: BGE-base-en-v1.5 for embeddings (768-dim)
  • Pinecone: Vector database client
  • OpenAI: For generating contextual commentary
  • SlowAPI: Rate limiting
4. Configure environment variables

Create a .env file in the backend/ directory:
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_INDEX=your_pinecone_index_name
GPT_KEY=your_openai_api_key
See Environment Variables for detailed configuration.
5. Start the development server

uvicorn main:app --reload
The backend API will be available at http://localhost:8000

Running Both Services Concurrently

You can use the provided Makefile to run both frontend and backend simultaneously:
make dev
This command runs both services in parallel using the configuration from the Makefile.

Verify Your Setup

Frontend Health Check

Open http://localhost:3000 in your browser. You should see the GitaChat homepage.

Backend Health Check

Visit http://localhost:8000/health. You should receive: {"status": "ok"}
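The same check can be scripted with the standard library. This is a small illustrative helper, not part of the codebase; `check_backend` requires the backend from the steps above to be running.

```python
import json
import urllib.request

def is_healthy(payload: str) -> bool:
    """Return True if a /health response body reports an ok status."""
    try:
        return json.loads(payload).get("status") == "ok"
    except json.JSONDecodeError:
        return False

def check_backend(url: str = "http://localhost:8000/health") -> bool:
    """GET the health endpoint and validate the body (needs a running backend)."""
    with urllib.request.urlopen(url) as resp:
        return is_healthy(resp.read().decode())

# Validate the expected payload shape without a live server:
print(is_healthy('{"status": "ok"}'))  # prints: True
```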

Test the Query Endpoint

Once both services are running, you can test the semantic search functionality:
curl -X POST http://localhost:8000/api/query \
  -H "Content-Type: application/json" \
  -d '{"query": "How do I find inner peace?"}'
You should receive a response with a relevant verse from the Bhagavad Gita.
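The same request can be made from Python with only the standard library. The helper name below is illustrative; it builds exactly the POST the curl command sends.

```python
import json
import urllib.request

def build_query_request(
    query: str, url: str = "http://localhost:8000/api/query"
) -> urllib.request.Request:
    """Build the JSON POST request that the curl example sends."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("How do I find inner peace?")
print(req.get_method(), req.full_url)  # prints: POST http://localhost:8000/api/query

# To actually send it (both services must be running):
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
```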

Project Structure

gitachat/
├── frontend/              # Next.js frontend application
│   ├── app/               # Next.js App Router pages
│   ├── components/        # React components
│   ├── lib/               # Utility functions and API clients
│   ├── package.json       # Frontend dependencies
│   └── .env.local         # Frontend environment variables
├── backend/               # FastAPI backend application
│   ├── main.py            # FastAPI application and endpoints
│   ├── model.py           # Embedding and search logic
│   ├── clients.py         # Pinecone and model clients
│   ├── config.py          # Environment configuration
│   ├── utils.py           # Helper functions
│   ├── requirements.txt   # Python dependencies
│   └── .env               # Backend environment variables
└── Makefile               # Commands for running both services

Troubleshooting

Port Already in Use

If port 3000 or 8000 is already in use:
# Find process using port
lsof -i :3000  # or :8000

# Kill the process
kill -9 <PID>

Python Module Not Found

Ensure your virtual environment is activated:
source venv/bin/activate
pip install -r requirements.txt

CORS Errors

Verify that BACKEND_URL in the frontend's .env.local matches your backend URL (default: http://localhost:8000).
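If the error persists, the backend may need CORS middleware that allows the frontend's origin. A hedged sketch of a typical FastAPI setup — the actual configuration in main.py may differ:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the frontend dev server origin to call the API from the browser.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```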

Model Download Issues

The first backend startup downloads the BGE-base-en-v1.5 model (~400MB). Ensure you have:
  • Stable internet connection
  • Sufficient disk space
  • Patience (may take a few minutes)

Next Steps

Environment Variables

Learn about all required API keys and configuration options

Deployment Guide

Deploy GitaChat to Vercel and Railway
