
Prerequisites

Before you begin, ensure you have the following installed on your system:

  • Docker 20.10+ for container management
  • Docker Compose v2.0+ for orchestration
  • Git for cloning the repository
  • Node.js 20+ (optional) if developing outside Docker
For GPU acceleration with Ollama (optional), you’ll need:
  • NVIDIA GPU with 8GB+ VRAM
  • NVIDIA Docker runtime installed
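You can quickly confirm the required tools are on your PATH with a short shell check (a sketch; it reports presence and the first line of each tool's version output, but does not enforce the minimum versions listed above):

```shell
# Report whether each required tool is available on PATH
for cmd in docker git node; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: installed ($("$cmd" --version 2>/dev/null | head -n 1))"
  else
    echo "$cmd: NOT FOUND"
  fi
done
# Docker Compose v2 ships as a docker subcommand, so check it separately
docker compose version >/dev/null 2>&1 && echo "docker compose: installed" || echo "docker compose: NOT FOUND"
```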

Installation

1. Clone the Repository

Clone the SmartEat AI repository to your local machine:
git clone https://github.com/SmartEatAI/smart-eat-ai.git
cd smart-eat-ai
2. Configure Environment Variables

Copy the example environment file and configure your settings:
cp .env.example .env
Edit the .env file with your preferred settings:
.env
# Database Configuration
POSTGRES_USER=smarteatai
POSTGRES_PASSWORD=smarteatai
POSTGRES_DB=smarteatai
DATABASE_URL=postgresql://smarteatai:smarteatai@db:5432/smarteatai

# JWT Configuration
SECRET_KEY=your-super-secret-key-change-this-in-production-min-32-chars
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30

# Backend Configuration
BACKEND_URL=http://localhost:8000
FRONTEND_URL=http://localhost:3000

# Ollama Configuration
OLLAMA_MODEL=llama3.1
OLLAMA_BASE_URL=http://ollama:11434
CHROMA_EMBEDDING_MODEL=llama3
CHROMA_DB=backend/app/data/chroma_db_recipes
Make sure to change the SECRET_KEY to a secure random string in production environments. The key must be at least 32 characters long.
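A quick way to produce a suitable key is to generate 32 random bytes and hex-encode them, which yields a 64-character string; either openssl or Python's secrets module works:

```shell
# Print a 64-character random hex string for use as SECRET_KEY
if command -v openssl >/dev/null 2>&1; then
  openssl rand -hex 32
else
  python3 -c "import secrets; print(secrets.token_hex(32))"
fi
```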
3. Build Docker Containers

Build all the Docker images for the application:
docker compose build
This will build the following services:
  • Backend - FastAPI application with Python dependencies
  • Frontend - Next.js application with Node.js
  • PostgreSQL - Database server
  • Ollama - LLM inference engine
  • Adminer - Database management interface
4. Start the Application

Launch all services using Docker Compose:
docker compose up
Use docker compose up -d to run services in detached mode (background).
Wait for all services to start. You should see logs indicating:
  • PostgreSQL is ready to accept connections
  • FastAPI server is running
  • Next.js has compiled successfully
5. Download the Ollama Model

In a new terminal, download the required LLM model inside the Ollama container:
docker exec -it smarteatai_ollama ollama pull llama3.1
Verify the model was downloaded:
docker exec -it smarteatai_ollama ollama list
The first download may take several minutes depending on your internet connection. The model is approximately 4-5GB.
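You can also query Ollama's HTTP API from the host to confirm which models are present. This is a sketch; it assumes the container's port 11434 is published to the host as configured in the compose file:

```shell
# Ask the Ollama API for its locally available models
curl -s http://localhost:11434/api/tags \
  || echo "Ollama is not reachable on localhost:11434"
```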
6. Run Database Migrations

Apply database migrations to set up the schema:
docker exec -it smarteatai_backend alembic upgrade head
This creates all necessary tables for users, recipes, meal plans, and more.
7. Seed the Database (Optional)

Populate the database with initial recipe data:
docker exec -it smarteatai_backend python -m app.seeders.run_seed
This will insert:
  • Sample users
  • Recipe categories
  • Nutritional recipes from the dataset
  • Example meal plans
The seeding process may take several minutes as it processes thousands of recipes.
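To spot-check the seed, you can count rows directly in PostgreSQL. A sketch: the container name smarteatai_db and the recipes table are assumptions, so adjust them to match your docker-compose.yml and schema:

```shell
# Count seeded recipes (container and table names are assumptions; adjust as needed)
docker exec smarteatai_db \
  psql -U smarteatai -d smarteatai -c "SELECT COUNT(*) FROM recipes;" \
  || echo "Could not query the database; is the db container running?"
```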

Accessing the Application

Once all services are running, you can access SmartEat AI through the following URLs:

Frontend Application

http://localhost:3000
Main user interface for SmartEat AI

Backend API

http://localhost:8000
FastAPI backend service

API Documentation

http://localhost:8000/docs
Interactive Swagger UI for API testing

Database Admin

http://localhost:8080
Adminer interface for database management

API Endpoints

Test the backend health:
curl http://localhost:8000/health
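Beyond the health endpoint, you can quickly check that every service answers on its expected port. A small sketch, assuming the default port mappings listed above:

```shell
# Probe each service URL and report reachability (assumes default port mappings)
for url in http://localhost:3000 http://localhost:8000/health http://localhost:8080; do
  if curl -fsS -o /dev/null "$url" 2>/dev/null; then
    echo "$url OK"
  else
    echo "$url unreachable"
  fi
done
```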

Using SmartEat AI

1. Create an Account

Navigate to http://localhost:3000 and register a new account with your email and password.
2. Complete Your Profile

After logging in, you’ll be prompted to complete your profile with:
  • Biometric data (weight, height, age, gender)
  • Activity level (sedentary, moderate, active)
  • Nutritional goals (weight loss, maintenance, muscle gain)
  • Dietary preferences and restrictions (vegan, allergies, etc.)
  • Number of meals per day
3. Generate Your Meal Plan

Use the Chat feature to interact with Smarty and request a personalized meal plan:
“Create a weekly meal plan for me based on my profile”
The AI will generate a complete weekly plan with breakfast, lunch, dinner, and snacks tailored to your needs.
4. View Your Dashboard

The Dashboard displays today’s meals. Mark meals as consumed to track your daily nutritional progress with real-time updates.
5. Explore Your Plan

Navigate to My Plan to see your complete weekly meal plan with:
  • Average daily calories and macronutrients
  • Individual recipe details with nutritional breakdowns
  • Meal swap functionality powered by the KNN recommendation model

Docker Container Management

Viewing Logs

Monitor logs for specific services:
# All services
docker compose logs -f

# Specific service
docker compose logs -f backend
docker compose logs -f frontend
docker compose logs -f ollama

Stopping the Application

Stop all services:
# Stop and remove containers
docker compose down

# Stop and remove containers + volumes (removes database data)
docker compose down -v
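Since down -v deletes the database volume, you may want a dump first. A sketch, assuming the database container is named smarteatai_db (adjust to match your docker-compose.yml):

```shell
# Dump the database to backup.sql before removing volumes (container name is an assumption)
docker exec smarteatai_db pg_dump -U smarteatai smarteatai > backup.sql \
  && echo "Wrote backup.sql" \
  || echo "Backup failed; check that the db container is running"
```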

Restarting Services

Restart a specific service without rebuilding:
docker compose restart backend
docker compose restart frontend

Development Workflow

Backend Development

If you prefer to develop the backend outside Docker:
cd backend

# Install dependencies
pip install -r ../docker/backend/requirements.txt

# Run migrations
alembic upgrade head

# Start the dev server
uvicorn app.main:app --reload

Frontend Development

For frontend development outside Docker:
cd frontend

# Install dependencies
npm install

# Start the dev server
npm run dev
When developing outside Docker, update your .env file to point to localhost instead of Docker service names:
  • DATABASE_URL=postgresql://smarteatai:smarteatai@localhost:5432/smarteatai
  • OLLAMA_BASE_URL=http://localhost:11434
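One common pattern is to keep only the infrastructure in Docker while the backend and frontend run locally. A sketch; the service names db and ollama are inferred from the URLs above and may differ in your docker-compose.yml:

```shell
# Start only the database and Ollama services (service names are assumptions)
docker compose up -d db ollama \
  || echo "Could not start services; run this from the repository root"
```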

Troubleshooting

If you don’t have an NVIDIA GPU or proper drivers, comment out the deploy section in docker-compose.yml under the ollama service to use CPU-only mode.
Ensure PostgreSQL is fully started before running migrations. Wait for the log message:
database system is ready to accept connections
If ports 3000, 8000, 8080, 5432, or 11434 are already in use, modify the port mappings in docker-compose.yml:
ports:
  - "3001:3000"  # Change host port
If ollama pull fails, check your internet connection and try:
docker restart smarteatai_ollama
docker exec -it smarteatai_ollama ollama pull llama3.1

Next Steps

Now that you have SmartEat AI running, explore these resources:

API Reference

Explore the complete API documentation

Architecture

Learn about the system architecture

Machine Learning

Understand the ML recommendation pipeline

GitHub Repository

View source code and contribute
For additional help, refer to the detailed README files in the backend/ and frontend/ directories, or check the project presentation.
