
Prerequisites

Ensure you have the following installed:
  • Docker 20.10+ - Container runtime
  • Docker Compose 2.0+ - Multi-container orchestration
  • (Optional) NVIDIA Docker Runtime - For GPU acceleration with Ollama
Docker provides a consistent development environment and simplifies dependency management. All services run in isolated containers.
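You can confirm the required versions from a terminal before continuing:

```shell
# Check installed versions (Docker 20.10+ and Compose 2.0+ are required)
docker --version
docker compose version

# Optional: confirm the NVIDIA runtime is visible to Docker (GPU setups only)
docker info 2>/dev/null | grep -i nvidia || true
```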

Quick Start

1. Clone the repository

git clone https://github.com/SmartEatAI/smart-eat-ai.git
cd smart-eat-ai
2. Configure environment

Copy the example environment file:
cp .env.example .env
The default values work for Docker, but you can customize them as needed.
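As a rough sketch, the file contains entries along these lines — the exact variable names depend on .env.example, so treat the names below as placeholders (the values shown match the development defaults documented under Accessing Services):

```shell
# Hypothetical excerpt — check .env.example for the real variable names
POSTGRES_USER=smarteatai
POSTGRES_PASSWORD=smarteatai
POSTGRES_DB=smarteatai
OLLAMA_MODEL=llama3.1   # documented as configurable
```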
3. Build containers

Build all Docker images:
docker compose build
This builds the backend and frontend images from their respective Dockerfiles.
4. Start services

Launch all containers:
docker compose up
Add the -d flag to run in detached mode (in the background):
docker compose up -d
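Once the stack is up, you can check that all five services started cleanly:

```shell
# List service status; all five containers should show "running"
docker compose ps

# Follow logs from every service to watch startup
docker compose logs -f
```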

Service Overview

SmartEat AI runs five Docker services:
backend — FastAPI Application
  • Container: smarteatai_backend
  • Port: 8000
  • Purpose: REST API providing authentication, user management, meal plans, and AI recommendations
  • Access: http://localhost:8000
  • Docs: http://localhost:8000/docs

frontend — Next.js Application
  • Container: smarteatai_frontend
  • Port: 3000
  • Purpose: User interface for the SmartEat AI platform
  • Access: http://localhost:3000
  • Features: Dashboard, chat, profile management, meal planning

db — PostgreSQL Database
  • Container: smarteatai_db
  • Port: 5432
  • Image: postgres:15
  • Purpose: Primary database storing users, recipes, profiles, and meal plans
  • Credentials: Configured via environment variables
  • Volume: postgres_data (persists data)

ollama — LLM Server
  • Container: smarteatai_ollama
  • Port: 11434
  • Image: ollama/ollama:latest
  • Purpose: Local LLM server for AI chat and embeddings
  • Model: llama3.1 (configurable via OLLAMA_MODEL)
  • Volume: ollama_data (persists models)
  • GPU: Automatically uses an NVIDIA GPU if available

adminer — Database Admin
  • Container: smarteatai_adminer
  • Port: 8080
  • Image: adminer
  • Purpose: Web-based database management interface
  • Access: http://localhost:8080
  • Server: db (the PostgreSQL service name)

Accessing Services

Frontend Application

Open http://localhost:3000 in your browser to access the SmartEat AI web interface.

Backend API

curl http://localhost:8000/health

Database Management

Access Adminer at http://localhost:8080 with these credentials:
Field      Value
System     PostgreSQL
Server     db
Username   smarteatai
Password   smarteatai
Database   smarteatai
These are default development credentials. Change them in production!
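If you prefer a terminal over Adminer, the same credentials work with psql inside the database container:

```shell
# Open an interactive psql session in the Postgres container
docker exec -it smarteatai_db psql -U smarteatai -d smarteatai

# Or run a one-off query non-interactively, e.g. list tables
docker exec smarteatai_db psql -U smarteatai -d smarteatai -c '\dt'
```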

Ollama LLM Service

Ollama runs on http://localhost:11434. Download the required model:
# Access the Ollama container
docker exec -it smarteatai_ollama bash

# Pull the model (llama3.1 by default)
ollama pull llama3.1

# Verify installation
ollama list
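You can also verify the service from the host without entering the container — Ollama exposes an HTTP API on port 11434:

```shell
# The root endpoint responds with "Ollama is running"
curl http://localhost:11434

# List downloaded models as JSON
curl http://localhost:11434/api/tags
```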

Docker Compose Commands

# Start all services
docker compose up

# Start in background
docker compose up -d

# Start specific service
docker compose up backend
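A few other day-to-day commands:

```shell
# Stop services without removing containers
docker compose stop

# Stop and remove containers (named volumes are kept)
docker compose down

# Follow logs for one service
docker compose logs -f backend
```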

Container Management

Execute Commands in Containers

# Access backend shell
docker exec -it smarteatai_backend bash

# Run migrations
docker exec smarteatai_backend alembic upgrade head

# Seed database
docker exec smarteatai_backend python -m app.seeders.run_seed

GPU Acceleration (Optional)

If you have an NVIDIA GPU, Ollama will automatically use it for faster inference.

Requirements

  1. NVIDIA GPU with CUDA support
  2. NVIDIA Docker Runtime installed

Install NVIDIA Container Toolkit

# Ubuntu/Debian
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
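After restarting Docker, you can confirm the GPU is reachable from containers. The CUDA image tag below is just an example; any recent nvidia/cuda base image works:

```shell
# Run nvidia-smi in a throwaway CUDA container; it should print your GPU
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Once the stack is running, check that Ollama sees the GPU too
docker exec smarteatai_ollama nvidia-smi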

Disable GPU (CPU-only)

If you don’t have a GPU or want to use CPU only, comment out the deploy section in docker-compose.yml:
ollama:
  image: ollama/ollama:latest
  # ... other config ...
  # deploy:
  #   resources:
  #     reservations:
  #       devices:
  #         - driver: nvidia
  #           count: all
  #           capabilities: [ gpu ]

Volumes and Data Persistence

Docker volumes persist data between container restarts:
Volume          Purpose            Data
postgres_data   Database storage   Users, recipes, plans
ollama_data     LLM models         Downloaded Ollama models

Managing Volumes

docker volume ls
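Beyond listing, you can inspect a volume's mount point or take a quick backup. Note that Compose prefixes volume names with the project name (smart-eat-ai_ here, matching the volume name used in the Troubleshooting section):

```shell
# Show details (mount point, driver) for the database volume
docker volume inspect smart-eat-ai_postgres_data

# Back up the volume to a tarball using a throwaway container
docker run --rm \
  -v smart-eat-ai_postgres_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/postgres_data.tgz -C /data .
```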

Development Workflow

Hot Reload

Both backend and frontend support hot reload:
  • Backend: FastAPI auto-reloads on file changes in backend/app/
  • Frontend: Next.js hot-reloads on file changes in frontend/

Making Code Changes

  1. Edit files in backend/ or frontend/ directories
  2. Changes are automatically synced to containers via volume mounts
  3. Services reload automatically
  4. Refresh your browser to see changes
If you modify requirements.txt or package.json, rebuild the containers:
docker compose build backend
docker compose up -d backend

Troubleshooting

Port Already in Use

# Find and kill the process using the port
lsof -ti:8000 | xargs kill -9

# Or change port in docker-compose.yml
ports:
  - "8001:8000"  # Host:Container

Container Won’t Start

# Check logs
docker compose logs backend

# Restart specific service
docker compose restart backend

# Rebuild and restart
docker compose up -d --build backend

Database Connection Issues

# Verify database is running
docker compose ps db

# Check database logs
docker compose logs db

# Recreate database
docker compose down
docker volume rm smart-eat-ai_postgres_data
docker compose up -d
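Recreating the database this way wipes the schema and seed data, so re-run the migrations and seeder from the Container Management section afterwards:

```shell
# Recreate the schema
docker exec smarteatai_backend alembic upgrade head

# Reload seed data
docker exec smarteatai_backend python -m app.seeders.run_seed
```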

Ollama Model Issues

# Check if model is downloaded
docker exec smarteatai_ollama ollama list

# Pull model manually
docker exec smarteatai_ollama ollama pull llama3.1

# Check Ollama logs
docker compose logs ollama

Clean Start

Reset everything:
# Stop all containers
docker compose down

# Remove volumes (WARNING: Deletes all data!)
docker compose down -v

# Remove images
docker compose down --rmi all

# Rebuild from scratch
docker compose build --no-cache
docker compose up -d
