## Prerequisites
Ensure you have the following installed:

- Docker 20.10+ - Container runtime
- Docker Compose 2.0+ - Multi-container orchestration
- (Optional) NVIDIA Docker Runtime - For GPU acceleration with Ollama
Docker provides a consistent development environment and simplifies dependency management. All services run in isolated containers.
## Quick Start

### Configure environment
Copy the example environment file:

The default values work for Docker, but you can customize them as needed.
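A typical copy step looks like this (assuming the repository ships a `.env.example` template; adjust the filename if yours differs):

```shell
# Copy the example environment file to the .env that Docker Compose reads
cp .env.example .env
```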
### Build containers
Build all Docker images:

This builds the backend and frontend images from their respective Dockerfiles.
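With Docker Compose 2.0+, building every image defined in docker-compose.yml is usually a single command:

```shell
# Build all images defined in docker-compose.yml
docker compose build

# Or rebuild a single service from scratch, e.g. the backend
docker compose build --no-cache backend
```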
## Service Overview
SmartEat AI runs five Docker services:

### Backend

- Container: smarteatai_backend
- Port: 8000
- Purpose: REST API providing authentication, user management, meal plans, and AI recommendations
- Access: http://localhost:8000
- Docs: http://localhost:8000/docs

### Frontend

- Container: smarteatai_frontend
- Port: 3000
- Purpose: User interface for SmartEat AI platform
- Access: http://localhost:3000
- Features: Dashboard, chat, profile management, meal planning

### Database (PostgreSQL)

- Container: smarteatai_db
- Port: 5432
- Image: postgres:15
- Purpose: Primary database storing users, recipes, profiles, and meal plans
- Credentials: Configured via environment variables
- Volume: postgres_data (persists data)

### Ollama

- Container: smarteatai_ollama
- Port: 11434
- Image: ollama/ollama:latest
- Purpose: Local LLM server for AI chat and embeddings
- Model: llama3.1 (configurable via OLLAMA_MODEL)
- Volume: ollama_data (persists models)
- GPU: Automatically uses NVIDIA GPU if available

### Adminer

- Container: smarteatai_adminer
- Port: 8080
- Image: adminer
- Purpose: Web-based database management interface
- Access: http://localhost:8080
- Server: db (PostgreSQL service name)

## Accessing Services
### Frontend Application

Open http://localhost:3000 in your browser to access the SmartEat AI web interface.

### Backend API
- Base URL: http://localhost:8000
- Health Check: http://localhost:8000/health
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
### Database Management

Access Adminer at http://localhost:8080 with these credentials:

| Field | Value |
|---|---|
| System | PostgreSQL |
| Server | db |
| Username | smarteatai |
| Password | smarteatai |
| Database | smarteatai |
These are default development credentials. Change them in production!
### Ollama LLM Service

Ollama runs on http://localhost:11434. Download the required model before using the AI features.

## Docker Compose Commands
### Container Management
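Common lifecycle commands, assuming the service names used in the overview above (`backend`, `frontend`, `db`, `ollama`, `adminer`):

```shell
# Start all services in the background
docker compose up -d

# Stop and remove containers (named volumes are preserved)
docker compose down

# Restart a single service
docker compose restart backend

# Follow the logs of one service
docker compose logs -f backend

# Show container status
docker compose ps
```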
### Execute Commands in Containers
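`docker compose exec` runs a command inside an already-running container. A few examples (service names and credentials follow the overview and Adminer table above):

```shell
# Open an interactive shell in the backend container
docker compose exec backend bash

# Run a one-off command in the backend
docker compose exec backend python --version

# Open a psql session against the database
docker compose exec db psql -U smarteatai -d smarteatai
```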
## GPU Acceleration (Optional)
If you have an NVIDIA GPU, Ollama will automatically use it for faster inference.

### Requirements
- NVIDIA GPU with CUDA support
- NVIDIA Docker Runtime installed
### Install NVIDIA Container Toolkit
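On Ubuntu/Debian, the installation usually looks like this once NVIDIA's apt repository has been added (see NVIDIA's official instructions for the repository setup step):

```shell
# Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify that containers can see the GPU
docker run --rm --gpus all ubuntu nvidia-smi
```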
### Disable GPU (CPU-only)
If you don't have a GPU or want to use CPU only, comment out the `deploy` section in docker-compose.yml:
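The section in question typically looks like the fragment below; the exact keys depend on your docker-compose.yml:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    # Comment out this block to run Ollama on CPU only:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
```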
## Volumes and Data Persistence
Docker volumes persist data between container restarts:

| Volume | Purpose | Data |
|---|---|---|
| postgres_data | Database storage | Users, recipes, plans |
| ollama_data | LLM models | Downloaded Ollama models |
### Managing Volumes
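Typical volume operations (the destructive ones delete data, so double-check before running them):

```shell
# List all volumes
docker volume ls

# Inspect a volume (the name may be prefixed with the Compose project name)
docker volume inspect postgres_data

# Stop containers AND delete volumes: wipes the database and downloaded models
docker compose down -v
```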
## Development Workflow

### Hot Reload
Both backend and frontend support hot reload:

- Backend: FastAPI auto-reloads on file changes in backend/app/
- Frontend: Next.js hot-reloads on file changes in frontend/
### Making Code Changes
- Edit files in the backend/ or frontend/ directories
- Changes are automatically synced to containers via volume mounts
- Services reload automatically
- Refresh your browser to see changes
If you modify requirements.txt or package.json, rebuild the containers with `docker compose build` before restarting.

## Troubleshooting
### Port Already in Use
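To find what is holding a port (using 8000 as an example) and free it:

```shell
# Linux/macOS: show the process listening on port 8000
lsof -i :8000

# Or with ss on Linux
ss -ltnp | grep :8000

# Then stop the conflicting process, or change the published port in docker-compose.yml
```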
### Container Won't Start
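Start by reading the failing container's logs and status:

```shell
# Show recent logs for the failing service
docker compose logs --tail=100 backend

# List containers and their states (exited, restarting, ...)
docker compose ps -a

# Recreate the container from scratch
docker compose up -d --force-recreate backend
```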
### Database Connection Issues
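Check that PostgreSQL is up and accepting connections (credentials as in the Adminer table above):

```shell
# Is the db container running?
docker compose ps db

# Is PostgreSQL ready to accept connections?
docker compose exec db pg_isready -U smarteatai

# Tail the database logs for errors
docker compose logs -f db
```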
### Ollama Model Issues
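If chat responses fail, verify the model has actually been downloaded inside the Ollama container:

```shell
# List the models Ollama has locally
docker compose exec ollama ollama list

# (Re-)download the model used by the app
docker compose exec ollama ollama pull llama3.1

# Quick smoke test against the Ollama API
curl http://localhost:11434/api/tags
```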
### Clean Start

Reset everything, including containers and volumes, with `docker compose down -v` followed by `docker compose up -d --build`.

## Next Steps
- Review Environment Variables for configuration options
- Check the Installation guide for local development without Docker
- Explore the API documentation at http://localhost:8000/docs
