
Overview

The Water Quality Backend API can be deployed in multiple ways:
  1. Docker: Containerized deployment for consistency
  2. Docker Compose: Full stack with monitoring (Prometheus, Grafana)
  3. Production Python: Direct deployment with Uvicorn
All deployment methods use Uvicorn as the ASGI server for high-performance async request handling.

Prerequisites

Before deployment, ensure you have:
  • Python 3.12+ (for non-Docker deployments)
  • Docker and Docker Compose (for containerized deployments)
  • Firebase project with credentials
  • Environment variables configured (see Environment Variables)

Docker Deployment

The recommended deployment method using a containerized environment.

Dockerfile

The application uses an optimized single-stage Dockerfile:
FROM python:3.12-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# Install system dependencies for scientific libraries
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc \
    g++ \
    build-essential \
    libffi-dev \
    libssl-dev \
    libblas-dev \
    liblapack-dev \
    gfortran \
    libjpeg62-turbo \
    zlib1g \
    python3-dev && \
    rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user for security
RUN groupadd -r appuser && useradd -r -g appuser appuser && \
    chown -R appuser:appuser /app

USER appuser

EXPOSE 8000

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
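If image size matters, the same Dockerfile can be split into a multi-stage build. The sketch below is an assumption, not the project's shipped Dockerfile: it builds wheels with the full toolchain in a first stage, then installs only the wheels plus runtime shared libraries in the final stage (Debian package names such as libblas3 and libgfortran5 are what the -dev packages above link against):

```dockerfile
# Stage 1: build wheels with the full compiler toolchain
FROM python:3.12-slim AS builder
WORKDIR /app
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc g++ build-essential gfortran \
    libffi-dev libssl-dev libblas-dev liblapack-dev python3-dev && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: runtime image without compilers
FROM python:3.12-slim
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
# Runtime shared libraries only (no -dev packages, no compilers)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    libblas3 liblapack3 libgfortran5 libjpeg62-turbo zlib1g && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . .
RUN groupadd -r appuser && useradd -r -g appuser appuser && \
    chown -R appuser:appuser /app
USER appuser
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

The final image drops gcc, g++, and the -dev headers entirely, which typically cuts a few hundred megabytes from the ~800MB single-stage image.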

Build Docker Image

1. Clone Repository

git clone https://github.com/your-org/water-quality-backend.git
cd water-quality-backend
2. Create Environment File

Create a .env file with your configuration:
cp .env.example .env
# Edit .env with your credentials
See Environment Variables for required variables.
3. Build Image

docker build -t water-quality-api:latest .
This creates an optimized production image (~800MB including dependencies).
4. Run Container

docker run -d \
  --name water-quality-api \
  -p 8000:8000 \
  --env-file .env \
  water-quality-api:latest
5. Verify Deployment

curl http://localhost:8000/
# Response: {"message": "API"}

Docker Security Features

The Dockerfile follows security best practices:
  • Runs as non-root user (appuser)
  • Minimal base image (python:3.12-slim)
  • No pip download cache baked into image layers (--no-cache-dir)
  • Proper file permissions

Docker Compose Deployment

Deploy the full stack with monitoring and observability.

docker-compose.yml

version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9100:9100"

  app:
    build: .
    container_name: backend_api
    env_file:
      - .env
    ports:
      - "8000:8000"
      
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

Services Included

app (backend_api)
  • Purpose: Main FastAPI application
  • Port: 8000
  • Configuration: Loads environment variables from .env file
  • Health Check: GET / returns {"message": "API"}

Prometheus
  • Purpose: Scrapes and stores metrics from the API
  • Port: 9090
  • Configuration: Requires prometheus.yml configuration file
  • Metrics Endpoint: API exposes metrics at /metrics (via monitoring feature)

Node Exporter
  • Purpose: Exports hardware and OS metrics
  • Port: 9100
  • Metrics: CPU, memory, disk, network statistics

Grafana
  • Purpose: Dashboard for visualizing Prometheus metrics
  • Port: 3000
  • Default Credentials: admin/admin (change on first login)

Deploy with Docker Compose

1. Create Prometheus Configuration

Create prometheus.yml in the project root:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'water-quality-api'
    static_configs:
      - targets: ['app:8000']
  
  - job_name: 'node'
    static_configs:
      - targets: ['node_exporter:9100']
2. Configure Environment

Ensure .env file exists with all required variables.
3. Start Stack

docker-compose up -d
This starts all services in detached mode.
4. Verify Services

# Check API
curl http://localhost:8000/

# Check Prometheus
curl http://localhost:9090/

# Check Grafana
curl http://localhost:3000/
5. View Logs

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f app

Managing the Stack

# Start services
docker-compose up -d

# Stop services
docker-compose down

# Rebuild after code changes
docker-compose up -d --build

# View service status
docker-compose ps

# Restart specific service
docker-compose restart app

# Remove volumes (CAUTION: deletes data)
docker-compose down -v
Production Note: The default Docker Compose configuration is for development. For production:
  • Use production-grade databases (not in-memory)
  • Add restart policies (restart: always)
  • Configure resource limits
  • Use Docker secrets for sensitive data
  • Add health checks
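Those points can be sketched as a Compose override file (hypothetical values; adjust the limits and the probe to your environment):

```yaml
# docker-compose.prod.yml (sketch)
# Apply with: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  app:
    restart: always
    healthcheck:
      # python:3.12-slim ships without curl, so probe with the standard library
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/')"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
  prometheus:
    restart: always
  node_exporter:
    restart: always
  grafana:
    restart: always
```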

Production Python Deployment

Deploy directly on a server without Docker.

Installation Steps

1. Install Python 3.12

# Ubuntu/Debian
sudo apt update
sudo apt install python3.12 python3.12-venv python3.12-dev

# macOS (using Homebrew)
brew install [email protected]
2. Install System Dependencies

Required for scientific libraries (pandas, numpy, scikit-learn):
sudo apt install -y \
  gcc g++ build-essential \
  libffi-dev libssl-dev \
  libblas-dev liblapack-dev \
  gfortran libjpeg62-turbo zlib1g
3. Clone and Setup

git clone https://github.com/your-org/water-quality-backend.git
cd water-quality-backend

# Create virtual environment
python3.12 -m venv venv
source venv/bin/activate

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
4. Configure Environment

cp .env.example .env
# Edit .env with your production credentials
nano .env
5. Run Application

python main.py
This starts Uvicorn on http://0.0.0.0:8000

Production Configuration

For production, use Uvicorn with additional options:
uvicorn app:app \
  --host 0.0.0.0 \
  --port 8000 \
  --workers 4 \
  --log-level info \
  --access-log \
  --no-use-colors
Options Explained:
  • --workers 4: Run 4 worker processes (adjust based on CPU cores)
  • --log-level info: Set logging verbosity
  • --access-log: Enable HTTP access logging
  • --no-use-colors: Disable ANSI colors in logs (better for file logs)
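A common starting heuristic for the worker count (a general rule of thumb, not project-specific) is (2 × CPU cores) + 1, tuned from there under real load:

```shell
# Derive a worker count from the machine's core count (heuristic sketch)
CORES=$(nproc)
WORKERS=$(( 2 * CORES + 1 ))
echo "uvicorn app:app --host 0.0.0.0 --port 8000 --workers $WORKERS"
```

If the API's Socket.IO feature is in use, note that multiple worker processes generally require sticky sessions or a shared message-queue backend so that a client's long-polling requests keep hitting the same worker.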

Process Management with systemd

Create a systemd service for automatic startup and restart:
# /etc/systemd/system/water-quality-api.service
[Unit]
Description=Water Quality Backend API
After=network.target

[Service]
Type=simple
User=www-data
Group=www-data
WorkingDirectory=/opt/water-quality-backend
Environment="PATH=/opt/water-quality-backend/venv/bin"
EnvironmentFile=/opt/water-quality-backend/.env
ExecStart=/opt/water-quality-backend/venv/bin/uvicorn app:app \
  --host 0.0.0.0 \
  --port 8000 \
  --workers 4 \
  --log-level info
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
KillSignal=SIGQUIT
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable water-quality-api
sudo systemctl start water-quality-api
sudo systemctl status water-quality-api

Reverse Proxy (Nginx)

Use Nginx as a reverse proxy for SSL termination and load balancing.

Nginx Configuration

server {
    listen 80;
    server_name api.example.com;
    
    # Redirect HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    
    # Proxy to Uvicorn
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # WebSocket support for Socket.IO
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    
    # Socket.IO specific
    location /socket.io/ {
        proxy_pass http://127.0.0.1:8000/socket.io/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    
    # Monitoring endpoints (optional: restrict access)
    location /metrics {
        proxy_pass http://127.0.0.1:8000/metrics;
        allow 10.0.0.0/8;  # Internal network only
        deny all;
    }
}
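One optional refinement (a standard nginx pattern, not specific to this project): hardcoding Connection "upgrade" in location / forces upgrade semantics on every proxied request. A map in the http {} context sets the header only when the client actually asks to upgrade:

```nginx
# In the http {} context, before the server blocks
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

With the map in place, the location blocks would use proxy_set_header Connection $connection_upgrade; instead of the literal "upgrade".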
Enable the configuration:
sudo ln -s /etc/nginx/sites-available/water-quality-api /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Environment Variables

All deployment methods require environment variables to be configured.
See the complete Environment Variables Guide for all required and optional variables.

Critical Variables

# Firebase
FIREBASE_ADMIN_CREDENTIALS='{...}'  # JSON credentials
FIREBASE_API_KEY='your-api-key'
FIREBASE_REALTIME_URL='https://your-project.firebaseio.com'

# Authentication
SECRET_KEY='your-jwt-secret-key'
STATE_SECRET='your-oauth-state-secret'

# GitHub OAuth
GITHUB_CLIENT_ID='your-github-client-id'
GITHUB_CLIENT_SECRET='your-github-client-secret'
GITHUB_CALLBACK_URL='https://api.example.com/auth/github/callback'
FRONTEND_ORIGIN='https://app.example.com/oauth/callback'
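A pre-flight check (a hypothetical helper script, not part of the project) can fail loudly before startup if any of the variables above are unset:

```shell
# check-env.sh (sketch): verify required environment variables are exported
missing=0
for var in FIREBASE_ADMIN_CREDENTIALS FIREBASE_API_KEY FIREBASE_REALTIME_URL \
           SECRET_KEY STATE_SECRET GITHUB_CLIENT_ID GITHUB_CLIENT_SECRET; do
  if [ -z "$(printenv "$var")" ]; then
    echo "Missing required variable: $var" >&2
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "Environment OK"
else
  echo "Set the variables above before deploying" >&2
fi
```

Run it after sourcing .env (or inside the container) so a typo in a variable name surfaces as one clear line instead of a runtime Firebase error.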

Health Checks

Monitor application health using built-in endpoints.

Basic Health Check

curl http://localhost:8000/
# Response: {"message": "API"}

Prometheus Metrics

If monitoring is configured:
curl http://localhost:8000/metrics
# Returns Prometheus-formatted metrics

Docker Health Check

Add to the Dockerfile. Note that curl is not installed in python:3.12-slim, so either add it via apt-get or probe with Python's standard library:
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/')" || exit 1

Monitoring Setup

The Docker Compose stack includes Prometheus and Grafana for monitoring.

Access Monitoring Tools

With the stack running, each service is reachable on localhost:
  • API: http://localhost:8000
  • Prometheus: http://localhost:9090
  • Grafana: http://localhost:3000
  • Node Exporter: http://localhost:9100/metrics

Configure Grafana Dashboard

1. Login to Grafana

Navigate to http://localhost:3000 and login with admin/admin
2. Add Prometheus Data Source

  • Go to Configuration → Data Sources
  • Add Prometheus: http://prometheus:9090
3. Import Dashboard

  • Go to Dashboards → Import
  • Use dashboard ID 1860 for Node Exporter metrics
  • Create custom dashboard for API metrics
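For repeatable setups, the Prometheus data source can also be provisioned from a file instead of the UI. A sketch, assuming the file is mounted into the grafana container under /etc/grafana/provisioning/datasources/:

```yaml
# datasources.yml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```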

Scaling Considerations

Horizontal Scaling

For high-traffic deployments:
  1. Multiple Workers: Increase Uvicorn workers based on CPU cores
    uvicorn app:app --workers $(nproc)
    
  2. Load Balancer: Use Nginx, HAProxy, or cloud load balancers
  3. Multiple Instances: Deploy multiple API instances behind a load balancer
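A minimal nginx sketch of steps 2–3 (hypothetical backend addresses). Because the API uses Socket.IO, sticky sessions are generally required when long-polling is in play; ip_hash is the simplest way to get them:

```nginx
upstream water_quality_api {
    ip_hash;  # sticky sessions: keep each client on one instance (Socket.IO)
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    location / {
        proxy_pass http://water_quality_api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```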

Vertical Scaling

Optimize resource allocation:
# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 2G

Troubleshooting

Common Issues

Error: Address already in use
Solution:
# Find process using port 8000
sudo lsof -i :8000
# Kill process
sudo kill -9 <PID>
Error: Failed to initialize Firebase
Solution:
  • Verify FIREBASE_ADMIN_CREDENTIALS is valid JSON
  • Check Firebase project permissions
  • Ensure FIREBASE_REALTIME_URL is correct
Error: ModuleNotFoundError: No module named 'X'
Solution:
pip install --upgrade -r requirements.txt
Error: Build failures with numpy/pandas
Solution: Ensure system dependencies are installed in the Dockerfile:
RUN apt-get install -y gcc g++ gfortran libblas-dev liblapack-dev

Viewing Logs

# Docker
docker logs -f water-quality-api

# Docker Compose
docker-compose logs -f app

# systemd
sudo journalctl -u water-quality-api -f

# Direct Python (logs to stdout)
python main.py 2>&1 | tee app.log

Security Checklist

Before deploying to production:
  • Change default passwords (Grafana, databases)
  • Use HTTPS/TLS for all connections
  • Secure environment variables (use Docker secrets or vault)
  • Enable firewall rules (only expose necessary ports)
  • Set up rate limiting
  • Configure CORS for specific origins (not *)
  • Use non-root user in containers
  • Regularly update dependencies
  • Enable audit logging
  • Implement backup strategy for Firebase data

Next Steps

Environment Variables

Configure all required environment variables

Monitoring

Learn about built-in monitoring features
