This guide walks through deploying every component of the BLE People Counter system to a production environment. The three components — the Raspberry Pi scanner agent, the cloud backend, and the frontend dashboard — can be deployed independently, but they must all be configured to talk to each other.

Raspberry Pi scanner

Installation

Run install.sh from the raspberry-pi/ directory on each Pi. The script installs system dependencies, creates a Python virtual environment, grants the required Linux capabilities to the Python binary, and writes a systemd service unit:
cd /home/pi/TFG-RaspberryPi-BLE/raspberry-pi
./install.sh
The script grants BLE scanning permissions without requiring sudo at runtime:
sudo setcap cap_net_raw,cap_net_admin+eip "$PROJECT_ROOT/venv/bin/python3"
After installation, edit the .env file with your production settings before starting the service.

Configuration

Copy .env.example to .env and fill in at minimum:
DEVICE_ID=pi-entrance-01
COMMUNICATION_MODE=http
HTTP_BASE_URL=https://your-backend.fly.dev
HTTP_API_KEY=your-api-key
Key environment variables from src/config.py:
| Variable | Default | Description |
| --- | --- | --- |
| DEVICE_ID | (required) | Unique identifier for this Pi (min 3 chars) |
| COMMUNICATION_MODE | mqtt | http or mqtt |
| HTTP_BASE_URL | http://localhost:8000 | Backend URL |
| HTTP_API_KEY |  | API key for backend authentication |
| NEAR_THRESHOLD | -60 | RSSI boundary for NEAR zone (dBm) |
| MEDIUM_THRESHOLD | -75 | RSSI boundary for MEDIUM zone (dBm) |
| SCAN_DURATION | 10 | Seconds per scan (1–60) |
| SCAN_INTERVAL | 30 | Seconds between scans (min 5) |
| LOG_LEVEL | INFO | DEBUG, INFO, WARNING, ERROR |
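The NEAR and MEDIUM thresholds divide each detection into a proximity zone by signal strength. A minimal sketch of how such a classifier can work (the function name and the "FAR" label for everything weaker than MEDIUM_THRESHOLD are illustrative; the actual logic lives in the scanner code):

```python
def classify_zone(rssi_dbm: int, near: int = -60, medium: int = -75) -> str:
    """Map an RSSI reading (dBm) to a proximity zone.

    A stronger (less negative) signal means the device is closer:
      rssi >= NEAR_THRESHOLD   -> "NEAR"
      rssi >= MEDIUM_THRESHOLD -> "MEDIUM"
      otherwise                -> "FAR"
    """
    if rssi_dbm >= near:
        return "NEAR"
    if rssi_dbm >= medium:
        return "MEDIUM"
    return "FAR"
```

Because both the Pi and the backend expose these thresholds, keep NEAR_THRESHOLD and MEDIUM_THRESHOLD identical on both sides so zone labels agree.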

systemd auto-start

install.sh writes the service unit to /etc/systemd/system/ble-scanner.service. Enable it to start on boot:
sudo systemctl enable ble-scanner
sudo systemctl start ble-scanner
The service is configured to restart automatically:
[Service]
Restart=always
RestartSec=15
StartLimitBurst=10
StartLimitInterval=200
Check the current status:
sudo systemctl status ble-scanner
Follow logs in real time:
sudo journalctl -u ble-scanner -f
Or use the bundled status script:
./scripts/check_status.sh

Log rotation

The agent writes logs to ./logs/iot-agent.log with automatic rotation. The RotatingFileHandler keeps the active log plus up to 3 backups of 5 MB each (at most 20 MB on disk):
import os
from logging.handlers import RotatingFileHandler

rotating_handler = RotatingFileHandler(
    os.path.join(log_dir, "iot-agent.log"), maxBytes=5 * 1024 * 1024, backupCount=3
)
No additional log rotation configuration is needed.

OTA updates

The ota/ota_update.py module provides over-the-air updates via Git. When enabled, it checks origin/main on the configured interval and runs git pull when a new commit is available:
class OTAUpdater:
    def __init__(
        self,
        repo_path: str = "/home/pi/TFG-RaspberryPi-BLE/raspberry-pi",
        version_file: str = "ota/version.json",
        check_interval: int = 3600,  # default: check every hour
        auto_restart: bool = True,
    ):
With auto_restart=True, the updater calls systemctl restart iot-agent after a successful pull. The current deployed commit hash is tracked in ota/version.json.
The OTA updater requires that the Pi has network access to the GitHub repository and that git is installed (handled by install.sh).
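The core of a Git-based OTA check reduces to comparing the local HEAD against origin/main after a fetch. A simplified sketch of that logic (the function names and subprocess calls here are illustrative, not the module's actual API):

```python
import subprocess


def rev_parse(repo_path: str, ref: str) -> str:
    """Return the commit hash that `ref` resolves to in `repo_path`."""
    result = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", ref],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def update_available(local_hash: str, remote_hash: str) -> bool:
    """An update is pending when the remote commit differs from the local one."""
    return local_hash != remote_hash
```

In the real updater, a `git fetch` precedes the comparison so that `origin/main` reflects the remote, and a `git pull` plus service restart follows when `update_available` is true.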

Backend

Docker Compose (self-hosted)

The docker-compose.yml in backend-cloud/ defines a PostgreSQL 14 database and the FastAPI application:
services:
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: tfg_detections
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend:
    build: .
    environment:
      DATABASE_URL: postgresql+asyncpg://postgres:postgres@postgres:5432/tfg_detections
      ENVIRONMENT: development
    ports:
      - "8000:8000"
    depends_on:
      postgres:
        condition: service_healthy
For production, override the environment variables with secure values. Create a docker-compose.prod.yml or use an .env file:
SECRET_KEY=a-long-random-string-at-least-32-chars
DATABASE_URL=postgresql+asyncpg://user:password@postgres:5432/tfg_detections
ENVIRONMENT=production
DEBUG=false
API_KEY=your-pi-api-key
DEVICES_PER_PERSON=1.5
Start the stack:
docker compose up -d

Environment variables

Key backend settings from backend-cloud/src/config.py:
| Variable | Default | Description |
| --- | --- | --- |
| SECRET_KEY | dev-secret-key-change-in-production | Change this in production |
| DATABASE_URL | postgresql+asyncpg://postgres:postgres@localhost:5432/tfg_detections | PostgreSQL connection string |
| ENVIRONMENT | development | Set to production in prod |
| DEBUG | true | Set to false in prod |
| API_KEY |  | Shared secret used by Pi agents |
| DEVICES_PER_PERSON | 1.5 | Divisor for people estimate |
| NEAR_THRESHOLD | -60 | RSSI boundary for NEAR zone |
| MEDIUM_THRESHOLD | -75 | RSSI boundary for MEDIUM zone |
| CORS_ORIGINS | localhost variants + *.vercel.app | Allowed frontend origins |
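DEVICES_PER_PERSON compensates for people carrying more than one BLE device (a phone plus a wearable, for example). A sketch of the resulting estimate under that assumption (the function name and the rounding choice are illustrative):

```python
def estimate_people(unique_devices: int, devices_per_person: float = 1.5) -> int:
    """Estimate headcount by dividing detected devices by devices per person."""
    if unique_devices <= 0:
        return 0
    return round(unique_devices / devices_per_person)
```

Tune the divisor per venue: a conference crowd with phones and smartwatches warrants a higher value than a setting where most people carry a single device.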
Always set a strong, random SECRET_KEY in production. The default value dev-secret-key-change-in-production must not be used in any internet-facing deployment.
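One convenient way to generate a suitable key is Python's standard secrets module:

```python
import secrets

# 48 random bytes, URL-safe base64-encoded: comfortably over 32 characters.
key = secrets.token_urlsafe(48)
print(key)
```

Store the result only in your secrets manager or .env file, never in version control.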

Fly.io deployment

The project includes a fly.toml targeting the cdg (Paris) region:
app = 'tfg-raspberrypi-ble'
primary_region = 'cdg'

[env]
  ENVIRONMENT = 'production'
  DEBUG = 'false'
  DEVICES_PER_PERSON = '1.5'
  NEAR_THRESHOLD = '-60'
  MEDIUM_THRESHOLD = '-75'
  PORT = '8000'
  WEBSOCKET_ENABLED = 'true'

[http_service]
  internal_port = 8000
  force_https = true
  auto_stop_machines = 'off'
  min_machines_running = 1

[[vm]]
  memory = '512mb'
  cpus = 1
1. Install the Fly CLI and authenticate

curl -L https://fly.io/install.sh | sh
fly auth login
2. Create the app (first time only)

cd backend-cloud
fly launch --no-deploy
3. Set secrets

Secrets are never stored in fly.toml. Set them via the CLI:
fly secrets set SECRET_KEY="your-strong-secret-key"
fly secrets set DATABASE_URL="postgresql+asyncpg://user:pass@host:5432/db"
fly secrets set API_KEY="your-pi-api-key"
4. Deploy

fly deploy

Health check

The backend exposes a /health endpoint. Use it to verify the deployment:
curl https://your-backend.fly.dev/health
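The same check can be scripted, for example from a cron job or a CI step. The only contract stated here is that /health answers HTTP 200 when the API and database are reachable, so the sketch below checks the status code only (the helper name is illustrative):

```python
from urllib.request import urlopen


def backend_healthy(base_url: str) -> bool:
    """Return True when GET {base_url}/health answers HTTP 200."""
    try:
        with urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, or non-2xx/3xx response.
        return False
```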

Frontend

Vercel deployment

1. Connect the repository to Vercel

Import the repository in the Vercel dashboard or run vercel from the frontend/ directory.
2. Set the backend URL environment variable

Add VITE_API_URL in the Vercel project settings (Environment Variables):
VITE_API_URL=https://your-backend.fly.dev
This variable must be set for both Preview and Production environments.
3. Deploy

Vercel builds the project automatically on every push to main. For a manual deploy:
vercel --prod

CORS configuration

The backend’s allowed origins list in backend-cloud/src/config.py includes https://*.vercel.app by default:
cors_origins: list = [
    "http://localhost:3000",
    "http://localhost:5173",
    "http://127.0.0.1:5173",
    "https://*.vercel.app",
]
If you use a custom domain, add it to this list (or to the CORS_ORIGINS environment variable if that is wired up) and redeploy the backend.
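Note that https://*.vercel.app is a wildcard pattern, not a literal origin: matching it against a request's Origin header requires glob- or regex-style comparison rather than string equality. An illustrative check using fnmatch (an assumption about how such an entry could be interpreted, not the backend's actual mechanism):

```python
from fnmatch import fnmatch

# Patterns mirroring the cors_origins list; "*" matches any characters.
CORS_ORIGINS = [
    "http://localhost:5173",
    "https://*.vercel.app",
]


def origin_allowed(origin: str) -> bool:
    """Glob-match a request Origin against the configured patterns."""
    return any(fnmatch(origin, pattern) for pattern in CORS_ORIGINS)
```

If the framework's CORS middleware only does exact matching on its origin list, wildcard entries need to be translated into a regex or a custom matcher like this one.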

Multi-Pi setup

To monitor multiple areas simultaneously, deploy one Pi per zone and give each a unique DEVICE_ID. All Pis point to the same backend:
# Pi 1 — entrance
DEVICE_ID=pi-entrance-01
DEVICE_NAME=Entrance Scanner
DEVICE_LOCATION=Main entrance
HTTP_BASE_URL=https://your-backend.fly.dev
HTTP_API_KEY=shared-api-key

# Pi 2 — main hall
DEVICE_ID=pi-hall-01
DEVICE_NAME=Hall Scanner
DEVICE_LOCATION=Main hall
HTTP_BASE_URL=https://your-backend.fly.dev
HTTP_API_KEY=shared-api-key
Each detection record stored in the database includes a device_id field, so the dashboard can break counts down per zone. No additional backend configuration is required for multi-Pi operation: the backend accepts data from any number of devices as long as they share the same API_KEY.
Use a naming convention for DEVICE_ID values that encodes the location, for example pi-floor2-west-01. This makes dashboard filtering and log analysis easier as the deployment grows.
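With a consistent pi-<zone>-<nn> convention, the zone can be recovered from any DEVICE_ID for filtering or log analysis. A small parser sketch (the convention is only a suggestion above, so this helper is hypothetical):

```python
def parse_device_id(device_id: str) -> tuple[str, int]:
    """Split e.g. 'pi-floor2-west-01' into ('floor2-west', 1)."""
    parts = device_id.split("-")
    if len(parts) < 3 or parts[0] != "pi" or not parts[-1].isdigit():
        raise ValueError(f"unexpected DEVICE_ID format: {device_id!r}")
    # Everything between the 'pi' prefix and the trailing index is the zone.
    return "-".join(parts[1:-1]), int(parts[-1])
```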

Monitoring

Pi service status

sudo systemctl status ble-scanner
sudo journalctl -u ble-scanner -f
./scripts/check_status.sh

Pi log files

tail -f logs/iot-agent.log
Rotated automatically: the active log plus up to 3 × 5 MB backups.

Backend health

curl https://your-backend.fly.dev/health
Returns HTTP 200 when the API and database are reachable.

Fly.io logs

fly logs
fly status
