
Overview

Docker deployment provides a consistent, isolated environment for running Iris without manually installing Rust, OpenCV, or system dependencies. This guide explains the Dockerfile architecture and deployment strategies.
Docker is the recommended deployment method for production environments, ensuring reproducible builds across different systems.

Prerequisites

1. Install Docker

Ensure Docker is installed on your system:
# Install using official script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group (optional, avoids sudo)
sudo usermod -aG docker $USER
newgrp docker
2. Verify Docker installation

docker --version
docker run hello-world
Expected output:
Docker version 24.0.0, build abc1234
Hello from Docker!

Dockerfile Architecture

Iris uses a single-stage Dockerfile based on the official Rust image with OpenCV dependencies:
Dockerfile
FROM rust:latest

# Install system dependencies for OpenCV and Rust compilation
RUN apt-get update && apt-get install -y \
    clang \
    llvm-dev \
    libclang-dev \
    pkg-config \
    libopencv-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy dependency manifests first (Docker layer caching optimization)
COPY Cargo.toml Cargo.lock ./
COPY src ./src

# Build the application in release mode
RUN cargo build --release

# Download AI models using setup script
COPY setup.sh ./setup.sh
RUN chmod +x setup.sh && ./setup.sh

# Expose API port
EXPOSE 8080

# Run the compiled binary
CMD ["./target/release/iris"]

Build Stages Explained

FROM rust:latest
Uses the official Rust Docker image based on Debian, providing:
  • Rust toolchain (rustc, cargo)
  • Standard Linux utilities
  • Debian package manager (apt)
For production, consider pinning to a specific version:
FROM rust:1.75-slim
RUN apt-get update && apt-get install -y \
    clang \
    llvm-dev \
    libclang-dev \
    pkg-config \
    libopencv-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*
Installs OpenCV and build tools:
  • clang/llvm-dev/libclang-dev: Required for opencv-rust bindings compilation
  • pkg-config: Locates library paths during build
  • libopencv-dev: OpenCV 4.x headers and libraries
  • curl: Downloads ONNX models
The cleanup (rm -rf /var/lib/apt/lists/*) reduces image size by ~40MB.
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
Sets /app as the working directory and copies source files.
Copying Cargo.toml/Cargo.lock before src/ keeps the manifest layers stable in Docker's cache. Note, however, that cargo build runs after src/ is copied, so any source change invalidates that layer and recompiles dependencies as well; only a rebuild with completely unchanged files hits the cache.
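To cache dependency compilation in its own layer, a common pattern is to build once against a stub main.rs before copying the real sources. A sketch (assumes a single binary crate named iris; adjust for workspaces):

```dockerfile
COPY Cargo.toml Cargo.lock ./

# Compile dependencies against a stub entry point; this layer is reused
# until Cargo.toml or Cargo.lock change.
RUN mkdir src \
    && echo "fn main() {}" > src/main.rs \
    && cargo build --release \
    && rm -rf src

# Copy the real sources; only the application crate recompiles now.
# touch works around COPY preserving old timestamps.
COPY src ./src
RUN touch src/main.rs && cargo build --release
```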
RUN cargo build --release
Compiles Iris in release mode:
  • Optimizations enabled (-O3 equivalent)
  • Binary output: /app/target/release/iris
  • Build time: ~8-12 minutes on first build
Subsequent builds leverage Docker cache if dependencies unchanged.
COPY setup.sh ./setup.sh
RUN chmod +x setup.sh && ./setup.sh
Executes the setup script to download:
  • face_detection_yunet_2023mar.onnx (~360KB)
  • face_recognition_sface_2021dec.onnx (~41MB)
Models are baked into the image at /app/*.onnx.
EXPOSE 8080
CMD ["./target/release/iris"]
  • EXPOSE 8080: Documents the API port (doesn’t publish it)
  • CMD: Runs the compiled binary on container start
The binary expects models in /app (WORKDIR), which matches the build paths.

Building the Image

1. Clone the repository

git clone https://github.com/your-username/iris.git
cd iris
2. Build the Docker image

docker build -t iris-api:latest .
First build takes 10-15 minutes:
  • Downloading Rust dependencies (~2-3 min)
  • Compiling opencv-rust bindings (~5-7 min)
  • Building application (~1-2 min)
  • Downloading ONNX models (~30 sec)
Expected output:
[+] Building 650.2s (12/12) FINISHED
 => [internal] load build definition from Dockerfile
 => [internal] load .dockerignore
 => [1/7] FROM docker.io/library/rust:latest
 => [2/7] RUN apt-get update && apt-get install...
 => [3/7] WORKDIR /app
 => [4/7] COPY Cargo.toml Cargo.lock ./
 => [5/7] COPY src ./src
 => [6/7] RUN cargo build --release
 => [7/7] RUN chmod +x setup.sh && ./setup.sh
 => exporting to image
 => => naming to docker.io/library/iris-api:latest
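If the initial context transfer is slow, a .dockerignore keeps local build artifacts out of the context sent to the Docker daemon. A minimal sketch (a native target/ directory alone can be several gigabytes):

```
# .dockerignore: keep the build context small
target/
.git/
*.onnx
```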
3. Verify the image

docker images iris-api
Expected output:
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
iris-api     latest    abc123def456   2 minutes ago    2.1GB
Image size is ~2.1GB due to the Rust toolchain and OpenCV. See Image Size Optimization below for reduction strategies.

Running the Container

Basic Usage

Run Iris in detached mode with port mapping:
docker run -d \
  --name iris \
  -p 8080:8080 \
  iris-api:latest
  • -d: Detached mode (runs in background)
  • --name iris: Container name for easy reference
  • -p 8080:8080: Maps host port 8080 to container port 8080
  • iris-api:latest: Image name and tag

Verify the Container

Check container status:
docker ps
Expected output:
CONTAINER ID   IMAGE              COMMAND                  STATUS         PORTS
abc123def456   iris-api:latest    "./target/release/iris"  Up 10 seconds  0.0.0.0:8080->8080/tcp
View logs:
docker logs iris
Expected output:
Initializing Iris Face AI...
Iris API running on http://localhost:8080

Test the API

Verify the health endpoint:
curl http://localhost:8080/health
Expected response:
OK
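In scripts, it is often useful to wait until the container is actually serving before running further steps, since the API may take a moment to initialize. A small retry helper (a sketch; the commented curl invocation at the end is the intended use against Iris):

```shell
# wait_for: run a command up to N times, one second apart, until it succeeds.
wait_for() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}

# Intended usage once the container has started:
# wait_for 30 curl -fsS http://localhost:8080/health
```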

Advanced Configuration

Custom Port Mapping

Run on a different host port:
docker run -d \
  --name iris \
  -p 3000:8080 \
  iris-api:latest
Access at http://localhost:3000
The container still listens on 8080 internally. Only the host mapping changes.

Resource Limits

Constrain CPU and memory usage:
docker run -d \
  --name iris \
  -p 8080:8080 \
  --cpus="2.0" \
  --memory="4g" \
  iris-api:latest

Environment Variables

While Iris doesn’t currently use environment variables, you can pass them for future extensibility:
docker run -d \
  --name iris \
  -p 8080:8080 \
  -e LOG_LEVEL=debug \
  -e MAX_IMAGE_SIZE=10MB \
  iris-api:latest

Volume Mounting

Mount external model files (useful for model updates without rebuilding):
docker run -d \
  --name iris \
  -p 8080:8080 \
  -v $(pwd)/models:/app/models \
  iris-api:latest
Requires modifying src/face.rs to load models from /app/models/ instead of /app/.

Persistent Logs

Redirect container logs to a file:
docker run -d \
  --name iris \
  -p 8080:8080 \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  iris-api:latest
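The same rotation settings can be expressed declaratively in Docker Compose (fragment; goes under the iris service definition):

```yaml
logging:
  driver: json-file
  options:
    max-size: "10m"
    max-file: "3"
```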

Docker Compose

For multi-container setups or simplified orchestration:
docker-compose.yml
version: '3.8'

services:
  iris:
    build: .
    container_name: iris-api
    ports:
      - "8080:8080"
    environment:
      - RUST_LOG=info
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 2G
Run with Docker Compose:
# Build and start
docker-compose up -d

# View logs
docker-compose logs -f iris

# Stop
docker-compose down
To run Iris behind an nginx reverse proxy, keep the API on the internal Docker network and publish only nginx:
docker-compose.yml
version: '3.8'

services:
  iris:
    build: .
    container_name: iris-api
    expose:
      - "8080"
    restart: unless-stopped
  
  nginx:
    image: nginx:alpine
    container_name: iris-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - iris
    restart: unless-stopped
This setup:
  • Exposes Iris only to the internal Docker network
  • Routes external traffic through nginx
  • Enables HTTPS termination
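The compose file above mounts ./nginx.conf, but the proxy configuration itself is not shown. A minimal sketch that forwards all traffic to the iris service over the internal network (HTTPS server block omitted):

```nginx
events {}

http {
    server {
        listen 80;

        location / {
            # "iris" resolves via Docker's internal DNS to the service above
            proxy_pass http://iris:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```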

Image Size Optimization

The default image is ~2.1GB. Reduce size using multi-stage builds:
Dockerfile.optimized
# Build stage
FROM rust:latest AS builder

RUN apt-get update && apt-get install -y \
    clang llvm-dev libclang-dev pkg-config libopencv-dev curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release

COPY setup.sh ./
RUN chmod +x setup.sh && ./setup.sh

# Runtime stage
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y \
    libopencv-core406 libopencv-imgcodecs406 libopencv-objdetect406 libopencv-dnn406 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY --from=builder /app/target/release/iris ./
COPY --from=builder /app/*.onnx ./

EXPOSE 8080
CMD ["./iris"]
Build the optimized image:
docker build -f Dockerfile.optimized -t iris-api:slim .
Optimized image size: ~400MB (80% reduction)
Only runtime OpenCV libraries are included, eliminating build tools and Rust toolchain.

Production Deployment

Health Checks

Add Docker health checks for automatic restart on failure:
docker-compose.yml
services:
  iris:
    build: .
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    restart: unless-stopped
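Outside Compose, the same check can be baked into the image with a HEALTHCHECK instruction (sketch; curl is present in the default image, but the slim runtime stage from the optimization section would need it installed):

```dockerfile
# Add before CMD in the Dockerfile
HEALTHCHECK --interval=30s --timeout=10s --retries=3 --start-period=40s \
    CMD curl -f http://localhost:8080/health || exit 1
```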

Security Hardening

1. Run as non-root user

Modify Dockerfile:
# Add after WORKDIR /app
RUN useradd -m -u 1000 iris && chown -R iris:iris /app
USER iris
2. Use read-only filesystem

docker run -d \
  --name iris \
  -p 8080:8080 \
  --read-only \
  --tmpfs /tmp \
  iris-api:latest
Iris is stateless and doesn’t write to disk, making read-only mode safe.
3. Drop unnecessary capabilities

docker run -d \
  --name iris \
  -p 8080:8080 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  iris-api:latest

Monitoring and Logging

Integrate with logging systems:
# Forward logs to syslog
docker run -d \
  --name iris \
  -p 8080:8080 \
  --log-driver syslog \
  --log-opt syslog-address=tcp://192.168.1.100:514 \
  iris-api:latest
Or use Docker’s built-in JSON logging with log rotation:
docker run -d \
  --name iris \
  -p 8080:8080 \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  iris-api:latest

Troubleshooting

OpenCV build failure

Symptoms:
error: failed to run custom build command for `opencv`
Cause: libopencv-dev installation failed in the Dockerfile.
Solution:
  • Update package lists: Add apt-get update before install
  • Use specific OpenCV version:
    RUN apt-get install -y libopencv-dev=4.6.0+dfsg-12
    
  • Check Debian/Ubuntu compatibility with rust:latest base image
Model download failure

Symptoms:
curl: (6) Could not resolve host: github.com
Cause: network connectivity issues during docker build.
Solution:
  • Check Docker network settings
  • Use --network=host flag:
    docker build --network=host -t iris-api .
    
  • Pre-download models and COPY instead:
    COPY *.onnx ./
    
Container exits immediately

Symptoms:
docker ps     # shows no running container
docker ps -a  # shows Exited (1) status
Solution: Check the logs for errors:
docker logs iris
Common causes:
  • Missing ONNX models in image
  • Port 8080 permission issues (use --cap-add=NET_BIND_SERVICE)
  • Panic during initialization (check model paths)
High memory usage

Symptoms: Container uses >4GB RAM.
Cause: Multiple concurrent face recognition requests.
Solution:
  • Set memory limits:
    docker run --memory="4g" --memory-swap="4g" iris-api
    
  • Reduce rate limit quota in src/main.rs
  • Scale horizontally with multiple containers + load balancer
Connection refused

Symptoms: curl: (7) Failed to connect to localhost port 8080
Solution:
  • Verify port mapping: docker ps (should show 0.0.0.0:8080->8080/tcp)
  • Check firewall rules:
    sudo ufw allow 8080/tcp
    
  • Test from inside container:
    docker exec iris curl http://localhost:8080/health
    

Scaling with Docker

Horizontal Scaling

Run multiple Iris instances behind a load balancer:
# Start 3 instances
docker run -d --name iris-1 -p 8081:8080 iris-api:latest
docker run -d --name iris-2 -p 8082:8080 iris-api:latest
docker run -d --name iris-3 -p 8083:8080 iris-api:latest
Use nginx for load balancing:
nginx.conf
upstream iris_backend {
    least_conn;
    server localhost:8081;
    server localhost:8082;
    server localhost:8083;
}

server {
    listen 80;
    
    location / {
        proxy_pass http://iris_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
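With Compose, the three manually started instances above can instead be replicas of a single service (sketch; container_name and fixed host-port mappings must be dropped so replicas don't collide):

```yaml
services:
  iris:
    build: .
    expose:
      - "8080"
```

Start with docker-compose up -d --scale iris=3; an nginx container on the same network can then proxy to the service name iris, with Docker's DNS distributing connections across replicas.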

Docker Swarm / Kubernetes

For orchestration at scale, Docker Swarm services or Kubernetes Deployments can manage replicas, health-based restarts, and rolling updates across multiple hosts.
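As a Kubernetes starting point, a minimal Deployment and Service for the published image might look like this (a sketch; names and the image reference are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris
spec:
  replicas: 3
  selector:
    matchLabels:
      app: iris
  template:
    metadata:
      labels:
        app: iris
    spec:
      containers:
        - name: iris
          image: yourusername/iris-api:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: "2"
              memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: iris
spec:
  selector:
    app: iris
  ports:
    - port: 80
      targetPort: 8080
```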

Registry and Distribution

Push to Docker Hub

# Tag the image
docker tag iris-api:latest yourusername/iris-api:latest
docker tag iris-api:latest yourusername/iris-api:v0.1.0

# Login to Docker Hub
docker login

# Push
docker push yourusername/iris-api:latest
docker push yourusername/iris-api:v0.1.0

Pull and Run

Others can now deploy Iris without building:
docker pull yourusername/iris-api:latest
docker run -d -p 8080:8080 yourusername/iris-api:latest

Next Steps

API Reference

Integrate face comparison into your application

Local Installation

Alternative non-Docker setup guide

Security Model

Learn about privacy and security features

Architecture

Understand how Iris works under the hood
