

Running SearchJobs in production requires more than just docker-compose up. You need to harden secrets, set a real database password, provision a MongoDB instance for the chat feature (it is not bundled in docker-compose.yml), build and serve the React frontend as a static bundle, and plan for file-upload persistence. This page covers each of these areas in turn.

Security hardening

The default configuration in docker-compose.yml and the sample .env prioritises convenience for local development. Before exposing the application to the internet, make the following changes.

JWT secret key

MY_SECRET_KEY must be a long, random, unpredictable string in production. The default placeholder is not secure. Generate a strong value before deploying.
# Generate a cryptographically random 64-character hex string
openssl rand -hex 32
Set the output as the value of MY_SECRET_KEY in your production secrets store (e.g. a CI/CD secret, Docker Swarm secret, or a secrets manager). Never hard-code it in the repository.
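One common way to wire this up, assuming the key lives in the deploy host's environment or an untracked .env file (the service name below is illustrative and should match your compose file):

```yaml
# Sketch: inject the JWT key at deploy time instead of hard-coding it
services:
  backend:
    environment:
      MY_SECRET_KEY: "${MY_SECRET_KEY}"   # resolved by Compose from the host env or .env
```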

MySQL password

The docker-compose.yml sets MYSQL_ALLOW_EMPTY_PASSWORD: "yes" and SPRING_DATASOURCE_PASSWORD: "" for local development. In production:
Change MYSQL_ALLOW_EMPTY_PASSWORD to "no" and set a strong, unique password for both MYSQL_ROOT_PASSWORD (or a dedicated MySQL user) and SPRING_DATASOURCE_PASSWORD.
# Production override (docker-compose.override.yml or separate compose file)
services:
  mysql:
    environment:
      MYSQL_DATABASE: mydb
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
  backend:
    environment:
      SPRING_DATASOURCE_PASSWORD: "${SPRING_DATASOURCE_PASSWORD}"
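A quick way to generate the two values, assuming openssl is available and Compose reads variables from a .env file next to the compose file:

```shell
# Generate a strong random password and write both variables for Compose to substitute
MYSQL_ROOT_PASSWORD="$(openssl rand -base64 24)"
SPRING_DATASOURCE_PASSWORD="$MYSQL_ROOT_PASSWORD"   # or a dedicated app user's password
printf 'MYSQL_ROOT_PASSWORD=%s\nSPRING_DATASOURCE_PASSWORD=%s\n' \
  "$MYSQL_ROOT_PASSWORD" "$SPRING_DATASOURCE_PASSWORD" >> .env
```

Keep the resulting .env out of version control.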

JPA DDL mode

Never use SPRING_JPA_HIBERNATE_DDL_AUTO=create-drop in production — it drops and recreates all tables on every restart, destroying your data.
Set this to validate (Hibernate checks the schema matches the entities but makes no changes) or none (Hibernate does nothing — manage schema with Flyway or Liquibase):
SPRING_JPA_HIBERNATE_DDL_AUTO=validate
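If you choose none and manage the schema with Flyway, a versioned migration is a plain SQL file on the classpath that Flyway applies in order at startup. A hypothetical first migration (the path follows Flyway's default convention; the table and columns are illustrative only):

```sql
-- src/main/resources/db/migration/V1__baseline.sql (hypothetical)
CREATE TABLE IF NOT EXISTS app_user (
    id    BIGINT AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE
);
```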

HTTPS and reverse proxy

The Spring Boot container exposes port 8080 over plain HTTP. In production, place a reverse proxy in front of it to terminate TLS.
nginx is a common choice. Configure it to proxy https://api.yourdomain.com to http://springboot-app:8080. Cloud providers (Railway, Render, Fly.io) can also handle TLS termination automatically.
Update URL_FRONTEND to your actual production frontend domain so CORS is configured correctly:
URL_FRONTEND=https://yourdomain.com
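A minimal nginx TLS-termination block, assuming certificates from Let's Encrypt and the compose service name springboot-app (adjust names and paths to your setup). The Upgrade headers matter here because the chat feature uses WebSockets:

```nginx
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://springboot-app:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Required for the WebSocket chat endpoint
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```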

MongoDB

MongoDB is not included in docker-compose.yml, but the chat feature requires a live MongoDB instance. Without a valid MONGODB_URI, the parts of the application that rely on MongoDB (real-time chat) will fail at startup or at runtime, so provide this variable even if the feature is not immediately needed. You have two options:
Option 1 — MongoDB Atlas (recommended for production)
  1. Create a free or paid cluster at mongodb.com/atlas.
  2. Whitelist your server’s IP address.
  3. Copy the connection string and set it in your environment:
MONGODB_URI=mongodb+srv://<username>:<password>@cluster0.xxxxx.mongodb.net/chatdb?retryWrites=true&w=majority
Option 2 — Add a MongoDB container to Compose
services:
  mongodb:
    image: mongo:7
    container_name: mongodb
    volumes:
      - mongo_data:/data/db
    networks:
      - app-network

volumes:
  mongo_data:
Then set:
MONGODB_URI=mongodb://mongodb:27017/chatdb
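If you take the container route, consider also adding a healthcheck so dependent services can wait for MongoDB to accept connections (mongosh ships with the mongo:7 image); a sketch:

```yaml
services:
  mongodb:
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.runCommand({ ping: 1 }).ok"]
      interval: 10s
      timeout: 5s
      retries: 5
```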

Frontend build

For production, build the React client into a static bundle rather than running the Vite dev server.
1. Install dependencies and build

cd client
npm install
npm run build
Vite outputs the compiled assets to client/dist/. Source maps are enabled in this project's vite.config.js (build.sourcemap: true), so decide whether you want to ship them to production.
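If you would rather not ship source maps, they can be turned off in the build config; a minimal sketch of the relevant vite.config.js fragment (merge with your existing config rather than replacing it):

```js
// vite.config.js — hypothetical fragment
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    sourcemap: false, // do not emit .map files in production builds
  },
})
```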
2. Serve the static files

Choose one of the following approaches:
server {
    listen 80;
    server_name yourdomain.com;

    root /var/www/searchjobs/dist;
    index index.html;

    # React Router — fall back to index.html for client-side routes
    location / {
        try_files $uri $uri/ /index.html;
    }
}
The client/ directory includes a vercel.json configuration file, making Vercel a zero-configuration deployment target for the frontend.
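The nginx approach above can also be containerised with a multi-stage build, assuming a standard node base image and the server block saved as nginx.conf (a sketch, not taken from the repository):

```dockerfile
# Stage 1: build the static bundle
FROM node:20-alpine AS build
WORKDIR /app
COPY client/package*.json ./
RUN npm ci
COPY client/ ./
RUN npm run build

# Stage 2: serve it with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
```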

File uploads

The backend stores uploaded images, PDFs, and videos on the container filesystem via the volume mount defined in docker-compose.yml:
volumes:
  - ./Proyecto_backup/uploads:/app/uploads
This maps Proyecto_backup/uploads/ on the host into /app/uploads/ inside the container, which matches the UPLOAD_DIR_* environment variables.
Local volume mounts work for single-server deployments, but files will be lost if the host directory is not backed up. If you redeploy on a different machine or scale horizontally, uploaded files will not be available.
For durable, scalable file storage in production, migrate uploads to an object storage service such as Amazon S3, Cloudflare R2, or Google Cloud Storage. Update the backend’s storage integration and replace the UPLOAD_DIR_* variables with the appropriate bucket/endpoint configuration.
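Until object storage is in place, a periodic tarball of the host directory is a cheap stopgap; a minimal sketch, run from the repository root (the mkdir only guards against a missing path):

```shell
# Archive the uploads directory into a dated tarball (cron-friendly)
mkdir -p Proyecto_backup/uploads
tar czf "uploads-$(date +%F).tar.gz" -C Proyecto_backup uploads
```

Copy the resulting archive off-host (or to a bucket) so a lost server does not mean lost uploads.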

Health checking

Backend

The Spring Boot application listens on port 8080. If Spring Actuator is on the classpath and enabled, use the health endpoint:
curl http://your-host:8080/actuator/health
# Expected: {"status":"UP"}
If Actuator is not enabled, a simple TCP check on port 8080 confirms the application is accepting connections.
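That check can be codified in Compose, assuming curl is present in the backend image (swap in a TCP check if it is not); a sketch:

```yaml
services:
  backend:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```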

MySQL

The mysql service in docker-compose.yml does not define a healthcheck, which means the backend may attempt to connect before MySQL is fully ready.
Consider adding a healthcheck to the mysql service and a depends_on condition on the backend service so that Spring Boot only starts after MySQL is accepting connections.
services:
  mysql:
    image: mysql:8.0
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend:
    depends_on:
      mysql:
        condition: service_healthy

Scaling considerations

The current architecture is designed for single-instance deployment:
  • WebSocket state is held in-memory. If you run multiple backend instances behind a load balancer, WebSocket connections are not shared between instances. You would need an external message broker (for example a STOMP broker relay such as RabbitMQ, or Redis pub/sub) to support horizontal scaling.
  • File uploads are stored on the local filesystem (see above). Object storage is required for multi-instance deployments.
  • MySQL is a single container with no replication. For high availability, use a managed MySQL service (Amazon RDS, PlanetScale, etc.) instead.
For small teams and early-stage deployments, a single Docker host is perfectly adequate. Revisit the architecture when traffic or reliability requirements grow.
