
Lightpress is organized as a three-tier system: a user-facing frontend client, a backend composed of independent microservices, and cloud infrastructure provisioned declaratively on AWS. Each tier is contained in its own top-level directory and can be developed, tested, and deployed without coupling to the other tiers. Docker Compose bridges all three tiers locally; CloudFormation and CodeBuild take over in production.

System overview

The three tiers

Frontend client

The client/ directory holds the user-facing web application. It communicates with the microservices tier exclusively through HTTP APIs, keeping the UI layer decoupled from business logic.

Microservices

The microservices/ directory contains independently deployable backend services. Each service owns its own data store, exposes a well-defined API, and can be scaled or replaced without affecting other services.

AWS infrastructure

The infraestructure/cloudformation/ directory holds CloudFormation templates that provision every AWS resource the platform needs — VPCs, ECS clusters, RDS instances, S3 buckets, IAM roles, and more.
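As an illustration of the template style (this is a hypothetical fragment, not one of the repository's actual templates, and the resource names are invented), a minimal CloudFormation template declaring an ECS cluster and an S3 bucket looks like this:

```yaml
# Hypothetical sketch in the style of infraestructure/cloudformation/;
# resource and bucket names are illustrative, not taken from the repository.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch of an ECS cluster and an S3 bucket.

Resources:
  AppCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: lightpress-cluster          # assumed cluster name

  AssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "lightpress-assets-${AWS::AccountId}"
```

Each real template in the directory follows the same shape: a Resources section whose entries map one-to-one to provisioned AWS resources, which is what lets CloudFormation compute stack updates declaratively.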

Directory structure

Lightpress/
├── client/                        # Frontend application
├── microservices/                 # Backend service directories
├── infraestructure/
│   └── cloudformation/            # AWS CloudFormation templates
├── scripts/
│   ├── bash/                      # Shell automation scripts
│   └── python/                    # Python automation scripts
├── docker-compose.yml             # Local development orchestration
├── buildspec.yml                  # AWS CodeBuild pipeline definition
└── .env                           # Local environment variables (git-ignored)

How services communicate

Within the local Docker Compose network, the frontend client and microservices share a private bridge network. The client sends HTTP requests to each microservice using its Docker Compose service name as the hostname — no external DNS or load balancer is required during development.

In production on AWS, service discovery shifts to AWS-native mechanisms. An Application Load Balancer (ALB) sits in front of the microservices tier, routing requests by path prefix or hostname to the appropriate ECS task. The frontend client is served from a CDN (such as CloudFront backed by S3) and makes API calls through the ALB’s public endpoint.
Browser
  → client container (port 3000)
    → microservice-a container (internal DNS: microservice-a:8001)
    → microservice-b container (internal DNS: microservice-b:8002)

Services reference each other by their Docker Compose service names. No public network exposure is needed for inter-service communication.
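The layout above can be sketched as a Compose file. This is a hypothetical set of service definitions (the real ones live in docker-compose.yml); the service names, ports, and environment variable names are assumptions chosen to match the diagram:

```yaml
# Hypothetical sketch; the project's actual definitions live in docker-compose.yml.
services:
  client:
    build: ./client
    ports:
      - "3000:3000"            # the only port exposed to the browser
    environment:
      # services are reachable by their Compose service names (internal DNS)
      SERVICE_A_URL: http://microservice-a:8001
      SERVICE_B_URL: http://microservice-b:8002
    networks: [internal]

  microservice-a:
    build: ./microservices/microservice-a
    networks: [internal]       # no ports: entry — reachable only inside the network

  microservice-b:
    build: ./microservices/microservice-b
    networks: [internal]

networks:
  internal:
    driver: bridge             # private bridge network shared by all tiers
```

Because only the client publishes a port, the microservices stay unreachable from the host machine, mirroring the production setup where only the ALB is public.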

Local development vs production

Docker Compose and CloudFormation serve the same purpose at different scopes: both describe the desired state of the system, and both are declarative. The key differences are in their execution environment and lifecycle.
| Concern          | Docker Compose (local)        | CloudFormation (production)            |
| ---------------- | ----------------------------- | -------------------------------------- |
| Scope            | Single developer machine      | AWS account and region                 |
| Runtime          | Docker Engine                 | ECS Fargate, Lambda, RDS, etc.         |
| State management | Container process lifecycle   | AWS CloudFormation stacks              |
| Secrets          | .env file (git-ignored)       | AWS Secrets Manager / SSM              |
| Networking       | Docker bridge network         | VPC, subnets, security groups          |
| Scaling          | Manual (--scale flag)         | Auto Scaling groups, ECS desired count |
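To make the secrets row concrete, here is how the same database password could be referenced in each environment. Both fragments are hypothetical: the variable name, service name, and SSM parameter path are invented for illustration.

```yaml
# Local (docker-compose.yml, hypothetical): values come from the git-ignored .env file
services:
  microservice-a:
    env_file: .env             # e.g. DB_PASSWORD=local-dev-password

# Production (CloudFormation ECS task definition, hypothetical): the container
# receives the value from SSM Parameter Store at launch, never from source control:
#   Secrets:
#     - Name: DB_PASSWORD
#       ValueFrom: !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/lightpress/db-password"
```

The application code reads DB_PASSWORD the same way in both environments; only the injection mechanism changes.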
The buildspec.yml in the project root is the AWS CodeBuild build specification. It defines the steps CodeBuild runs when a push triggers the CI/CD pipeline: installing dependencies, running tests, building container images, pushing to Amazon ECR, and initiating a CloudFormation stack update.

CI/CD pipeline

Lightpress uses AWS CodeBuild as its CI/CD engine. A push to the main branch triggers a build that runs the buildspec.yml instructions. At a high level the pipeline:
  1. Installs Node.js and Python dependencies for all services
  2. Runs the test suite for each microservice
  3. Builds Docker images for the client and each microservice
  4. Pushes images to Amazon Elastic Container Registry (ECR)
  5. Updates the CloudFormation stack to deploy the new image versions to ECS
The scripts/bash/ and scripts/python/ directories contain helper scripts that the buildspec.yml calls during pipeline steps. Keeping these scripts separate from the build specification makes them easier to run locally for debugging.
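Assuming illustrative image names, helper-script paths, and a template filename (none of these are confirmed by the repository), the five pipeline steps could be expressed in buildspec.yml roughly as:

```yaml
# Hypothetical sketch of buildspec.yml; commands, script names, and the
# template path are assumptions, not the repository's actual contents.
version: 0.2

phases:
  install:
    commands:
      - ./scripts/bash/install-deps.sh       # assumed helper: Node.js + Python deps
  pre_build:
    commands:
      - ./scripts/bash/run-tests.sh          # assumed helper: per-service test suites
      - aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
  build:
    commands:
      - docker build -t "$ECR_REGISTRY/client:$CODEBUILD_RESOLVED_SOURCE_VERSION" ./client
      - docker build -t "$ECR_REGISTRY/microservice-a:$CODEBUILD_RESOLVED_SOURCE_VERSION" ./microservices/microservice-a
  post_build:
    commands:
      - docker push "$ECR_REGISTRY/client:$CODEBUILD_RESOLVED_SOURCE_VERSION"
      - docker push "$ECR_REGISTRY/microservice-a:$CODEBUILD_RESOLVED_SOURCE_VERSION"
      - aws cloudformation deploy --stack-name lightpress
          --template-file infraestructure/cloudformation/main.yml   # assumed template path
```

CODEBUILD_RESOLVED_SOURCE_VERSION is a built-in CodeBuild variable holding the commit SHA, which gives every pushed image a traceable tag; ECR_REGISTRY would be supplied as a build environment variable.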

Explore further

Deployment overview

Step-by-step instructions for deploying Lightpress to AWS for the first time.

CloudFormation templates

Reference for the infrastructure templates in infraestructure/cloudformation/.

Docker Compose reference

Configuration reference for the docker-compose.yml file.

CI/CD pipeline

How CodeBuild, ECR, and ECS work together to ship new versions automatically.
