Lightpress is organized as a three-tier system: a user-facing frontend client, a backend composed of independent microservices, and cloud infrastructure provisioned declaratively on AWS. Each tier lives in its own top-level directory and can be developed, tested, and deployed without coupling to the other tiers. Docker Compose bridges all three tiers locally; CloudFormation and CodeBuild take over in production.

## Documentation Index
Fetch the complete documentation index at: https://mintlify.com/reds-skywalker/Lightpress/llms.txt
Use this file to discover all available pages before exploring further.
## System overview

### The three tiers

#### Frontend client
The `client/` directory holds the user-facing web application. It communicates with the microservices tier exclusively through HTTP APIs, keeping the UI layer decoupled from business logic.

#### Microservices
The `microservices/` directory contains independently deployable backend services. Each service owns its own data store, exposes a well-defined API, and can be scaled or replaced without affecting other services.

#### AWS infrastructure
The `infraestructure/cloudformation/` directory holds CloudFormation templates that provision every AWS resource the platform needs: VPCs, ECS clusters, RDS instances, S3 buckets, IAM roles, and more.

### Directory structure
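Based on the tiers described on this page, the top-level layout looks roughly like this (only paths mentioned in this documentation are shown):

```text
client/                          # user-facing web application
microservices/                   # independently deployable backend services
infraestructure/cloudformation/  # CloudFormation templates for AWS resources
docker-compose.yml               # local development orchestration
buildspec.yml                    # AWS CodeBuild build specification
```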
### How services communicate
Within the local Docker Compose network, the frontend client and microservices share a private bridge network. The client sends HTTP requests to each microservice using its Docker Compose service name as the hostname; no external DNS or load balancer is required during development.

In production on AWS, service discovery shifts to AWS-native mechanisms. An Application Load Balancer (ALB) sits in front of the microservices tier, routing requests by path prefix or hostname to the appropriate ECS task. The frontend client is served from a CDN (such as CloudFront backed by S3) and makes API calls through the ALB's public endpoint.
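As a sketch, a `docker-compose.yml` along these lines would give the client the shared bridge network and service-name hostnames described above. The service names, ports, and environment variable here are hypothetical, not taken from the project:

```yaml
services:
  client:
    build: ./client
    ports:
      - "8080:80"            # exposed to the host for local development only
    environment:
      # Hypothetical: the client reaches the "orders" service by its
      # Compose service name on the shared bridge network.
      ORDERS_API_URL: http://orders:3000
    networks:
      - lightpress
  orders:
    build: ./microservices/orders
    networks:
      - lightpress

networks:
  lightpress:
    driver: bridge           # private bridge network shared by all tiers
```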
### Local development vs production

Docker Compose and CloudFormation serve the same purpose at different scopes: both describe the desired state of the system, and both are declarative. The key differences lie in their execution environment and lifecycle.

| Concern | Docker Compose (local) | CloudFormation (production) |
|---|---|---|
| Scope | Single developer machine | AWS account and region |
| Runtime | Docker Engine | ECS Fargate, Lambda, RDS, etc. |
| State management | Container process lifecycle | AWS CloudFormation stacks |
| Secrets | `.env` file (git-ignored) | AWS Secrets Manager / SSM |
| Networking | Docker bridge network | VPC, subnets, security groups |
| Scaling | Manual (`--scale` flag) | Auto Scaling groups, ECS desired count |
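The secrets row above can be made concrete with a small helper: locally the value comes from a git-ignored `.env` file loaded into the process environment, while in production the same variable would be injected from Secrets Manager or SSM into the ECS task environment. This is a sketch, and `DB_PASSWORD` is a hypothetical variable name:

```typescript
// Sketch: read a required secret from the environment, regardless of
// whether it was populated by a local .env file or by AWS Secrets
// Manager / SSM via the ECS task definition.
export function resolveSecret(
  env: Record<string, string | undefined>,
  key: string,
): string {
  const value = env[key];
  if (value === undefined || value === "") {
    throw new Error(`Missing required secret: ${key}`);
  }
  return value;
}

// Usage: resolveSecret(process.env, "DB_PASSWORD")
```

Keeping all secret reads behind one function means the rest of the code never needs to know which mechanism supplied the value.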
The `buildspec.yml` file in the project root is the AWS CodeBuild build specification. It defines the steps CodeBuild runs when a push triggers the CI/CD pipeline: installing dependencies, running tests, building container images, pushing to Amazon ECR, and initiating a CloudFormation stack update.

## CI/CD pipeline
Lightpress uses AWS CodeBuild as its CI/CD engine. A push to the main branch triggers a build that runs the `buildspec.yml` instructions. At a high level, the pipeline:
- Installs Node.js and Python dependencies for all services
- Runs the test suite for each microservice
- Builds Docker images for the client and each microservice
- Pushes images to Amazon Elastic Container Registry (ECR)
- Updates the CloudFormation stack to deploy the new image versions to ECS
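The steps above map onto a `buildspec.yml` roughly like the following sketch. The phase commands, the `$ECR_REGISTRY` variable, the image name, the stack name, and the template filename are assumptions for illustration, not the project's actual file:

```yaml
version: 0.2

phases:
  install:
    commands:
      - npm ci                            # Node.js dependencies
      - pip install -r requirements.txt   # Python dependencies
  pre_build:
    commands:
      - npm test                          # run each microservice's test suite
      - aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
  build:
    commands:
      # One image per service; the client is shown as an example.
      - docker build -t "$ECR_REGISTRY/lightpress-client:$CODEBUILD_RESOLVED_SOURCE_VERSION" ./client
      - docker push "$ECR_REGISTRY/lightpress-client:$CODEBUILD_RESOLVED_SOURCE_VERSION"
  post_build:
    commands:
      # Roll the new image versions out to ECS via a stack update.
      - aws cloudformation deploy --stack-name lightpress --template-file infraestructure/cloudformation/main.yml
```

`$CODEBUILD_RESOLVED_SOURCE_VERSION` is a built-in CodeBuild variable holding the commit hash, which makes a convenient immutable image tag.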
## Explore further
**Deployment overview**
Step-by-step instructions for deploying Lightpress to AWS for the first time.

**CloudFormation templates**
Reference for the infrastructure templates in `infraestructure/cloudformation/`.

**Docker Compose reference**
Configuration reference for the `docker-compose.yml` file.

**CI/CD pipeline**
How CodeBuild, ECR, and ECS work together to ship new versions automatically.