This page collects the questions that come up most often when working with Lightpress — whether you’re onboarding to the codebase, debugging a deployment, or planning infrastructure changes. If your question isn’t covered here, open an issue or ask in the team channel.

## Documentation Index
Fetch the complete documentation index at: https://mintlify.com/reds-skywalker/Lightpress/llms.txt
Use this file to discover all available pages before exploring further.
## How do I add a new microservice?
Adding a microservice to Lightpress touches five areas: the service code itself, Docker Compose, the buildspec, CloudFormation, and the documentation.

### 1. Create the service directory

Add a Dockerfile following the pattern of an existing service (e.g. `microservices/auth-service/Dockerfile`). Include a health check endpoint at `GET /health` — this is required by the ECS load balancer target group.

### 2. Register it in Docker Compose

Add a service block to `docker-compose.yml` with the appropriate port, environment variables, and `depends_on` entries. See the Docker Compose reference for the full pattern.

### 3. Update the buildspec

The default `buildspec.yml` uses `for dir in microservices/*/` to iterate over all service directories, so new services are picked up automatically as long as each service directory has a valid Dockerfile and a `package.json` with `lint` and `test` scripts.

### 4. Add a CloudFormation template

Create a new template in `infraestructure/cloudformation/` for the service’s ECS task definition, service, and target group. Reference existing templates for the naming conventions and parameter patterns used across the stack.

### 5. Document the service URL

Add the new service’s URL environment variable (e.g. `MY_NEW_SERVICE_URL`) to `.env.example` and to the environment variables reference. Update the environment sections of any service that will call it.
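For the Docker Compose step, a new service block might look like the following sketch. The service name, port, and environment values here are hypothetical placeholders; copy an existing service’s block from `docker-compose.yml` for the real pattern.

```yaml
services:
  my-new-service:
    build: ./microservices/my-new-service
    ports:
      - "4005:4005"            # placeholder port; pick an unused one
    environment:
      - DB_HOST=postgres        # Docker service name, not localhost
      - AUTH_SERVICE_URL=http://auth-service:4001   # example dependency URL
    depends_on:
      - postgres
```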
## How do I update environment variables in production?
Lightpress production environment variables are stored in AWS Systems Manager Parameter Store (for configuration) and AWS Secrets Manager (for secrets). Avoid changing environment variables directly in ECS task definitions — always update the source in Parameter Store or Secrets Manager and then redeploy.

After updating the value, trigger a redeployment so ECS tasks pick up the new value. ECS performs a rolling update, replacing old tasks with new ones that read the updated parameter values at startup; no downtime occurs during a rolling update.
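The update-and-redeploy flow can be sketched as follows. The parameter name, secret name, and cluster/service names below are hypothetical placeholders; substitute your own.

```shell
# 1. Update a configuration value in Parameter Store (name is a placeholder):
aws ssm put-parameter \
  --name /lightpress/production/LOG_LEVEL \
  --value debug \
  --type String \
  --overwrite

# 2. Or update a secret in Secrets Manager (secret id is a placeholder):
aws secretsmanager put-secret-value \
  --secret-id lightpress/production/db-password \
  --secret-string 'new-password'

# 3. Force a rolling redeployment so tasks read the new values at startup:
aws ecs update-service \
  --cluster lightpress-production \
  --service auth-service \
  --force-new-deployment
```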
## How do I roll back a deployment?
Lightpress Docker images are tagged with the Git commit SHA (`$CODEBUILD_RESOLVED_SOURCE_VERSION`) in addition to `latest`. This means you can roll back to any previously built commit by pointing ECS at the older image tag.

For infrastructure changes made via CloudFormation, roll back by redeploying the previous template version from git, or by using CloudFormation’s built-in rollback.
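A sketch of both rollback paths follows. The cluster, service, and repository names are hypothetical, and the `jq`-based task-definition edit is an illustrative technique, not the project’s exact tooling.

```shell
# Placeholders; substitute your cluster, service, and ECR repository.
CLUSTER=lightpress-production
SERVICE=auth-service
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/lightpress/auth-service
OLD_SHA=<previous-commit-sha>   # the commit you want to roll back to

# Roll back a service: point its task definition at the older image tag,
# register that as a new revision, and redeploy.
aws ecs describe-task-definition --task-definition "$SERVICE" \
  --query taskDefinition --output json \
  | jq --arg img "$REPO:$OLD_SHA" \
      'del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
           .compatibilities, .registeredAt, .registeredBy)
       | .containerDefinitions[0].image = $img' \
  > rollback-taskdef.json
aws ecs register-task-definition --cli-input-json file://rollback-taskdef.json
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" \
  --task-definition "$SERVICE"

# Roll back infrastructure: redeploy the previous template version from git.
git checkout "$OLD_SHA" -- infraestructure/cloudformation/
aws cloudformation deploy \
  --template-file infraestructure/cloudformation/auth-service.yml \
  --stack-name lightpress-auth-service
```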
## What AWS permissions are required?
The permissions required depend on your role. Three sets of credentials interact with Lightpress.

**Developer workstation** (for local AWS CLI usage and scripts):

- `ecr:GetAuthorizationToken`, `ecr:BatchGetImage`, `ecr:GetDownloadUrlForLayer` — to pull images locally
- `ssm:GetParameter`, `ssm:GetParameters` — to read Parameter Store values
- `logs:FilterLogEvents`, `logs:GetLogEvents` — to tail CloudWatch logs
- `ecs:DescribeServices`, `ecs:DescribeTasks`, `ecs:ListTasks` — to inspect production services

**CodeBuild service role** (for the CI/CD pipeline):

- `ecr:*` on the Lightpress ECR repositories
- `ecs:UpdateService`, `ecs:RegisterTaskDefinition`, `ecs:DescribeTaskDefinition`
- `s3:PutObject`, `s3:DeleteObject`, `s3:ListBucket` on the client and artifacts buckets
- `ssm:GetParameters` on `/lightpress/*`
- `secretsmanager:GetSecretValue` on `lightpress/*`
- `cloudfront:CreateInvalidation` on the client distribution
- `logs:CreateLogGroup`, `logs:CreateLogStream`, `logs:PutLogEvents` — for build logs

**ECS task roles** (per-service runtime permissions):

- `s3:GetObject`, `s3:PutObject` on service-specific bucket prefixes
- `sqs:SendMessage`, `sqs:ReceiveMessage`, `sqs:DeleteMessage` on the relevant queues
- `ssm:GetParameter` on service-specific parameter paths
- `secretsmanager:GetSecretValue` on service-specific secrets
Follow the principle of least privilege. Scope IAM policies to specific resource ARNs rather than using wildcards (`*`) where possible.
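As an illustration of ARN scoping, a policy statement like this grants S3 actions on one bucket only, rather than on all resources. The bucket name is a hypothetical placeholder.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::lightpress-client",
        "arn:aws:s3:::lightpress-client/*"
      ]
    }
  ]
}
```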
## How do I run only one service locally?
Docker Compose lets you start a specific service by name. If the service has `depends_on` entries, Compose also starts those dependencies automatically.

If you want to run a service outside of Docker entirely — for example to use your local Node.js debugger — stop the containerized version first and then start it directly. Make sure your local `.env` sets `DB_HOST=localhost` (instead of the Docker service name `postgres`) when running services outside the Docker network.
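The two workflows above can be sketched as follows, using `auth-service` as an example name and assuming its `package.json` defines a `start` script.

```shell
# Start one service by name; Compose also starts its depends_on entries.
docker compose up auth-service

# Run the service outside Docker instead (e.g. under a local Node.js debugger):
docker compose stop auth-service
cd microservices/auth-service
npm install
npm start   # needs DB_HOST=localhost in your local .env
```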
## How do I debug a failing CodeBuild pipeline?
When a CodeBuild build fails, work through these steps in order:

### 1. Read the phase detail in the console

Open the failed build in the AWS CodeBuild console. Expand the phase that shows as FAILED. Each command’s output is captured — look for the first non-zero exit code and read the lines immediately above it.

### 2. Check for permission errors

`AccessDenied` and `not authorized` errors indicate missing IAM permissions on the CodeBuild service role. Identify the exact API action and resource from the error message and add the permission to the role.

### 3. Reproduce locally

Run your commands locally in the same order as the `buildspec.yml` phases. This is the fastest iteration loop. For environment-dependent issues, use the CodeBuild local agent.

### 4. Add verbose output

Temporarily add `set -x` to the failing phase to print each command before it runs. Remove `set -x` before merging the fix.

### 5. Check CloudWatch Logs

Full build logs are also available in CloudWatch Logs under the log group `/aws/codebuild/<project-name>`. Use CloudWatch Logs Insights to search across multiple builds for a pattern.
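Sketches for steps 3–5 follow. The `codebuild_build.sh` script comes from AWS’s aws-codebuild-docker-images repository; the image tag and project name are placeholders.

```shell
# Step 3: run the buildspec locally with the CodeBuild local agent
# (-i selects a curated build image, -a the artifact output directory).
./codebuild_build.sh -i aws/codebuild/standard:7.0 -a /tmp/artifacts -b buildspec.yml

# Step 4: inside a failing buildspec phase, print each command before it runs.
# Add this as the first command of the phase; remove it before merging.
set -x

# Step 5: tail the full build logs from CloudWatch.
aws logs tail /aws/codebuild/<project-name> --follow
```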
## Can I deploy Lightpress to a different cloud provider?
Lightpress is designed for AWS, and several components have direct AWS dependencies:
- CloudFormation — infrastructure-as-code templates are AWS-specific
- ECR — Docker image registry used by the buildspec
- ECS — container orchestration
- S3 + CloudFront — client hosting and file storage
- SSM Parameter Store / Secrets Manager — secrets management
- CodeBuild — CI/CD execution
However, the application code in `microservices/` and `client/` has no AWS SDK dependencies and is cloud-agnostic.

A practical middle ground is to keep the application code and Docker Compose setup unchanged while replacing only the infrastructure layer. Use Terraform to provision equivalent resources on GCP or Azure, and replace the `buildspec.yml` with a CI/CD configuration for GitHub Actions or Cloud Build.

If you are evaluating a multi-cloud or cloud-agnostic deployment, open an issue on GitHub to discuss it. Architectural changes of this scope benefit from team consensus before implementation.
## How do I scale a specific microservice?
In production on ECS, you scale a service by adjusting its desired task count. You can do this manually, or configure auto-scaling to do it automatically based on CPU, memory, or custom metrics.

**Manual scaling** sets the desired count directly on the ECS service.

**Automatic scaling** uses Application Auto Scaling: define a target-tracking scaling policy in the service’s CloudFormation template. This keeps average CPU utilization near 60%, scaling out when traffic increases and in when it drops.

**Locally with Docker Compose**, there is no ECS-style auto-scaling, but you can run multiple replicas of a service for manual load testing. Note that scaling beyond one instance locally requires removing the fixed port mapping from the service (Compose handles port allocation automatically when scaling).
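The three approaches above can be sketched as CLI commands. Cluster, service, and capacity values are hypothetical; the CloudFormation form of the auto-scaling setup uses `AWS::ApplicationAutoScaling::ScalableTarget` and `AWS::ApplicationAutoScaling::ScalingPolicy` resources with an equivalent target-tracking configuration.

```shell
# Manual scaling: set the desired task count directly.
aws ecs update-service \
  --cluster lightpress-production \
  --service auth-service \
  --desired-count 4

# Automatic scaling: register the service as a scalable target, then
# attach a target-tracking policy that holds average CPU near 60%.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/lightpress-production/auth-service \
  --min-capacity 2 \
  --max-capacity 10

aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/lightpress-production/auth-service \
  --policy-name auth-service-cpu-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'

# Locally: run several replicas for manual load testing (first remove the
# fixed host port mapping from the service in docker-compose.yml).
docker compose up --scale auth-service=3
```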