Shipyard is a self-hosted CI/CD engine built on Node.js, Express 5, and Docker. When you push code to a connected GitHub repository, a signed webhook triggers the server to clone the repo, build it inside an isolated Docker container, stream logs back to your browser in real time, and — if the build passes — copy the output to a served deployment directory. Every piece of that journey is handled by a small set of focused services that share a single PostgreSQL database.

End-to-end flow

GitHub Push
  └─► Webhook endpoint (HMAC-SHA256 signature verified)
        └─► Clone repo (authenticated Git URL)
              └─► Detect framework (Vite / Next.js)
                    └─► Generate Dockerfile (if none present)
                          └─► Docker image build
                                └─► Container execution (install + build command)
                                      ├─► Log streaming via Socket.io
                                      ├─► Pass → Copy output → Serve at subdomain
                                      └─► Fail → Persist logs → Report status
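The first hop in that flow, HMAC-SHA256 signature verification, can be sketched with Node's built-in crypto module. GitHub sends the digest of the raw request body in the `X-Hub-Signature-256` header as `sha256=<hex>`; the helper name `verifyWebhookSignature` is illustrative, not Shipyard's actual function:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC over the raw body with the shared webhook secret and
// compare it to the header GitHub sent. timingSafeEqual avoids leaking
// information through comparison timing; it requires equal-length buffers.
function verifyWebhookSignature(secret: string, rawBody: string, signatureHeader: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

A request whose signature fails this check should be rejected before any cloning or building starts.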

Main components

Express 5 HTTP server

The entry point (src/index.ts) creates an Express application and mounts route modules under the /api prefix. Each route family has its own file under src/routes/ and a corresponding controller under src/controller/. Protected routes run the isAuth middleware, which validates the JWT before the controller is reached. The server listens on port 8080.
/api/auth        — GitHub OAuth + JWT issuance
/api/repo        — Organization and repository browsing
/api/project     — Project CRUD and secrets management
/api/build       — Rebuild and build lookup
/api/deploy      — Deployment lookup and rollback
/api/webhook     — Incoming GitHub push events (HMAC-verified)
/health          — Health check
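The isAuth middleware that guards the protected routes above can be sketched as a plain Express-shaped function. The structural `Req`/`Res` types and the injected `verifyJwt` callback are stand-ins for the project's real Express types and JWT verification, assumed here for illustration:

```typescript
// Minimal structural types so the sketch runs without Express installed.
type Req = { headers: Record<string, string | undefined>; userId?: number };
type Res = { status: (code: number) => { json: (body: unknown) => void } };

// Build an isAuth-style middleware: extract the Bearer token, verify it,
// attach the resolved user id to the request, or short-circuit with 401.
function makeIsAuth(verifyJwt: (token: string) => number | null) {
  return (req: Req, res: Res, next: () => void) => {
    const header = req.headers["authorization"] ?? "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : "";
    const userId = token ? verifyJwt(token) : null;
    if (userId === null) {
      res.status(401).json({ error: "unauthorized" });
      return; // controller is never reached
    }
    req.userId = userId; // downstream controllers read the authenticated user
    next();
  };
}
```

Mounted as `app.use("/api/project", isAuth, projectRoutes)`, this keeps authentication out of the controllers entirely.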

Socket.io real-time log streaming

A Socket.io server is attached to the same HTTP server. After a client authenticates (the SocketAuth middleware validates the JWT on the WebSocket handshake), the socket joins a room named after the user’s numeric database ID. The build engine emits all log lines and status changes to that room so only the owning user receives them.
Event              Direction        Payload
build_logs         Server → Client  Docker build stdout
build_errors       Server → Client  Docker build stderr
run_logs           Server → Client  Container execution stdout
run_error          Server → Client  Container execution stderr
buildStatusUpdate  Server → Client  Build status change
deploymentUpdate   Server → Client  Deployment status and URL
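One way to picture the stdout/stderr split in the table above is a small dispatch helper: given which phase produced a line and on which channel, pick the event name the server emits to the user's room. `eventFor` is an illustrative helper, not a function from the codebase:

```typescript
type Phase = "build" | "run";
type Channel = "stdout" | "stderr";

// Map a pipeline phase and output channel to its Socket.io event name.
// Note the naming asymmetry from the event table: plural "build_errors"
// but singular "run_error".
function eventFor(phase: Phase, channel: Channel): string {
  if (phase === "build") return channel === "stdout" ? "build_logs" : "build_errors";
  return channel === "stdout" ? "run_logs" : "run_error";
}
```

The build engine would then emit with something like `io.to(String(userId)).emit(eventFor("build", "stdout"), line)`, so only the socket room for that user's numeric ID receives the line.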

Build engine (buildEngine.ts)

runBuild is the core of the CI pipeline. It receives a project record (including the user’s GitHub token and any associated secrets), a build record, and the Socket.io server instance. It orchestrates every step from cloning to cleanup: writing secrets to a .env file, authenticating the clone URL, detecting the framework, generating a fallback Dockerfile, running docker build, and then running the built image with a volume mount so build output persists on the host. Logs are buffered in memory and flushed to PostgreSQL in a single batch insert.
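The "buffered in memory, flushed in a single batch insert" pattern can be sketched as a small class. The row shape (`buildId`, `lineNumber`, `content`) mirrors the build_logs table described below; the class name and method names are assumptions for illustration:

```typescript
// Accumulate log lines during a build, then hand back one array of rows
// suitable for a single batch insert (e.g. db.insert(buildLogs).values(rows))
// instead of one INSERT per line.
class LogBuffer {
  private lines: { lineNumber: number; content: string }[] = [];

  push(content: string): void {
    // Line numbers are assigned in arrival order, starting at 1.
    this.lines.push({ lineNumber: this.lines.length + 1, content });
  }

  // Return all buffered rows tagged with the owning build, and reset.
  flush(buildId: number): { buildId: number; lineNumber: number; content: string }[] {
    const rows = this.lines.map(l => ({ buildId, ...l }));
    this.lines = [];
    return rows;
  }
}
```

Batching the insert keeps the hot path (streaming lines to the browser) free of per-line database round trips.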

Deployment engine (deploymentEngine.ts)

deployProject copies the build output from the temporary working directory to a permanent deployments/<project-name>/ directory on the host filesystem. It then emits a deploymentUpdate event over Socket.io, updates projectTable.productionUrl in the database, and inserts a deployment record. For rollbacks the insertion step is skipped because the deployment record already exists.

PostgreSQL database (Drizzle ORM)

All persistent state lives in a single PostgreSQL database accessed through Drizzle ORM. The schema defines six tables:
Table       Purpose
user        GitHub OAuth profile, access token, and avatar
project     Connected repo, branch, build/install commands, output directory, production URL, and webhook ID
build       Individual build record: status, commit message, commit hash, author, exit code, and timestamps
build_logs  Line-by-line build output with line numbers, linked to a build by foreign key
deployment  Deployment record with status (live / rolled_back), linked to a build
secrets     AES-256-GCM encrypted environment variables linked to a project
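The AES-256-GCM encryption mentioned for the secrets table can be sketched with Node's crypto module. The function names and the stored-record shape (`iv`, `ciphertext`, `authTag`) are assumptions about how such a column might be laid out, not Shipyard's exact format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt one secret value. key must be 32 bytes (AES-256); the random
// 96-bit IV and the GCM auth tag are stored alongside the ciphertext.
function encryptSecret(key: Buffer, plaintext: string) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

// Decrypt at build time; final() throws if the key is wrong or the
// ciphertext was tampered with, because GCM authenticates the data.
function decryptSecret(key: Buffer, box: { iv: Buffer; ciphertext: Buffer; authTag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.authTag);
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString("utf8");
}
```

Decryption happens only inside the build engine, which writes the plaintext values to the container's .env file.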

Subdomain middleware

Before any API route is reached, every incoming request passes through subdomainMiddleware. The middleware reads the Host header, extracts the first label as the subdomain, and checks whether a deployments/<subdomain>/ directory exists on disk. If it does, express.static serves the files directly; requests that do not match a static file fall back to index.html to support single-page applications. If no deployment directory matches, the request proceeds to the API routes normally.

Data flow

Client (browser)
  │  REST + JWT          WebSocket (Socket.io)
  │                      (joins room = userId)

Express 5 server
  ├─ isAuth middleware ──► JWT verified
  ├─ Controllers ─────────► Drizzle ORM ──► PostgreSQL
  └─ Webhook route ───────► buildEngine

                               ├─ spawn git clone
                               ├─ spawn docker build ──► emit build_logs
                               ├─ spawn docker run  ──► emit run_logs
                               ├─ db.insert(buildLogs)
                               └─ deploymentEngine
                                    ├─ fs.cpSync → deployments/
                                    └─ db.insert(deployment)

Explore further

Build pipeline

Every stage of the clone-build-deploy cycle in detail.

Deployment

How build output becomes a live subdomain.

Secrets

AES-256-GCM encryption and build-time injection.

Authentication

GitHub OAuth flow and JWT session tokens.
