Every time you push to a connected branch, Shipyard runs a fully automated pipeline that takes your code from a raw GitHub commit to a tested build artifact. The pipeline is implemented in src/services/buildEngine.ts and runs entirely on the server — no external CI service required. Each stage writes progress back to your browser via Socket.io so you can watch the build unfold in real time.

Build status values

A build record moves through these statuses during the pipeline:
| Status | Meaning |
|---|---|
| `queued` | Build record created, pipeline not yet started |
| `running` | Pipeline is actively executing |
| `passed` | All stages completed with exit code 0 |
| `failed` | Any stage exited with a non-zero code |

Pipeline stages

1. Webhook received and verified

GitHub sends a POST to /api/webhook for every push to a watched branch. The server computes an HMAC-SHA256 digest of the raw request body using WEBHOOK_SECRET and compares it to the X-Hub-Signature-256 header using a timing-safe comparison. Requests with invalid signatures are rejected before any build work begins.
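The verification step can be sketched as follows. The function name and plumbing are illustrative, not the actual buildEngine.ts API, but the digest and comparison primitives are Node's standard crypto module:

```typescript
import crypto from "node:crypto";

// Compute the expected digest from the raw body and compare it to the
// X-Hub-Signature-256 header value in constant time.
function verifySignature(rawBody: Buffer, signatureHeader: string, secret: string): boolean {
  const expected =
    "sha256=" + crypto.createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

The timing-safe comparison matters: a naive `===` on hex strings can leak, via response timing, how many leading characters of a forged signature were correct.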
2. Build record created

A row is inserted into the build table with status: "queued". The commit message, commit hash, branch, and author are captured from the webhook payload. A buildStatusUpdate event is emitted over Socket.io to notify the client that the build is starting.
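A sketch of how the record fields map from the webhook payload. The `NewBuild` shape is an assumption, not the real schema; the payload fields (`after`, `ref`, `head_commit`) are standard GitHub push-event fields:

```typescript
// Illustrative shape of the new build row; the real schema lives in the
// project's database layer and may differ.
interface NewBuild {
  status: "queued";
  commitHash: string;
  commitMessage: string;
  branch: string;
  author: string;
}

// GitHub push payloads carry the new head SHA in `after`, the branch in
// `ref` (as "refs/heads/<branch>"), and commit details in `head_commit`.
function buildFromPush(payload: {
  after: string;
  ref: string;
  head_commit: { message: string; author: { name: string } };
}): NewBuild {
  return {
    status: "queued",
    commitHash: payload.after,
    commitMessage: payload.head_commit.message,
    branch: payload.ref.replace("refs/heads/", ""),
    author: payload.head_commit.author.name,
  };
}
```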
3. Temp directory created

A working directory is created at:
temp/<project-name>-<timestamp>/
The timestamp component (Date.now()) keeps successive builds of the same project in distinct directories, while the project-name prefix keeps different projects apart.
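A minimal sketch of the path construction (the helper name is an assumption):

```typescript
import path from "node:path";

// One working directory per build, keyed by project name and start time.
function workingDirFor(projectName: string, now: number = Date.now()): string {
  return path.join("temp", `${projectName}-${now}`);
}
```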
4. Secrets decrypted and written to .env

If the project has any stored secrets, each value is decrypted with AES-256-GCM and written to a .env file inside the build directory:
temp/<project-name>-<timestamp>/.env
The presence of this file is tracked with a hasEnvFile flag that controls both the --env-file argument to docker run and the deployment decision at the end of the pipeline.
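An AES-256-GCM round trip can be sketched with Node's crypto module. The `iv:tag:ciphertext` wire format below is an assumption for illustration, not Shipyard's actual storage layout:

```typescript
import crypto from "node:crypto";

function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = crypto.randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const enc = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), enc].map((b) => b.toString("hex")).join(":");
}

function decryptSecret(stored: string, key: Buffer): string {
  const [iv, tag, enc] = stored.split(":").map((h) => Buffer.from(h, "hex"));
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM is authenticated; tampering throws on final()
  return Buffer.concat([decipher.update(enc), decipher.final()]).toString("utf8");
}
```

GCM's authentication tag is the reason to prefer it over plain CBC here: a corrupted or tampered ciphertext fails loudly at decryption instead of silently yielding garbage in the .env file.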
5. Repository cloned

The GitHub access token stored on the user record is embedded in the clone URL to allow access to private repositories:
git clone --branch <branch> --depth 1 \
  https://x-access-token:<token>@<repo-url> .
The shallow clone (--depth 1) keeps the operation fast by avoiding full history.
6. Specific commit fetched and checked out

After the initial clone, the exact commit hash from the webhook payload is fetched and checked out. This guarantees the build always runs against the precise commit that triggered the webhook, even if newer commits have landed since:
git fetch origin <commitHash> --depth 1
git checkout <commitHash>
7. Framework detection

Shipyard inspects the root of the cloned repository for well-known framework config files and sets the output directory automatically:
| Config file detected | Output directory |
|---|---|
| `vite.config.ts` or `vite.config.js` | `dist` |
| `next.config.ts` or `next.config.js` | `.next` |
| Neither (or a manually configured value) | Project’s `outputDirectory` setting |
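The detection logic amounts to a few filesystem checks; the helper name below is illustrative:

```typescript
import fs from "node:fs";
import path from "node:path";

// Look for well-known framework config files at the repo root and map
// them to the framework's default build output directory.
function detectOutputDir(repoRoot: string, configuredDir: string): string {
  const exists = (file: string) => fs.existsSync(path.join(repoRoot, file));
  if (exists("vite.config.ts") || exists("vite.config.js")) return "dist";
  if (exists("next.config.ts") || exists("next.config.js")) return ".next";
  return configuredDir; // fall back to the project's outputDirectory setting
}
```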
8. Dockerfile generated if none exists

If the repository does not include a Dockerfile, Shipyard generates one at <buildPath>/Dockerfile:
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD <buildCommand>
The CMD line is set to the project’s configured build command. Projects that already ship a Dockerfile use it as-is.
9. Docker image built

The image tag is derived from the project name (lowercased, spaces replaced with hyphens, special characters removed):
docker build -t <image-tag> .
Standard output and standard error from the Docker daemon are both captured. Lines from stderr are inspected for keywords (error, failed, fatal, exception) to determine whether each line is an actual error or informational Docker progress output. All lines are emitted to the client via build_logs / build_errors Socket.io events.
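Both the tag derivation and the stderr heuristic can be sketched as below. The function names are assumptions; the keyword list follows the text. The heuristic exists because Docker writes ordinary progress output to stderr, so stderr alone cannot be treated as a failure signal:

```typescript
// Lowercase, hyphenate spaces, strip anything Docker would reject in a tag.
function imageTagFor(projectName: string): string {
  return projectName
    .toLowerCase()
    .replace(/\s+/g, "-")        // spaces → hyphens
    .replace(/[^a-z0-9-]/g, ""); // drop special characters
}

// Classify a stderr line: real error vs. informational Docker progress.
function isActualError(line: string): boolean {
  const keywords = ["error", "failed", "fatal", "exception"];
  const lower = line.toLowerCase();
  return keywords.some((kw) => lower.includes(kw));
}
```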
10. Container executed with volume mount

After a successful image build, a container is run with the build directory mounted so output written inside /app is immediately visible on the host:
docker run --rm \
  -v <buildPath>:/app \
  --user <uid>:<gid> \
  [--env-file .env] \
  <image-tag> \
  sh -c "cd /app && <installCommand> && <buildCommand>"
The --user flag passes the server process’s own UID and GID into the container. This prevents files written by the container from being owned by root on the host, which would block subsequent cleanup. The --env-file argument is only added when the project has secrets. Output from this stage is emitted via run_logs / run_error Socket.io events.
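Assembling the argument vector makes the conditional --env-file explicit. This is a sketch under assumed option and field names; the real code would take the UID/GID from `process.getuid()`/`process.getgid()`, passed as parameters here for testability:

```typescript
function dockerRunArgs(opts: {
  buildPath: string;
  imageTag: string;
  hasEnvFile: boolean;
  installCommand: string;
  buildCommand: string;
  uid: number;
  gid: number;
}): string[] {
  return [
    "run",
    "--rm",
    "-v", `${opts.buildPath}:/app`,
    "--user", `${opts.uid}:${opts.gid}`,
    // Only pass secrets through when the project actually has them.
    ...(opts.hasEnvFile ? ["--env-file", ".env"] : []),
    opts.imageTag,
    "sh", "-c", `cd /app && ${opts.installCommand} && ${opts.buildCommand}`,
  ];
}
```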
11. Build logs batched and saved

Log lines are accumulated in an in-memory buffer as they arrive from docker build and docker run. When each process closes, the entire buffer is flushed to the build_logs table in a single db.insert() call. Each row stores the line number, raw log text, and the build ID.
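The batching pattern can be sketched as below (class and field names are assumptions). The point of the design is one bulk insert per closed process rather than one insert per log line:

```typescript
class LogBuffer {
  private lines: { lineNumber: number; text: string }[] = [];

  push(text: string): void {
    this.lines.push({ lineNumber: this.lines.length + 1, text });
  }

  // Returns the rows for a single bulk insert and resets the buffer.
  flush(buildId: number): { buildId: number; lineNumber: number; text: string }[] {
    const rows = this.lines.map((line) => ({ buildId, ...line }));
    this.lines = [];
    return rows;
  }
}
```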
12. Build status updated

Once the container exits, the build row is updated:
  • Exit code 0 → status: "passed", finishedAt set to now
  • Any other exit code → status: "failed", finishedAt set to now
A buildStatusUpdate event is emitted to the client reflecting the final status.
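The update itself reduces to a small mapping (the shape is an assumption):

```typescript
// Map the container exit code to the final build row update.
function finalUpdate(exitCode: number, now: Date = new Date()) {
  return {
    status: exitCode === 0 ? ("passed" as const) : ("failed" as const),
    finishedAt: now,
  };
}
```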
13. Deployment triggered (if eligible)

If the build passed and the project has no secrets (i.e. hasEnvFile is false), deployProject is called automatically. See Static site deployment and subdomain routing for details on what happens next.
Projects with environment variables are not automatically deployed as static sites. Because a static site is served directly from the filesystem, any secrets embedded in the build output would be publicly accessible. Keep secrets-based projects behind a server-side runtime that can read them safely.
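The eligibility rule can be stated as a one-line gate (names assumed):

```typescript
// Auto-deploy only passing builds with no secrets in the build output.
function shouldAutoDeploy(status: "passed" | "failed", hasEnvFile: boolean): boolean {
  return status === "passed" && !hasEnvFile;
}
```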
14. Temp directory cleaned up

Regardless of whether the build passed or failed, the temp directory is removed:
fs.rmSync(buildPath, { recursive: true, force: true });
This keeps disk usage bounded even across many concurrent builds.
