
Purpose

The Upload Service is the application-layer owner of the content creation workflow. It sits between the creator client and the Media Processing Pipeline, handling all coordination that precedes file processing. The Upload Service does not handle file bytes — raw media is uploaded directly from the client to object storage via presigned URLs, keeping the Upload Service out of the data path.
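To make the data-path split concrete, here is a minimal sketch of what a presigned upload URL conveys: the service signs the bucket, key, and expiry so the client can PUT bytes directly to storage. This is a simplified HMAC illustration, not real S3 SigV4 signing (production services use an SDK presigner); the hostname, function name, and query fields are assumptions for illustration.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign_upload_url(bucket: str, key: str, secret: bytes,
                       expiry_seconds: int = 900) -> str:
    """Illustrative presigned-URL shape: the service signs bucket/key/expiry
    so the client can upload directly, keeping the service out of the data path."""
    expires = int(time.time()) + expiry_seconds  # 15-minute window
    payload = f"PUT\n{bucket}\n{key}\n{expires}".encode()
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "signature": signature})
    return f"https://{bucket}.example-storage.com/{key}?{query}"
```

The client uses the returned URL until it expires; resuming after expiry means fetching a fresh URL from the session endpoint.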

Responsibilities

| Responsibility | Detail |
| --- | --- |
| Presigned URL generation | Issues time-limited (15-minute) presigned S3 URLs for direct client-to-storage multipart uploads. Routes to the global hot staging bucket or the Nigeria residency staging bucket based on the Residency Policy Engine decision made at session creation. |
| Upload session management | Creates and tracks upload sessions in Redis: session_id, content_id, creator_id, residency_decision, upload_progress, expiry. Supports resumable uploads — clients reconnect using the stored session_id. |
| Metadata submission | Accepts content metadata at session creation: title, description, tags, category, language, visibility, scheduled publish time. Metadata is validated and persisted to Postgres before the file lands in storage. |
| Draft saving | Upload sessions can be saved as drafts — metadata is persisted but the pipeline is not triggered until the creator explicitly publishes. |
| Content scheduling | A scheduled publish timestamp is stored at metadata submission. A scheduler cron polls for due-to-publish drafts and emits media.upload.initiated to trigger the pipeline at the configured time. |
| Pipeline trigger | On upload completion (confirmed via S3 event notification), the Upload Service marks the session complete and emits media.upload.initiated to Kafka, handing off to the Upload Ingestor. |

**Residency decisions are immutable.** Once the Residency Policy Engine records a NIGERIA or GLOBAL decision at upload time, that decision cannot be changed by any application principal. Enforcement is backed by cloud IAM bucket policy — not application code.
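The routing half of this can be sketched as a pure lookup from the recorded decision to a staging bucket. The bucket names below are hypothetical placeholders, not the service's real configuration; the `frozen` dataclass mirrors the immutability of the decision at the application layer (real enforcement is IAM, per the note above).

```python
from dataclasses import dataclass

# Hypothetical bucket names for illustration; real names come from config.
BUCKETS = {
    "GLOBAL": "global-hot-staging",
    "NIGERIA": "ng-residency-staging",
}

@dataclass(frozen=True)  # frozen: the decision cannot be mutated after creation
class ResidencyDecision:
    content_id: str
    region: str  # "GLOBAL" or "NIGERIA"

def staging_bucket(decision: ResidencyDecision) -> str:
    """Route the upload to the staging bucket chosen at session creation."""
    try:
        return BUCKETS[decision.region]
    except KeyError:
        raise ValueError(f"unknown residency decision: {decision.region}")
```

Failing closed on an unknown region matches the no-fallback-routing stance described under Failure Behaviour.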

API Surface

| Method | Endpoint | Auth | Description |
| --- | --- | --- | --- |
| POST | /api/v1/uploads/session | Bearer (Creator) | Create upload session, get presigned URL and content ID |
| GET | /api/v1/uploads/session/{sessionId} | Bearer (Creator) | Fetch session status and resume upload URL |
| PATCH | /api/v1/uploads/session/{sessionId}/metadata | Bearer (Creator) | Update content metadata before publish |
| POST | /api/v1/uploads/session/{sessionId}/publish | Bearer (Creator) | Publish a draft (triggers pipeline via Kafka) |
| DELETE | /api/v1/uploads/session/{sessionId} | Bearer (Creator) | Cancel an in-progress upload session |
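A session-creation response might look like the sketch below. The field names are assumptions about the contract, not the service's actual schema; they line up with the session fields listed under Data Owned.

```python
import time
import uuid

def create_session_response(creator_id: str, residency: str) -> dict:
    """Illustrative response body for POST /api/v1/uploads/session.
    Field names are assumptions, not the service's published contract."""
    session_id = str(uuid.uuid4())
    return {
        "session_id": session_id,
        "content_id": str(uuid.uuid4()),
        "creator_id": creator_id,
        "residency": residency,          # NIGERIA or GLOBAL, fixed at creation
        "upload_url": f"https://storage.example.com/{session_id}?signed=...",
        "expires_at": int(time.time()) + 900,  # presigned URL valid 15 minutes
    }
```

Clients that lose connectivity re-fetch the session via GET with the same `session_id` to obtain a resume URL.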

Data Owned

| Store | Schema |
| --- | --- |
| Postgres | content_drafts (content_id, creator_id, title, description, tags, category, visibility, scheduled_at, status, residency, created_at) |
| Redis | Upload sessions keyed by upload_session:{sessionId} — includes presigned URL metadata, progress, expiry, residency routing decision |
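The Redis side can be sketched as a small TTL'd key-value store. This in-memory stand-in shows the key scheme and expiry behaviour; the JSON encoding and exact TTL handling are assumptions.

```python
import json
import time
from typing import Optional

class SessionStore:
    """In-memory stand-in for the Redis session store (key scheme from the
    table above; expiry handling simplified for illustration)."""

    def __init__(self):
        self._data = {}

    @staticmethod
    def key(session_id: str) -> str:
        return f"upload_session:{session_id}"

    def save(self, session_id: str, session: dict, ttl_seconds: int) -> None:
        # Roughly SETEX: value plus an absolute expiry timestamp.
        self._data[self.key(session_id)] = (json.dumps(session),
                                            time.time() + ttl_seconds)

    def load(self, session_id: str) -> Optional[dict]:
        entry = self._data.get(self.key(session_id))
        if entry is None or entry[1] < time.time():
            return None  # expired or unknown: client must start a new session
        return json.loads(entry[0])
```

Keeping the TTL on the session key means abandoned uploads clean themselves up without a sweeper.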

Kafka Topics

| Topic | Action |
| --- | --- |
| media.upload.initiated | Produced on upload completion or scheduled publish trigger |
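A plausible shape for the message value is sketched below. The field set is an assumption for illustration, not the actual published event schema.

```python
import json
import time

def upload_initiated_event(content_id: str, creator_id: str,
                           residency: str, bucket: str, key: str) -> bytes:
    """Illustrative media.upload.initiated payload handed to the Upload
    Ingestor; field names are assumptions, not the real schema."""
    event = {
        "event": "media.upload.initiated",
        "content_id": content_id,
        "creator_id": creator_id,
        "residency": residency,                 # carried so downstream stages
        "object": {"bucket": bucket, "key": key},  # never re-derive routing
        "emitted_at": int(time.time()),
    }
    return json.dumps(event).encode("utf-8")  # Kafka message value
```

Carrying the residency decision and object location in the event lets the pipeline consume without calling back into the Upload Service.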

Failure Behaviour

| Failure | Behaviour |
| --- | --- |
| Upload Service unavailable | Returns a user-facing error. In-progress uploads are resumable — the session state is persisted in Redis and survives a service restart. Clients present the session_id to resume. |
| S3 completion event lost | The Upload Service provides a manual completion endpoint as a fallback. Clients may poll session status; a dedicated reconciliation job checks for stale sessions with complete S3 objects. |
| Residency Policy Engine unavailable | Upload session creation fails with 503. The RPE is a hard dependency for routing decisions — no fallback routing is applied, to avoid accidental misrouting of residency content. |
| Kafka producer failure | Pipeline trigger is retried with exponential backoff. If the emit fails after exhausting retries, the session is marked PENDING_TRIGGER and a background job retries on the next polling cycle. |
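The Kafka-failure row can be sketched as a retry loop: exponential backoff, then a PENDING_TRIGGER fallback for the background job. The function and parameter names are illustrative; the status strings follow the table above.

```python
import time

def emit_with_retry(produce, payload: bytes,
                    max_attempts: int = 5, base_delay: float = 0.5) -> str:
    """Retry a Kafka produce with exponential backoff. If all attempts fail,
    return PENDING_TRIGGER so the background job retries next polling cycle."""
    for attempt in range(max_attempts):
        try:
            produce(payload)
            return "TRIGGERED"
        except Exception:
            if attempt == max_attempts - 1:
                break  # retries exhausted; defer to the background job
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return "PENDING_TRIGGER"
```

Because the session status records the failed hand-off, a crash between upload completion and a successful emit is recoverable rather than silently dropped.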
