## System overview

In production, Nginx sits in front of the Express server as a reverse proxy, handling TLS termination and request forwarding. In development, the Express server runs directly with a local HTTPS certificate.

## Component breakdown
### Next.js frontend

Package: `frontend/`

Built with Next.js 16 using the App Router, React 19, Tailwind CSS v4, and shadcn/ui (Radix UI primitives). The frontend is a pure client-consumer: it never touches MongoDB or any infrastructure service directly. All data operations go through the Express REST API.

Key responsibilities:

- Renders the post composer, dashboard, analytics views, and settings pages
- Uploads media to AWS S3 via a signed URL received from the backend
- Maintains a Socket.IO connection to receive real-time post status updates
- Manages authentication state using JWT access and refresh tokens stored in HTTP-only cookies
- Renders analytics charts using Recharts
### Express API server

Package: `backend/`
Entry point: `backend/src/app.ts`

Built with Express v5 and TypeScript. The server bootstraps in sequence: MongoDB → Redis → RabbitMQ → analytics cron → HTTP/HTTPS server with Socket.IO.

API routes are all mounted under `/api`:

| Prefix | Purpose |
|---|---|
| `/api/auth` | Registration, login, Google OAuth, OTP verification |
| `/api/posts` | Create, list, update, delete, retry posts; media upload |
| `/api/platform` | Connect and manage social platform credentials |
| `/api/generate` | AI caption generation via Google Gemini |
| `/api/analytics` | Post and account analytics |
| `/api/payments` | Stripe checkout, subscription management, billing portal |
| `/api/notifications` | User notification list and read status |
| `/api/admin` | User management and platform-level admin controls |
| `/api/profile` | User profile reads and updates |
| `/api/firebase` | Firebase Cloud Messaging token registration |

The `/api/payments/webhook` route receives the raw request body (not JSON-parsed) so that Stripe webhook signature verification works correctly.

Security middleware: Helmet (HTTP headers), CORS (origin allowlist), cookie-parser, and Morgan (structured JSON request logs forwarded to Better Stack via Winston).
### MongoDB

Client library: Mongoose v8
Config: `backend/src/config/database.ts`

MongoDB is the primary datastore. It holds users, posts (with per-platform status sub-documents), platform credentials, notifications, analytics snapshots, and subscription records.

The connection uses `autoIndex: false` in all environments to prevent index builds from blocking startup. Indexes should be created manually or via migration scripts.

The database module implements automatic reconnection: if the connection drops, it retries every 5 seconds.
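As a rough sketch, a connection module with this behaviour might look like the following. This is illustrative, not the actual contents of `backend/src/config/database.ts`; the environment variable name is an assumption.

```typescript
import mongoose from "mongoose";

const RECONNECT_DELAY_MS = 5_000; // retry every 5 seconds, as described above

async function connectWithRetry(uri: string): Promise<void> {
  try {
    await mongoose.connect(uri, {
      autoIndex: false, // never build indexes at startup; create them via migrations
    });
  } catch (err) {
    console.error("MongoDB connection failed, retrying in 5s", err);
    setTimeout(() => connectWithRetry(uri), RECONNECT_DELAY_MS);
  }
}

// Re-enter the retry loop if an established connection drops.
mongoose.connection.on("disconnected", () => {
  setTimeout(
    () => connectWithRetry(process.env.MONGODB_URI!), // hypothetical variable name
    RECONNECT_DELAY_MS
  );
});
```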
### Redis

Client library: redis v4
Config: `backend/src/config/redis.ts`

Redis connects using the `REDIS_HOST` and `REDIS_PORT` environment variables. It is used for short-lived caching, rate-limit counters, and any session-adjacent data that benefits from fast in-memory reads.

The client logs connection errors and successful connections through Winston.
### RabbitMQ

Client library: amqplib
Config: `backend/src/config/rabbitmq.ts`
Worker entry: `backend/src/workers/index.ts`

RabbitMQ is the message broker for all asynchronous jobs. It decouples HTTP request handling from social platform API calls and analytics fetching. See the background job processing section for the full queue topology.
### Background workers

Script: `pnpm run worker` (inside `backend/`)
Files: `backend/src/workers/posting.worker.ts`, `backend/src/workers/analytics.worker.ts`

Workers run as a separate Node.js process from the web server. They connect to MongoDB and RabbitMQ independently. Two consumers run concurrently in the same worker process:

- PostWorker — processes social media publishing jobs from the `social_posts` queue
- AnalyticsWorker — processes analytics fetch jobs from the `analytics_fetch` queue

The RabbitMQ prefetch count is set to 1 per worker, meaning each worker processes one message at a time before acknowledging and picking up the next.

## Data flow: from post creation to publishing
The sequence below shows the full lifecycle of a post, from user action to platform publication.

Step by step:

1. Media upload — The user attaches an image. The frontend sends it to `POST /api/posts/media-upload`. The backend uploads it to AWS S3 and returns the object URL.
2. Caption generation (optional) — The user clicks Generate Caption. The backend calls the Google Gemini API and streams back platform-appropriate suggestions.
3. Post submission — The user submits the post form. For each selected platform, the backend creates a job message and publishes it to RabbitMQ:
   - Immediate posts → `POST_EXCHANGE` (topic, routing key `post.create.<platform>`)
   - Scheduled posts → `POST_DELAYED_EXCHANGE` (`x-delayed-message`, same routing key with an `x-delay` header in milliseconds)
4. Queue routing — Both exchanges route matching messages to the `social_posts` queue via the `post.create.*` binding.
5. Worker processing — The PostWorker dequeues a message and:
   a. Checks whether the post was cancelled (skip and ACK if so)
   b. Checks for duplicate delivery (idempotency: skip if the platform already has `status=completed`)
   c. Validates stored platform credentials
   d. Calls the platform-specific posting service
   e. Updates the per-platform status in MongoDB
6. Notification — On success or permanent failure, the worker calls `NotificationService.createNotification()`, which emits a Socket.IO event to the user's room and creates a Firebase push notification.
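The routing decision in step 3 can be sketched as a pure function. The helper name and return shape are hypothetical; the exchange names and routing keys come from the topology described below.

```typescript
// Hypothetical helper showing how step 3 picks an exchange for a job message.
interface JobRoute {
  exchange: string;
  routingKey: string;               // matches the post.create.* binding
  headers: Record<string, number>;  // carries x-delay for scheduled posts
}

function buildPostRoute(
  platform: string,
  scheduledAt?: Date,
  now: number = Date.now()
): JobRoute {
  const routingKey = `post.create.${platform}`;
  const delayMs = scheduledAt ? scheduledAt.getTime() - now : 0;
  if (delayMs > 0) {
    // Scheduled: the x-delayed-message exchange holds the message until
    // the x-delay header (in milliseconds) expires.
    return {
      exchange: "POST_DELAYED_EXCHANGE",
      routingKey,
      headers: { "x-delay": delayMs },
    };
  }
  // Immediate: plain topic-exchange delivery.
  return { exchange: "POST_EXCHANGE", routingKey, headers: {} };
}
```

Either way the message carries the same routing key, so a single binding on `social_posts` covers both paths.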
## Background job processing

Hayon uses a carefully structured RabbitMQ topology to handle retries, dead letters, and scheduled delivery.

### Queue topology
#### Exchanges

| Exchange | Type | Purpose |
|---|---|---|
| `POST_EXCHANGE` | topic | Immediate post delivery |
| `POST_DELAYED_EXCHANGE` | x-delayed-message | Scheduled post delivery (requires the delayed-message plugin) |
| `DLX_EXCHANGE` | direct | Routes failed messages to dead-letter or retry queues |
| `ANALYTICS_EXCHANGE` | topic | Analytics fetch job delivery |
#### Queues

| Queue | Purpose |
|---|---|
| `social_posts` | Main processing queue for all post jobs |
| `analytics_fetch` | Analytics data fetch jobs |
| `retry_queue` | Holds retryable messages with a TTL; routes back to `POST_EXCHANGE` on expiry |
| `dead_letters` | Permanent failures and unroutable messages for inspection |
| `parking_lot` | Messages that have exhausted all retry attempts |
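Declaring a topology like this with amqplib might look roughly as follows. This is a sketch under stated assumptions (durability flags and option values are guesses); the real declarations live in `backend/src/config/rabbitmq.ts`.

```typescript
import { Channel } from "amqplib";

async function declareTopology(ch: Channel): Promise<void> {
  await ch.assertExchange("POST_EXCHANGE", "topic", { durable: true });
  // The x-delayed-message type is provided by the RabbitMQ delayed-message plugin.
  await ch.assertExchange("POST_DELAYED_EXCHANGE", "x-delayed-message", {
    durable: true,
    arguments: { "x-delayed-type": "topic" },
  });
  await ch.assertExchange("DLX_EXCHANGE", "direct", { durable: true });
  await ch.assertExchange("ANALYTICS_EXCHANGE", "topic", { durable: true });

  // Main queue: rejected/failed messages are dead-lettered to DLX_EXCHANGE.
  await ch.assertQueue("social_posts", {
    durable: true,
    deadLetterExchange: "DLX_EXCHANGE",
  });
  await ch.bindQueue("social_posts", "POST_EXCHANGE", "post.create.*");
  await ch.bindQueue("social_posts", "POST_DELAYED_EXCHANGE", "post.create.*");

  // Retry queue: when a message's per-message TTL expires, it is dead-lettered
  // back to POST_EXCHANGE for another processing attempt.
  await ch.assertQueue("retry_queue", {
    durable: true,
    deadLetterExchange: "POST_EXCHANGE",
  });

  await ch.assertQueue("analytics_fetch", { durable: true });
  await ch.assertQueue("dead_letters", { durable: true });
  await ch.assertQueue("parking_lot", { durable: true });
}
```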
### Retry logic

When a job fails, the worker checks two conditions before deciding whether to retry:

- Attempt count — fewer than 3 attempts recorded in the post's `platformStatuses` sub-document
- Error type — the error is classified as retryable (rate-limit responses, network timeouts, `ECONNRESET`, `ENOTFOUND`)

If both conditions hold, the message is republished to the `retry_queue` with a TTL (the delay grows with each attempt). When the TTL expires, RabbitMQ routes it back to `POST_EXCHANGE` for another processing attempt. After three failed attempts, the message goes to `parking_lot` and the post status is set to `failed`.
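The retry decision can be sketched as follows. The attempt limit and the two error codes come from the description above; the HTTP status checks and the exact backoff schedule are assumptions for illustration.

```typescript
const MAX_ATTEMPTS = 3;
const RETRYABLE_CODES = new Set(["ECONNRESET", "ENOTFOUND"]);

interface JobError {
  code?: string;   // Node.js network error code
  status?: number; // HTTP status returned by the platform API
}

function isRetryable(err: JobError): boolean {
  if (err.code !== undefined && RETRYABLE_CODES.has(err.code)) return true;
  if (err.status === 429) return true; // rate-limit response
  if (err.status === 408 || err.code === "ETIMEDOUT") return true; // timeout (assumed)
  return false;
}

function shouldRetry(attempts: number, err: JobError): boolean {
  return attempts < MAX_ATTEMPTS && isRetryable(err);
}

// Growing per-message TTL for retry_queue. The text only says the delay grows;
// this geometric schedule (10s, 30s, 90s) is illustrative.
function retryTtlMs(attempt: number): number {
  return 10_000 * 3 ** (attempt - 1);
}
```

Anything that fails `shouldRetry` goes straight to `parking_lot` (attempts exhausted) or `dead_letters` (non-retryable).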
### Analytics jobs

A separate `AnalyticsCronService` runs on the backend server and periodically publishes messages to `ANALYTICS_EXCHANGE` with the routing key `analytics.fetch.*`. The AnalyticsWorker consumes these from the `analytics_fetch` queue and writes the results back to MongoDB.
## Real-time updates via WebSockets

Hayon uses Socket.IO (v4) for real-time post status updates and notifications.

Config: `backend/src/config/socket.ts`

Socket.IO is initialised on the same HTTP/HTTPS server as Express. Authentication is enforced in a middleware layer: every connecting client must provide a valid JWT access token in `socket.handshake.auth.token`. The token is verified against `ACCESS_TOKEN_SECRET`, and the decoded `userId` is stored in `socket.data.user`.

Upon successful authentication, the socket is added to a private room named after the user's MongoDB `_id`. This means the worker can target notifications to exactly one user by emitting to the `userId` room — no broadcast, no leakage between accounts.

The frontend connects using `socket.io-client` (v4) with the access token attached. When a post status changes, the user sees the update in the Posts list without refreshing.
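A minimal sketch of the token check, assuming HS256 access tokens and using only `node:crypto` for illustration (a real middleware would use a JWT library); the Socket.IO wiring is shown in trailing comments and the claim name `userId` is taken from the description above.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyAccessToken(
  token: string,
  secret: string
): { userId: string } | null {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) return null;
  // Recompute the HS256 signature over header.payload.
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  if (expected.length !== sig.length) return null;
  if (!timingSafeEqual(Buffer.from(expected), Buffer.from(sig))) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp && claims.exp * 1000 < Date.now()) return null; // expired
  return { userId: claims.userId };
}

// In the Socket.IO middleware (sketch):
// io.use((socket, next) => {
//   const user = verifyAccessToken(socket.handshake.auth.token,
//                                  process.env.ACCESS_TOKEN_SECRET!);
//   if (!user) return next(new Error("unauthorized"));
//   socket.data.user = user;
//   socket.join(user.userId); // private per-user room
//   next();
// });
```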
## External integrations

### AWS S3 (media storage)

SDKs: `@aws-sdk/client-s3`, `@aws-sdk/s3-request-presigner`

All user-uploaded media (images, etc.) is stored in a dedicated S3 bucket. The backend generates presigned URLs for uploads and returns the final object URL for inclusion in post payloads sent to social platforms.

Required variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_S3_BUCKET_NAME`.
### Stripe (payments and subscriptions)

SDK: stripe v19

Stripe handles all payment operations. The integration covers:

- Checkout sessions — initiated by `POST /api/payments/checkout` to upgrade to Pro
- Billing portal — lets users manage or cancel subscriptions via Stripe's hosted UI
- Webhooks — `POST /api/payments/webhook` receives Stripe lifecycle events (subscription created, updated, deleted) and updates the user's plan in MongoDB

Webhook events are processed only after the `stripe.webhooks.constructEvent()` signature check passes.

Required variables: `STRIPE_SECRET_KEY`, `STRIPE_PUBLISHABLE_KEY`, `STRIPE_WEBHOOK_SECRET`, `STRIPE_PRO_PRICE_ID`.
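The signature check is why the webhook route must see the raw body: Stripe signs an HMAC-SHA256 over `<timestamp>.<rawBody>`, so any re-serialisation of a parsed JSON body changes the bytes and fails verification. The sketch below re-implements the documented scheme for illustration only (timestamp-tolerance and multiple-signature handling omitted); in practice the SDK's `constructEvent()` does this.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Stripe-Signature header ("t=<unix>,v1=<hex>") against the raw body.
function verifyStripeSignature(
  rawBody: string,
  sigHeader: string,
  secret: string
): boolean {
  const parts: Record<string, string> = Object.fromEntries(
    sigHeader.split(",").map((kv) => kv.split("=") as [string, string])
  );
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`) // signed payload is timestamp + "." + raw body
    .digest("hex");
  const given = Buffer.from(parts.v1 ?? "", "hex");
  const want = Buffer.from(expected, "hex");
  return given.length === want.length && timingSafeEqual(given, want);
}
```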
### Google Gemini AI (caption generation)

SDK: `@google/genai`

The `POST /api/generate` endpoint accepts post content and calls the Gemini API to produce platform-specific caption suggestions. Usage is metered per user: Free-plan users get 15 generations per month, Pro users get 30.

Required variable: `GEMINI_API_KEY`.
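The metering rule can be stated as a small guard. The limits come from the text; the helper name and how the monthly counter is stored are hypothetical.

```typescript
type Plan = "free" | "pro";

// Monthly caption-generation quotas per plan.
const GENERATION_LIMITS: Record<Plan, number> = { free: 15, pro: 30 };

function canGenerate(plan: Plan, usedThisMonth: number): boolean {
  return usedThisMonth < GENERATION_LIMITS[plan];
}
```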
### Google OAuth (authentication)

Library: `passport-google-oauth20`

Passport.js orchestrates the Google OAuth 2.0 flow. On first login, a new user document is created in MongoDB with the Google ID and display name. On subsequent logins, only `lastLogin` is updated. If the email already exists under a different provider, the login is rejected with an explicit `email_exists_different_provider` error.

Required variables: `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GOOGLE_CALLBACK_URL`.
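The provider-conflict rule above boils down to a three-way decision. The record shape, provider names, and function name here are hypothetical; only the outcome string `email_exists_different_provider` comes from the text.

```typescript
type Provider = "google" | "local";

interface ExistingUser {
  email: string;
  provider: Provider;
}

type LoginOutcome = "create" | "login" | "email_exists_different_provider";

function resolveGoogleLogin(existing: ExistingUser | null): LoginOutcome {
  if (existing === null) return "create"; // first login: new user document
  if (existing.provider !== "google") return "email_exists_different_provider";
  return "login"; // subsequent login: only lastLogin is updated
}
```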
### Firebase (push notifications)

SDKs: `firebase-admin` (backend), `firebase` (frontend)

Firebase Cloud Messaging delivers push notifications to users' browsers or mobile devices. The backend initialises the Admin SDK using a service account key file (`serviceAccountKey.json`). The frontend registers FCM tokens via `POST /api/firebase`; these are stored against the user record and used when the worker dispatches a notification after publishing.
### Social platform APIs

Each supported platform (Bluesky, Facebook, Threads, Tumblr, and Mastodon) has its own posting service and client stack: `@atproto/api` for Bluesky, `oauth-1.0a` with `axios` for Tumblr, and `axios` with the platform's OAuth flow (Meta OAuth or OAuth 2.0) for the rest. See the platform integrations guide for connection details.

### Email (Nodemailer / Gmail)
Library: nodemailer
Config: `backend/src/config/mailer.ts`

Transactional emails (OTP verification, password reset) are sent through Gmail's SMTP service. The transport is configured with `EMAIL_USER` and `EMAIL_PASS` (a Gmail app password, not your account password).

## Repository structure
Frontend and backend share a `@hayon/schemas` workspace package (`workspace:*`) for Zod validation schemas. This is the only code shared between the two packages.

## Branch strategy

Hayon uses a three-branch promotion strategy to keep production stable:

| Branch | Purpose | Rules |
|---|---|---|
| `main` | Production | Always stable. No direct commits. Deployed to live. |
| `staging` | Pre-production | QA and testing environment. Mirrors production config. |
| `dev` | Active development | All feature branches merge here first. |
## Logging and observability

The backend uses Winston for structured logging with two transports in production:

- DailyRotateFile — writes JSON logs to disk with daily rotation
- Logtail / Better Stack (`@logtail/winston`) — streams logs to Better Stack for real-time search and alerting

The `BETTER_STACK_TOKEN` environment variable is required at startup.
## Next steps

- Quickstart — follow the step-by-step setup guide to run Hayon locally.
- Platform integrations — connect Bluesky, Facebook, Threads, Tumblr, and Mastodon accounts.
- Self-hosting — deploy Hayon to your own infrastructure with the full self-hosting guide.
- API reference — explore the full REST API surface.
