Overview
Sockudo scales horizontally across multiple nodes using Redis, Redis Cluster, or NATS as the message broker. With proper load balancing, a cluster can handle 100K+ concurrent connections.
Architecture
```
             ┌─────────────────┐
             │  Load Balancer  │  (Nginx/HAProxy)
             │ Sticky Sessions │
             └────────┬────────┘
                      │
     ┌──────────┬─────┴────┬──────────┐
     │          │          │          │
 ┌───▼──┐   ┌───▼──┐   ┌───▼──┐   ┌───▼──┐
 │Node 1│   │Node 2│   │Node 3│   │Node N│
 └───┬──┘   └───┬──┘   └───┬──┘   └───┬──┘
     │          │          │          │
     └──────────┴─────┬────┴──────────┘
                      │
                ┌─────▼────┐
                │  Redis   │  (Adapter)
                │ Cluster  │
                └──────────┘
```
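Conceptually, every node subscribes to a shared broadcast channel on the adapter; a message triggered on one node is published once, and each node delivers it to its own local WebSocket connections. A minimal in-memory sketch of that fan-out (a plain Python list stands in for Redis/NATS pub/sub; Sockudo's actual adapter and wire format are not shown):

```python
from collections import defaultdict

class Broker:
    """Stand-in for Redis/NATS pub/sub: fans each message out to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

class Node:
    """One server instance: holds local sockets, relays events via the broker."""
    def __init__(self, broker):
        self.local_sockets = defaultdict(list)  # channel -> sockets on this node
        self.broker = broker
        broker.subscribers.append(self.deliver)

    def connect(self, channel, socket):
        self.local_sockets[channel].append(socket)

    def trigger(self, channel, data):
        # Publish once; every node (including this one) delivers locally.
        self.broker.publish({"channel": channel, "data": data})

    def deliver(self, message):
        for socket in self.local_sockets[message["channel"]]:
            socket.append(message["data"])

broker = Broker()
node1, node2 = Node(broker), Node(broker)

alice, bob = [], []            # "sockets" are just lists capturing frames
node1.connect("chat", alice)   # Alice is connected to node 1
node2.connect("chat", bob)     # Bob is connected to node 2

node1.trigger("chat", "hello") # published on node 1, delivered on both nodes
```

This is why the adapter is the only shared dependency: nodes never talk to each other directly, only through the broker.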
Choosing an Adapter
Redis (Recommended)
Best for: Most production deployments, 2-10 nodes
Pros:
- Simple setup
- Proven reliability
- Built-in persistence
- Low latency (~1-2ms)
Cons:
- Single point of failure (use Sentinel for HA)
- Limited to single master
```yaml
environment:
  ADAPTER_DRIVER: "redis"
  DATABASE_REDIS_HOST: "redis"
  DATABASE_REDIS_PORT: "6379"
  DATABASE_REDIS_PASSWORD: "${REDIS_PASSWORD}"
```
Redis Cluster
Best for: Large deployments, 10+ nodes, 100K+ connections
Pros:
- Horizontal scaling
- Automatic sharding
- High availability
- No single point of failure
Cons:
- More complex setup
- Higher operational overhead
```yaml
environment:
  ADAPTER_DRIVER: "redis-cluster"
  REDIS_CLUSTER_NODES: "redis-1:6379,redis-2:6379,redis-3:6379,redis-4:6379,redis-5:6379,redis-6:6379"
```
NATS
Best for: Cloud-native deployments, Kubernetes, multi-region
Pros:
- Built for distributed systems
- Excellent multi-region support
- Low latency (<1ms)
- Native clustering
Cons:
- No persistence by default
- Less common than Redis
```yaml
environment:
  ADAPTER_DRIVER: "nats"
  NATS_SERVERS: "nats://nats-1:4222,nats://nats-2:4222,nats://nats-3:4222"
```
Multi-Node Setup with Redis
Docker Compose Configuration
```yaml
services:
  # Shared Redis for all nodes
  redis:
    image: redis:7-alpine
    command: >
      redis-server
      --requirepass ${REDIS_PASSWORD}
      --appendonly yes
      --maxmemory 1gb
      --maxmemory-policy allkeys-lru
    # A healthcheck is required for the service_healthy conditions below
    healthcheck:
      test: ["CMD-SHELL", "redis-cli -a ${REDIS_PASSWORD} ping | grep PONG"]
      interval: 5s
      timeout: 3s
      retries: 5
    volumes:
      - redis-data:/data
    networks:
      - sockudo-network
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '2.0'

  # Sockudo Node 1
  sockudo-node1:
    build: .
    environment:
      ADAPTER_DRIVER: "redis"
      CACHE_DRIVER: "redis"
      QUEUE_DRIVER: "redis"
      RATE_LIMITER_DRIVER: "redis"
      DATABASE_REDIS_HOST: "redis"
      DATABASE_REDIS_PASSWORD: "${REDIS_PASSWORD}"
      INSTANCE_PROCESS_ID: "sockudo-node1"
      HOST: "0.0.0.0"
      PORT: "6001"
      METRICS_PORT: "9601"
    ports:
      - "6001:6001"
      - "9601:9601"
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - sockudo-network

  # Sockudo Node 2
  sockudo-node2:
    build: .
    environment:
      ADAPTER_DRIVER: "redis"
      CACHE_DRIVER: "redis"
      QUEUE_DRIVER: "redis"
      RATE_LIMITER_DRIVER: "redis"
      DATABASE_REDIS_HOST: "redis"
      DATABASE_REDIS_PASSWORD: "${REDIS_PASSWORD}"
      INSTANCE_PROCESS_ID: "sockudo-node2"
      HOST: "0.0.0.0"
      PORT: "6002"
      METRICS_PORT: "9602"
    ports:
      - "6002:6002"
      - "9602:9602"
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - sockudo-network

  # Sockudo Node 3
  sockudo-node3:
    build: .
    environment:
      ADAPTER_DRIVER: "redis"
      CACHE_DRIVER: "redis"
      QUEUE_DRIVER: "redis"
      RATE_LIMITER_DRIVER: "redis"
      DATABASE_REDIS_HOST: "redis"
      DATABASE_REDIS_PASSWORD: "${REDIS_PASSWORD}"
      INSTANCE_PROCESS_ID: "sockudo-node3"
      HOST: "0.0.0.0"
      PORT: "6003"
      METRICS_PORT: "9603"
    ports:
      - "6003:6003"
      - "9603:9603"
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - sockudo-network

  # Load Balancer
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx-lb.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - sockudo-node1
      - sockudo-node2
      - sockudo-node3
    networks:
      - sockudo-network

volumes:
  redis-data:

networks:
  sockudo-network:
    driver: bridge
```
Load Balancing
Nginx Configuration
Create nginx/nginx-lb.conf:
```nginx
upstream sockudo_backend {
    # Sticky sessions using IP hash
    ip_hash;

    server sockudo-node1:6001 max_fails=3 fail_timeout=30s;
    server sockudo-node2:6002 max_fails=3 fail_timeout=30s;
    server sockudo-node3:6003 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name ws.your-domain.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name ws.your-domain.com;

    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://sockudo_backend;
        proxy_http_version 1.1;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Long timeouts so idle WebSocket connections are not dropped
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
    }

    # Health check endpoint
    location /health {
        access_log off;
        return 200 "OK";
    }
}
```
HAProxy Configuration
Alternatively, use HAProxy:
```haproxy
global
    log /dev/log local0
    maxconn 50000

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout client 7d
    timeout server 7d

frontend websocket
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/sockudo.pem
    default_backend sockudo_nodes

backend sockudo_nodes
    balance source  # Sticky sessions using source IP
    option httpchk GET /up
    server node1 sockudo-node1:6001 check
    server node2 sockudo-node2:6002 check
    server node3 sockudo-node3:6003 check
```
Sticky Sessions
Why Sticky Sessions?
WebSocket connections are stateful and must remain on the same node. Use one of these methods:
IP Hash (Nginx)
```nginx
upstream sockudo_backend {
    ip_hash;  # Routes same IP to same server
    server sockudo-node1:6001;
    server sockudo-node2:6002;
    server sockudo-node3:6003;
}
```
Pros: Simple, no cookies
Cons: Not ideal for users behind NAT/proxies
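Nginx's `ip_hash` keys on the first three octets of the client's IPv4 address, so all clients in the same /24 land on the same backend. A rough Python model of that behavior (nginx's actual hash function differs; this only illustrates the routing property):

```python
import hashlib

def ip_hash_backend(client_ip: str, backends: list) -> str:
    """Model of nginx ip_hash: hash the /24 prefix of an IPv4 address
    so every request from the same client hits the same backend."""
    prefix = ".".join(client_ip.split(".")[:3])    # first three octets only
    digest = hashlib.md5(prefix.encode()).digest() # any stable hash works here
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["sockudo-node1:6001", "sockudo-node2:6002", "sockudo-node3:6003"]
```

The /24 granularity is exactly why it behaves poorly behind NAT: many users sharing one egress prefix all pile onto a single node.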
Cookie-Based (HAProxy)
```haproxy
backend sockudo_nodes
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server node1 sockudo-node1:6001 check cookie node1
    server node2 sockudo-node2:6002 check cookie node2
    server node3 sockudo-node3:6003 check cookie node3
```
Pros: Works with NAT, better distribution
Cons: Requires cookie support
Source IP (HAProxy)
```haproxy
backend sockudo_nodes
    balance source
    server node1 sockudo-node1:6001 check
    server node2 sockudo-node2:6002 check
    server node3 sockudo-node3:6003 check
```
Redis Cluster Setup
Creating Redis Cluster
```yaml
services:
  redis-1:
    image: redis:7-alpine
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6379:6379"
    volumes:
      - redis-1-data:/data
    networks:
      - sockudo-network

  redis-2:
    image: redis:7-alpine
    command: redis-server --port 6380 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6380:6380"
    volumes:
      - redis-2-data:/data
    networks:
      - sockudo-network

  redis-3:
    image: redis:7-alpine
    command: redis-server --port 6381 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6381:6381"
    volumes:
      - redis-3-data:/data
    networks:
      - sockudo-network

  # Additional nodes for replicas
  redis-4:
    image: redis:7-alpine
    command: redis-server --port 6382 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6382:6382"
    volumes:
      - redis-4-data:/data
    networks:
      - sockudo-network

  redis-5:
    image: redis:7-alpine
    command: redis-server --port 6383 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6383:6383"
    volumes:
      - redis-5-data:/data
    networks:
      - sockudo-network

  redis-6:
    image: redis:7-alpine
    command: redis-server --port 6384 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes
    ports:
      - "6384:6384"
    volumes:
      - redis-6-data:/data
    networks:
      - sockudo-network

volumes:
  redis-1-data:
  redis-2-data:
  redis-3-data:
  redis-4-data:
  redis-5-data:
  redis-6-data:
```
Initialize Cluster
```bash
# Create cluster with 3 masters and 3 replicas
docker exec -it redis-1 redis-cli --cluster create \
  redis-1:6379 \
  redis-2:6380 \
  redis-3:6381 \
  redis-4:6382 \
  redis-5:6383 \
  redis-6:6384 \
  --cluster-replicas 1
```
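Redis Cluster shards data across 16384 hash slots: each key is assigned to the slot `CRC16(key) mod 16384`, and each master owns a contiguous slot range. If the key contains a `{hash tag}`, only the tag is hashed, which lets related keys be pinned to the same slot. A sketch of the slot calculation:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty {tag},
    # only the tag between the first { and the next } is hashed.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

You can cross-check any key against a live cluster with `CLUSTER KEYSLOT <key>`.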
Then point Sockudo's drivers at the cluster:

```yaml
environment:
  ADAPTER_DRIVER: "redis-cluster"
  CACHE_DRIVER: "redis-cluster"
  QUEUE_DRIVER: "redis-cluster"
  RATE_LIMITER_DRIVER: "redis-cluster"
  REDIS_CLUSTER_NODES: "redis-1:6379,redis-2:6380,redis-3:6381,redis-4:6382,redis-5:6383,redis-6:6384"
```
NATS Setup
NATS Cluster
```yaml
services:
  nats-1:
    image: nats:latest
    command:
      - "-js"
      - "-m"
      - "8222"
      - "--cluster"
      - "nats://0.0.0.0:6222"
      - "--routes"
      - "nats://nats-2:6222,nats://nats-3:6222"
    ports:
      - "4222:4222"
      - "8222:8222"
    networks:
      - sockudo-network

  nats-2:
    image: nats:latest
    command:
      - "-js"
      - "-m"
      - "8222"
      - "--cluster"
      - "nats://0.0.0.0:6222"
      - "--routes"
      - "nats://nats-1:6222,nats://nats-3:6222"
    ports:
      - "4223:4222"
      - "8223:8222"
    networks:
      - sockudo-network

  nats-3:
    image: nats:latest
    command:
      - "-js"
      - "-m"
      - "8222"
      - "--cluster"
      - "nats://0.0.0.0:6222"
      - "--routes"
      - "nats://nats-1:6222,nats://nats-2:6222"
    ports:
      - "4224:4222"
      - "8224:8222"
    networks:
      - sockudo-network
```
Point Sockudo at the NATS cluster:

```yaml
environment:
  ADAPTER_DRIVER: "nats"
  NATS_SERVERS: "nats://nats-1:4222,nats://nats-2:4222,nats://nats-3:4222"
```
Cluster Health Monitoring
Sockudo includes built-in cluster health tracking:
```json
{
  "adapter": {
    "cluster_health": {
      "enabled": true,
      "heartbeat_interval_ms": 10000,
      "node_timeout_ms": 30000,
      "cleanup_interval_ms": 10000
    }
  }
}
```
Or via environment:
```yaml
environment:
  ADAPTER_CLUSTER_HEALTH_ENABLED: "true"
  ADAPTER_CLUSTER_HEALTH_HEARTBEAT_INTERVAL_MS: "10000"
  ADAPTER_CLUSTER_HEALTH_NODE_TIMEOUT_MS: "30000"
  ADAPTER_CLUSTER_HEALTH_CLEANUP_INTERVAL_MS: "10000"
```
Features:
- Automatic node discovery
- Heartbeat monitoring
- Failed node detection
- Automatic cleanup of stale connections
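The core idea behind these settings is simple: every node publishes a heartbeat each `heartbeat_interval_ms`, and peers that stay silent longer than `node_timeout_ms` are declared dead and their stale connections cleaned up. A minimal sketch of that logic (illustrative only; Sockudo's internal implementation is not shown here):

```python
import time

class ClusterHealth:
    """Heartbeat-based node tracking: a node that hasn't heartbeated
    within node_timeout is considered dead and gets cleaned up."""
    def __init__(self, node_timeout_ms=30_000):
        self.node_timeout = node_timeout_ms / 1000.0
        self.last_seen = {}  # node_id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now=None):
        self.last_seen[node_id] = now if now is not None else time.monotonic()

    def dead_nodes(self, now=None):
        now = now if now is not None else time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.node_timeout]

    def cleanup(self, now=None):
        # Runs on each cleanup_interval tick: forget dead nodes so their
        # presence-channel members and connections can be dropped.
        for node in self.dead_nodes(now):
            del self.last_seen[node]
```

Note the ordering constraint this implies: `node_timeout_ms` should be a few multiples of `heartbeat_interval_ms` (as in the defaults above) so a single delayed heartbeat does not mark a healthy node dead.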
Buffer Sizing
```yaml
environment:
  # Scale adapter buffer with CPU cores
  ADAPTER_BUFFER_MULTIPLIER_PER_CPU: "128"

  # Cleanup queue for mass disconnects
  CLEANUP_QUEUE_BUFFER_SIZE: "50000"
  CLEANUP_BATCH_SIZE: "25"
```
Redis Tuning
```yaml
redis:
  command: >
    redis-server
    --maxmemory 2gb
    --maxmemory-policy allkeys-lru
    --tcp-backlog 511
    --timeout 0
    --tcp-keepalive 300
    --save 900 1
    --save 300 10
```
Database Pooling
```yaml
environment:
  DATABASE_POOLING_ENABLED: "true"
  DATABASE_POOL_MIN: "4"
  DATABASE_POOL_MAX: "32"

  # Per-database overrides
  DATABASE_MYSQL_POOL_MIN: "4"
  DATABASE_MYSQL_POOL_MAX: "32"
```
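What the min/max pair means in practice: the pool keeps `POOL_MIN` connections warm so requests never pay connection-setup latency, grows on demand, and caps at `POOL_MAX` so a burst of nodes cannot exhaust the database's connection limit. A generic sketch of that policy (not Sockudo's actual pool implementation):

```python
import queue

class Pool:
    """Bounded connection pool: pre-warm `min_size` connections,
    never create more than `max_size` in total."""
    def __init__(self, factory, min_size=4, max_size=32):
        self.factory, self.max_size = factory, max_size
        self.idle = queue.Queue()
        self.total = 0
        for _ in range(min_size):          # pre-warm POOL_MIN connections
            self.idle.put(factory())
            self.total += 1

    def acquire(self):
        try:
            return self.idle.get_nowait()  # reuse an idle connection
        except queue.Empty:
            if self.total < self.max_size: # grow on demand up to POOL_MAX
                self.total += 1
                return self.factory()
            return self.idle.get(timeout=5)  # at the cap: wait for a release

    def release(self, conn):
        self.idle.put(conn)
```

When sizing, remember the cap is per node: N Sockudo nodes with `POOL_MAX: 32` can open up to 32·N database connections.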
Testing Multi-Node Setup
Use the provided test configuration:
```bash
# Start multi-node test environment
docker-compose -f docker-compose.multinode.yml up

# Open test client
open http://localhost:82/client/

# Connect to different nodes
# Node 1: http://localhost:6002
# Node 2: http://localhost:6003
# Load Balancer: http://localhost:82
```
Scaling Commands
```bash
# Scale to 5 nodes
# (requires a single replicated `sockudo` service definition, rather than the
# fixed sockudo-node1/2/3 services above with published host ports)
docker-compose up -d --scale sockudo=5

# View all nodes
docker-compose ps

# View logs from all nodes
docker-compose logs -f sockudo

# Check specific node
docker-compose logs -f sockudo-node1

# Restart single node (zero downtime)
docker-compose restart sockudo-node1
```
Next Steps