Watch N Chill implements two layers of rate limiting to protect against abuse:

- HTTP rate limiting - limits requests per IP address over a time window
- Socket.IO connection limiting - limits concurrent WebSocket connections per IP

Both use Redis for distributed rate limiting across multiple server instances, with an in-memory fallback if Redis is unavailable.
HTTP rate limiting
HTTP requests are rate-limited per IP address using a fixed-window algorithm: a per-IP counter is incremented on each request and expires when the window elapses.
Configuration
RATE_LIMIT_WINDOW_MS
Time window in milliseconds for rate limiting. Default is 60 seconds (60000 ms).
RATE_LIMIT_MAX_REQUESTS
Maximum number of requests allowed per window. Default is 360 requests per minute (6 req/s).
Implementation
src/backend/rate-limit.ts:4-6
```typescript
const RATE_LIMIT_WINDOW_MS = parseInt(process.env.RATE_LIMIT_WINDOW_MS || '60000', 10);
const RATE_LIMIT_MAX_REQUESTS = parseInt(process.env.RATE_LIMIT_MAX_REQUESTS || '360', 10);
const RATE_LIMIT_KEY_PREFIX = 'rl:';
```
Redis-backed rate limiting
The rate limiter uses a Lua script for atomic increment and expiry operations:
src/backend/rate-limit.ts:9-15
```typescript
const LUA_SCRIPT = `
local current = redis.call('INCR', KEYS[1])
if current == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
return current
`;
```
Lua scripts execute atomically in Redis, ensuring that the increment and expiry operations happen together without race conditions. This prevents edge cases where keys could be incremented but never expire.
src/backend/rate-limit.ts:60-67
```typescript
async function checkRedisRateLimit(ip: string): Promise<{ allowed: boolean; remaining: number }> {
  const key = `${RATE_LIMIT_KEY_PREFIX}${ip}`;
  const windowMs = RATE_LIMIT_WINDOW_MS.toString();
  const current = (await redis.eval(LUA_SCRIPT, 1, key, windowMs)) as number;
  const allowed = current <= RATE_LIMIT_MAX_REQUESTS;
  const remaining = Math.max(0, RATE_LIMIT_MAX_REQUESTS - current);
  return { allowed, remaining };
}
```
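The script's counting semantics (increment per request, counter reset once the window expires) can be unit-tested without a Redis server using a small in-process model. The `FakeRedis` class below is a hypothetical stand-in for illustration, not part of the project:

```typescript
// Hypothetical in-process model of the Lua script's semantics
// (INCR, plus PEXPIRE on the first hit in a window).
class FakeRedis {
  private store = new Map<string, { value: number; expiresAt: number }>();

  // Mirrors the atomic INCR + PEXPIRE pair; `now` is injected for testability.
  incrWithExpiry(key: string, ttlMs: number, now: number): number {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= now) {
      this.store.set(key, { value: 1, expiresAt: now + ttlMs });
      return 1;
    }
    entry.value += 1;
    return entry.value;
  }
}

const fake = new FakeRedis();
// Two requests inside the 60s window, then one after the key has expired.
const counts = [0, 10, 70_000].map((t) => fake.incrWithExpiry('rl:1.2.3.4', 60_000, t));
console.log(counts); // → [ 1, 2, 1 ]
```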
In-memory fallback
If Redis is unavailable, rate limiting falls back to an in-memory store:
src/backend/rate-limit.ts:31-58
```typescript
const memoryStore = new Map<string, { count: number; resetAt: number }>();

async function checkMemoryRateLimit(ip: string): Promise<{ allowed: boolean; remaining: number }> {
  const now = Date.now();
  const key = ip;
  let entry = memoryStore.get(key);
  if (!entry || entry.resetAt <= now) {
    entry = { count: 0, resetAt: now + RATE_LIMIT_WINDOW_MS };
    memoryStore.set(key, entry);
  }
  entry.count += 1;
  // Periodic cleanup...
  const allowed = entry.count <= RATE_LIMIT_MAX_REQUESTS;
  const remaining = Math.max(0, RATE_LIMIT_MAX_REQUESTS - entry.count);
  return { allowed, remaining };
}
```
The in-memory fallback is per-process and not shared across multiple server instances. For production deployments with multiple servers, ensure Redis is always available.
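The selection between the Redis and memory backends is not shown above; a plausible wiring (function names here are hypothetical, not the project's) is to try Redis first and fall back only when it throws:

```typescript
type RateLimitResult = { allowed: boolean; remaining: number };
type Checker = (ip: string) => Promise<RateLimitResult>;

// Hypothetical wiring: prefer the primary (Redis) checker,
// fall back to the in-memory checker when it throws.
async function checkWithFallback(primary: Checker, fallback: Checker, ip: string): Promise<RateLimitResult> {
  try {
    return await primary(ip);
  } catch {
    return fallback(ip);
  }
}

// Demonstration with stand-in checkers (no Redis required):
const redisDown: Checker = async () => { throw new Error('ECONNREFUSED'); };
const memory: Checker = async () => ({ allowed: true, remaining: 359 });
checkWithFallback(redisDown, memory, '1.2.3.4').then((r) => console.log(r)); // → { allowed: true, remaining: 359 }
```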
Rate limit enforcement
Rate limiting is enforced in the custom server before Next.js handles the request:
```typescript
if (!dev) {
  const { allowed, remaining } = await checkRateLimit(req);
  res.setHeader('X-RateLimit-Limit', String(getRateLimitConfig().maxRequests));
  res.setHeader('X-RateLimit-Remaining', String(remaining));
  if (!allowed) {
    res.statusCode = 429;
    res.setHeader('Retry-After', String(retryAfterSec));
    res.setHeader('content-type', 'text/plain; charset=utf-8');
    res.end('Too Many Requests');
    return;
  }
}
```
Rate limiting is disabled in development mode (NODE_ENV !== 'production') to avoid interrupting local development.
Rate limit information is included in response headers:
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| Retry-After | Seconds to wait before retrying (on 429 responses) |
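Clients can use these headers to back off politely. The helper below is a hypothetical client-side sketch, not part of the server:

```typescript
// Hypothetical client helper: how long to wait before retrying,
// based on the status code and the Retry-After header (in seconds).
function retryDelayMs(status: number, headers: Record<string, string>): number {
  if (status !== 429) return 0; // not rate limited, no wait
  const retryAfter = Number(headers['retry-after'] ?? '');
  // Fall back to 1s if the header is missing or malformed.
  return Number.isFinite(retryAfter) && retryAfter > 0 ? retryAfter * 1000 : 1000;
}

console.log(retryDelayMs(429, { 'retry-after': '30' })); // → 30000
console.log(retryDelayMs(200, {})); // → 0
```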
The rate limiter identifies clients by IP address, checking multiple headers:
src/backend/rate-limit.ts:17-28
```typescript
function getClientIp(req: IncomingMessage): string {
  const forwarded = req.headers['x-forwarded-for'];
  if (forwarded) {
    const first = typeof forwarded === 'string' ? forwarded.split(',')[0] : forwarded[0];
    return first?.trim() ?? req.socket.remoteAddress ?? 'unknown';
  }
  const realIp = req.headers['x-real-ip'];
  if (realIp) {
    return typeof realIp === 'string' ? realIp : realIp[0] ?? 'unknown';
  }
  return req.socket.remoteAddress ?? 'unknown';
}
```
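For a multi-hop X-Forwarded-For value, the left-most entry is taken as the client, since each proxy appends the address it saw. The standalone function below re-implements just that parsing step for illustration:

```typescript
// Standalone re-implementation of the X-Forwarded-For parsing step.
// Proxies append addresses, so the left-most entry is the original client.
function firstForwardedIp(forwarded: string): string {
  return forwarded.split(',')[0]?.trim() ?? 'unknown';
}

console.log(firstForwardedIp('203.0.113.7, 10.0.0.2, 10.0.0.1')); // → 203.0.113.7
```

Note that X-Forwarded-For is client-supplied unless it is set or sanitized by a trusted proxy in front of the server, so rate limiting keyed on it assumes such a proxy is in place.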
Socket.IO connection limiting
Socket.IO connections are limited per IP address to prevent WebSocket exhaustion attacks.
Configuration
RATE_LIMIT_SOCKET_MAX_PER_IP
Maximum concurrent Socket.IO connections per IP address. Default is 10.
src/backend/rate-limit.ts:86
```typescript
const SOCKET_CONN_MAX_PER_IP = parseInt(process.env.RATE_LIMIT_SOCKET_MAX_PER_IP || '10', 10);
```
Implementation
Connection counts are tracked in Redis with automatic expiry:
src/backend/rate-limit.ts:93-102
```typescript
export async function checkSocketConnectionAllowed(ip: string): Promise<{ allowed: boolean }> {
  const key = `${SOCKET_KEY_PREFIX}${ip}`;
  try {
    const current = await redis.incr(key);
    if (current === 1) await redis.expire(key, 3600); // 1h TTL
    return { allowed: current <= SOCKET_CONN_MAX_PER_IP };
  } catch {
    return { allowed: true }; // allow on Redis failure
  }
}
```
The 1-hour TTL ensures that connection counters are eventually cleaned up even if clients disconnect ungracefully without triggering the decrementSocketConnection cleanup.
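The increment/decrement lifecycle can be sketched with a plain Map standing in for Redis; `MAX_PER_IP` below is an illustrative stand-in for `SOCKET_CONN_MAX_PER_IP`, and this sketch omits the TTL:

```typescript
// Map-based stand-in for the Redis connection counter; lifecycle only.
const connCounts = new Map<string, number>();
const MAX_PER_IP = 3; // stand-in for SOCKET_CONN_MAX_PER_IP

function connect(ip: string): boolean {
  const next = (connCounts.get(ip) ?? 0) + 1; // redis.incr
  connCounts.set(ip, next);
  return next <= MAX_PER_IP;
}

function disconnect(ip: string): void {
  connCounts.set(ip, Math.max(0, (connCounts.get(ip) ?? 0) - 1)); // redis.decr
}

const results = [connect('a'), connect('a'), connect('a'), connect('a')];
console.log(results); // → [ true, true, true, false ]
disconnect('a'); // the rejected socket is cleaned up on disconnect
disconnect('a'); // one real client leaves
console.log(connect('a')); // → true, a slot is free again
```

Note that a rejected connection is incremented and then decremented again when its socket is torn down, which is why the disconnect cleanup below matters even for refused clients.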
Enforcement in Socket.IO
Connection limits are checked during Socket.IO connection establishment:
src/backend/socket/index.ts:32-45
```typescript
io.on('connection', async (socket) => {
  const forwarded = socket.handshake.headers['x-forwarded-for'];
  const clientIp =
    (typeof forwarded === 'string' ? forwarded.split(',')[0]?.trim() : forwarded?.[0]?.trim()) ??
    socket.handshake.address ??
    socket.conn.remoteAddress ??
    'unknown';
  socket.data.clientIp = clientIp;

  const { allowed } = await checkSocketConnectionAllowed(clientIp);
  if (!allowed) {
    socket.disconnect(true);
    return;
  }
  // ... rest of connection handler
});
```
Cleanup on disconnect
When a socket disconnects, the connection counter is decremented:
src/backend/socket/handleDisconnect.ts
```typescript
import { decrementSocketConnection } from '@/backend/rate-limit';

export function handleDisconnect(socket: Socket) {
  const clientIp = socket.data.clientIp;
  if (clientIp) {
    decrementSocketConnection(clientIp);
  }
  // ... rest of disconnect handling
}
```
src/backend/rate-limit.ts:104-111
```typescript
export async function decrementSocketConnection(ip: string): Promise<void> {
  const key = `${SOCKET_KEY_PREFIX}${ip}`;
  try {
    await redis.decr(key);
  } catch {
    // ignore
  }
}
```
Redis key structure
HTTP rate limit
- Key: `rl:{ip}`
- Type: Integer (request count)
- TTL: rate limit window (default 60s)
- Counts requests from this IP in the current window.

Socket connections
- Key: `rl:socket:{ip}`
- Type: Integer (connection count)
- TTL: 1 hour
- Number of active Socket.IO connections from this IP.
Production recommendations
Adjust limits for your traffic
The default limits (360 req/min HTTP, 10 concurrent sockets) are conservative. Monitor your traffic patterns and adjust accordingly:

```shell
# For high-traffic sites
RATE_LIMIT_MAX_REQUESTS=1000
RATE_LIMIT_SOCKET_MAX_PER_IP=25
```
Always use Redis in production for accurate distributed rate limiting:

```shell
REDIS_URL=rediss://default:password@your-upstash.upstash.io:6379
```

The in-memory fallback is per-process and won't protect against distributed attacks.
Log or monitor 429 responses to identify legitimate users hitting limits or potential attacks:

```typescript
if (!allowed) {
  console.warn(`Rate limit exceeded for IP ${ip}`);
  // Send to monitoring service
}
```
For known good IPs (health checks, monitoring, trusted partners), implement allowlisting:

```typescript
const ALLOWLISTED_IPS = process.env.ALLOWLISTED_IPS?.split(',') || [];
if (ALLOWLISTED_IPS.includes(ip)) {
  return { allowed: true, remaining: maxRequests };
}
```
API reference
checkRateLimit
```typescript
async function checkRateLimit(req: IncomingMessage): Promise<{
  allowed: boolean;
  remaining: number;
}>
```
Checks if an HTTP request should be allowed based on rate limiting.
checkSocketConnectionAllowed
```typescript
async function checkSocketConnectionAllowed(ip: string): Promise<{
  allowed: boolean;
}>
```
Checks if a new Socket.IO connection from the given IP should be allowed.
decrementSocketConnection
```typescript
async function decrementSocketConnection(ip: string): Promise<void>
```
Decrements the connection counter for an IP when a socket disconnects.
getRateLimitConfig
```typescript
function getRateLimitConfig(): {
  windowMs: number;
  maxRequests: number;
}
```
Returns the current rate limiting configuration.