
The load balancer is an Nginx process that sits in front of all search-service instances and routes /search and /health requests to them. It uses the least_conn directive so that each new connection goes to the backend that currently has the fewest active connections, spreading load evenly when response times vary. Backends are declared statically in nginx.conf before the service starts; there is no dynamic service discovery.
nginx.conf must be edited to include the correct IP addresses of your search-service nodes before starting the load balancer. The placeholder value <NODE_IP> is not replaced automatically. Starting Nginx with the unedited config will result in all requests failing immediately.

Upstream configuration

The active nginx.conf upstream block is:
upstream search_backend {
    least_conn;

    server <NODE_IP>:7003 max_fails=10 fail_timeout=30s;
    # server <NODE_IP>:7003 max_fails=10 fail_timeout=30s;
    # server <NODE_IP>:7003 max_fails=10 fail_timeout=30s;

    keepalive 64;
}
The two commented-out server lines are templates for additional backends. Uncomment and fill in the IP address of each search-service node you want to add.

Failover behavior

Each server directive carries two failure-tracking parameters:
| Parameter | Value | Meaning |
|---|---|---|
| max_fails | 10 | Nginx marks a backend as temporarily unavailable after 10 failed upstream attempts (connection errors, timeouts, or 502/503/504 responses) within the fail_timeout window. |
| fail_timeout | 30s | The backend stays unavailable for 30 seconds before Nginx tries it again; the same 30-second window is used to count the 10 failures. |
The proxy_next_upstream directive on the /search location retries failed requests on the next available backend for error, timeout, http_502, http_503, and http_504 conditions.
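In context, the retry directive sits inside the /search location block. A minimal sketch, showing only the directives documented here (the proxy_pass target follows the upstream name above):

location /search {
    proxy_pass http://search_backend;

    # Retry a failed request on the next available backend
    # for these error conditions.
    proxy_next_upstream error timeout http_502 http_503 http_504;
}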

Keepalive connection pooling

keepalive 64 instructs each Nginx worker process to maintain a pool of up to 64 idle persistent connections to the upstream group. Combined with proxy_http_version 1.1 and proxy_set_header Connection "", this ensures that HTTP/1.1 keep-alive is used between the load balancer and the backends, avoiding the overhead of a new TCP handshake on every proxied request.
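The pooling setup spans two places: the keepalive directive lives in the upstream block shown earlier, and the two proxy directives belong in each proxied location. A sketch of the location-side half:

# In each proxied location:
proxy_http_version 1.1;         # keep-alive to the upstream requires HTTP/1.1
proxy_set_header Connection ""; # clear the default "Connection: close" so pooled sockets stay open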

Adding or removing backends

To add a backend, uncomment one of the template lines and replace <NODE_IP> with the actual address of the search-service node:
upstream search_backend {
    least_conn;

    server 10.0.1.10:7003 max_fails=10 fail_timeout=30s;
    server 10.0.1.11:7003 max_fails=10 fail_timeout=30s;
    server 10.0.1.12:7003 max_fails=10 fail_timeout=30s;

    keepalive 64;
}
To remove a backend, delete or comment out its server line. After every change, reload Nginx:
nginx -s reload
A reload applies the new configuration without dropping in-flight connections.
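To catch syntax errors before the reload takes effect, the configuration can first be validated with Nginx's built-in test flag:

nginx -t && nginx -s reload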

Proxied locations

Both locations forward Host and X-Real-IP headers to the upstream.
| Location | Connect timeout | Send/Read timeout | Notes |
|---|---|---|---|
| /search | 60 s | 60 s | Retries on error, timeout, 502, 503, 504. |
| /health | 5 s | 5 s | Short timeout; used by health checks. |
The load balancer listens on port 8080 (mapped from container port 80).
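Putting the pieces together, a server block consistent with the settings above would look roughly like the following. This is a sketch: the header variables and any directive value not documented in this section are conventional assumptions, not the exact shipped configuration.

server {
    listen 80;  # published on the host as port 8080

    location /search {
        proxy_pass http://search_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Forwarded headers ($host/$remote_addr are the conventional values).
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # 60 s timeouts, with retry on the next backend for the listed conditions.
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }

    location /health {
        proxy_pass http://search_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Short 5 s timeouts so health checks fail fast.
        proxy_connect_timeout 5s;
        proxy_send_timeout 5s;
        proxy_read_timeout 5s;
    }
}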
