
Quick Diagnostics

Before diving into specific issues, run these diagnostic commands:
# Check hub health
curl http://localhost:3000/health

# List available rooms
gambiarra list

# Check if your LLM endpoint is responding
curl http://localhost:11434/v1/models
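The same checks can be scripted. A minimal sketch (assumes Node 18+ with built-in fetch; the URLs mirror the defaults above and may differ in your setup):

```typescript
// Probe an HTTP endpoint and report whether it responds in time.
// Returns false on connection errors or timeout instead of throwing.
async function isReachable(url: string, timeoutMs = 3000): Promise<boolean> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return res.ok;
  } catch {
    return false;
  }
}

// Example: run both diagnostics in one go
async function quickDiagnostics() {
  console.log("hub:", await isReachable("http://localhost:3000/health"));
  console.log("llm:", await isReachable("http://localhost:11434/v1/models"));
}
```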

Common Issues

Port Already in Use

Symptoms:
Error: Failed to start server: address already in use

Cause: Another process is using port 3000 (or your specified port).

Solutions:
  1. Use a different port:
gambiarra serve --port 3001
  2. Find and kill the process using the port:
# Find the process
lsof -i :3000
# or
netstat -tlnp | grep :3000

# Kill the process
kill -9 <PID>
  3. Check if another Gambiarra instance is running:
ps aux | grep gambiarra
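The port check above can also be scripted. A sketch using Node's net module (assumes the default port 3000; adjust to your --port value):

```typescript
import net from "node:net";

// Try to bind the port: if binding fails (e.g. EADDRINUSE), something
// else, possibly another Gambiarra instance, is already listening.
function isPortFree(port: number, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once("error", () => resolve(false));
    probe.once("listening", () => probe.close(() => resolve(true)));
    probe.listen(port, host);
  });
}
```

Usage: `isPortFree(3000).then((free) => console.log(free ? "free" : "in use"));`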
Cannot Connect to Hub

Symptoms:
Error: Failed to connect to hub at http://localhost:3000
ECONNREFUSED

Cause: Hub is not running or not accessible.

Solutions:
  1. Verify hub is running:
curl http://localhost:3000/health
  2. Check if hub is listening on the correct interface:
# If hub is bound to 127.0.0.1, it's not accessible from other machines
gambiarra serve --hostname 0.0.0.0 --port 3000
  3. Verify firewall rules:
# Linux
sudo ufw status
sudo ufw allow 3000/tcp

# macOS
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --list
  4. Try using the IP address instead of localhost:
# Find your local IP
hostname -I  # Linux
ipconfig getifaddr en0  # macOS

# Connect using IP
gambiarra join ABC123 --hub http://192.168.1.100:3000
Room Not Found

Symptoms:
Error: Room not found

Cause: Room code is incorrect or the room was deleted.

Solutions:
  1. List available rooms:
gambiarra list
  2. Create a new room:
gambiarra create --name "My Room"
  3. Check for typos in the room code:
# Room codes are uppercase
gambiarra join ABC123  # Correct
gambiarra join abc123  # Will be converted to ABC123
  4. Verify hub URL:
# If using a custom hub URL
gambiarra list --hub http://192.168.1.100:3000
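The uppercase conversion noted above can be sketched as a small helper (hypothetical illustration; the actual CLI behavior may differ):

```typescript
// Hypothetical client-side normalization mirroring the documented CLI
// behavior: trim whitespace and uppercase the code before sending it.
function normalizeRoomCode(input: string): string {
  return input.trim().toUpperCase();
}
```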
Model Not Found

Symptoms:
Model 'llama3' not found.
Available models: llama3.2, mistral, codellama

Cause: Model name doesn't match any loaded model.

Solutions:
  1. List available models on your endpoint:
# Ollama
ollama list
curl http://localhost:11434/api/tags

# OpenAI-compatible
curl http://localhost:11434/v1/models
  2. Pull/download the model:
# Ollama
ollama pull llama3

# LM Studio: Use the GUI to download models
  3. Use exact model name (case-sensitive):
gambiarra join ABC123 --model llama3.2
  4. Verify model is loaded:
# Some providers require models to be explicitly loaded
# Check your LLM server's documentation
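Since model names are case-sensitive, a resolver that falls back to a case-insensitive match can help surface near-misses. A hypothetical sketch (not part of the Gambiarra codebase):

```typescript
// Hypothetical helper: resolve a requested model name against the list
// returned by the endpoint. Exact match wins; otherwise try a
// case-insensitive match so "Llama3.2" still finds "llama3.2".
function resolveModel(requested: string, available: string[]): string | undefined {
  if (available.includes(requested)) return requested;
  const lower = requested.toLowerCase();
  return available.find((m) => m.toLowerCase() === lower);
}
```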
Participant Shows as Offline

Symptoms:
Error: Participant is offline

Cause: Participant health checks are failing.

Technical Details:
  • Health check interval: 10 seconds (see packages/core/src/types.ts:15)
  • Timeout threshold: 30 seconds (3 missed checks)
  • Implementation: packages/core/src/hub.ts:380-388
Solutions:
  1. Check if participant process is still running:
ps aux | grep gambiarra
  2. Verify network connectivity:
# From participant to hub
curl http://<hub-ip>:3000/health
  3. Check participant logs for errors:
# Look for health check failures or connection errors
  4. Restart the participant:
# Ctrl+C to stop, then rejoin
gambiarra join ABC123 --model llama3
  5. Check for network issues:
  • WiFi disconnections
  • Laptop sleep/hibernate
  • Network congestion
  • Firewall blocking health checks
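The thresholds above reduce to a simple rule, sketched here for illustration (constants mirror the documented 10-second interval and 30-second timeout; this is not the hub's actual code):

```typescript
// A participant is considered offline once no health check has
// succeeded for 30 seconds (three missed checks at a 10s interval).
const HEALTH_CHECK_INTERVAL_MS = 10_000;
const OFFLINE_THRESHOLD_MS = 3 * HEALTH_CHECK_INTERVAL_MS;

function isOffline(lastSeenMs: number, nowMs: number): boolean {
  return nowMs - lastSeenMs > OFFLINE_THRESHOLD_MS;
}
```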
mDNS Discovery Not Working

Symptoms:
Cannot auto-discover hub on local network

Cause: mDNS/Bonjour is not enabled or blocked.

Solutions:
  1. Enable mDNS on hub:
gambiarra serve --mdns
  2. Check if mDNS is supported:
# Linux: Ensure avahi-daemon is running
sudo systemctl status avahi-daemon
sudo systemctl start avahi-daemon

# macOS: Bonjour is built-in
# Windows: Bonjour Print Services must be installed
  3. Check for firewall blocking mDNS (port 5353 UDP):
# Linux
sudo ufw allow 5353/udp

# Test mDNS manually
avahi-browse -a
  4. Use explicit hub URL instead:
gambiarra join ABC123 --hub http://192.168.1.100:3000
  5. Check network configuration:
  • Ensure devices are on the same network segment
  • Some networks block multicast traffic
  • VPNs may interfere with mDNS
LLM Endpoint Not Responding

Symptoms:
Error: Failed to proxy request: ECONNREFUSED
No models found at http://localhost:11434

Cause: LLM server is not running or not accessible.

Solutions:
  1. Verify LLM server is running:
# Ollama
ollama list
curl http://localhost:11434/api/tags

# LM Studio: Check if server is started in GUI

# vLLM
curl http://localhost:8000/v1/models
  2. Check correct port:
# Common default ports
# Ollama: 11434
# LM Studio: 1234
# LocalAI: 8080
# vLLM: 8000
# text-generation-webui: 5000
  3. Test endpoint directly:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hi"}]
  }'
  4. Check server logs for errors:
# Ollama logs
journalctl -u ollama -f  # Linux systemd

# Other servers: Check console output
  5. Restart the LLM server:
# Ollama
sudo systemctl restart ollama  # Linux
ollama serve  # Manual

# Others: Use provider-specific restart method
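An OpenAI-compatible /v1/models endpoint returns a JSON body shaped like `{ "data": [{ "id": "…" }] }`. A small sketch for extracting the model IDs so they can be compared against the name passed to --model:

```typescript
// Shape of the OpenAI-compatible /v1/models response (trimmed to the
// fields used here).
interface ModelsResponse {
  data: { id: string }[];
}

// Pull out the model IDs from the response body.
function listModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}
```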
Streaming Not Working

Symptoms:
Streaming response hangs or returns all at once

Cause: SSE (Server-Sent Events) is not properly configured.

Solutions:
  1. Verify you are consuming the stream (with the AI SDK, streamText always streams; there is no stream option):
const result = await streamText({
  model: gambiarra.any(),
  prompt: "Write a story",
});

// The response arrives incrementally via the async iterator
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
  2. Check if using reverse proxy with buffering:
# Nginx: Disable buffering for SSE
location / {
  proxy_buffering off;
  proxy_cache off;
  chunked_transfer_encoding off;
}
  3. Verify LLM endpoint supports streaming:
curl -N http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": true
  }'
  4. Check hub logs for streaming errors:
  • Hub should return Content-Type: text/event-stream
  • See implementation: packages/core/src/hub.ts:284-293
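The Content-Type check can be scripted against a response header; a minimal sketch:

```typescript
// Check whether a response's Content-Type indicates an SSE stream.
// Matches "text/event-stream" with optional parameters like charset.
function isEventStream(contentType: string | null): boolean {
  if (!contentType) return false;
  return contentType.toLowerCase().trim().startsWith("text/event-stream");
}
```

Usage: `isEventStream(response.headers.get("content-type"))`.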
Invalid Password

Symptoms:
Error: Invalid password

Cause: Incorrect password or password not provided.

Solutions:
  1. Verify password is correct:
# Passwords are case-sensitive
gambiarra join ABC123 --password "Secret123"
  2. Check if room requires password:
# List rooms to see if password is required
gambiarra list
  3. Create room with password:
gambiarra create --name "Private" --password "mypass"
  4. Join with password:
gambiarra join ABC123 \
  --model llama3 \
  --password "mypass"
Passwords are hashed with argon2id before storage (see packages/core/src/room.ts:10-18).
High Memory Usage

Symptoms:
Hub or participant consuming excessive RAM

Cause: Memory leaks or resource buildup.

Solutions:
  1. Check for room/participant accumulation:
# Rooms are stored in memory
# List all rooms and participants
gambiarra list
  2. Restart the hub periodically:
# Hub stores rooms in memory only
# Restarting clears all rooms
  3. Monitor SSE connections:
# Long-lived SSE connections can accumulate
# Ensure TUI/monitoring clients reconnect properly
  4. Check for participant health check buildup:
  • Health checks run every 10 seconds
  • Failed participants should be marked offline
  • See: packages/core/src/hub.ts:380-388
SDK Connection Issues

Symptoms:
SDK calls failing or hanging

Cause: Incorrect SDK configuration or network issues.

Solutions:
  1. Verify SDK configuration:
import { createGambiarra } from "gambiarra-sdk";

const gambiarra = createGambiarra({
  roomCode: "ABC123",
  hubUrl: "http://localhost:3000",  // Ensure correct URL
});
  2. Test hub connectivity:
// Check if hub is accessible
const response = await fetch(
  `${hubUrl}/rooms/${roomCode}/participants`
);
console.log(await response.json());
  3. Verify participants are online:
const response = await fetch(
  `${hubUrl}/rooms/${roomCode}/participants`
);
const { participants } = await response.json();
console.log(participants.filter(p => p.status === "online"));
  4. Check model routing:
// Try different routing strategies
gambiarra.any()  // Random online participant
gambiarra.participant("participant-id")  // Specific participant
gambiarra.model("llama3")  // By model name
  5. Catch and log errors for debugging:
// Add error handling
try {
  const result = await generateText({
    model: gambiarra.any(),
    prompt: "test",
  });
} catch (error) {
  console.error("Generation failed:", error);
}
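Transient failures (for example, ECONNREFUSED while the hub restarts) can be retried with exponential backoff. A hypothetical wrapper, not part of gambiarra-sdk:

```typescript
// Retry a failing async call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Backoff schedule: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage: `await withRetry(() => generateText({ model: gambiarra.any(), prompt: "test" }))`.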

Platform-Specific Issues

Linux

Issue: Permission denied on port 3000
# Use a port > 1024 or run with sudo (not recommended)
gambiarra serve --port 3001
Issue: avahi-daemon not running
sudo systemctl enable avahi-daemon
sudo systemctl start avahi-daemon

macOS

Issue: Firewall blocking connections
# Allow Gambiarra through firewall
# System Preferences > Security & Privacy > Firewall > Firewall Options
# Add gambiarra to allowed applications
Issue: Bonjour not discovering services
# Bonjour is built-in, but can be blocked by network settings
# Check network preferences for any proxy/VPN interference

Windows

Issue: Bonjour not installed
# Install Bonjour Print Services
# Download from Apple or use Chocolatey
choco install bonjour
Issue: Windows Firewall blocking
# Allow port through Windows Firewall
netsh advfirewall firewall add rule name="Gambiarra" dir=in action=allow protocol=TCP localport=3000

Debugging Tools

Enable Debug Logging

# Set environment variable for verbose output
export DEBUG=gambiarra:*
gambiarra serve

Monitor SSE Events

Watch real-time events from a room:
curl -N http://localhost:3000/rooms/ABC123/events
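SSE events arrive as `data:` lines separated by blank lines. A minimal parser sketch for inspecting a buffered payload (real clients should parse incrementally; this only illustrates the wire format):

```typescript
// Split a buffered event-stream payload into the payloads carried on
// "data:" lines, dropping comments and other fields.
function parseSseData(raw: string): string[] {
  return raw
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
}
```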

Inspect Hub State

# List all rooms
curl http://localhost:3000/rooms

# Get room participants
curl http://localhost:3000/rooms/ABC123/participants

# Check available models
curl http://localhost:3000/rooms/ABC123/v1/models

Network Diagnostics

# Test connectivity from participant to hub
ping <hub-ip>

# Test TCP connection
telnet <hub-ip> 3000
# or
nc -zv <hub-ip> 3000

# Trace route
traceroute <hub-ip>

Check Process Status

# List all Gambiarra processes
ps aux | grep gambiarra

# Check network connections
lsof -i :3000
netstat -tlnp | grep :3000

Getting Help

If you’re still experiencing issues:
  1. Check existing GitHub issues: github.com/arthurbm/gambiarra/issues
  2. Create a new issue with:
    • Full error message
    • Steps to reproduce
    • Environment details (OS, Bun version, LLM provider)
    • Output from diagnostic commands
  3. Join the community for real-time help

Known Limitations

These are not bugs but design limitations:
  • No persistent storage: Rooms exist only in memory
  • No authentication: Designed for trusted networks
  • No rate limiting: Can be abused without proxy
  • No load balancing: Participants selected randomly
  • No request queuing: Busy participants may reject requests
See the Roadmap for planned improvements.
