The `join` command connects your local LLM endpoint to a Gambiarra room, making your models available to other participants through the hub.
## Usage
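A plausible synopsis, assuming the CLI binary is named `gambiarra` (an assumption); the flags are the shorthands documented under Options below:

```
gambiarra join -c <room-code> -m <model> [-e <endpoint>] [-n <name>] [-p <password>] [-H <hub-url>] [--no-specs]
```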
## Options

- `-c <code>`: Room code to join. This is the code you received when the room was created.
- `-m <model>`: Model to expose from your endpoint. Must be available on your LLM server.
- `-e <url>`: OpenAI-compatible API endpoint. Works with Ollama, LM Studio, and other OpenAI-compatible servers.
- `-n <name>`: Display name for your endpoint. If not provided, defaults to `<model>@<id-prefix>`.
- `-p <password>`: Room password (if the room is password-protected).
- `-H <url>`: Hub URL to connect to.
- `--no-specs`: Don't share machine specs (CPU, RAM, GPU) with the room. By default, the command detects and shares your system specifications.
## Examples
### Join with Ollama
Connect your local Ollama server to a room. The command stays running and sends periodic health checks to the hub; keep the terminal open to remain in the room.
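A sketch of the invocation, assuming the binary is named `gambiarra` and that `llama3` is a model already available in Ollama (both are assumptions; the room code is a placeholder):

```bash
# Hypothetical: expose llama3 from a local Ollama server (default port 11434)
gambiarra join -c ROOM-CODE -m llama3 -e http://localhost:11434
```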
### Join with LM Studio

LM Studio typically runs on port 1234.

### Join with custom nickname

Set a friendly display name with `-n`.

### Join password-protected room

Provide the room password with `-p`.

### Join remote hub

Connect to a hub on a different machine with `-H`.

### Hide system specs

Prevent sharing your hardware information with `--no-specs`.

### Complete example with all options
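Sketches of the invocations described above, under the same assumptions (binary named `gambiarra`; room codes, model names, hostnames, and passwords are placeholders):

```bash
# Join with LM Studio (default port 1234)
gambiarra join -c ROOM-CODE -m mistral-7b -e http://localhost:1234

# Join with a custom nickname
gambiarra join -c ROOM-CODE -m llama3 -n "my-gaming-rig"

# Join a password-protected room
gambiarra join -c ROOM-CODE -m llama3 -p hunter2

# Join a hub on a different machine
gambiarra join -c ROOM-CODE -m llama3 -H http://192.168.1.50:8080

# Hide system specs
gambiarra join -c ROOM-CODE -m llama3 --no-specs

# Complete example with all options
gambiarra join -c ROOM-CODE -m llama3 -e http://localhost:11434 \
  -n "my-node" -p hunter2 -H http://192.168.1.50:8080 --no-specs
```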
## How it works

- Model validation: The command connects to your endpoint and verifies the model exists.
- Registration: Sends a join request to the hub with your model, endpoint, and specs.
- Health checks: Sends periodic health checks every 10 seconds (configurable in core).
- Active session: Remains connected until you press `Ctrl+C` or the connection is lost.
- Graceful exit: Notifies the hub when leaving.

Health checks run every 10 seconds. If 3 consecutive checks fail (30 seconds), you're automatically removed from the room.
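The 30-second window is simply the product of the two numbers above; a quick sanity check:

```bash
# Removal window = check interval x allowed consecutive failures
INTERVAL=10      # seconds between health checks
MAX_FAILURES=3   # consecutive failures before removal
echo $((INTERVAL * MAX_FAILURES))   # prints 30
```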
## Supported endpoints

Gambiarra works with any OpenAI-compatible API server:

- Ollama: default endpoint `http://localhost:11434`
- LM Studio: default endpoint `http://localhost:1234`
- LocalAI: configurable endpoint
- Custom servers: any server implementing OpenAI's API format
## Endpoint detection

The command automatically detects your endpoint type:

- Tries the Ollama API format (`/api/tags`)
- Falls back to the OpenAI format (`/v1/models`)
- Lists available models from the detected format
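The same probe order can be reproduced by hand with `curl` (the endpoint URL is an example; the exact requests the command sends may differ):

```bash
# Try the Ollama route first, then fall back to the OpenAI route
ENDPOINT="${ENDPOINT:-http://localhost:11434}"
if curl -fsS "$ENDPOINT/api/tags" >/dev/null 2>&1; then
  echo "ollama"
elif curl -fsS "$ENDPOINT/v1/models" >/dev/null 2>&1; then
  echo "openai"
else
  echo "unknown"
fi
```

If neither route answers, the endpoint is either down or speaks a different protocol.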
## Error handling

### Model not found

Reported when the specified model isn't available at your endpoint.

### No models at endpoint

Reported when the endpoint exposes no models.

### Wrong password

Reported when the room password is incorrect.

### Room not found

Reported when the room code doesn't exist.

### Connection lost

Occurs if the connection to the hub is lost during the session.

## Health check behavior
While connected:

- Health checks are sent every 10 seconds
- Participant timeout is 30 seconds (3 failed checks)
- If health checks fail, you’re removed from the room
- Graceful shutdown notifies the hub immediately
## System specs detection

By default, the command detects and shares:

- CPU: Model and core count
- RAM: Total system memory
- GPU: Graphics card model (if available)
Pass `--no-specs` if you prefer not to share.
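On Linux, roughly the same information can be inspected by hand with standard tools (illustrative only; the command's own probe may read these differently):

```bash
# CPU core count, total RAM, and GPU model, queried manually (Linux)
nproc
grep MemTotal /proc/meminfo
lspci 2>/dev/null | grep -i 'vga\|3d' || echo "no GPU info via lspci"
```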
## Related commands

- `create`: Create a new room
- `list`: List available rooms
- `monitor`: Monitor room activity in real time
- `serve`: Start the hub server
## Next steps

After joining a room:

- Keep the terminal open to stay connected
- Monitor the room from another terminal
- Other participants can now route requests to your model
- Press `Ctrl+C` when you want to leave