
Clother supports three local inference backends. No API key is needed for any of them. You must always pass --model because local providers have no built-in default model.
Launcher            Backend      Port
clother-ollama      Ollama       11434
clother-lmstudio    LM Studio    1234
clother-llamacpp    llama.cpp    8000
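
If a launcher cannot reach its backend, a quick first check is whether anything is listening on the expected port. This is ordinary shell tooling, not part of Clother, and assumes nc (netcat) is installed:

nc -z localhost 11434 && echo "Ollama port open"      # default Ollama port
nc -z localhost 1234 && echo "LM Studio port open"    # default LM Studio port
nc -z localhost 8000 && echo "llama.cpp port open"    # default llama.cpp port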

Ollama

Ollama manages and serves local models with a simple CLI.

1. Install Ollama

Download and install Ollama from ollama.com.
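
On Linux, Ollama also documents a one-line install script; verify the current command on ollama.com before running it:

curl -fsSL https://ollama.com/install.sh | sh   # official Linux installer, check the site for the up-to-date command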

2. Pull a model

ollama pull qwen3-coder
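
To confirm the model was pulled and see its exact name, list the models Ollama has stored locally:

ollama list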

3. Start the server

ollama serve
Ollama listens on port 11434 by default.
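
To verify the server is reachable, you can query Ollama's REST API for the locally available models (adjust the host or port if you changed them):

curl http://localhost:11434/api/tags   # lists local models as JSON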

4. Launch with Clother

clother-ollama --model qwen3-coder
Clother sets the auth token to the literal string ollama. You do not need to configure an API key.
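
To see what that token looks like on the wire, here is a manual request to Ollama's OpenAI-compatible endpoint using ollama as the bearer token. This is an illustration only, not a claim about the exact request Clother builds; Ollama ignores the token's value:

# Illustration only: Ollama accepts any bearer token, so "ollama" works as a placeholder
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{"model": "qwen3-coder", "messages": [{"role": "user", "content": "Hello"}]}'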

LM Studio

LM Studio provides a desktop GUI for downloading and running models locally.

1. Install LM Studio

Download and install LM Studio from lmstudio.ai/download.

2. Load a model

Open LM Studio and download a model from its built-in model browser, then load it once the download completes.

3. Start the local server

In LM Studio, navigate to the Local Server tab and start the server on port 1234.
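
With the server running, you can list the loaded models through LM Studio's OpenAI-compatible API; this is also a convenient way to find the exact identifier to pass to --model:

curl http://localhost:1234/v1/models   # lists loaded models and their identifiers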

4. Launch with Clother

clother-lmstudio --model <model-name>
Replace <model-name> with the model identifier shown in LM Studio.
Clother sets the auth token to the literal string lmstudio. You do not need to configure an API key.
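
For example, if LM Studio showed the identifier qwen2.5-coder-7b-instruct (a hypothetical name used here purely for illustration), the launch command would be:

clother-lmstudio --model qwen2.5-coder-7b-instruct   # hypothetical identifier; use the one LM Studio displays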

llama.cpp

llama.cpp is a high-performance inference engine for GGUF models.

1. Build llama.cpp

Clone and build the project following the instructions at github.com/ggml-org/llama.cpp.
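
As a rough sketch, a typical CMake build looks like the following; flags and targets change over time, so check the repository's build documentation for the current steps:

# from the llama.cpp checkout
cmake -B build
cmake --build build --config Release
# the llama-server binary is typically produced under build/bin/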

2. Start the server

./llama-server --model model.gguf --port 8000 --jinja
Replace model.gguf with the path to your model file. The --jinja flag is required for correct tool-calling support.
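
To confirm the server is up, you can hit llama-server's health endpoint (path assumed from the project's server documentation):

curl http://localhost:8000/health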

3. Launch with Clother

clother-llamacpp --model <model-name>
llama.cpp uses auth_mode: none. No authentication token is sent.
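
To illustrate the difference, a manual request to llama-server's OpenAI-compatible endpoint needs no Authorization header at all; this is a sketch assuming the default /v1/chat/completions route and the model loaded at startup:

# no Authorization header; llama-server serves the model it was started with
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'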
