Set `PROVIDER` in your `.gga` config to point to one of the supported options.
## Provider overview

| Provider | Config value | CLI command used | Installation |
|---|---|---|---|
| Claude | `claude` | `echo "prompt" \| claude --print` | claude.ai/code |
| Gemini | `gemini` | `gemini -p "prompt"` | github.com/google-gemini/gemini-cli |
| Codex | `codex` | `codex exec "prompt"` | npm i -g @openai/codex |
| OpenCode | `opencode` or `opencode:<model>` | `opencode run "prompt"` | opencode.ai |
| Ollama | `ollama:<model>` | `ollama run <model> "prompt"` | ollama.ai |
| LM Studio | `lmstudio` or `lmstudio:<model>` | HTTP API call to local server | lmstudio.ai |
| GitHub Models | `github:<model>` | HTTP API via `gh auth token` | github.com/marketplace/models |
You can also override the provider for a single run without changing your config file.
## Provider setup
### Claude

Claude is the recommended provider for strict mode and CI/CD pipelines. It reliably follows structured instructions and consistently outputs the `STATUS: PASSED` / `STATUS: FAILED` format.

**Installation**

Install the Claude Code CLI from claude.ai/code.

**Verify**

Confirm the `claude` command is available in your PATH before running GGA.

**Config**

In `.gga`, set `PROVIDER="claude"`.

**How GGA calls it**

`echo "prompt" | claude --print`
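Putting the table values together, a minimal `.gga` entry for Claude (the quoting style mirrors the `PROVIDER="ollama"` examples elsewhere on this page):

```shell
# .gga
PROVIDER="claude"

# GGA then pipes each generated prompt into the Claude Code CLI:
#   echo "prompt" | claude --print
```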
### Gemini

Google's Gemini CLI. Built into the Antigravity IDE; if you use Antigravity, Gemini is already available in your integrated terminal.

**Installation**

Install the Gemini CLI from github.com/google-gemini/gemini-cli.

**Verify**

You must be authenticated (`gemini login`) before GGA can use it.

**Config**

In `.gga`, set `PROVIDER="gemini"`.

**How GGA calls it**

`gemini -p "prompt"`
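The corresponding minimal `.gga` entry (config value and invocation taken from the overview table):

```shell
# .gga
PROVIDER="gemini"

# GGA then invokes the Gemini CLI non-interactively:
#   gemini -p "prompt"
```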
### Codex

OpenAI's Codex CLI, optimized for code tasks.

**Installation**

`npm i -g @openai/codex`

**Verify**

Confirm the `codex` command is available in your PATH.

**Config**

In `.gga`, set `PROVIDER="codex"`.

**How GGA calls it**

`codex exec "prompt"`
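The corresponding minimal `.gga` entry (values from the overview table):

```shell
# .gga
PROVIDER="codex"

# GGA then runs Codex in non-interactive exec mode:
#   codex exec "prompt"
```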
### OpenCode

OpenCode is a provider-agnostic AI coding CLI. You can use it with its default model or pin a specific model.

**Installation**

Install from opencode.ai.

**Config**

In `.gga`, set `PROVIDER="opencode"` for the default model, or `PROVIDER="opencode:<model>"` to pin a specific model.

**How GGA calls it**

`opencode run "prompt"`
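A minimal `.gga` entry for either mode (the `<model>` placeholder is left as-is; substitute a model identifier OpenCode accepts):

```shell
# .gga — use OpenCode's default model
PROVIDER="opencode"

# or pin a specific model:
#   PROVIDER="opencode:<model>"

# GGA then invokes:
#   opencode run "prompt"
```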
### Ollama (local)

Run models locally with Ollama. No API keys required, and all data stays on your machine.

**Installation**

Install from ollama.ai or via Homebrew (`brew install ollama`).

**Pull a model**

Pull the model you want to use, e.g. `ollama pull <model>`.

**Config**

In `.gga`, set `PROVIDER="ollama:<model>"`. A model name is required: `PROVIDER="ollama"` without a model name will fail validation.

**Custom host**

If your Ollama instance is not running on localhost:11434, set `OLLAMA_HOST`.

**How GGA calls it**

GGA prefers the Ollama REST API (via `curl` + `python3`) for clean output. It falls back to the `ollama run <model> "prompt"` CLI if those tools are not available.
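A sketch of the REST-style call described above. GGA's exact request is internal; the endpoint and fields shown here are Ollama's standard `/api/generate` API, and the parsing step is demonstrated on a canned response so it runs without a server:

```shell
# Host default matches the page above; override via OLLAMA_HOST otherwise.
OLLAMA_HOST="${OLLAMA_HOST:-localhost:11434}"

# The REST call GGA prefers (requires a running Ollama server):
#   curl -s "http://${OLLAMA_HOST}/api/generate" \
#     -d '{"model": "<model>", "prompt": "prompt", "stream": false}'

# python3-based extraction of the reply, shown on a canned response:
SAMPLE='{"model":"<model>","response":"STATUS: PASSED","done":true}'
printf '%s' "$SAMPLE" |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])'
# prints: STATUS: PASSED
```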
### LM Studio (local)

LM Studio runs models locally through an OpenAI-compatible HTTP API. No API keys required.

**Setup**

1. Download LM Studio: install from lmstudio.ai.
2. Start the local server: open the Local Server tab in LM Studio and start the server. It listens on http://localhost:1234/v1 by default.

**Config**

In `.gga`, set `PROVIDER="lmstudio"` for the server's default model, or `PROVIDER="lmstudio:<model>"` to pin a specific model.

**Custom host**

If your LM Studio server is on a different host or port, set `LMSTUDIO_HOST`.

**Verify the connection**

Check that the local server is reachable before running GGA.

**How GGA calls it**

GGA calls the OpenAI-compatible chat/completions endpoint directly. `curl` and `python3` must be available on your system; GGA uses `python3` for safe JSON parsing, and a `sed`/`grep` fallback is used if `python3` is not available.
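A sketch of the call described above. GGA's exact request shape is internal; the paths and fields here follow the standard OpenAI-compatible schema that LM Studio exposes, and the parsing step runs on a canned response so no server is needed:

```shell
# Host default matches the page above; override via LMSTUDIO_HOST otherwise.
LMSTUDIO_HOST="${LMSTUDIO_HOST:-localhost:1234}"

# Verify the server is up (model listing is part of the OpenAI-compatible API):
#   curl -s "http://${LMSTUDIO_HOST}/v1/models"

# The chat/completions call GGA makes (requires the local server to be running):
#   curl -s "http://${LMSTUDIO_HOST}/v1/chat/completions" \
#     -H "Content-Type: application/json" \
#     -d '{"messages": [{"role": "user", "content": "prompt"}]}'

# python3-based extraction of the reply, shown on a canned response:
SAMPLE='{"choices":[{"message":{"role":"assistant","content":"STATUS: PASSED"}}]}'
printf '%s' "$SAMPLE" |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
# prints: STATUS: PASSED
```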
### GitHub Models

Access dozens of hosted models (GPT-4o, GPT-4.1, DeepSeek R1, Grok 3, Phi-4, LLaMA, and more) using your existing GitHub account. No separate API keys needed.

**Setup**

Both the `gh` CLI and `curl` must be installed and available in PATH.

**Config**

In `.gga`, set `PROVIDER="github:<model>"`. A model name is required: `PROVIDER="github"` without a model will fail validation. Browse all available models at github.com/marketplace/models.

**How GGA calls it**

GGA retrieves a token via `gh auth token` and calls the GitHub Models inference endpoint.
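A minimal `.gga` entry, with a sketch of the underlying call. The URL below is an assumption based on GitHub's publicly documented Models inference endpoint; GGA's internal request may differ in detail:

```shell
# .gga — a model name is required
PROVIDER="github:<model>"

# Under the hood (sketch; the endpoint URL is an assumption, not taken
# from this page):
#   TOKEN="$(gh auth token)"
#   curl -s "https://models.github.ai/inference/chat/completions" \
#     -H "Authorization: Bearer ${TOKEN}" \
#     -H "Content-Type: application/json" \
#     -d '{"model": "<model>", "messages": [{"role": "user", "content": "prompt"}]}'
```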