GGA works with whichever supported AI provider you already have installed. Set PROVIDER in your .gga config file to one of the options below.

Provider overview

| Provider | Config value | CLI command used | Installation |
| --- | --- | --- | --- |
| Claude | claude | echo "prompt" \| claude --print | claude.ai/code |
| Gemini | gemini | gemini -p "prompt" | github.com/google-gemini/gemini-cli |
| Codex | codex | codex exec "prompt" | npm i -g @openai/codex |
| OpenCode | opencode or opencode:&lt;model&gt; | opencode run "prompt" | opencode.ai |
| Ollama | ollama:&lt;model&gt; | ollama run &lt;model&gt; "prompt" | ollama.ai |
| LM Studio | lmstudio or lmstudio:&lt;model&gt; | HTTP API call to local server | lmstudio.ai |
| GitHub Models | github:&lt;model&gt; | HTTP API via gh auth token | github.com/marketplace/models |
Override the provider for a single run without changing your config file:
GGA_PROVIDER="gemini" gga run
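The override follows the usual env-var-over-config precedence. A minimal sketch of that lookup pattern, assuming GGA resolves the provider roughly this way (variable handling here is illustrative, not GGA's actual source):

```shell
# Illustrative sketch of env-var-over-config precedence.
PROVIDER="claude"                       # value loaded from .gga (stand-in)
provider="${GGA_PROVIDER:-$PROVIDER}"   # GGA_PROVIDER wins when set and non-empty
echo "resolved provider: $provider"
```

With GGA_PROVIDER unset this resolves to the config value; exporting GGA_PROVIDER for one command switches the provider for that run only.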

Provider setup

Claude

Claude is the recommended provider for strict mode and CI/CD pipelines. It reliably follows structured instructions and consistently outputs the STATUS: PASSED / STATUS: FAILED format.

Installation

Install the Claude Code CLI from claude.ai/code.

Verify
echo "Say hello" | claude --print
Config
.gga
PROVIDER="claude"
How GGA calls it
printf '%s' "$prompt" | claude --print
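In a CI pipeline, the STATUS line is what you would gate on. A hedged sketch of such a gate; the grep logic is illustrative, and the canned string stands in for the output of a real gga run:

```shell
# Illustrative CI gate on the STATUS: PASSED / STATUS: FAILED contract.
# A canned string stands in for: output=$(gga run)
output='Review complete.
STATUS: PASSED'
if printf '%s\n' "$output" | grep -q '^STATUS: PASSED$'; then
  echo "gate: passed"
else
  echo "gate: failed"
  exit 1
fi
```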
Gemini

Google’s Gemini CLI is built into the Antigravity IDE: if you use Antigravity, Gemini is already available in your integrated terminal.

Installation

Install the Gemini CLI from github.com/google-gemini/gemini-cli.

Verify
echo "Say hello" | gemini
You must be authenticated (gemini login) before GGA can use it.

Config
.gga
PROVIDER="gemini"
How GGA calls it
gemini -p "$prompt"
Codex

OpenAI’s Codex CLI, optimized for code tasks.

Installation
npm i -g @openai/codex
Verify
codex exec "Say hello"
Config
.gga
PROVIDER="codex"
How GGA calls it
codex exec "$prompt"
OpenCode

OpenCode is a provider-agnostic AI coding CLI. You can use it with its default model or pin a specific model.

Installation

Install from opencode.ai.

Config
PROVIDER="opencode"
How GGA calls it
# Default model
opencode run "$prompt"

# Specific model
opencode run --model "anthropic/claude-opus-4-5" "$prompt"
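If you want the pin to live in config rather than on the command line, the opencode:&lt;model&gt; form from the table above expresses it. An assumed example (the model string is illustrative):

```shell
# .gga — pins OpenCode to a specific model (model name is illustrative)
PROVIDER="opencode:anthropic/claude-opus-4-5"
```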
Ollama

Run models locally with Ollama. No API keys are required, and all data stays on your machine.

Installation

Install from ollama.ai or via Homebrew:
brew install ollama
Pull a model
ollama pull llama3.2
ollama pull codellama
ollama pull qwen2.5-coder
ollama pull deepseek-coder
Config
PROVIDER="ollama:llama3.2"
A model name is required: PROVIDER="ollama" without a model name will fail validation.

Custom host

If your Ollama instance is not running on localhost:11434, set OLLAMA_HOST:
.gga
OLLAMA_HOST="http://192.168.1.100:11434"
PROVIDER="ollama:llama3.2"
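The ollama:&lt;model&gt; requirement implies a split on the first colon. A sketch of that parsing, as a hypothetical illustration rather than GGA's actual validation code:

```shell
# Hypothetical sketch of parsing PROVIDER="ollama:<model>".
spec="ollama:llama3.2"
provider="${spec%%:*}"   # text before the first ':'
model="${spec#*:}"       # text after the first ':'
if [ "$model" = "$spec" ] || [ -z "$model" ]; then
  # No ':' at all (expansion left spec unchanged) or empty model part.
  echo "error: a model name is required, e.g. PROVIDER=\"ollama:llama3.2\"" >&2
  exit 1
fi
echo "provider=$provider model=$model"
```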
How GGA calls it

GGA prefers the Ollama REST API (via curl + python3) for clean output. It falls back to the ollama run CLI if those tools are not available.
# API path (preferred)
curl -s http://localhost:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"...","stream":false}'

# CLI fallback
ollama run llama3.2 "$prompt"
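Because the prompt can contain quotes and newlines, building the -d payload with python3's json module (in line with the curl + python3 path described above) keeps the body valid JSON. A sketch with illustrative values:

```shell
# Sketch: JSON-escape the prompt via python3 before handing it to curl,
# rather than interpolating "$prompt" into the -d string by hand.
prompt='Check this line: say "ok"
then stop.'
body=$(python3 -c 'import json, sys; print(json.dumps({"model": "llama3.2", "prompt": sys.argv[1], "stream": False}))' "$prompt")
echo "$body"
# The request itself would then be, roughly:
#   curl -s http://localhost:11434/api/generate -d "$body"
```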
Ollama runs a bare model with no file-reading tools. If your AGENTS.md references other files (e.g. - UI guidelines: ui/AGENTS.md), Ollama cannot read them. Consolidate all rules into a single file when using Ollama.
LM Studio

LM Studio runs models locally through an OpenAI-compatible HTTP API. No API keys are required.

Setup
1. Download LM Studio: install from lmstudio.ai.

2. Download a model: use the model browser inside LM Studio to download a model.

3. Start the local server: open the Local Server tab in LM Studio and start the server. It listens on http://localhost:1234/v1 by default.

4. Configure GGA: set PROVIDER in your .gga file.
Config
PROVIDER="lmstudio"
Custom host

If your LM Studio server is on a different host or port, set LMSTUDIO_HOST:
.gga
LMSTUDIO_HOST="http://localhost:8080/v1"
PROVIDER="lmstudio"
Verify the connection
curl http://localhost:1234/v1/models
How GGA calls it

GGA calls the OpenAI-compatible chat/completions endpoint directly:
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"local-model","messages":[{"role":"user","content":"..."}],"stream":false}'
curl and python3 must be available on your system. GGA uses python3 for safe JSON parsing; a sed/grep fallback is used if python3 is not available.
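The response side of that call is standard OpenAI chat-completion JSON. Extracting the text with python3, mirroring the parsing path described above, looks roughly like this; the JSON is a canned sample rather than a live server response:

```shell
# Sketch: pull the assistant's text out of an OpenAI-style response with
# python3. In practice the JSON comes from the curl call above.
response='{"choices":[{"message":{"role":"assistant","content":"STATUS: PASSED"}}]}'
content=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])')
echo "$content"
```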
GitHub Models

Access dozens of hosted models (GPT-4o, GPT-4.1, DeepSeek R1, Grok 3, Phi-4, LLaMA, and more) using your existing GitHub account. No separate API keys are needed.

Setup
1. Install the GitHub CLI:

   brew install gh
   # or see https://cli.github.com

2. Authenticate:

   gh auth login

3. Configure GGA: set PROVIDER with your chosen model.
Config
PROVIDER="github:gpt-4o"
A model name is required: PROVIDER="github" without a model will fail validation.

Browse all available models at github.com/marketplace/models.

How GGA calls it

GGA retrieves a token via gh auth token and calls the GitHub Models inference endpoint:
curl -sS https://models.inference.ai.azure.com/chat/completions \
  -H "Authorization: Bearer $(gh auth token)" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[...],"temperature":0.2}'
Both the gh CLI and curl must be installed and available in PATH.
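Since both tools are hard requirements, a quick preflight check (illustrative, not part of GGA) can catch a missing one before a run fails mid-pipeline:

```shell
# Illustrative preflight: verify required tools are on PATH before
# pointing PROVIDER at github:<model>.
missing=0
for tool in gh curl; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "missing required tool: $tool" >&2
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all required tools found"
fi
```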
