
Overview

Skyvern requires a Large Language Model (LLM) provider to power its intelligent browser automation. This guide covers configuration for all supported LLM providers.

Supported LLM Providers

| Provider | Supported Models | Best For |
| --- | --- | --- |
| OpenAI | GPT-5, GPT-5.2, GPT-4.1, o3, o4-mini | Best performance, latest models |
| Anthropic | Claude 4 (Sonnet, Opus), Claude 4.5 (Haiku, Sonnet, Opus) | Strong reasoning, vision support |
| Azure OpenAI | Any GPT models (GPT-4o recommended) | Enterprise deployments |
| AWS Bedrock | Claude 3.5, 3.7, 4, 4.5 (Sonnet, Opus) | AWS-integrated environments |
| Gemini | Gemini 2.5 Pro/Flash, 3 Pro/Flash | Google ecosystem |
| Ollama | Any locally hosted model | Local/offline deployments |
| OpenRouter | Any available model | Multi-model flexibility |
| Groq | llama-3.1-8b-instant | Ultra-fast inference |
| OpenAI-Compatible | Custom endpoints via LiteLLM | Self-hosted models |

Quick Setup with CLI

The fastest way to configure LLMs is using the Skyvern CLI:
skyvern init llm
This interactive wizard will:
  1. Prompt you to select an LLM provider
  2. Request necessary API keys
  3. Generate the .env file with correct configuration
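For example, selecting OpenAI in the wizard produces a .env along these lines (the key is a placeholder; the same variables are detailed per provider below):

```
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-your-api-key-here
LLM_KEY=OPENAI_GPT4O
```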

Provider Configuration

OpenAI

Environment Variables:
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-your-api-key-here
LLM_KEY=OPENAI_GPT4O

# Optional: Custom endpoint
# OPENAI_API_BASE=https://api.openai.com/v1

# Optional: Organization ID
# OPENAI_ORGANIZATION=org-your-org-id
Recommended Models:
  • OPENAI_GPT5 - Latest GPT-5 model
  • OPENAI_GPT5_2 - GPT-5.2 variant
  • OPENAI_GPT4_1 - GPT-4.1
  • OPENAI_O3 - o3 model
  • OPENAI_O4_MINI - o4-mini (cost-effective)
  • OPENAI_GPT4O - GPT-4o (multimodal)
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_OPENAI=true
      - LLM_KEY=OPENAI_GPT4O
      - OPENAI_API_KEY=${OPENAI_API_KEY}

Anthropic

Environment Variables:
ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET
Recommended Models:
  • ANTHROPIC_CLAUDE4.5_OPUS - Highest capability
  • ANTHROPIC_CLAUDE4.5_SONNET - Balanced performance
  • ANTHROPIC_CLAUDE4_OPUS - Claude 4 Opus
  • ANTHROPIC_CLAUDE4_SONNET - Claude 4 Sonnet
  • ANTHROPIC_CLAUDE3.7_SONNET - Claude 3.7
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_ANTHROPIC=true
      - LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}

Azure OpenAI

Setup Steps:
  1. Login to Azure Portal
  2. Create an Azure Resource Group
  3. Create an OpenAI resource in the Resource Group
  4. Open “Azure AI Foundry” portal
  5. Navigate to “Shared Resources” > “Deployments”
  6. Deploy a base model (e.g., GPT-4o)
  7. Note the deployment name, API key, and endpoint
Environment Variables:
ENABLE_AZURE=true
LLM_KEY=AZURE_OPENAI
AZURE_DEPLOYMENT=your-deployment-name
AZURE_API_KEY=your-azure-api-key
AZURE_API_BASE=https://yourname.openai.azure.com/
AZURE_API_VERSION=2024-08-01-preview
Multi-Model Azure Configuration: You can configure multiple Azure deployments:
# Primary GPT-4o deployment
ENABLE_AZURE=true
AZURE_DEPLOYMENT=gpt4o-deployment
AZURE_API_KEY=key1
AZURE_API_BASE=https://yourname.openai.azure.com/
AZURE_API_VERSION=2024-08-01-preview

# GPT-4o Mini
ENABLE_AZURE_GPT4O_MINI=true
AZURE_GPT4O_MINI_DEPLOYMENT=gpt4o-mini-deployment
AZURE_GPT4O_MINI_API_KEY=key2
AZURE_GPT4O_MINI_API_BASE=https://yourname.openai.azure.com/
AZURE_GPT4O_MINI_API_VERSION=2024-08-01-preview

# GPT-5
ENABLE_AZURE_GPT5=true
AZURE_GPT5_DEPLOYMENT=gpt-5
AZURE_GPT5_API_KEY=key3
AZURE_GPT5_API_BASE=https://yourname.openai.azure.com/
AZURE_GPT5_API_VERSION=2025-01-01-preview
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_AZURE=true
      - LLM_KEY=AZURE_OPENAI
      - AZURE_DEPLOYMENT=your-deployment
      - AZURE_API_KEY=${AZURE_API_KEY}
      - AZURE_API_BASE=https://yourname.openai.azure.com/
      - AZURE_API_VERSION=2024-08-01-preview

AWS Bedrock

Setup Steps:
  1. Create AWS IAM User
  2. Assign “AmazonBedrockFullAccess” policy
  3. Generate Access Key and Secret Key
  4. In Amazon Bedrock console, go to “Model Access”
  5. Enable “Claude 3.5 Sonnet v2” (or desired model)
Environment Variables:
ENABLE_BEDROCK=true
LLM_KEY=BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
Recommended Models:
  • BEDROCK_ANTHROPIC_CLAUDE4.5_OPUS_INFERENCE_PROFILE
  • BEDROCK_ANTHROPIC_CLAUDE4.5_SONNET_INFERENCE_PROFILE
  • BEDROCK_ANTHROPIC_CLAUDE4_OPUS_INFERENCE_PROFILE
  • BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET - v2 model
  • BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET_V1 - v1 model
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_BEDROCK=true
      - LLM_KEY=BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET
      - AWS_REGION=us-west-2
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}

Gemini

Environment Variables:
ENABLE_GEMINI=true
GEMINI_API_KEY=your-gemini-api-key
LLM_KEY=GEMINI_2.5_PRO
Recommended Models:
  • GEMINI_2.5_PRO - Latest Pro model
  • GEMINI_2.5_FLASH - Fast inference
  • GEMINI_2.5_PRO_PREVIEW - Preview version
  • GEMINI_2.5_FLASH_PREVIEW - Preview flash
  • GEMINI_3.0_FLASH - Gemini 3.0
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_GEMINI=true
      - LLM_KEY=GEMINI_2.5_PRO
      - GEMINI_API_KEY=${GEMINI_API_KEY}

Ollama (Local Models)

Prerequisites:
  1. Install Ollama: https://ollama.ai
  2. Pull a model: ollama pull qwen2.5:7b-instruct
  3. Start Ollama server (usually on port 11434)
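Once the server is running, you can confirm Skyvern will be able to reach it. A minimal sketch using Ollama's public `/api/tags` route (the route behind `ollama list`; the helper name is ours, not part of Skyvern). From inside Docker, swap `localhost` for `host.docker.internal`:

```python
# Hypothetical reachability check for a local Ollama server.
import json
import urllib.error
import urllib.request

def list_ollama_models(base_url: str) -> list[str]:
    """Return the names of locally pulled models, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

print(list_ollama_models("http://localhost:11434"))
```

If this prints an empty list, either no models have been pulled or the server is not reachable at that URL.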
Environment Variables:
ENABLE_OLLAMA=true
LLM_KEY=OLLAMA
OLLAMA_MODEL=qwen2.5:7b-instruct
OLLAMA_SERVER_URL=http://host.docker.internal:11434
OLLAMA_SUPPORTS_VISION=false
Vision Models: For vision-capable models (qwen3-vl, llava):
OLLAMA_MODEL=qwen3-vl
OLLAMA_SUPPORTS_VISION=true
Docker Compose Example:
services:
  skyvern:
    environment:
      - LLM_KEY=OLLAMA
      - ENABLE_OLLAMA=true
      - OLLAMA_MODEL=qwen2.5:7b-instruct
      - OLLAMA_SERVER_URL=http://host.docker.internal:11434
      - OLLAMA_SUPPORTS_VISION=false
    extra_hosts:
      - "host.docker.internal:host-gateway"  # Required for Docker

OpenRouter

Environment Variables:
ENABLE_OPENROUTER=true
LLM_KEY=OPENROUTER
OPENROUTER_API_KEY=your-openrouter-key
OPENROUTER_MODEL=mistralai/mistral-small-3.1-24b-instruct
Available Models: See https://openrouter.ai/models
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_OPENROUTER=true
      - LLM_KEY=OPENROUTER
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - OPENROUTER_MODEL=mistralai/mistral-small-3.1-24b-instruct

Groq

Environment Variables:
ENABLE_GROQ=true
LLM_KEY=GROQ
GROQ_API_KEY=your-groq-api-key
GROQ_MODEL=llama-3.1-8b-instant
Docker Compose Example:
services:
  skyvern:
    environment:
      - ENABLE_GROQ=true
      - LLM_KEY=GROQ
      - GROQ_API_KEY=${GROQ_API_KEY}
      - GROQ_MODEL=llama-3.1-8b-instant

Novita AI

Environment Variables:
ENABLE_NOVITA=true
NOVITA_API_KEY=your-novita-api-key

Volcengine (ByteDance Doubao)

Environment Variables:
ENABLE_VOLCENGINE=true
VOLCENGINE_API_KEY=your-volcengine-key
VOLCENGINE_API_BASE=https://ark.cn-beijing.volces.com/api/v3

OpenAI-Compatible (Custom Endpoints)

For self-hosted models or custom endpoints that follow OpenAI’s API format:
Environment Variables:
ENABLE_OPENAI_COMPATIBLE=true
LLM_KEY=OPENAI_COMPATIBLE
OPENAI_COMPATIBLE_MODEL_NAME=yi-34b
OPENAI_COMPATIBLE_API_KEY=your-api-key
OPENAI_COMPATIBLE_API_BASE=https://api.together.xyz/v1

# Optional
OPENAI_COMPATIBLE_API_VERSION=2023-05-15
OPENAI_COMPATIBLE_MAX_TOKENS=4096
OPENAI_COMPATIBLE_TEMPERATURE=0.0
OPENAI_COMPATIBLE_SUPPORTS_VISION=true
Example Providers:
  • Together AI: https://api.together.xyz/v1
  • Local vLLM: http://localhost:8000/v1
  • Local Ollama (via LiteLLM): http://localhost:11434/v1
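Before wiring a custom endpoint into Skyvern, it can help to confirm it actually speaks the OpenAI API. A minimal sketch, assuming only that the server implements the standard `/models` route (the helper name is ours):

```python
# Hypothetical smoke test for an OpenAI-compatible endpoint: GET {base}/models
# is part of the standard OpenAI API surface most compatible servers implement.
import json
import urllib.error
import urllib.request

def list_models(api_base: str, api_key: str = "") -> list[str]:
    """Return model ids served by the endpoint, or [] if it is unreachable."""
    req = urllib.request.Request(f"{api_base.rstrip('/')}/models")
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    try:
        with urllib.request.urlopen(req, timeout=3) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return []

# e.g. against a local vLLM server:
print(list_models("http://localhost:8000/v1"))
```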

Advanced Configuration

Primary and Secondary LLM

Skyvern supports using a cheaper/faster secondary LLM for smaller tasks:
# Primary LLM for complex reasoning
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET

# Secondary LLM for simple tasks (selection, SVG conversion)
SECONDARY_LLM_KEY=OPENAI_GPT4O_MINI
If SECONDARY_LLM_KEY is empty, Skyvern uses the primary LLM for all tasks.
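The fallback rule can be pictured as a small selector (a sketch of the behavior described above, not Skyvern's internal code; names are illustrative):

```python
# Sketch: route simple tasks to the secondary LLM, everything else (and all
# tasks when SECONDARY_LLM_KEY is empty) to the primary LLM.
import os

def pick_llm(task_is_simple: bool) -> str:
    primary = os.environ.get("LLM_KEY", "")
    secondary = os.environ.get("SECONDARY_LLM_KEY", "")
    if task_is_simple and secondary:
        return secondary
    return primary

os.environ["LLM_KEY"] = "ANTHROPIC_CLAUDE4.5_SONNET"
os.environ["SECONDARY_LLM_KEY"] = ""
print(pick_llm(task_is_simple=True))  # ANTHROPIC_CLAUDE4.5_SONNET (falls back)
```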

Max Tokens Override

For OpenRouter and Ollama, you can override max tokens:
LLM_CONFIG_MAX_TOKENS=128000

Multiple Providers

You can enable multiple providers and switch between them:
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-...

ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-...

# Choose which to use
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET
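To sanity-check a multi-provider .env like the one above, a few lines can report what is enabled and which key is selected (a hypothetical helper, not part of Skyvern):

```python
# Parse a .env fragment: collect ENABLE_* flags set to true, plus LLM_KEY.
ENV = """\
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-...
ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-...
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET
"""

def summarize(env_text: str) -> tuple[list[str], str]:
    enabled, llm_key = [], ""
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        if key.startswith("ENABLE_") and value.lower() == "true":
            enabled.append(key.removeprefix("ENABLE_"))
        elif key == "LLM_KEY":
            llm_key = value
    return enabled, llm_key

print(summarize(ENV))  # (['OPENAI', 'ANTHROPIC'], 'ANTHROPIC_CLAUDE4.5_SONNET')
```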

Verification

After configuration, verify your LLM setup:
  1. Start Skyvern:
    docker compose up -d
    
  2. Check logs for LLM initialization:
    docker compose logs skyvern | grep -i "llm"
    
  3. Look for:
    INFO: Registered LLM provider: openai
    INFO: Using LLM model: OPENAI_GPT4O
    
  4. Test with a simple task in the UI

Troubleshooting

“LLM provider not enabled”

Ensure ENABLE_<PROVIDER>=true is set:
ENABLE_OPENAI=true

“Invalid API key”

Verify your API key:
  • No extra spaces or quotes
  • Key has correct permissions
  • Key is not expired
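The first two checks above can be automated with a few lines (a hypothetical helper; adapt it to however you load your .env):

```python
# Flag the common .env mistakes: stray whitespace and surrounding quotes
# around an API key value.
def key_problems(raw: str) -> list[str]:
    problems = []
    if raw != raw.strip():
        problems.append("leading/trailing whitespace")
    if raw.strip() != raw.strip().strip("'\""):
        problems.append("surrounding quotes")
    return problems

print(key_problems("sk-abc123"))      # []
print(key_problems('"sk-abc123" '))   # ['leading/trailing whitespace', 'surrounding quotes']
```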

“Model not found”

Check:
  • Model name matches supported LLM keys exactly
  • Provider is enabled
  • For Azure: deployment name matches AZURE_DEPLOYMENT

High Costs

Optimize costs by:
  • Using SECONDARY_LLM_KEY for simple tasks
  • Choosing cost-effective models (GPT-4o-mini, Claude 3.5 Haiku)
  • Using Ollama for local inference (free)

Slow Performance

Improve speed with:
  • Faster models (GPT-4o-mini, Gemini Flash)
  • Groq for ultra-fast inference
  • Local Ollama with GPU acceleration

Recommended Configurations

Development

# Fast, cost-effective
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-...
LLM_KEY=OPENAI_GPT4O_MINI

Production

# Best performance
ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-...
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET
SECONDARY_LLM_KEY=OPENAI_GPT4O_MINI

Enterprise (Azure)

# Managed Azure OpenAI
ENABLE_AZURE=true
LLM_KEY=AZURE_OPENAI
AZURE_DEPLOYMENT=gpt4o-deployment
AZURE_API_KEY=...
AZURE_API_BASE=https://yourorg.openai.azure.com/
AZURE_API_VERSION=2024-08-01-preview

Local/Offline

# No internet required
ENABLE_OLLAMA=true
LLM_KEY=OLLAMA
OLLAMA_MODEL=qwen2.5:7b-instruct
OLLAMA_SERVER_URL=http://localhost:11434

Next Steps

  • Environment Variables - Complete configuration reference
  • Docker Setup - Deploy with Docker Compose
  • Storage Configuration - Configure S3/Azure storage
