
Overview

This page provides a comprehensive reference for all environment variables available in Skyvern self-hosted deployments. Variables are organized by category for easy navigation.
Create a .env file in your project root and configure these variables. For Docker Compose, the .env file is automatically loaded.

General Configuration

Application Environment

# Environment mode: local, development, staging, production
ENV=local

# Port for the Skyvern API server
PORT=8000

# Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
LOG_LEVEL=INFO

# LiteLLM logging level (set to CRITICAL to reduce noise)
LITELLM_LOG=CRITICAL

# Analytics identifier (UUID generated if blank)
ANALYTICS_ID=anonymous

# Enable telemetry data collection
SKYVERN_TELEMETRY=true

Database Configuration

# PostgreSQL connection string
DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@localhost/skyvern

# For Windows, use asyncpg driver:
# DATABASE_STRING=postgresql+asyncpg://skyvern@localhost/skyvern

# For Docker Compose (internal network):
# DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@postgres:5432/skyvern
Connection String Format:
postgresql+psycopg://<user>:<password>@<host>:<port>/<database>
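To sanity-check a connection string before wiring it in, its components can be pulled apart with Python's standard library. A quick sketch, using the default development string from above:

```python
from urllib.parse import urlsplit

# Default development connection string from above.
dsn = "postgresql+psycopg://skyvern:skyvern@localhost:5432/skyvern"

parts = urlsplit(dsn)
print(parts.scheme)            # driver spec: postgresql+psycopg
print(parts.username)          # skyvern
print(parts.hostname)          # localhost
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # database name: skyvern
```

If any component prints as `None`, the string is malformed (a common slip is a missing `//` after the scheme).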

LLM Provider Configuration

OpenAI

# Enable OpenAI provider
ENABLE_OPENAI=false

# OpenAI API key
OPENAI_API_KEY=""

# Optional: Custom API base URL
# OPENAI_API_BASE="https://api.openai.com/v1"

# Optional: Organization ID
# OPENAI_ORGANIZATION="org-your-org-id"
Recommended Models:
  • OPENAI_GPT5
  • OPENAI_GPT5_2
  • OPENAI_GPT4_1
  • OPENAI_O3
  • OPENAI_O4_MINI
  • OPENAI_GPT4O
  • OPENAI_GPT4O_MINI

Anthropic

# Enable Anthropic provider
ENABLE_ANTHROPIC=false

# Anthropic API key
ANTHROPIC_API_KEY=""
Recommended Models:
  • ANTHROPIC_CLAUDE4.5_OPUS
  • ANTHROPIC_CLAUDE4.5_SONNET
  • ANTHROPIC_CLAUDE4.5_HAIKU
  • ANTHROPIC_CLAUDE4_OPUS
  • ANTHROPIC_CLAUDE4_SONNET
  • ANTHROPIC_CLAUDE3.7_SONNET

Azure OpenAI

# Enable Azure OpenAI provider
ENABLE_AZURE=false

# Azure deployment name (from Azure AI Foundry)
AZURE_DEPLOYMENT=""

# Azure API key (Key1 or Key2 from Azure Portal)
AZURE_API_KEY=""

# Azure endpoint URL
AZURE_API_BASE=""

# Azure API version
AZURE_API_VERSION=""
Azure GPT-4o Mini:
ENABLE_AZURE_GPT4O_MINI=false
AZURE_GPT4O_MINI_DEPLOYMENT=""
AZURE_GPT4O_MINI_API_KEY=""
AZURE_GPT4O_MINI_API_BASE=""
AZURE_GPT4O_MINI_API_VERSION=""
Azure GPT-5:
ENABLE_AZURE_GPT5=false
AZURE_GPT5_DEPLOYMENT="gpt-5"
AZURE_GPT5_API_KEY=""
AZURE_GPT5_API_BASE=""
AZURE_GPT5_API_VERSION="2025-01-01-preview"
Azure GPT-5 Mini:
ENABLE_AZURE_GPT5_MINI=false
AZURE_GPT5_MINI_DEPLOYMENT="gpt-5-mini"
AZURE_GPT5_MINI_API_KEY=""
AZURE_GPT5_MINI_API_BASE=""
AZURE_GPT5_MINI_API_VERSION="2025-01-01-preview"
Azure GPT-5 Nano:
ENABLE_AZURE_GPT5_NANO=false
AZURE_GPT5_NANO_DEPLOYMENT="gpt-5-nano"
AZURE_GPT5_NANO_API_KEY=""
AZURE_GPT5_NANO_API_BASE=""
AZURE_GPT5_NANO_API_VERSION="2025-01-01-preview"
Recommended Model: AZURE_OPENAI

AWS Bedrock

# Enable AWS Bedrock provider
ENABLE_BEDROCK=false

# AWS credentials (if not using IAM role)
AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""
AWS_REGION="us-west-2"
Recommended Models:
  • BEDROCK_ANTHROPIC_CLAUDE4.5_OPUS_INFERENCE_PROFILE
  • BEDROCK_ANTHROPIC_CLAUDE4.5_SONNET_INFERENCE_PROFILE
  • BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET (v2)
  • BEDROCK_ANTHROPIC_CLAUDE3.5_SONNET_V1

Gemini

# Enable Gemini provider
ENABLE_GEMINI=false

# Gemini API key
GEMINI_API_KEY=""
Recommended Models:
  • GEMINI_2.5_PRO
  • GEMINI_2.5_FLASH
  • GEMINI_2.5_PRO_PREVIEW
  • GEMINI_2.5_FLASH_PREVIEW
  • GEMINI_3.0_FLASH

Novita AI

# Enable Novita AI provider
ENABLE_NOVITA=false

# Novita AI API key
NOVITA_API_KEY=""

Volcengine (ByteDance Doubao)

# Enable Volcengine provider
ENABLE_VOLCENGINE=false

# Volcengine API key
VOLCENGINE_API_KEY=""

# Volcengine API base URL
VOLCENGINE_API_BASE="https://ark.cn-beijing.volces.com/api/v3"

Ollama (Local Models)

# Enable Ollama provider
ENABLE_OLLAMA=false

# Ollama server URL
OLLAMA_SERVER_URL="http://host.docker.internal:11434"

# Model name to use
OLLAMA_MODEL="qwen2.5:7b-instruct"

# Enable vision support for vision models (qwen3-vl, llava)
OLLAMA_SUPPORTS_VISION=false
Recommended Model: OLLAMA

OpenRouter

# Enable OpenRouter provider
ENABLE_OPENROUTER=false

# OpenRouter API key
OPENROUTER_API_KEY=""

# Model name from OpenRouter
OPENROUTER_MODEL="mistralai/mistral-small-3.1-24b-instruct"

# Optional: Custom API base
# OPENROUTER_API_BASE="https://openrouter.ai/api/v1"
Recommended Model: OPENROUTER

Groq

# Enable Groq provider
ENABLE_GROQ=false

# Groq API key
GROQ_API_KEY=""

# Model name
GROQ_MODEL="llama-3.1-8b-instant"
Recommended Model: GROQ

OpenAI-Compatible (Custom Endpoints)

# Enable custom OpenAI-compatible endpoint
ENABLE_OPENAI_COMPATIBLE=false

# Model name
OPENAI_COMPATIBLE_MODEL_NAME="yi-34b"

# API key for the endpoint
OPENAI_COMPATIBLE_API_KEY=""

# Base URL for the endpoint
OPENAI_COMPATIBLE_API_BASE="https://api.together.xyz/v1"

# Optional: API version
# OPENAI_COMPATIBLE_API_VERSION="2023-05-15"

# Optional: Max tokens
# OPENAI_COMPATIBLE_MAX_TOKENS=4096

# Optional: Temperature
# OPENAI_COMPATIBLE_TEMPERATURE=0.0

# Optional: Vision support
# OPENAI_COMPATIBLE_SUPPORTS_VISION=true
Recommended Model: OPENAI_COMPATIBLE

General LLM Settings

# Primary LLM to use (required)
LLM_KEY=""

# Secondary LLM for simple tasks (optional)
# If empty, uses LLM_KEY for all tasks
SECONDARY_LLM_KEY=""

# Override max tokens (for OpenRouter and Ollama)
# LLM_CONFIG_MAX_TOKENS=128000

Browser Configuration

# Browser type: chromium-headless, chromium-headful, cdp-connect
BROWSER_TYPE="chromium-headful"

# For cdp-connect mode:
# BROWSER_REMOTE_DEBUGGING_URL="http://host.docker.internal:9222/"

# For local Chrome connection:
# CHROME_EXECUTABLE_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

# Maximum retries for scraping operations
MAX_SCRAPING_RETRIES=0

# Timeout for browser actions (milliseconds)
BROWSER_ACTION_TIMEOUT_MS=5000

# Enable code block execution
ENABLE_CODE_BLOCK=true
Browser Types:
  • chromium-headless: No visible browser (production)
  • chromium-headful: Visible browser (development)
  • cdp-connect: Connect to existing Chrome instance
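For cdp-connect, Skyvern expects a Chrome instance that is already listening for CDP connections. A sketch of launching one (the flags are standard Chrome switches; the binary name and profile path are placeholders that vary by platform):

```sh
# Launch Chrome with remote debugging on port 9222, matching the
# BROWSER_REMOTE_DEBUGGING_URL example above. The profile directory
# is a placeholder; Chrome requires a non-default one for this flag.
google-chrome \
  --remote-debugging-port=9222 \
  --user-data-dir=/tmp/skyvern-chrome-profile
```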

Workflow & Task Configuration

# Maximum steps per task run
MAX_STEPS_PER_RUN=50

# Enable log artifacts
ENABLE_LOG_ARTIFACTS=false

Storage Configuration

Local Storage (Default)

# Artifact storage path
ARTIFACT_STORAGE_PATH=/data/artifacts

# Video recording path
VIDEO_PATH=./videos

# HAR file path
HAR_PATH=/data/har

# Log file path
LOG_PATH=/data/log

S3 Storage

# Storage type: local, s3, azure_blob
ARTIFACT_STORAGE_TYPE=s3

# S3 bucket name
S3_BUCKET_NAME="skyvern-artifacts"

# S3 region
S3_REGION="us-east-1"

# AWS credentials (if not using IAM role)
AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""

# Optional: Custom S3 endpoint (for MinIO, etc.)
# S3_ENDPOINT_URL="https://s3.custom-domain.com"

# Optional: Server-side encryption
# S3_SERVER_SIDE_ENCRYPTION="AES256"

# Optional: Use SSL
# S3_USE_SSL=true

Azure Blob Storage

# Storage type
ARTIFACT_STORAGE_TYPE=azure_blob

# Azure storage account name
AZURE_STORAGE_ACCOUNT_NAME=""

# Container name
AZURE_STORAGE_CONTAINER_NAME="artifacts"

# Storage account key
AZURE_STORAGE_ACCOUNT_KEY=""

# Alternative: Connection string
# AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;..."

# For Managed Identity:
# AZURE_CLIENT_ID="your-client-id"

Credential Management

Bitwarden Integration

# Bitwarden organization ID
SKYVERN_AUTH_BITWARDEN_ORGANIZATION_ID="your-org-id-here"

# Bitwarden master password
SKYVERN_AUTH_BITWARDEN_MASTER_PASSWORD="your-master-password-here"

# Bitwarden client ID
SKYVERN_AUTH_BITWARDEN_CLIENT_ID="user.your-client-id-here"

# Bitwarden client secret
SKYVERN_AUTH_BITWARDEN_CLIENT_SECRET="your-client-secret-here"

# Optional: Self-hosted Bitwarden server
# BITWARDEN_SERVER="http://localhost"
# BITWARDEN_SERVER_PORT=8002

# Optional: Bitwarden operation settings
# BITWARDEN_MAX_RETRIES=3
# BITWARDEN_TIMEOUT_SECONDS=60

1Password Integration

# 1Password service account token
OP_SERVICE_ACCOUNT_TOKEN=""

Redis Configuration

# Shared Redis URL for pub/sub, cache, etc.
# REDIS_URL="redis://localhost:6379/0"

# Notification registry type: local (default) or redis (multi-pod)
# NOTIFICATION_REGISTRY_TYPE=local

# Optional: Override Redis URL for notifications
# NOTIFICATION_REDIS_URL="redis://localhost:6379/1"

Skyvern API Configuration

# Skyvern base URL
SKYVERN_BASE_URL="http://localhost:8000"

# Skyvern API key (get from UI settings after deployment)
SKYVERN_API_KEY=""

Frontend Configuration

These variables are set in skyvern-frontend/.env for the UI service.
# WebSocket base URL
VITE_WSS_BASE_URL="ws://localhost:8000/api/v1"

# API base URL
VITE_API_BASE_URL="http://localhost:8000/api/v1"

# Artifact API base URL
VITE_ARTIFACT_API_BASE_URL="http://localhost:9090"

# Skyvern API key
VITE_SKYVERN_API_KEY=""

# Enable code block in UI
# VITE_ENABLE_CODE_BLOCK=true
For Remote Deployment:
# Using IP address
VITE_WSS_BASE_URL="ws://your-server-ip:8000/api/v1"
VITE_API_BASE_URL="http://your-server-ip:8000/api/v1"
VITE_ARTIFACT_API_BASE_URL="http://your-server-ip:9090"

# Using domain with HTTPS
VITE_WSS_BASE_URL="wss://api.yourdomain.com/api/v1"
VITE_API_BASE_URL="https://api.yourdomain.com/api/v1"
VITE_ARTIFACT_API_BASE_URL="https://artifact.yourdomain.com"
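When serving the API over HTTPS behind a reverse proxy, the `wss://` endpoint needs connection-upgrade headers or WebSocket traffic will be dropped. A minimal nginx sketch (server name, certificate paths, and the upstream port are placeholders; adapt to your deployment):

```nginx
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/ssl/certs/yourdomain.pem;
    ssl_certificate_key /etc/ssl/private/yourdomain.key;

    location /api/v1/ {
        proxy_pass http://127.0.0.1:8000;
        # Upgrade headers required for the wss:// endpoints
        # used by VITE_WSS_BASE_URL.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```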

Complete Example Configuration

Minimal Configuration (Development)

# .env
ENV=local

# Database
DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@localhost/skyvern

# LLM
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-your-key-here
LLM_KEY=OPENAI_GPT4O

# Browser
BROWSER_TYPE=chromium-headful

# Logging
LOG_LEVEL=INFO

Production Configuration (Docker Compose)

# .env
ENV=production

# Database (Docker Compose internal network)
DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@postgres:5432/skyvern

# LLM (Primary and Secondary)
ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-your-key
LLM_KEY=ANTHROPIC_CLAUDE4.5_SONNET

ENABLE_OPENAI=true
OPENAI_API_KEY=sk-your-key
SECONDARY_LLM_KEY=OPENAI_GPT4O_MINI

# Browser
BROWSER_TYPE=chromium-headless
MAX_STEPS_PER_RUN=50

# Storage (S3)
ARTIFACT_STORAGE_TYPE=s3
S3_BUCKET_NAME=skyvern-artifacts
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key

# Logging
LOG_LEVEL=INFO
ENABLE_LOG_ARTIFACTS=true

# API
PORT=8000
SKYVERN_API_KEY=your-api-key

Enterprise Configuration (Kubernetes + Azure)

# .env
ENV=production

# Database (External Azure PostgreSQL)
DATABASE_STRING=postgresql+psycopg://skyvernadmin:your-password@your-server.postgres.database.azure.com:5432/skyvern

# LLM (Azure OpenAI)
ENABLE_AZURE=true
LLM_KEY=AZURE_OPENAI
AZURE_DEPLOYMENT=gpt4o-deployment
AZURE_API_KEY=your-azure-key
AZURE_API_BASE=https://yourorg.openai.azure.com/
AZURE_API_VERSION=2024-08-01-preview

# Browser
BROWSER_TYPE=chromium-headless
MAX_STEPS_PER_RUN=100

# Storage (Azure Blob with Managed Identity)
ARTIFACT_STORAGE_TYPE=azure_blob
AZURE_STORAGE_ACCOUNT_NAME=skyvernartifacts
AZURE_STORAGE_CONTAINER_NAME=artifacts
AZURE_CLIENT_ID=your-managed-identity-id

# Redis (Azure Redis Cache)
REDIS_URL=rediss://:your-access-key@your-cache.redis.cache.windows.net:6380/0
NOTIFICATION_REGISTRY_TYPE=redis

# Logging
LOG_LEVEL=WARNING
ENABLE_LOG_ARTIFACTS=true

# API
PORT=8000
SKYVERN_BASE_URL=https://api.skyvern.yourorg.com
SKYVERN_API_KEY=your-production-api-key

# Telemetry
SKYVERN_TELEMETRY=false

Validation

After configuring your .env file, validate it:
# Review the file contents for typos and stray characters
cat .env

# Test with Docker Compose
docker compose config

# Start services and check logs
docker compose up -d
docker compose logs skyvern | head -50
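Beyond `docker compose config`, a quick standard-library script can catch missing or empty keys before startup. A sketch (the required-key list here is an assumption based on the minimal configuration above, not an exhaustive check):

```python
# check_env.py -- naive .env sanity check.
# Assumption: the keys below mirror the minimal configuration in this guide.
REQUIRED_KEYS = ["DATABASE_STRING", "LLM_KEY", "BROWSER_TYPE"]

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_keys(env: dict) -> list:
    """Return required keys that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    with open(".env") as f:
        missing = missing_keys(parse_env(f.read()))
    if missing:
        raise SystemExit(f"Missing required keys: {', '.join(missing)}")
    print("Required keys present.")
```

Run it from the project root with `python check_env.py`; note it does not handle quoting edge cases or multi-line values.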
Look for successful initialization messages:
INFO: Environment: local
INFO: Database connected: postgresql+psycopg://skyvern@localhost/skyvern
INFO: Registered LLM provider: openai
INFO: Using LLM model: OPENAI_GPT4O
INFO: Browser type: chromium-headful
INFO: Skyvern API server started on port 8000

Security Best Practices

Never commit your .env file to version control! Add it to .gitignore:
echo ".env" >> .gitignore
  1. Use strong, unique passwords for database and API keys
  2. Rotate API keys regularly (every 90 days)
  3. Use IAM roles instead of access keys when possible (AWS, Azure)
  4. Enable encryption for storage (S3 SSE, Azure encryption)
  5. Restrict network access using security groups, network policies
  6. Use secrets management (AWS Secrets Manager, Azure Key Vault) for production
  7. Enable audit logging for compliance

Next Steps

Docker Setup

Deploy with Docker Compose

Kubernetes Deployment

Production Kubernetes setup

LLM Configuration

Configure LLM providers

Storage Configuration

Set up S3 or Azure storage
