This page provides a comprehensive reference for the environment variables available in Skyvern self-hosted deployments. Variables are organized by category for easy navigation.
Create a `.env` file in your project root and configure these variables. When using Docker Compose, the `.env` file is loaded automatically.
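To illustrate the file format the variables below follow, here is a minimal sketch of how a `.env` file is typically parsed: one `KEY=VALUE` pair per line, `#` starts a comment, and blank lines are ignored. This is an illustrative parser, not Skyvern's actual loader.

```python
def parse_dotenv(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping comments and blanks (illustrative only)."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and empty lines
        key, _, value = line.partition("=")
        # Strip optional surrounding double quotes, as in LOG_LEVEL="INFO"
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'ENV=local\n# a comment\nPORT=8000\nLOG_LEVEL="INFO"\n'
print(parse_dotenv(sample))  # {'ENV': 'local', 'PORT': '8000', 'LOG_LEVEL': 'INFO'}
```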
```bash
# Environment mode: local, development, staging, production
ENV=local
# Port for the Skyvern API server
PORT=8000
# Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
LOG_LEVEL=INFO
# LiteLLM logging level (set to CRITICAL to reduce noise)
LITELLM_LOG=CRITICAL
# Analytics identifier (UUID generated if blank)
ANALYTICS_ID=anonymous
# Enable telemetry data collection
SKYVERN_TELEMETRY=true
```
```bash
# Enable Azure OpenAI provider
ENABLE_AZURE=false
# Azure deployment name (from Azure AI Foundry)
AZURE_DEPLOYMENT=""
# Azure API key (Key1 or Key2 from Azure Portal)
AZURE_API_KEY=""
# Azure endpoint URL
AZURE_API_BASE=""
# Azure API version
AZURE_API_VERSION=""
```
```bash
# Enable AWS Bedrock provider
ENABLE_BEDROCK=false
# AWS credentials (if not using an IAM role)
AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""
AWS_REGION="us-west-2"
```
```bash
# Enable Volcengine provider
ENABLE_VOLCENGINE=false
# Volcengine API key
VOLCENGINE_API_KEY=""
# Volcengine API base URL
VOLCENGINE_API_BASE="https://ark.cn-beijing.volces.com/api/v3"
```
```bash
# Enable Ollama provider
ENABLE_OLLAMA=false
# Ollama server URL
OLLAMA_SERVER_URL="http://host.docker.internal:11434"
# Model name to use
OLLAMA_MODEL="qwen2.5:7b-instruct"
# Enable vision support for vision models (qwen3-vl, llava)
OLLAMA_SUPPORTS_VISION=false
```
```bash
# Enable OpenRouter provider
ENABLE_OPENROUTER=false
# OpenRouter API key
OPENROUTER_API_KEY=""
# Model name from OpenRouter
OPENROUTER_MODEL="mistralai/mistral-small-3.1-24b-instruct"
# Optional: Custom API base
# OPENROUTER_API_BASE="https://api.openrouter.ai/v1"
```
```bash
# Enable custom OpenAI-compatible endpoint
ENABLE_OPENAI_COMPATIBLE=false
# Model name
OPENAI_COMPATIBLE_MODEL_NAME="yi-34b"
# API key for the endpoint
OPENAI_COMPATIBLE_API_KEY=""
# Base URL for the endpoint
OPENAI_COMPATIBLE_API_BASE="https://api.together.xyz/v1"
# Optional: API version
# OPENAI_COMPATIBLE_API_VERSION="2023-05-15"
# Optional: Max tokens
# OPENAI_COMPATIBLE_MAX_TOKENS=4096
# Optional: Temperature
# OPENAI_COMPATIBLE_TEMPERATURE=0.0
# Optional: Vision support
# OPENAI_COMPATIBLE_SUPPORTS_VISION=true
```
```bash
# Primary LLM to use (required)
LLM_KEY=""
# Secondary LLM for simple tasks (optional)
# If empty, uses LLM_KEY for all tasks
SECONDARY_LLM_KEY=""
# Override max tokens (for OpenRouter and Ollama)
# LLM_CONFIG_MAX_TOKENS=128000
```
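The fallback behavior described above (an empty `SECONDARY_LLM_KEY` falls back to `LLM_KEY`) can be sketched as a small pre-flight check. This is a hypothetical helper for validating your own configuration, not part of Skyvern's codebase.

```python
def resolve_llm_keys(env: dict[str, str]) -> tuple[str, str]:
    """Return (primary, secondary) LLM keys; secondary falls back to primary.

    Illustrative check only: LLM_KEY is required, SECONDARY_LLM_KEY is optional.
    """
    primary = env.get("LLM_KEY", "").strip()
    if not primary:
        raise ValueError("LLM_KEY is required but not set")
    secondary = env.get("SECONDARY_LLM_KEY", "").strip() or primary
    return primary, secondary

# With no secondary key configured, all tasks use the primary model
print(resolve_llm_keys({"LLM_KEY": "OPENAI_GPT4O"}))  # ('OPENAI_GPT4O', 'OPENAI_GPT4O')
```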
These variables are set in `skyvern-frontend/.env` for the UI service.
```bash
# WebSocket base URL
VITE_WSS_BASE_URL="ws://localhost:8000/api/v1"
# API base URL
VITE_API_BASE_URL="http://localhost:8000/api/v1"
# Artifact API base URL
VITE_ARTIFACT_API_BASE_URL="http://localhost:9090"
# Skyvern API key
VITE_SKYVERN_API_KEY=""
# Enable code block in UI
# VITE_ENABLE_CODE_BLOCK=true
```
For remote deployments:
```bash
# Using an IP address
VITE_WSS_BASE_URL="ws://your-server-ip:8000/api/v1"
VITE_API_BASE_URL="http://your-server-ip:8000/api/v1"
VITE_ARTIFACT_API_BASE_URL="http://your-server-ip:9090"

# Using a domain with HTTPS
VITE_WSS_BASE_URL="wss://api.yourdomain.com/api/v1"
VITE_API_BASE_URL="https://api.yourdomain.com/api/v1"
VITE_ARTIFACT_API_BASE_URL="https://artifact.yourdomain.com"
```
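For the direct IP-address setup above, the three URLs differ only in scheme and port, and the WebSocket scheme must stay consistent with the HTTP scheme (`ws` with `http`, `wss` with `https`). A hypothetical helper (not part of Skyvern) can derive them from a single host; the HTTPS/domain layout uses separate hostnames behind a reverse proxy, so it is not covered here.

```python
def frontend_urls(host: str, https: bool = False) -> dict[str, str]:
    """Derive the three frontend URLs for a direct-port deployment.

    Illustrative only; keeps ws/http scheme pairing consistent.
    """
    http_scheme = "https" if https else "http"
    ws_scheme = "wss" if https else "ws"
    return {
        "VITE_WSS_BASE_URL": f"{ws_scheme}://{host}:8000/api/v1",
        "VITE_API_BASE_URL": f"{http_scheme}://{host}:8000/api/v1",
        "VITE_ARTIFACT_API_BASE_URL": f"{http_scheme}://{host}:9090",
    }

print(frontend_urls("your-server-ip")["VITE_WSS_BASE_URL"])  # ws://your-server-ip:8000/api/v1
```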
```bash
# Check for syntax errors
cat .env
# Test with Docker Compose
docker compose config
# Start services and check logs
docker compose up -d
docker compose logs skyvern | head -50
```
Look for successful initialization messages:
```
INFO: Environment: local
INFO: Database connected: postgresql+psycopg://skyvern@localhost/skyvern
INFO: Registered LLM provider: openai
INFO: Using LLM model: OPENAI_GPT4O
INFO: Browser type: chromium-headful
INFO: Skyvern API server started on port 8000
```
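If you script your deployment, the check above can be automated by scanning captured logs for a few of these markers. This is a quick illustrative sanity check, not an official health probe; the marker strings are taken from the sample output above.

```python
# Markers drawn from the sample startup log above (illustrative check only)
EXPECTED_MARKERS = [
    "Database connected",
    "Registered LLM provider",
    "Skyvern API server started",
]

def startup_ok(log_text: str) -> bool:
    """Return True if all expected initialization markers appear in the log."""
    return all(marker in log_text for marker in EXPECTED_MARKERS)

sample_log = (
    "INFO: Database connected: postgresql+psycopg://skyvern@localhost/skyvern\n"
    "INFO: Registered LLM provider: openai\n"
    "INFO: Skyvern API server started on port 8000\n"
)
print(startup_ok(sample_log))  # True
```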