The Portkey AI Gateway is fully compatible with the OpenAI API surface. You can point any OpenAI SDK client at the gateway URL and it will route your requests through the gateway without any other code changes. This works for both the official OpenAI SDKs and the Portkey-native SDK (portkey-ai), which adds convenience helpers for headers and configs.

Installation

1. Start the gateway

Run the gateway locally or use Portkey Cloud.
npx @portkey-ai/gateway
The gateway listens at http://localhost:8787/v1.
2. Install the SDK

pip install portkey-ai
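Because the gateway speaks the OpenAI wire format, you can also sanity-check it with a plain HTTP request before wiring up any SDK. A sketch using only the standard library, assuming the local gateway from step 1 on its default port (the final call is commented out so the snippet runs without a live gateway):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request against the local gateway.
url = "http://localhost:8787/v1/chat/completions"
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-***",  # your provider API key
        "x-portkey-provider": "openai",    # upstream provider to route to
    },
    method="POST",
)

# With the gateway running, send it and read the OpenAI-shaped response:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```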

Basic setup

from portkey_ai import Portkey

client = Portkey(
    base_url="http://localhost:8787/v1",  # local gateway from step 1
    provider="openai",
    Authorization="sk-***"  # your provider API key
)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o-mini"
)

print(response.choices[0].message.content)
When using the local gateway, you pass your provider API key directly to the SDK. When using Portkey Cloud, set a Portkey API key and use virtual keys to manage your provider credentials centrally.

Attaching a config for routing and guardrails

Configs let you define routing rules, retries, fallbacks, and guardrails as a JSON object. Attach a config to the client to apply it to every request.
from portkey_ai import Portkey

client = Portkey(
    base_url="http://localhost:8787/v1",  # local gateway
    provider="openai",
    Authorization="sk-***"
)

# Define routing and guardrail rules
config = {
    "retry": {"attempts": 5},
    "output_guardrails": [{
        "default.contains": {"operator": "none", "words": ["restricted"]},
        "deny": True
    }]
}

# Attach the config to the client
client = client.with_options(config=config)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)

Using Portkey Cloud (hosted gateway)

When using the hosted gateway at https://api.portkey.ai/v1, authenticate with a Portkey API key and reference your provider credentials through virtual keys.
from portkey_ai import Portkey

client = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",     # Portkey account key
    virtual_key="YOUR_VIRTUAL_KEY"       # saved provider credential
)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o-mini"
)

Real-world use case: fallback between providers

Route to Anthropic Claude if OpenAI is unavailable, with automatic retries.
from portkey_ai import Portkey

client = Portkey(
    api_key="YOUR_PORTKEY_API_KEY"
)

fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {
            "virtual_key": "YOUR_OPENAI_VIRTUAL_KEY",
            "override_params": {"model": "gpt-4o-mini"}
        },
        {
            "virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY",
            "override_params": {"model": "claude-3-haiku-20240307"}
        }
    ]
}

client = client.with_options(config=fallback_config)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize the benefits of fallback routing."}],
    model="gpt-4o-mini"  # overridden per target
)

print(response.choices[0].message.content)
You can also save configs in the Portkey dashboard and reference them by ID with config="cfg_xxxx", keeping routing rules out of your application code.
