Run the gateway
Start the gateway locally with a single command (Node.js is required). The gateway exposes two endpoints:

- API: http://localhost:8787/v1
- Console: http://localhost:8787/public/
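A minimal sketch of starting the gateway from a terminal, assuming Node.js and npx are available:

```shell
# Start a local gateway instance; it listens on port 8787 by default
npx @portkey-ai/gateway
```

Leave this process running; the requests below go through it.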
Make your first request
Send a chat completion through the gateway. The gateway accepts your provider API key directly via the Authorization header, so no Portkey account is required. Swap `provider="openai"` for any of the 250+ supported providers; the request shape stays the same.

Add routing and guardrails
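For reference, the plain chat-completion request that configs attach to can be sketched with Python's standard library. The `x-portkey-provider` header tells the gateway which provider to route to; the model name here is just an illustrative choice:

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

# Your provider's API key goes straight into Authorization -- not a Portkey key.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_OPENAI_API_KEY",
    "x-portkey-provider": "openai",  # swap for any supported provider
}

# Standard OpenAI-style chat completion body; the model is an example value.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from the gateway!"}],
}

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode(),
    headers=headers,
    method="POST",
)
try:
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError as err:  # gateway not running, or the provider rejected the call
    print("request failed:", err)
```

The same request works through any OpenAI-compatible SDK by pointing its base URL at http://localhost:8787/v1 and setting the same headers.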
Config objects let you attach routing rules, reliability settings, and guardrails to any client. Pass a config to `with_options` (Python) or `withOptions` (JavaScript) to apply it.
- `retry`: automatically retries failed or blocked requests with exponential backoff, up to the specified number of attempts.
- `output_guardrails`: evaluates the LLM response before returning it. For example, a guardrail can deny any response containing the word "Apple", causing the request to retry.
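One possible shape for such a config, as a sketch; the guardrail check name `default.contains` and its fields are assumptions here, so consult the Guardrails docs for the exact schema:

```python
# Sketch of a config combining a retry policy with an output guardrail.
# The "default.contains" check name and its parameters are assumptions.
config = {
    "retry": {
        "attempts": 3,  # retry up to 3 times with exponential backoff
    },
    "output_guardrails": [
        {
            "default.contains": {"words": ["Apple"]},  # hypothetical check spec
            "deny": True,  # deny matching responses, triggering a retry
        }
    ],
}

# Attached per client, e.g.:
#   Python:     client.with_options(config=config)
#   JavaScript: client.withOptions({ config })
```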
View logs in the Gateway Console
Open http://localhost:8787/public/ to see all requests logged in real time. The console shows request and response bodies, latency, provider used, model, token counts, and whether any guardrails fired. Logging is on by default; no configuration is needed.
Next steps
- Supported providers: browse all 250+ providers, including OpenAI, Anthropic, Gemini, Bedrock, Ollama, and more.
- Routing & configs: define fallbacks, load balancing, retries, and conditional routing in a single JSON config.
- Guardrails: validate LLM inputs and outputs with 50+ built-in checks or bring your own plugin.
- Deployment: deploy to Docker, Node.js, Cloudflare Workers, AWS EC2, or use the managed Portkey Cloud.