Use Portkey as the LLM backend in LangChain chains, agents, and pipelines.
LangChain is a framework for building LLM-powered applications from composable chains and agents. Because the Portkey AI Gateway is OpenAI-compatible, you can use it as the LLM backend in any LangChain application by pointing `base_url` at the gateway. This gives your LangChain app automatic retries, fallbacks, load balancing, caching, and guardrails without changing your chain logic.
Set base_url on ChatOpenAI to point at the gateway.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="sk-***",  # your OpenAI API key
    base_url="http://localhost:8787/v1",
    model="gpt-4o-mini",
)

response = llm.invoke("What is the Portkey AI Gateway?")
print(response.content)
```
A retrieval-augmented generation pipeline that falls back to a secondary model if the primary is unavailable.
```python
import json

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {
            "provider": "openai",
            "api_key": "sk-***",
            "override_params": {"model": "gpt-4o-mini"},
        },
        {
            "provider": "anthropic",
            "api_key": "sk-ant-***",
            "override_params": {"model": "claude-3-haiku-20240307"},
        },
    ],
}

llm = ChatOpenAI(
    api_key="dummy",  # unused; per-target keys live in the gateway config
    base_url="http://localhost:8787/v1",
    model="gpt-4o-mini",
    default_headers={"x-portkey-config": json.dumps(fallback_config)},
)

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

rag_chain = prompt | llm | StrOutputParser()

result = rag_chain.invoke({
    "context": "Portkey is an AI gateway that routes requests to 250+ LLMs.",
    "question": "What does Portkey do?",
})
print(result)
```
All requests made through LangChain are logged and traceable in the Portkey dashboard when using the hosted gateway. Set the `x-portkey-trace-id` header to group related requests under a single trace.
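As a minimal sketch of trace grouping: the trace value is an arbitrary string you choose, so one common pattern is generating a UUID per user session and attaching it via `default_headers`. The `x-portkey-trace-id` header name is Portkey's; the session-scoped UUID is just one convention, not a requirement.

```python
import uuid

# One trace ID per logical session (assumption: you want all calls in a
# session grouped together; any stable string would work here).
trace_id = str(uuid.uuid4())

# Headers to pass to ChatOpenAI(..., default_headers=portkey_headers);
# every request the chain makes will then share this trace in the dashboard.
portkey_headers = {
    "x-portkey-trace-id": trace_id,
}

print(portkey_headers)
```

Because the headers are set once on the `ChatOpenAI` instance, every step of a multi-call chain (e.g. the RAG pipeline above) lands under the same trace without touching the chain logic.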