This guide covers Langchain Python. For JS, see Langchain JS.
Portkey extends Langchain's ChatOpenAI to work effortlessly with 1600+ LLMs (Anthropic, Gemini, Mistral, etc.) without needing different SDKs. Portkey enhances your Langchain apps with interoperability, reliability, speed, cost-efficiency, and deep observability.
Getting Started
Integrate Portkey into Langchain easily.
1. Install Packages
```bash
pip install -U langchain-openai portkey-ai
```
langchain-openai includes langchain-core. Install langchain or other specific packages if you need more components.
2. Basic Setup: ChatOpenAI with Portkey
Configure ChatOpenAI to route requests via Portkey using your Portkey API Key and the createHeaders method.
```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Use environment variables for API keys
PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
PROVIDER_API_KEY = os.environ.get("OPENAI_API_KEY")  # Example: OpenAI API key

# Configure Portkey headers
portkey_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    provider="openai"  # Specify the target LLM provider
)

llm = ChatOpenAI(
    api_key=PROVIDER_API_KEY,         # Provider's API key
    base_url=PORTKEY_GATEWAY_URL,     # Route via Portkey
    default_headers=portkey_headers,  # Portkey-specific headers
    model="gpt-4o"                    # Provider model
)

# response = llm.invoke("Tell me a joke about AI.")
# print(response.content)
```
Key ChatOpenAI parameters:
- api_key: The underlying provider's API key.
- base_url: Set to PORTKEY_GATEWAY_URL to route via Portkey.
- default_headers: Built with createHeaders using your PORTKEY_API_KEY. Can also include a virtual_key (for provider credentials) or a config ID (for advanced routing); both variants are sketched below.
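Continuing from the setup above, here is a minimal sketch of those two header variants; the vk_... and cfg_... values are placeholders you would copy from your Portkey dashboard:

```python
# Sketch: alternative createHeaders setups (placeholder IDs)
headers_with_vk = createHeaders(
    api_key=PORTKEY_API_KEY,
    virtual_key="vk_your_virtual_key"  # provider credentials stored in Portkey
)
headers_with_config = createHeaders(
    api_key=PORTKEY_API_KEY,
    config="cfg_your_config_id"  # saved Config for caching, routing, reliability
)
```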
All LLM calls via this llm instance now use Portkey, starting with observability.
Langchain requests now appear in Portkey Logs
This setup enables Portkey’s advanced features for Langchain.
Key Portkey Features for Langchain
Routing Langchain requests via Portkey's ChatOpenAI interface unlocks powerful capabilities:
- Multi-LLM Integration: Use ChatOpenAI for OpenAI, Anthropic, Gemini, Mistral, and more. Switch providers easily with Virtual Keys or Configs.
- Advanced Caching: Reduce latency and costs with Portkey's Simple, Semantic, or Hybrid caching, enabled via Configs.
- Enhanced Reliability: Build robust apps with retries, timeouts, fallbacks, and load balancing, configured in Portkey.
- Full Observability: Get deep insights; LLM usage, costs, latency, and errors are automatically logged in Portkey.
- Prompt Management: Manage, version, and use prompts from Portkey's Prompt Library within Langchain.
- Secure Virtual Keys: Securely manage LLM provider API keys using Portkey Virtual Keys in your Langchain setup.
1. Multi-LLM Integration
Portkey simplifies using different LLM providers: ChatOpenAI becomes your universal client for numerous models.
Mechanism: Portkey Headers with Virtual Keys
Switch LLMs by changing the provider slug (the "@..." reference to your Virtual Key) in createHeaders and the model in ChatOpenAI. Portkey manages provider specifics.
Example: Anthropic (Claude)
- Create an Anthropic Virtual Key: In Portkey, add your Anthropic API key and get the Virtual Key ID (used as the "@anthropic" provider slug below).
- Update your Langchain code:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
# Provider slug of your Anthropic Virtual Key, e.g. "@anthropic"
ANTHROPIC_PROVIDER = os.environ.get("ANTHROPIC_PROVIDER", "@anthropic")

portkey_anthropic_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    provider=ANTHROPIC_PROVIDER
)

llm_claude = ChatOpenAI(
    api_key="placeholder_anthropic_key",  # Placeholder; Portkey uses the Virtual Key
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_anthropic_headers,
    model="claude-3-5-sonnet-latest"  # Anthropic model
)

# response_claude = llm_claude.invoke("Haiku principles?")
# print(response_claude.content)
```
Example: Vertex (Gemini)
- Create a Google Virtual Key: In Portkey, add your Vertex AI credentials and get the Virtual Key ID (used as the "@google" provider slug below).
- Update your Langchain code:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
# Provider slug of your Google Virtual Key, e.g. "@google"
GOOGLE_PROVIDER = os.environ.get("GOOGLE_PROVIDER", "@google")

portkey_gemini_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    provider=GOOGLE_PROVIDER
)

llm_gemini = ChatOpenAI(
    api_key="placeholder_google_key",  # Placeholder; Portkey uses the Virtual Key
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_gemini_headers,
    model="gemini-2.5-pro-preview"  # Gemini model
)

# response_gemini = llm_gemini.invoke("Zero-knowledge proofs?")
# print(response_gemini.content)
```
The core ChatOpenAI structure remains the same; only the provider slug and model change. Portkey maps responses to the OpenAI format Langchain expects. This extends to Mistral, Cohere, Azure, and Bedrock via their Virtual Keys.
2. Advanced Caching
Portkey's caching reduces latency and LLM costs. Enable it via a Portkey Config object or a saved Config ID in createHeaders.
A Portkey Config can specify mode (simple, semantic, hybrid) and max_age (cache duration).
Example: Semantic Caching
- Define/Save a Portkey Config: Create a Config in Portkey (e.g., langchain-semantic-cache) specifying the caching strategy. Example Portkey Config JSON for a semantic cache:

```json
{
  "cache": { "mode": "semantic", "max_age": "2h" },
  "provider": "@your_openai_virtual_key_id",
  "override_params": { "model": "gpt-4o" }
}
```

Assume the saved Config ID is cfg_semantic_cache_123.
- Use the Config ID in createHeaders:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
PORTKEY_CONFIG_ID = "cfg_semantic_cache_123"

portkey_cached_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    config=PORTKEY_CONFIG_ID  # Saved Config enables semantic caching
)

llm_cached = ChatOpenAI(
    api_key="placeholder_key",  # Config handles auth via its Virtual Key
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_cached_headers
)

# response1 = llm_cached.invoke("Capital of France?")
# response2 = llm_cached.invoke("Tell me France's capital.")
```
Similar requests can now hit the cache. Monitor cache performance in your Portkey dashboard.
3. Enhanced Reliability
Portkey improves Langchain app resilience via Configs:
- Retries: Auto-retry failed LLM requests.
- Fallbacks: Define backup LLMs if a primary fails.
- Load Balancing: Distribute requests across keys or models.
- Timeouts: Set max request durations. (Load balancing and timeouts are sketched in the Config after this list.)
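For instance, load balancing and timeouts might be combined in a Config along these lines; this is a sketch following Portkey's Config schema, and the Virtual Key slugs (@vk_openai_primary, @vk_openai_secondary) are placeholders:

```json
{
  "strategy": { "mode": "loadbalance" },
  "request_timeout": 10000,
  "targets": [
    { "provider": "@vk_openai_primary", "weight": 0.7 },
    { "provider": "@vk_openai_secondary", "weight": 0.3 }
  ]
}
```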
Example: Fallbacks with Retries
- Define/Save a Portkey Config: Create a Config for retries and fallbacks (e.g., gpt-4o then claude-3-sonnet).
Example Portkey Config for reliability:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "gpt-4o" }, "provider": "@vk_openai", "retry": { "count": 2 } },
    { "override_params": { "model": "claude-3-sonnet-20240229" }, "provider": "@vk_anthropic" }
  ]
}
```

Assume the saved Config ID is cfg_reliable_123.
- Use the Config ID in createHeaders:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
PORTKEY_RELIABLE_CONFIG_ID = "cfg_reliable_123"

portkey_reliable_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    config=PORTKEY_RELIABLE_CONFIG_ID  # Saved Config defines fallbacks and retries
)

llm_reliable = ChatOpenAI(
    api_key="placeholder_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_reliable_headers
)

# response = llm_reliable.invoke("Poem on resilient AI.")
```
Offload complex logic to Portkey Configs, keeping Langchain code clean.
4. Full Observability
Routing Langchain ChatOpenAI via Portkey provides instant, comprehensive observability:
- Logged Requests: Detailed logs of requests, responses, latencies, costs.
- Tracing: Understand call lifecycles.
- Performance Analytics: Monitor metrics, track usage.
- Debugging: Pinpoint errors quickly.
This is crucial for monitoring and optimizing production Langchain apps.
Langchain requests are automatically logged in Portkey
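To group related calls into a trace or attach custom tags, createHeaders also accepts trace_id and metadata fields. A minimal sketch; the trace ID and metadata values below are placeholders:

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")

# Group related calls under one trace and attach searchable metadata
portkey_traced_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    provider="@openai",
    trace_id="langchain-demo-001",                # placeholder trace ID
    metadata={"_user": "user_123", "env": "dev"}  # placeholder tags
)

llm_traced = ChatOpenAI(
    api_key="placeholder_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_traced_headers,
    model="gpt-4o"
)

# response = llm_traced.invoke("Ping.")  # appears under the trace ID in Portkey logs
```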
5. Prompt Management
Portkey’s Prompt Library helps manage prompts effectively:
- Version Control: Store and track prompt changes.
- Parameterized Prompts: Use variables with mustache templating.
- Sandbox: Test prompts with different LLMs in Portkey.
Using Portkey Prompts in Langchain
- Create a prompt in Portkey and get its Prompt ID.
- Use the Portkey SDK to render the prompt with variables.
- Transform the rendered prompt into Langchain's message format.
- Pass the messages to a Portkey-configured ChatOpenAI.
```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders, Portkey

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
client = Portkey(api_key=PORTKEY_API_KEY)
PROMPT_ID = "pp-story-generator"  # Your Portkey Prompt ID

# Render the prompt template with variables via the Portkey SDK
rendered_prompt = client.prompts.render(
    prompt_id=PROMPT_ID,
    variables={"character": "brave knight", "object": "magic sword"}
).data

# Convert rendered messages to Langchain message objects
langchain_messages = []
if rendered_prompt and rendered_prompt.prompt:
    for msg in rendered_prompt.prompt:
        if msg.get("role") == "user":
            langchain_messages.append(HumanMessage(content=msg.get("content")))
        elif msg.get("role") == "system":
            langchain_messages.append(SystemMessage(content=msg.get("content")))

portkey_headers = createHeaders(api_key=PORTKEY_API_KEY, provider="@openai")

llm_portkey_prompt = ChatOpenAI(
    api_key="placeholder_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_headers,
    model=rendered_prompt.model if rendered_prompt and rendered_prompt.model else "gpt-4o"
)

# if langchain_messages:
#     response = llm_portkey_prompt.invoke(langchain_messages)
```
Manage prompts centrally in Portkey for versioning and collaboration.
6. Secure Virtual Keys
Portkey’s Virtual Keys are vital for secure, flexible LLM ops with Langchain.
Benefits:
- Secure Credentials: Store provider API keys in Portkey’s vault. Code uses Virtual Key IDs.
- Easy Configuration: Switch providers/keys by changing the virtual_key in createHeaders.
- Access Control: Manage Virtual Key permissions in Portkey.
- Auditability: Track usage via Portkey logs.
Using Virtual Keys boosts security and simplifies config management.
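To make switching concrete, here is a hedged sketch of a small helper (make_portkey_llm is a name invented for this example) that builds a Portkey-routed client from a Virtual Key's provider slug and a model name:

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")

def make_portkey_llm(provider_slug: str, model: str) -> ChatOpenAI:
    """Build a ChatOpenAI client routed through Portkey for a given Virtual Key slug."""
    headers = createHeaders(api_key=PORTKEY_API_KEY, provider=provider_slug)
    return ChatOpenAI(
        api_key="placeholder_key",  # Real credentials live in the Virtual Key
        base_url=PORTKEY_GATEWAY_URL,
        default_headers=headers,
        model=model,
    )

# llm_openai = make_portkey_llm("@openai", "gpt-4o")
# llm_claude = make_portkey_llm("@anthropic", "claude-3-5-sonnet-latest")
```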
Langchain Embeddings
Create embeddings with OpenAIEmbeddings via Portkey.
```python
from langchain_openai import OpenAIEmbeddings
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
portkey_headers = createHeaders(api_key=PORTKEY_API_KEY, provider="@openai")

embeddings_model = OpenAIEmbeddings(
    api_key="placeholder_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_headers,
    model="text-embedding-3-small"
)

# embeddings = embeddings_model.embed_documents(["Hello world!", "Test."])
```
Portkey supports OpenAI embeddings via Langchain's OpenAIEmbeddings. For other providers (Cohere, Gemini), use the Portkey SDK directly (docs).
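As a hedged sketch of that SDK path (the "@cohere" slug and model name below are illustrative, not prescribed by this guide):

```python
from portkey_ai import Portkey
import os

# Portkey client pointed at a non-OpenAI embeddings provider via its Virtual Key slug
client = Portkey(
    api_key=os.environ.get("PORTKEY_API_KEY"),
    provider="@cohere"  # illustrative Virtual Key slug
)

# embedding_response = client.embeddings.create(
#     model="embed-english-v3.0",  # illustrative Cohere model
#     input=["Hello world!"]
# )
```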
Langchain Chains & Prompts
Standard Langchain Chains and PromptTemplates work seamlessly with Portkey-configured ChatOpenAI instances. Portkey features (logging, caching) apply automatically.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

PORTKEY_API_KEY = os.environ.get("PORTKEY_API_KEY")
portkey_headers = createHeaders(api_key=PORTKEY_API_KEY, provider="@openai")

chat_llm = ChatOpenAI(
    api_key="placeholder_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=portkey_headers,
    model="gpt-4o"
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a world-class technical writer."),
    ("user", "{input}")
])

chain = prompt | chat_llm
# response = chain.invoke({"input": "Explain API gateways simply."})
```
All chain requests via chat_llm are processed by Portkey.
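Other LCEL entry points route through Portkey the same way; for example, streaming and batching with the chain above:

```python
# Streaming and batching reuse the same Portkey-routed chain:
# for chunk in chain.stream({"input": "Explain API gateways simply."}):
#     print(chunk.content, end="", flush=True)
#
# responses = chain.batch([{"input": "What is an LLM gateway?"}])
```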