LangGraph is a library for building stateful, multi-actor applications with LLMs, designed to make developing complex agent workflows easier. It provides a flexible framework to create directed graphs where nodes process information and edges define the flow between them.
Portkey enhances LangGraph with production-readiness features, turning your experimental agent workflows into robust systems by providing:
Complete observability of every agent step, tool use, and state transition
Built-in reliability with fallbacks, retries, and load balancing
Cost tracking and optimization to manage your AI spend
Access to 1600+ LLMs through a single integration
Guardrails to keep agent behavior safe and compliant
Version-controlled prompts for consistent agent performance
Depending on your use case, you may also need additional packages:
For search capabilities: pip install langchain_community
For memory functionality: pip install langgraph[checkpoint]
Generate API Key
Create a Portkey API key with optional budget/rate limits from the Portkey dashboard. You can attach configurations for reliability, caching, and more to this key.
Configure LangChain with Portkey
For a simple setup, configure a LangChain ChatOpenAI instance to use Portkey:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Set up a LangChain model with Portkey
llm = ChatOpenAI(
    api_key="dummy",  # This is just a placeholder
    base_url=PORTKEY_GATEWAY_URL,  # https://api.portkey.ai/v1
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        trace_id="unique-trace-id",  # Optional, for request tracing
        metadata={  # Optional, for request segmentation
            "app_env": "production",
            "_user": "user_123"  # Special _user field for user analytics
        }
    )
)
```
What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey's reliability features ensure your agents keep running smoothly even when problems occur.
Enable fallback in your LangGraph agents by using a Portkey Config:
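Below is a minimal sketch of such a config, attached inline via createHeaders. It assumes you have already created OpenAI and Anthropic virtual keys in your Portkey dashboard; both virtual key names are placeholders.

```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Fallback config: try OpenAI first; if the request fails, retry on Anthropic.
# Both virtual keys are placeholders for keys created in your Portkey dashboard.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"}
    ]
}

llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=fallback_config  # Attach the config to every request from this LLM
    )
)
```

You can also save this config in the Portkey dashboard and attach it to your API key instead of passing it inline.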
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your LangGraph agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
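For example, a graph node can fetch a versioned prompt at runtime. Here is a minimal sketch, assuming a prompt with ID YOUR_PROMPT_ID and a topic variable already exists in your Prompt Library; both are placeholders.

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

# Fetch and render a versioned prompt; the ID and variable are placeholders
# for a prompt defined in Portkey's Prompt Library.
rendered = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"topic": "LangGraph agents"}
)

messages = rendered.data.messages  # Ready to pass to the LLM in your graph node
```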
Manage prompts in Portkey's Prompt Library
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
Iteratively develop prompts before using them in your agents
Test prompts with different variables and models
Compare outputs between different prompt versions
Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your LangGraph agent’s workflow.
Track individual users through your LangGraph agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure the LLM with user tracking
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        metadata={
            "_user": "user_123",  # Special _user field for user analytics
            "user_tier": "premium",
            "user_company": "Acme Corp",
            "session_id": "abc-123",
            "graph_id": "search_workflow"
        }
    )
)
```
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
Filter analytics by user
This enables:
Per-user cost tracking and budgeting
Personalized user analytics
Team or organization-level metrics
Environment-specific monitoring (staging vs. production)
LangGraph works with multiple LLM providers, and Portkey extends this capability by providing access to 1600+ LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# OpenAI configuration
openai_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Anthropic configuration
anthropic_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"
    )
)

# Choose which LLM to use based on your needs
active_llm = openai_llm  # or anthropic_llm

# Use in your LangGraph nodes
def chatbot(state):
    return {"messages": [active_llm.invoke(state["messages"])]}
```
Portkey provides access to LLMs from providers including:
OpenAI (GPT-4o, GPT-4 Turbo, etc.)
Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
After setting up your Portkey API key with the attached config, connect it to your LangGraph agents:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure the LLM with your Portkey API key
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY"  # The API key with the attached config
    )
)
```
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer with features like model routing rules, fallbacks, caching, and guardrails, as the sketch below illustrates.
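Here is a minimal sketch of a config you might attach to a team's Portkey API key; the virtual key and model name are placeholders for your own setup.

```python
# A config attached to a team's API key in the Portkey dashboard.
# The virtual key and model name are placeholders for your own setup.
team_config = {
    "virtual_key": "YOUR_OPENAI_VIRTUAL_KEY",
    "override_params": {"model": "gpt-4o"},  # Pin this team to one model
    "cache": {"mode": "simple"}              # Cache repeated requests to cut spend
}
```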
After distributing API keys to your team members, your enterprise-ready LangGraph setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Portkey adds production-readiness to LangGraph through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent workflows.
Can I use Portkey with existing LangGraph applications?
Yes! Portkey integrates seamlessly with existing LangGraph applications. You just need to replace your LLM initialization code with the Portkey-enabled version. The rest of your graph code remains unchanged.
Does Portkey work with all LangGraph features?
Portkey supports all LangGraph features, including tools, memory, conditional routing, and complex multi-node workflows. It adds observability and reliability without limiting any of the framework’s functionality.
How do I filter logs and traces for specific graph runs?
Portkey allows you to add custom metadata and trace IDs to your LLM calls, which you can then use for filtering. Add fields like graph_id, workflow_type, or session_id to easily find and analyze specific graph executions.
Can I use LangGraph's memory features with Portkey?
Yes! LangGraph's MemorySaver checkpointer works seamlessly with Portkey-enabled LLMs, as the sketch below shows. All the memory and state management features work seamlessly with Portkey.
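A minimal sketch of MemorySaver with a Portkey-enabled LLM; the virtual key and thread ID are placeholders.

```python
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")

# MemorySaver persists conversation state between invocations, keyed by thread_id
graph = builder.compile(checkpointer=MemorySaver())

result = graph.invoke(
    {"messages": [("user", "Hi, remember me?")]},
    config={"configurable": {"thread_id": "user-123"}}
)
```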