LangGraph is a library for building stateful, multi-actor applications with LLMs, designed to make developing complex agent workflows easier. It provides a flexible framework to create directed graphs where nodes process information and edges define the flow between them.
Portkey enhances LangGraph with production-readiness features, turning your experimental agent workflows into robust systems by providing:
Complete observability of every agent step, tool use, and state transition
Built-in reliability with fallbacks, retries, and load balancing
Cost tracking and optimization to manage your AI spend
Access to 1600+ LLMs through a single integration
Guardrails to keep agent behavior safe and compliant
Version-controlled prompts for consistent agent performance
Depending on your use case, you may also need additional packages:
For search capabilities: `pip install langchain_community`
For memory functionality: `pip install langgraph[checkpoint]`
Generate API Key
Create a Portkey API key with optional budget/rate limits from the Portkey dashboard. You can attach configurations for reliability, caching, and more to this key.
Configure LangChain with Portkey
For a simple setup, configure a LangChain ChatOpenAI instance to use Portkey:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Set up LangChain model with Portkey
llm = ChatOpenAI(
    api_key="dummy",  # This is just a placeholder
    base_url=PORTKEY_GATEWAY_URL,  # https://api.portkey.ai/v1
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_PROVIDER",
        trace_id="unique-trace-id",  # Optional, for request tracing
        metadata={  # Optional, for request segmentation
            "app_env": "production",
            "_user": "user_123"  # Special _user field for user analytics
        }
    )
)
```
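Once configured, this llm drops into a LangGraph graph like any other chat model. As a minimal sketch (the one-node graph and node name here are illustrative, not part of the setup above):

```python
from langgraph.graph import StateGraph, START, END, MessagesState

# A single-node graph: every user message goes straight to the LLM
def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()

result = graph.invoke({"messages": [("user", "Hello!")]})
print(result["messages"][-1].content)
```

Every LLM call the graph makes now flows through Portkey, so it shows up in your logs and traces automatically.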
Portkey provides comprehensive observability for your LangGraph agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
```python
# Add trace_id to enable hierarchical tracing in Portkey
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_LLM_PROVIDER",
        trace_id="unique-session-id",  # Unique trace ID per run or session
        metadata={"request_type": "user_query"}
    )
)
```
LangGraph also offers its own tracing via LangSmith, which can be used alongside Portkey for even more detailed workflow insights.
Portkey logs every interaction with LLMs, including:
Complete request and response payloads
Latency and token usage metrics
Cost calculations
Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific agent runs.
Portkey provides built-in dashboards that help you:
Track cost and token usage across all agent runs
Analyze performance metrics like latency and success rates
Identify bottlenecks in your agent workflows
Compare different agent configurations and LLMs
You can filter and segment all metrics by custom metadata to analyze specific agent types, user groups, or use cases.
Add custom metadata to your LangGraph agent calls to enable powerful filtering and segmentation:
```python
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_LLM_PROVIDER",
        metadata={
            "agent_type": "search_agent",
            "environment": "production",
            "_user": "user_123",  # Special _user field for user analytics
            "graph_id": "complex_workflow"
        }
    )
)
```
This metadata can be used to filter logs, traces, and metrics on the Portkey dashboard, allowing you to analyze specific agent runs, users, or environments.
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
Enable fallback in your LangGraph agents by using a Portkey Config:
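As a minimal sketch, a fallback Config can be passed inline through createHeaders; the provider slugs below are placeholders, and you can equally save the Config in the Portkey app and reference it by its ID:

```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Illustrative fallback Config: try the primary provider first,
# then fall back to the secondary provider if the request fails
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "@YOUR_OPENAI_PROVIDER"},
        {"provider": "@YOUR_ANTHROPIC_PROVIDER"}
    ]
}

llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=fallback_config
    )
)
```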
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your LangGraph agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Manage prompts in Portkey's Prompt Library
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
Iteratively develop prompts before using them in your agents
Test prompts with different variables and models
Compare outputs between different prompt versions
Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your LangGraph agent’s workflow.
The Prompt Render API retrieves your prompt templates with all parameters configured:
```python
from portkey_ai import Portkey, createHeaders
from langchain_openai import ChatOpenAI

# Initialize the Portkey client
portkey_admin = Portkey(api_key="YOUR_PORTKEY_API_KEY")

# Retrieve the prompt using the render API
prompt_data = portkey_admin.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        "user_input": "Tell me about artificial intelligence"
    }
).data.dict()

# Set up the LLM
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_OPENAI_PROVIDER",
    )
)

# Define a chatbot node that injects the rendered system prompt
def chatbot(state):
    messages = state["messages"]
    # Prepend the system prompt from Portkey to the conversation
    all_messages = [
        {"role": "system", "content": prompt_data["messages"][0]["content"]},
        *messages
    ]
    return {"messages": [llm.invoke(all_messages)]}
```
You can:
Create multiple versions of the same prompt
Compare performance between versions
Roll back to previous versions if needed
Specify which version to use in your code:
```python
# Use a specific prompt version
prompt_data = portkey_admin.prompts.render(
    prompt_id="YOUR_PROMPT_ID@version_number",
    variables={
        "user_input": "Tell me about quantum computing"
    }
)
```
Portkey prompts use Mustache-style templating for easy variable substitution:
```
You are an AI assistant specialized in {{agent_role}}.

User question: {{user_input}}

Please respond in a {{tone}} tone and include {{required_elements}}.
```
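For example, rendering the template above could look like this (the prompt ID and variable values are placeholders):

```python
# Fill the template's variables at render time
prompt_data = portkey_admin.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        "agent_role": "scientific research",
        "user_input": "Explain CRISPR in simple terms",
        "tone": "friendly",
        "required_elements": "one concrete example"
    }
).data.dict()
```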
Track individual users through your LangGraph agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure the LLM with user tracking
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_OPENAI_PROVIDER",
        metadata={
            "_user": "user_123",  # Special _user field for user analytics
            "user_tier": "premium",
            "user_company": "Acme Corp",
            "session_id": "abc-123",
            "graph_id": "search_workflow"
        }
    )
)
```
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
This enables:
Per-user cost tracking and budgeting
Personalized user analytics
Team or organization-level metrics
Environment-specific monitoring (staging vs. production)
LangGraph works with multiple LLM providers, and Portkey extends this capability by providing access to 1600+ LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# OpenAI configuration
openai_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_OPENAI_PROVIDER"
    )
)

# Anthropic configuration
anthropic_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_ANTHROPIC_PROVIDER"
    )
)

# Choose which LLM to use based on your needs
active_llm = openai_llm  # or anthropic_llm

# Use in your LangGraph nodes
def chatbot(state):
    return {"messages": [active_llm.invoke(state["messages"])]}
```
Portkey provides access to LLMs from providers including:
OpenAI (GPT-4o, GPT-4 Turbo, etc.)
Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
Enterprise deployments also demand reliability: consistent service across all users and teams. Portkey adds a comprehensive governance layer to address these needs. Let’s implement these controls step by step.
Step 1: Create Integration
To create a new LLM integration:
Go to Integrations in the Portkey App, set budget/rate limits and model access as needed, and save the integration.
This creates a “Portkey Provider” that you can then use in any of your Portkey requests without having to send auth details for that LLM provider again.
Step 2: Create Config
Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries.
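As an illustrative sketch, a Config that adds automatic retries and simple response caching might look like this; you would save the equivalent JSON in the Portkey app and attach it to your API key (the exact values are placeholders):

```python
# Retry transient failures up to 3 times and cache identical requests
config = {
    "retry": {"attempts": 3, "on_status_codes": [429, 500, 502, 503, 504]},
    "cache": {"mode": "simple"}
}
```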
After setting up your Portkey API key with the attached config, connect it to your LangGraph agents:
```python
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure the LLM with your Portkey API key
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY"  # The API key with your config attached
    )
)
```
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey provides this control layer through per-key budget and rate limits, model access rules on integrations, and request-routing Configs.
After distributing API keys to your team members, your enterprise-ready LangGraph setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Portkey adds production-readiness to LangGraph through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent workflows.
Can I use Portkey with existing LangGraph applications?
Yes! Portkey integrates seamlessly with existing LangGraph applications. You just need to replace your LLM initialization code with the Portkey-enabled version. The rest of your graph code remains unchanged.
Does Portkey work with all LangGraph features?
Portkey supports all LangGraph features, including tools, memory, conditional routing, and complex multi-node workflows. It adds observability and reliability without limiting any of the framework’s functionality.
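For instance, tool binding works the same as with a stock ChatOpenAI instance. This sketch assumes the Portkey-enabled llm from earlier and a hypothetical get_weather tool:

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It's always sunny in {city}."

# Binding tools to the Portkey-enabled LLM works unchanged
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)  # Tool calls the model requested, if any
```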
How do I filter logs and traces for specific graph runs?
Portkey allows you to add custom metadata and trace IDs to your LLM calls, which you can then use for filtering. Add fields like graph_id, workflow_type, or session_id to easily find and analyze specific graph executions.
Can I use LangGraph's memory features with Portkey?
Yes! The examples in this documentation show how to use LangGraph’s MemorySaver checkpointer with Portkey-enabled LLMs. All the memory and state management features work seamlessly with Portkey.
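As a minimal sketch (the graph and thread ID are illustrative), pairing a Portkey-enabled LLM with LangGraph's MemorySaver looks like this:

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, MessagesState

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")

# Compile with a checkpointer so conversation state persists per thread
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "conversation-1"}}
graph.invoke({"messages": [("user", "Hi, I'm Alice")]}, config)
result = graph.invoke({"messages": [("user", "What's my name?")]}, config)
print(result["messages"][-1].content)  # The model can recall earlier turns
```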