LangSmith is LangChain’s observability platform that helps you debug, test, evaluate, and monitor your LLM applications. When combined with Portkey, you get the best of both worlds: LangSmith’s detailed observability and Portkey’s advanced AI gateway features.
This integration allows you to:
- Track all LLM requests in LangSmith while routing through Portkey
- Use Portkey’s 1600+ LLM providers with LangSmith observability
- Implement advanced features like caching, fallbacks, and load balancing
- Maintain detailed traces and analytics in both platforms
## Quick Start Integration
Since Portkey provides an OpenAI-compatible API, integrating with LangSmith is straightforward using LangSmith’s OpenAI wrapper.
### Installation
```bash
pip install portkey-ai langsmith
```
### Basic Setup
```python
import os

from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

# Set your LangSmith credentials
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "YOUR_LANGSMITH_API_KEY"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_PROJECT"] = "your-project-name"  # Optional

# Initialize the Portkey client wrapped with LangSmith
client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY"))

# Make a request
response = client.chat.completions.create(
    model="@openai-integration/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
)

print(response.choices[0].message.content)
```
This integration automatically logs requests to both LangSmith and Portkey, giving you observability data in both platforms.
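Because the wrapped client behaves like a standard OpenAI client, streaming works the same way. A minimal sketch, assuming LangSmith’s wrapper traces the streamed response once the stream is consumed:

```python
# Stream tokens through the wrapped client; the request is still
# logged to both LangSmith and Portkey.
stream = client.chat.completions.create(
    model="@openai-integration/gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```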
## Using Portkey Features with LangSmith

### 1. LLM Integrations
LLM Integrations in Portkey allow you to securely manage API keys and set usage limits. Use them with LangSmith for better security:
```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY"))

response = client.chat.completions.create(
    model="@openai-integration/gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
```
### 2. Multiple Providers
Switch between 1600+ LLM providers while maintaining LangSmith observability:
```python
client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY"))

response = client.chat.completions.create(
    model="@openai-integration/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
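For example, pointing the same client at Anthropic only requires a different model string; a sketch, assuming an `@anthropic-integration` slug exists in your Portkey workspace (as in the fallback config below):

```python
# Same wrapped client; only the model string changes.
response = client.chat.completions.create(
    model="@anthropic-integration/claude-3-opus-20240229",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```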
### 3. Advanced Routing with Configs
Use Portkey’s config system for advanced features while tracking in LangSmith:
```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

# Create a config in the Portkey dashboard first, then reference it by ID
client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY", config="your-config-id"))
```
Example config for fallback between providers:
```json
{
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "integration": "@openai-integration",
      "override_params": { "model": "gpt-4o" }
    },
    {
      "integration": "@anthropic-integration",
      "override_params": { "model": "claude-3-opus-20240229" }
    }
  ]
}
```
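Configs can also be passed inline as a dict rather than saved in the dashboard. A minimal sketch using the fallback config above, assuming each target’s `override_params` supplies the model so none is passed in the request:

```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

# Inline equivalent of the saved fallback config above
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"integration": "@openai-integration", "override_params": {"model": "gpt-4o"}},
        {"integration": "@anthropic-integration", "override_params": {"model": "claude-3-opus-20240229"}},
    ],
}

client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY", config=fallback_config))

# If the OpenAI target fails, Portkey falls back to Anthropic;
# LangSmith records the request either way.
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
)
```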
### 4. Caching for Cost Optimization
Enable caching to reduce costs while maintaining full observability:
```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

config = {
    "cache": {
        "mode": "semantic",
        "max_age": 3600,
    },
    "provider": "@YOUR_PROVIDER",
}

client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY", config=config))
```
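To see the cache at work, send the same question twice. A minimal sketch; note the model is passed without an integration prefix here, on the assumption that the provider comes from the config’s `provider` field:

```python
# The first call hits the provider; a second, semantically similar call
# within max_age (3600 s here) can be served from Portkey's cache.
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # provider is supplied by the config above
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(response.choices[0].message.content)
```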
### 5. Custom Metadata

Add custom metadata visible in both LangSmith and Portkey:
```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

client = wrap_openai(
    Portkey(
        api_key="YOUR_PORTKEY_API_KEY",
        metadata={
            "user_id": "user_123",
            "session_id": "session_456",
            "environment": "production",
        },
    )
)

def traced_llm_call(question: str):
    return client.chat.completions.create(
        model="@openai-integration/gpt-4o",
        messages=[{"role": "user", "content": question}],
    )

response = traced_llm_call("What is 1729?")
```
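To group several calls under a single trace in LangSmith, you can additionally decorate your own functions with LangSmith’s `traceable`. A minimal sketch building on `traced_llm_call` above (the function and prompt names are illustrative):

```python
from langsmith import traceable

@traceable(name="qa_pipeline")
def answer_with_review(question: str) -> str:
    # Both LLM calls appear as children of the qa_pipeline trace in
    # LangSmith, and as individual requests in Portkey's logs.
    draft = traced_llm_call(question)
    review = client.chat.completions.create(
        model="@openai-integration/gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Improve this answer: {draft.choices[0].message.content}",
        }],
    )
    return review.choices[0].message.content

print(answer_with_review("What is 1729?"))
```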
## Observability Features
With this integration, you get:
**In LangSmith:**

- Request/response logging
- Latency tracking
- Token usage analytics
- Cost calculation
- Trace visualization
**In Portkey:**

- Request logs with provider details
- Advanced analytics across providers
- Cost tracking and budgets
- Performance metrics
- Custom dashboards
- Token usage analytics
## Migration Guide
If you’re already using LangSmith with OpenAI, migrating to Portkey is simple. Before:

```python
from openai import OpenAI
from langsmith.wrappers import wrap_openai

client = wrap_openai(OpenAI(api_key="YOUR_OPENAI_KEY"))
```
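After, only the client construction changes; `wrap_openai` and the rest of your code stay the same:

```python
from portkey_ai import Portkey
from langsmith.wrappers import wrap_openai

client = wrap_openai(Portkey(api_key="YOUR_PORTKEY_API_KEY"))

# Update model strings to Portkey's @integration/model format, e.g.
# "gpt-4o" -> "@openai-integration/gpt-4o"
```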