Traceloop’s OpenLLMetry is an open source project that allows you to easily start monitoring and debugging the execution of your LLM app.
Traceloop’s non-intrusive instrumentation, combined with Portkey’s intelligent gateway, gives you comprehensive observability without modifying your application code, while adding routing intelligence, caching, and failover capabilities.
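OpenLLMetry only needs a one-time initialization before any LLM calls are made. A minimal sketch, assuming the `traceloop-sdk` package is installed (`pip install traceloop-sdk`); the app name is a placeholder, and where spans are exported (Traceloop's platform or another OTLP collector) is typically controlled via the `TRACELOOP_BASE_URL` environment variable:

```python
from traceloop.sdk import Traceloop

# One init call at startup is all the instrumentation requires;
# the LLM calls themselves stay unchanged.
Traceloop.init(
    app_name="my-llm-app",  # placeholder app name for illustration
    disable_batch=True,     # flush spans immediately; handy during development
)
```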
Set up the OpenAI client to use Portkey’s intelligent gateway:
```python
from openai import OpenAI
from portkey_ai import createHeaders

# Use Portkey's gateway for intelligent routing
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # Or use a dummy value with virtual keys
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"  # Optional: Use Portkey's secure key management
    )
)
```
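As the comments note, once a virtual key is supplied, the provider credential is stored and injected on Portkey's side, so the `api_key` passed to the OpenAI constructor can be a placeholder value.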
Your LLM calls are now automatically traced by Traceloop and enhanced by Portkey:
```python
# Make calls through Portkey's gateway.
# Traceloop automatically instruments the call.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Explain the benefits of OpenTelemetry for LLM applications"
        }
    ],
    temperature=0.7
)

print(response.choices[0].message.content)

# You now get:
# 1. Automatic, non-intrusive tracing from Traceloop
# 2. Gateway features from Portkey (caching, fallbacks, routing)
# 3. Combined insights in Portkey's dashboard
```
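The gateway features in that list are switched on through a Portkey config attached to the same headers. A hedged sketch, assuming `createHeaders` accepts an inline config object (it also takes a saved config ID) and using placeholder virtual key names:

```python
from openai import OpenAI
from portkey_ai import createHeaders

# Sketch: a config that adds simple caching and a cross-provider
# fallback chain. Follows Portkey's gateway config schema; the
# virtual key IDs and model are placeholders for illustration.
client = OpenAI(
    api_key="dummy",  # real keys are resolved from the virtual keys below
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config={
            "cache": {"mode": "simple"},       # serve repeated prompts from cache
            "strategy": {"mode": "fallback"},  # try targets in order on failure
            "targets": [
                {"virtual_key": "openai-primary"},
                {
                    "virtual_key": "anthropic-backup",
                    "override_params": {"model": "claude-3-5-sonnet-20240620"},
                },
            ],
        },
    ),
)
```

Because Traceloop instruments the client itself, requests routed through this config are traced exactly like direct calls, so cache hits and fallback hops show up alongside the rest of your telemetry.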