Enhance LLM observability with automatic tracing and intelligent gateway routing
MLflow Tracing is a feature that enhances LLM observability in your Generative AI (GenAI) applications by capturing detailed information about the execution of your application’s services. Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.
MLflow offers automatic, no-code-added integrations with over 20 popular GenAI libraries, providing immediate observability with just a single line of code. Combined with Portkey’s intelligent gateway, you get comprehensive tracing enriched with routing decisions and performance optimizations.
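For example, enabling MLflow's automatic tracing for OpenAI clients is a single call. The sketch below is a minimal illustration; it assumes a recent MLflow release with tracing support, and the experiment name is illustrative:

import mlflow

# One line enables automatic tracing for every OpenAI client call
mlflow.openai.autolog()

# Optional: name the experiment that traces are logged under (illustrative name)
mlflow.set_experiment("portkey-gateway-tracing")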
Set up the OpenAI client to use Portkey’s intelligent gateway:
from openai import OpenAI
from portkey_ai import createHeaders

# Use Portkey's gateway for intelligent routing
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # Or use a dummy value with virtual keys
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"  # Optional: Use Portkey's secure key management
    )
)
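Because Portkey is a drop-in gateway, the only changes to a standard OpenAI client are the base_url and the extra headers. MLflow's autologging instruments this client exactly as it would a direct OpenAI call, so no tracing code has to change when you route through the gateway.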
Now your LLM calls are automatically traced by MLflow and enhanced by Portkey:
# Make calls through Portkey's gateway
# MLflow instruments the call, Portkey adds gateway intelligence
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of tracing in LLM applications"
        }
    ],
    temperature=0.7
)

print(response.choices[0].message.content)

# You now get:
# 1. Automatic tracing from MLflow
# 2. Gateway features from Portkey (caching, fallbacks, routing)
# 3. Combined insights in Portkey's dashboard
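If you want an application-level span wrapping the gateway call, MLflow also supports manual tracing. The sketch below is a minimal illustration, assuming MLflow 2.14+ where the @mlflow.trace decorator is available; the summarize function is a hypothetical example, not part of either API:

import mlflow

@mlflow.trace  # Records this function as a span in the MLflow trace
def summarize(text: str) -> str:
    # The autologged client call appears as a child span of summarize,
    # so the trace shows your application step and the gateway call together
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content

print(summarize("Tracing records inputs, outputs, and metadata for each step."))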