Arize Phoenix is an open-source AI observability platform designed to help developers debug, monitor, and evaluate LLM applications. Phoenix provides powerful visualization tools and uses OpenInference instrumentation to automatically capture detailed traces of your AI system’s behavior.
Phoenix’s OpenInference instrumentation combined with Portkey’s intelligent gateway provides comprehensive debugging capabilities with automatic trace collection, while adding routing optimization and resilience features to your LLM calls.
Why Arize Phoenix + Portkey?
Visual Debugging: Powerful UI for exploring traces, spans, and debugging LLM behavior
OpenInference Standard: Industry-standard semantic conventions for AI/LLM observability
Evaluation Tools: Built-in tools for evaluating model performance and behavior
Gateway Intelligence: Portkey adds caching, fallbacks, and load balancing to every request
Quick Start
Prerequisites
Python
Portkey account with API key
OpenAI API key (or use Portkey’s virtual keys)
Step 1: Install Dependencies
Install the required packages for Phoenix and Portkey integration:
pip install arize-phoenix-otel openai openinference-instrumentation-openai portkey-ai
Step 2: Configure the OpenTelemetry Exporter
Set the environment variables so traces are exported to Portkey:
import os

# Configure Portkey endpoint and authentication
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"
Step 3: Register Phoenix and Instrument OpenAI
Initialize Phoenix and enable OpenAI instrumentation:
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure Phoenix tracer
register(set_global_tracer_provider=False)

# Instrument OpenAI
OpenAIInstrumentor().instrument()
Step 4: Configure the Portkey Gateway
Set up the OpenAI client to route requests through Portkey’s gateway:
from openai import OpenAI
from portkey_ai import createHeaders

# Use Portkey's gateway for intelligent routing
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # Or use a dummy value with virtual keys
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"  # Optional: Use Portkey's secure key management
    )
)
Step 5: Make Instrumented LLM Calls
Your LLM calls are now automatically traced by Phoenix and enhanced by Portkey:
# Make calls with automatic tracing
response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How does Phoenix help with AI debugging?",
        }
    ],
    model="gpt-4",
    temperature=0.7,
)

print(response.choices[0].message.content)
# Phoenix captures:
# - Input/output pairs
# - Token usage
# - Latency metrics
# - Model parameters
#
# Portkey adds:
# - Gateway routing decisions
# - Cache hit/miss data
# - Fallback information
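The cache and fallback data that Portkey records comes from a gateway config attached to the request. Here is a minimal sketch of enabling caching and a fallback target; it assumes createHeaders accepts an inline config dictionary, and the cache/strategy values and virtual key names are illustrative, so check Portkey’s config reference for the exact schema:

from openai import OpenAI
from portkey_ai import createHeaders

# Illustrative Portkey config: simple caching plus a fallback target
portkey_config = {
    "cache": {"mode": "simple"},
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_PRIMARY_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_FALLBACK_VIRTUAL_KEY"},
    ],
}

client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=portkey_config,  # Assumes createHeaders accepts an inline config dict
    ),
)

With a config like this attached, the cache hit/miss and fallback details noted above appear alongside the Phoenix trace data for each request.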
Complete Example
Here’s a full working example:
import os

from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import OpenAI
from portkey_ai import createHeaders

# Step 1: Configure Portkey endpoint
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Step 2: Register Phoenix and instrument OpenAI
register(set_global_tracer_provider=False)
OpenAIInstrumentor().instrument()

# Step 3: Configure Portkey Gateway
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"
    )
)

# Step 4: Make instrumented calls
response = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Explain how observability helps in production AI systems"}
    ],
    model="gpt-4",
    temperature=0.7,
)

print(response.choices[0].message.content)
OpenInference Instrumentation
Phoenix uses OpenInference semantic conventions for AI observability:
Automatic Capture
Messages: Full conversation history with roles and content
Model Info: Model name, temperature, and other parameters
Token Usage: Input/output token counts for cost tracking
Errors: Detailed error information when requests fail
Latency: End-to-end request timing
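These details are recorded on each span as attributes with OpenInference key names. As a small sketch, assuming the openinference-semantic-conventions package (installed alongside the instrumentor) exposes these constants:

from openinference.semconv.trace import SpanAttributes

# A few of the attribute keys an instrumented LLM span carries
print(SpanAttributes.LLM_MODEL_NAME)              # model name, e.g. "gpt-4"
print(SpanAttributes.LLM_TOKEN_COUNT_PROMPT)      # input token count
print(SpanAttributes.LLM_TOKEN_COUNT_COMPLETION)  # output token count
print(SpanAttributes.INPUT_VALUE)                 # request payload
print(SpanAttributes.OUTPUT_VALUE)                # response payload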
Supported Providers
Phoenix can instrument multiple LLM providers:
OpenAI
Anthropic
Bedrock
Vertex AI
Azure OpenAI
And more through OpenInference instrumentors
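For example, Anthropic calls can be traced the same way. The sketch below assumes the openinference-instrumentation-anthropic package and that Portkey’s gateway accepts the Anthropic SDK with a provider header, mirroring the OpenAI setup above; the model name is only an example:

# pip install anthropic openinference-instrumentation-anthropic
from openinference.instrumentation.anthropic import AnthropicInstrumentor
from anthropic import Anthropic
from portkey_ai import createHeaders

# Instrument the Anthropic SDK before creating the client
AnthropicInstrumentor().instrument()

anthropic_client = Anthropic(
    api_key="YOUR_ANTHROPIC_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="anthropic",
    ),
)

response = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "How does tracing work?"}],
)
print(response.content[0].text)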
Configuration Options
Custom Span Attributes
Add custom attributes to your traces:
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("custom_operation") as span:
    span.set_attribute("user.id", "user123")
    span.set_attribute("session.id", "session456")

    # Your LLM call here
    response = client.chat.completions.create(...)
Sampling Configuration
Control trace sampling for production environments:
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Sample 10% of traces
register(
    set_global_tracer_provider=False,
    sampler=TraceIdRatioBased(0.1),
)
Troubleshooting
Common Issues
Traces not appearing in Portkey
Ensure both OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are correctly set
Missing instrumentation data
Make sure to call OpenAIInstrumentor().instrument() before creating your OpenAI client
Phoenix UI not showing traces
If using Phoenix UI locally, ensure Phoenix is running and properly configured
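When diagnosing these, it can help to confirm the exporter settings are visible to the process and to mirror spans to the console. A sketch, assuming register() returns the tracer provider so an extra console exporter can be attached for local debugging:

import os

from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from phoenix.otel import register

# Confirm the exporter settings are set in this process
print(os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT"))
print(os.environ.get("OTEL_EXPORTER_OTLP_HEADERS"))

# Mirror spans to stdout to verify instrumentation is emitting them at all
tracer_provider = register(set_global_tracer_provider=False)
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))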
Next Steps
See Your Traces in Action
Once configured, navigate to the Portkey dashboard to see your Phoenix instrumentation combined with gateway intelligence.