Pydantic Logfire is a modern observability platform from the creators of Pydantic, designed specifically for Python applications. It provides automatic instrumentation for popular libraries including OpenAI, Anthropic, and other LLM providers, making it an excellent choice for AI application monitoring.
Logfire’s automatic instrumentation combined with Portkey’s intelligent gateway creates a powerful observability stack where every trace is enriched with routing decisions, cache performance, and cost optimization data.
Why Logfire + Portkey?
Zero-Code OpenAI Instrumentation Logfire automatically instruments OpenAI SDK calls without any code changes
Gateway Intelligence Portkey adds routing context, fallback decisions, and cache performance to every trace
Python-First Design Built by the Pydantic team specifically for Python developers
Real-Time Insights See traces immediately with actionable optimization opportunities
Quick Start
Prerequisites
Python installed
Portkey account with API key
OpenAI API key (or add it to the Model Catalog)
Step 1: Install Dependencies
Install the required packages for Logfire and Portkey integration:
```sh
pip install logfire openai portkey-ai
```
Step 2: Basic Setup - Send Traces to Portkey
First, let’s configure Logfire to send traces to Portkey’s OpenTelemetry endpoint:
```python
import os

import logfire

# Configure OpenTelemetry export to Portkey
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Initialize Logfire
logfire.configure(
    service_name='my-llm-app',
    send_to_logfire=False,  # Disable sending to Logfire cloud
)

# Instrument OpenAI globally
logfire.instrument_openai()
```
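If you prefer to keep credentials out of your source code, the same two settings can be supplied as standard OpenTelemetry environment variables in the shell before starting your app (a sketch; replace `YOUR_PORTKEY_API_KEY` with your actual key):

```shell
# Same OTLP settings as above, set in the shell instead of in Python
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.portkey.ai/v1/logs/otel"
export OTEL_EXPORTER_OTLP_HEADERS="x-portkey-api-key=YOUR_PORTKEY_API_KEY"
```

With these exported, you can drop the `os.environ` lines from the Python setup; the OpenTelemetry SDK reads both variables automatically.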
Step 3: Complete Setup - Use Portkey’s Gateway
For the best experience, route your LLM calls through Portkey’s gateway to get automatic optimizations:
```python
import os

import logfire
from openai import OpenAI
from portkey_ai import createHeaders

# Configure OpenTelemetry export to Portkey
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Initialize Logfire
logfire.configure(
    service_name='my-llm-app',
    send_to_logfire=False,
)

# Create an OpenAI client that routes through Portkey's gateway
client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        provider="@openai-prod",  # Your AI Provider slug from the Model Catalog
    ),
)

# Instrument the Portkey-configured client
logfire.instrument_openai(client)
```
Step 4: Make Instrumented LLM Calls
Now your LLM calls are automatically traced by Logfire and enhanced by Portkey:
```python
# Simple chat completion - automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Explain the benefits of observability in LLM applications"}
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```
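Logfire can also group related calls under a custom parent span, so a multi-step workflow appears as a single trace in Portkey rather than as disconnected requests. A minimal sketch using Logfire's `logfire.span` context manager (the span name and prompts here are illustrative, and `client` is the instrumented client from Step 3):

```python
# Group several instrumented calls under one parent span
with logfire.span('summarize-and-translate'):
    summary = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize why observability matters."}],
    )
    translation = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": f"Translate to French: {summary.choices[0].message.content}",
            }
        ],
    )
```

Both completions are recorded as children of the `summarize-and-translate` span, which makes it easier to spot where latency and cost accumulate across a workflow.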
Next Steps
Configure Gateway Set up intelligent routing, fallbacks, and caching
Model Catalog Manage AI providers, credentials, and model access centrally
View Analytics Analyze costs, performance, and usage patterns
Set Up Budget & Rate Limits Configure rate and budget limits per model, user, or API key
See Your Traces in Action
Once configured, navigate to the Portkey dashboard to see your Logfire instrumentation combined with gateway intelligence: