Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack.
Portkey is a production-grade AI Gateway and Observability platform for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing observability on Arize Phoenix, Portkey also supports seamless integration.
Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey’s AI Gateway capabilities.
Arize Phoenix brings observability to LLM workflows with tracing, prompt debugging, and performance monitoring.
Thanks to Phoenix’s OpenInference instrumentation, Portkey can emit structured traces automatically — no extra setup needed. This gives you clear visibility into every LLM call, making it easier to debug and improve your app.
With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix—bringing together the best of gateway orchestration and ML observability.
Install the required packages to enable Arize Phoenix integration with your Portkey deployment:
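A typical install might look like the following; the exact package names shown here (the Portkey SDK, Phoenix's OTEL helper, and the OpenInference Portkey instrumentor) are assumptions to adapt to your environment and versions:

```bash
pip install portkey-ai arize-phoenix-otel openinference-instrumentation-portkey
```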
Configure Arize Phoenix
First, set up the Arize OpenTelemetry configuration:
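As a sketch, assuming the `arize-phoenix-otel` package, its `register` helper sets up an OpenTelemetry tracer provider that exports traces to Phoenix; the project name and endpoint below are placeholders for your own Phoenix instance:

```python
from phoenix.otel import register

# Register a tracer provider that exports traces to Phoenix.
# The endpoint assumes a locally running Phoenix instance; point it at your
# Phoenix Cloud or self-hosted collector instead if needed.
tracer_provider = register(
    project_name="portkey-gateway",              # placeholder project name
    endpoint="http://localhost:6006/v1/traces",  # placeholder collector endpoint
)
```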
Enable Portkey Instrumentation
Initialize the Portkey instrumentor to format traces for Arize:
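Assuming the OpenInference Portkey instrumentor package, instrumentation is a one-liner against the tracer provider registered above (a sketch, not a verified snippet for every version):

```python
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Attach OpenInference instrumentation so Portkey calls are captured
# and exported through the tracer provider configured for Phoenix.
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```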
Configure Portkey AI Gateway
Set up Portkey with all its powerful features:
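A minimal client setup with the Portkey Python SDK might look like this; the API key and virtual key are placeholders, and a virtual key maps to a provider credential stored in Portkey's vault:

```python
from portkey_ai import Portkey

# Initialize the Portkey client (placeholder credentials).
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
)

# Any OpenAI-compatible call now routes through the Portkey AI Gateway.
response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is observability?"}],
)
print(response.choices[0].message.content)
```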
Here’s a complete working example that connects Portkey’s AI Gateway with Arize Phoenix for centralized monitoring:
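Putting the pieces together, one possible end-to-end flow (using the same assumed package names and placeholder keys as above) is:

```python
from phoenix.otel import register
from openinference.instrumentation.portkey import PortkeyInstrumentor
from portkey_ai import Portkey

# 1. Point OpenTelemetry at Arize Phoenix (placeholder project and endpoint).
tracer_provider = register(
    project_name="portkey-gateway",
    endpoint="http://localhost:6006/v1/traces",
)

# 2. Instrument Portkey so gateway calls emit OpenInference traces.
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# 3. Route LLM traffic through the Portkey AI Gateway as usual.
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
)

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain LLM tracing in one sentence."}],
)
print(response.choices[0].message.content)
# The request, response, latency, and token usage now appear as a trace in Phoenix.
```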
Learn how to use Portkey’s Universal API to orchestrate multiple LLMs in a structured debate while tracking performance and evaluating outputs with Arize.
While Arize Phoenix provides observability, Portkey delivers a complete AI infrastructure platform. Here’s everything you get with Portkey:
Access OpenAI, Anthropic, Google, Cohere, Mistral, Llama, and 1600+ models through a single unified API. No more managing different SDKs or endpoints.
Use the same code to call any LLM provider. Switch between models and providers without changing your application code.
Secure vault for API keys with budget limits, rate limiting, and access controls. Never expose raw API keys in your code.
Define routing strategies, model parameters, and reliability settings in reusable configurations. Version control your AI infrastructure.
Automatically switch to backup providers when the primary fails. Define fallback chains across multiple providers.
Distribute requests across multiple API keys or providers based on custom weights and strategies.
Configurable retry logic with exponential backoff for transient failures and rate limits.
Set custom timeouts to prevent hanging requests and improve application responsiveness.
Route requests to different models based on content, metadata, or custom conditions.
Gradually roll out new models or providers with percentage-based traffic splitting.
Intelligent caching that understands semantic similarity. Reduce costs by up to 90% on repeated queries.
Set spending limits per API key, team, or project. Get alerts before hitting limits.
Real-time cost tracking across all providers with detailed breakdowns by model, user, and feature.
Track 40+ metrics, including latency, tokens, costs, cache hits, and error rates, in real time.
Full request/response logging with advanced filtering, search, and export capabilities.
Trace requests across your entire AI pipeline with correlation IDs and custom metadata.
Set up alerts on any metric with webhook, email, or Slack notifications.
Automatically detect and redact sensitive information like SSN, credit cards, and personal data.
Block harmful, toxic, or inappropriate content in real-time based on custom policies.
Fine-grained RBAC with team management, user permissions, and audit logs.
Enterprise-grade security with SOC 2 Type II certification and GDPR compliance.
Complete audit trail of all API usage, configuration changes, and user actions.
Zero data retention options and deployment in your own VPC for maximum privacy.
SAML 2.0 support for Okta, Azure AD, Google Workspace, and custom IdPs.
Multi-workspace support with hierarchical teams and department-level controls.
99.9% uptime SLA with dedicated support and custom deployment options.
Deploy Portkey in your own AWS, Azure, or GCP environment with full control.
Discover all AI Gateway capabilities beyond observability
Secure your API keys and set budgets
Configure fallbacks, load balancing, and more
Use Portkey’s native observability features
Need help? Join our Discord community