Learn how to integrate Portkey’s enterprise features with any OpenAI-compliant project for enhanced observability, reliability, and governance.
Portkey enhances any OpenAI API compliant project by adding enterprise-grade features like observability, reliability, rate limiting, access control, and budget management—all without requiring code changes.
It is a drop-in replacement for your existing OpenAI-compatible applications. This guide explains how to integrate Portkey with minimal changes to your project settings.
While OpenAI (or any other provider) gives you an API for AI model access, commercial usage often requires additional features like observability, reliability, rate limiting, access control, and budget management.
Portkey allows you to use 1600+ LLMs in your OpenAI-compatible project, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.
Create an Integration
Navigate to the Integrations section on Portkey’s Sidebar. This is where you’ll connect your LLM providers.
In the next step you’ll see workspace provisioning options. Select the default “Shared Team Workspace” if this is your first time, or choose your current workspace.
Configure Models
On the model provisioning page:
Click Create Integration to complete the setup.
Copy the Provider Slug
Once your Integration is created, copy the provider slug shown for your models. This is your unique identifier; you’ll need it for the next step. The slug follows the format @your-provider-slug/your-model-name, for example @openai-dev/gpt-4o.
We recommend clicking the Run Test Request button on this step to verify your integration; you should see a simple chat request output. If you see the error "You do not have enough permissions to execute this request", you’ll need to create a User API Key for this step to work properly. You can create one here.
Create Default Config
Portkey’s config is a JSON object used to define routing rules for requests to your gateway. You can create these configs in the Portkey app and reference them in requests via the config ID. For this setup, we’ll create a simple config using your provider (OpenAI) and model (gpt-4o).
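For example, a minimal config for this setup might look like the sketch below; the @openai-dev slug is the placeholder from the earlier example, so substitute your own provider slug:

```json
{
  "override_params": {
    "model": "@openai-dev/gpt-4o"
  }
}
```

Save it on the Configs page and note the config ID it generates; you’ll reference that ID in requests.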
Configure Portkey API Key
Finally, create a Portkey API key:
Save your API key securely - you’ll need it for the integration.
🎉 Voila, Setup complete! You now have everything needed to integrate Portkey with your application.
You can integrate Portkey with any OpenAI API-compatible project through a simple configuration change. This integration enables advanced monitoring, security features, and analytics for your LLM applications. Here’s how you do it:
Locate LLM Settings Navigate to your project’s LLM settings page and find the OpenAI configuration section (usually labeled ‘OpenAI-Compatible’ or ‘Generic OpenAI’).
Configure Base URL Set the base URL to Portkey’s OpenAI-compatible endpoint: https://api.portkey.ai/v1
Add API Key Enter your Portkey API key in the appropriate field. You can generate this key from your Portkey dashboard under the API Keys section.
Configure Model Settings If your integration allows direct model configuration, you can specify it in the LLM settings. Otherwise, create a configuration object:
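As a sketch, here is what the wiring looks like with the OpenAI Python SDK; the config ID pc-xxxx is a placeholder assumption, and the model slug comes from the earlier step:

```python
# Point any OpenAI-compatible client at Portkey's gateway.
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",              # your Portkey API key
    base_url="https://api.portkey.ai/v1",   # Portkey's OpenAI-compatible endpoint
    # Optional: reference a saved config by its ID instead of configuring inline
    default_headers={"x-portkey-config": "pc-xxxx"},
)

response = client.chat.completions.create(
    model="@openai-dev/gpt-4o",  # @provider-slug/model-name from the earlier step
    messages=[{"role": "user", "content": "Hello from Portkey!"}],
)
print(response.choices[0].message.content)
```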
Why Enterprise Governance? If you are using your project inside your organization, you need to consider several governance aspects:
Portkey adds a comprehensive governance layer to address these enterprise needs.
Enterprise Implementation Guide
Step 1: Implement Budget Controls & Rate Limits
Model Catalog gives you granular control over LLM access at the team/department level. This helps you set budget limits, prevent unexpected usage spikes with rate limits, and track spending by department.
Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. You can manage AI models across your org by provisioning models at the top integration level.
Step 3: Set Routing Configuration
Portkey allows you to control your routing logic very simply with its Configs feature. Portkey Configs provide this control layer with features like fallbacks, load balancing, conditional routing, retries, and caching.
Here’s a basic configuration to load-balance requests to OpenAI and Anthropic:
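A sketch, assuming provider slugs @openai-dev and @anthropic-dev from your Model Catalog (the model names and weights are placeholders):

```json
{
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "override_params": { "model": "@openai-dev/gpt-4o" },
      "weight": 0.7
    },
    {
      "override_params": { "model": "@anthropic-dev/claude-3-5-sonnet-latest" },
      "weight": 0.3
    }
  ]
}
```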
Create your config on the Configs page in your Portkey dashboard. You’ll need the config ID for connecting it to your project’s setup.
Configs can be updated anytime to adjust controls without affecting running applications.
Step 4: Implement Access Controls
Create user-specific API keys that automatically track usage per user or team through metadata, apply the appropriate default config for routing, and enforce access permissions and budget limits.
Create API keys through the Portkey dashboard or programmatically.
Example using Python SDK:
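A sketch using the Portkey Python SDK’s API key endpoints; the workspace ID, config ID, scopes, and metadata fields below are illustrative assumptions:

```python
from portkey_ai import Portkey

# Use an admin-scoped Portkey API key to provision keys for teams.
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

api_key = portkey.api_keys.create(
    name="engineering-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        "config_id": "your-config-id",  # routes this key's requests through your config
        "metadata": {
            "environment": "production",
            "department": "engineering",
        },
    },
    scopes=["logs.export", "logs.view", "configs.read"],
)
print(api_key.key)
```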
For detailed key management instructions, see our API Keys documentation.
Step 5: Deploy & Monitor
After distributing API keys to your engineering teams, your enterprise-ready setup is complete. Each developer can now use their designated API key with the appropriate access level and budget controls. Apply your governance setup using the integration steps from earlier sections, then monitor usage in the Portkey dashboard.
Your project now has budget controls and rate limits, model access rules, routing configuration, and user-level access controls in place.
Now that you have set up your enterprise-grade project environment, let’s explore the comprehensive features Portkey provides to ensure secure, efficient, and cost-effective AI operations.
Using Portkey you can track 40+ key metrics including cost, token usage, response time, and performance across all your LLM providers in real time. You can also filter these metrics based on custom metadata that you can set in your configs. Learn more about metadata here.
Portkey’s logging dashboard provides detailed logs for every request made to your LLMs, including the full request and response, cost and token usage, latency, and any metadata tags you’ve attached.
You can easily switch between 250+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the virtual key in your default config object.
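For instance, switching your default config from OpenAI to Anthropic can be a one-field change; the virtual key name below is a placeholder assumption:

```json
{
  "virtual_key": "anthropic-virtual-key"
}
```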
Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across departments and teams.
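A sketch of attaching metadata via the portkey_ai helper for OpenAI-compatible clients; apart from the special _user key, the tag names and config ID are illustrative assumptions:

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="UNUSED",  # provider auth is handled by your Portkey config
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        config="pc-xxxx",  # placeholder config ID
        metadata={"_user": "alice@example.com", "department": "engineering"},
    ),
)

response = client.chat.completions.create(
    model="@openai-dev/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```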
Set and manage spending limits across teams and departments. Control costs with granular budget limits and usage tracking.
Enterprise-grade SSO integration with support for SAML 2.0, Okta, Azure AD, and custom providers for secure authentication.
Hierarchical organization structure with workspaces, teams, and role-based access control for enterprise-scale deployments.
Comprehensive access control rules and detailed audit logging for security compliance and usage tracking.
Automatically switch to backup targets if the primary target fails.
Route requests to different targets based on specified conditions.
Distribute requests across multiple targets based on defined weights.
Enable caching of responses to improve performance and reduce costs.
Automatic retry handling with exponential backoff for failed requests.
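These reliability features are all expressed in the same config object. A sketch combining retries, caching, and a fallback (the model slugs are placeholders):

```json
{
  "retry": { "attempts": 3 },
  "cache": { "mode": "simple" },
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-dev/gpt-4o" } },
    { "override_params": { "model": "@anthropic-dev/claude-3-5-sonnet-latest" } }
  ]
}
```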
Protect your project’s data and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
Implement real-time protection for your LLM interactions with automatic detection and filtering of sensitive content, PII, and custom security rules. Enable comprehensive data protection while maintaining compliance with organizational policies.
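A sketch of referencing guardrails from a config, assuming you’ve already created guardrails in the Portkey app (the IDs below are placeholders):

```json
{
  "input_guardrails": ["pii-check-xxxx"],
  "output_guardrails": ["content-filter-yyyy"],
  "override_params": { "model": "@openai-dev/gpt-4o" }
}
```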
How do I update my Virtual Key limits after creation?
You can update your Virtual Key limits at any time from the Portkey dashboard:
1. Go to the Virtual Keys section
2. Click on the Virtual Key you want to modify
3. Update the budget or rate limits
4. Save your changes
Can I use multiple LLM providers with the same API key?
Yes! You can create multiple Virtual Keys (one for each provider) and attach them to a single config. This config can then be connected to your API key, allowing you to use multiple providers through a single API key.
How do I track costs for different teams?
Portkey provides several ways to track team costs: issue each team its own API key or Virtual Key, tag requests with team metadata, and filter the analytics dashboard by those attributes.
What happens if a team exceeds their budget limit?
When a team reaches their budget limit, further requests through their key are blocked until an admin raises the limit from the dashboard.
Join our Community
For enterprise support and custom features, contact our enterprise team.