The OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities. Portkey enhances OpenAI Agents with observability, reliability, and production-readiness features.
Portkey turns your experimental OpenAI Agents into production-ready systems by providing:
Complete observability of every agent step, tool use, and interaction
Built-in reliability with fallbacks, retries, and load balancing
Cost tracking and optimization to manage your AI spend
Access to 1600+ LLMs through a single integration
Guardrails to keep agent behavior safe and compliant
Version-controlled prompts for consistent agent performance
For a simple setup, we’ll use the global client approach:
```typescript
import { setDefaultOpenAIClient, setOpenAIAPI, setTracingDisabled } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

// Set up Portkey as the global client
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    virtualKey: "YOUR_OPENAI_VIRTUAL_KEY"
  })
});

// Register as the SDK-wide default
setDefaultOpenAIClient(portkey);
setOpenAIAPI('chat_completions'); // Responses API → Chat
setTracingDisabled(true);         // Optional: disable OpenAI's tracing
```
What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
Let’s create a simple question-answering agent with OpenAI Agents SDK and Portkey. This agent will respond directly to user messages using a language model:
```typescript
import { Agent, run, setDefaultOpenAIClient, setOpenAIAPI, setTracingDisabled } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

// Set up Portkey as the global client
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    virtualKey: "YOUR_OPENAI_VIRTUAL_KEY"
  })
});

// Register as the SDK-wide default
setDefaultOpenAIClient(portkey);
setOpenAIAPI('chat_completions'); // Responses API → Chat
setTracingDisabled(true);         // Optional: disable OpenAI's tracing

// Create agent with any supported model
const agent = new Agent({
  name: "Assistant",
  instructions: "You are a helpful assistant.",
  model: "gpt-4o"
});

// Run the agent
const result = await run(agent, "Tell me about quantum computing.");
console.log(result.finalOutput);
```
In this example:
We set up Portkey as the global client for OpenAI Agents SDK
We create a simple agent with instructions and a model
We run the agent with a user query
We print the final output
Visit your Portkey dashboard to see detailed logs of this agent’s execution!
Research Agent with Tools: Here’s a more comprehensive agent that can use tools to perform tasks.
```typescript
import { Agent, run, tool, setDefaultOpenAIClient } from '@openai/agents';
import { OpenAI } from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';
import { z } from 'zod';

// Configure Portkey client
const portkey = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY!,
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    virtualKey: "YOUR_OPENAI_VIRTUAL_KEY"
  })
});
setDefaultOpenAIClient(portkey);

// Define agent tools using the tool() helper
const getWeatherTool = tool({
  name: 'get_weather',
  description: 'Get the current weather for a given city',
  parameters: z.object({
    city: z.string(),
    unit: z.enum(['celsius', 'fahrenheit']).nullable().optional()
  }),
  async execute({ city, unit = 'fahrenheit' }) {
    return `The weather in ${city} is 72°${unit === 'celsius' ? 'C' : 'F'} and sunny.`;
  }
});

const searchWebTool = tool({
  name: 'search_web',
  description: 'Search the web for information',
  parameters: z.object({
    query: z.string()
  }),
  async execute({ query }) {
    return `Found information about: ${query}`;
  }
});

// Create agent with tools
const agent = new Agent({
  name: "Research Assistant",
  instructions: "You are a helpful assistant that can search for information and check the weather.",
  model: "gpt-4o",
  tools: [getWeatherTool, searchWebTool]
});

// Run the agent
const result = await run(
  agent,
  "What's the weather in San Francisco and find information about Golden Gate Bridge?"
);
console.log(result.finalOutput);
```
Visit your Portkey dashboard to see the complete execution flow visualized!
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
It’s this simple to enable fallbacks in your OpenAI Agents:
```typescript
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Create a config with fallbacks. It's recommended that you create
// the config in the Portkey App rather than hard-coding the JSON here.
const config = {
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider": "openai",
      "override_params": { "model": "gpt-4o" }
    },
    {
      "provider": "anthropic",
      "override_params": { "model": "claude-3-opus-20240229" }
    }
  ]
};

// Configure Portkey client with fallback config
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({ config: config })
});
setDefaultOpenAIClient(portkey);
```
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
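Retries use the same config mechanism. A minimal sketch (the `retry` block follows Portkey's config schema; the virtual key is a placeholder):

```typescript
// Sketch: retry a failed request up to 3 times before surfacing the error.
const retryConfig = {
  retry: { attempts: 3 },
  virtual_key: "YOUR_OPENAI_VIRTUAL_KEY" // placeholder
};

// Pass it exactly like the fallback config above:
// defaultHeaders: createHeaders({ config: retryConfig })
```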
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your OpenAI Agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Manage prompts in Portkey's Prompt Library
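As a sketch of what that looks like in code (the prompt ID and `topic` variable are placeholders; check the prompt render API reference for the exact response shape):

```typescript
import Portkey from 'portkey-ai';
import { Agent, run } from '@openai/agents';

const portkeyClient = new Portkey({ apiKey: process.env.PORTKEY_API_KEY! });

// Fetch the versioned prompt, resolved with variables
const rendered = await portkeyClient.prompts.render({
  promptID: 'YOUR_PROMPT_ID', // hypothetical prompt ID
  variables: { topic: 'quantum computing' }
});

// Use the rendered system prompt as the agent's instructions
const agent = new Agent({
  name: 'Assistant',
  instructions: String(rendered.data.messages[0].content),
  model: 'gpt-4o'
});

const result = await run(agent, 'Give me a short overview.');
console.log(result.finalOutput);
```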
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
Iteratively develop prompts before using them in your agents
Test prompts with different variables and models
Compare outputs between different prompt versions
Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your OpenAI Agents workflow.
Guardrails ensure your OpenAI Agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
OpenAI Agents can experience various failure modes:
Generating harmful or inappropriate content
Leaking sensitive information like PII
Hallucinating incorrect information
Generating outputs in incorrect formats
Portkey’s guardrails protect against these issues by validating both inputs and outputs.
Implementing Guardrails
```typescript
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Create a config with input and output guardrails. It's recommended
// that you create the config in the Portkey App and pass its ID here.
const config = {
  "virtual_key": "openai-xxx",
  "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
  "output_guardrails": ["guardrails-id-xxx"]
};

// Configure OpenAI client with guardrails
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    config: config,
    virtualKey: "YOUR_OPENAI_VIRTUAL_KEY"
  })
});
setDefaultOpenAIClient(portkey);
```
Track individual users through your OpenAI Agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
```typescript
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Configure client with user tracking
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    virtualKey: "YOUR_LLM_PROVIDER_VIRTUAL_KEY",
    metadata: {
      "_user": "user_123", // Special _user field for user analytics
      "user_name": "John Doe",
      "user_tier": "premium",
      "user_company": "Acme Corp"
    }
  })
});
setDefaultOpenAIClient(portkey);
```
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
Filter analytics by user
This enables:
Per-user cost tracking and budgeting
Personalized user analytics
Team or organization-level metrics
Environment-specific monitoring (staging vs. production)
With Portkey, you can easily switch between different LLMs in your OpenAI Agents without changing your core agent logic.
```typescript
// Configure Portkey with different LLM providers
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient, Agent, run } from '@openai/agents';

// Using OpenAI
const openaiConfig = {
  "provider": "openai",
  "api_key": "YOUR_OPENAI_API_KEY",
  "override_params": {
    "model": "gpt-4o"
  }
};

// Using Anthropic
const anthropicConfig = {
  "provider": "anthropic",
  "api_key": "YOUR_ANTHROPIC_API_KEY",
  "override_params": {
    "model": "claude-3-opus-20240229"
  }
};

// Choose which config to use
const activeConfig = openaiConfig; // or anthropicConfig

// Configure OpenAI client with chosen provider
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({ config: activeConfig })
});
setDefaultOpenAIClient(portkey);

// Create and run agent - no changes needed in agent code
const agent = new Agent({
  name: "Assistant",
  instructions: "You are a helpful assistant.",
  // The model specified here is used as a reference, but the actual
  // model is determined by activeConfig
  model: "gpt-4o"
});

const result = await run(agent, "Tell me about quantum computing.");
console.log(result.finalOutput);
```
Portkey provides access to 1600+ LLMs through a unified interface, including:
OpenAI (GPT-4o, GPT-4 Turbo, etc.)
Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
Running agents at scale also raises enterprise needs such as reliability: ensuring consistent service across all users.
Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Enterprise Implementation Guide
Portkey allows you to use 1600+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.
Step 1: Create Virtual Key
Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like:
Budget limits for API usage
Rate limiting capabilities
Secure API key storage
To create a virtual key:
Go to Virtual Keys in the Portkey App, add your provider's API key, and create a new virtual key. Save the virtual key ID - you'll need it for the next step.
Step 2: Create Default Config
Configs in Portkey are JSON objects that define how your requests are routed. They help with implementing features like advanced routing, fallbacks, and retries.
We need to create a default config to route our requests to the virtual key created in Step 1.
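A minimal default config can be a single routing rule. For example (a sketch; the virtual key ID is the one you saved in Step 1):

```typescript
// Minimal default config: route every request through the Step 1 virtual key.
// Create this in the Portkey App and note the config ID it generates.
const defaultConfig = {
  virtual_key: "YOUR_VIRTUAL_KEY_FROM_STEP_1" // placeholder
};
```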
Step 3: Create Portkey API Key
Create a Portkey API key in the Portkey App and attach your default config from Step 2. Save the API key securely - you'll need it for OpenAI Agents integration.
Step 4: Connect to OpenAI Agents
Once you have created your API key with the default config attached, pass the API key and base URL directly to the OpenAI client. Here's how:
```typescript
import { PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';

const client = new OpenAI({
  apiKey: "YOUR_PORTKEY_API_KEY", // Your Portkey API key from Step 3
  baseURL: PORTKEY_GATEWAY_URL
});

// The rest of your code remains the same
```
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer - for example, by pinning a team's API keys to approved providers and models, as sketched below.
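For instance, a per-team config might pin that team's requests to an approved model (a sketch; IDs and names are placeholders):

```typescript
// Sketch: attach this config to a team's Portkey API key so every request
// from that key uses the approved provider and model.
const engineeringTeamConfig = {
  virtual_key: "ENGINEERING_OPENAI_VIRTUAL_KEY", // placeholder
  override_params: { model: "gpt-4o-mini" }      // approved model for this team
};
```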
After distributing API keys to your team members, your enterprise-ready OpenAI Agents setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Apply your governance setup using the integration steps from earlier sections
Monitor usage in the Portkey dashboard.
Portkey adds production-readiness to OpenAI Agents through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.
Can I use Portkey with existing OpenAI Agents?
Yes! Portkey integrates seamlessly with existing OpenAI Agents. You only need to replace your client initialization code with the Portkey-enabled version. The rest of your agent code remains unchanged.
Does Portkey work with all OpenAI Agents features?
Portkey supports all OpenAI Agents SDK features, including tool use, memory, planning, and more. It adds observability and reliability without limiting any of the SDK’s functionality.
How does Portkey handle streaming in OpenAI Agents?
Portkey fully supports streaming responses in OpenAI Agents. You can enable streaming by using the appropriate methods in the OpenAI Agents SDK, and Portkey will properly track and log the streaming interactions.
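A minimal sketch (assuming the SDK's `{ stream: true }` run option; check your SDK version's streaming docs for the exact API):

```typescript
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: 'gpt-4o'
});

// Request a streamed result and pipe the text to stdout as it arrives
const stream = await run(agent, 'Tell me about quantum computing.', { stream: true });
stream
  .toTextStream({ compatibleWithNodeStreams: true })
  .pipe(process.stdout);

await stream.completed; // wait for the full run to finish
```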
How do I filter logs and traces for specific agent runs?
Portkey allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like agent_name, agent_type, or session_id to easily find and analyze specific agent executions.
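For example (a sketch reusing the metadata header shown earlier; field names like `agent_name` are your own conventions, not reserved keys):

```typescript
import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Every request from this client carries these metadata fields, so you can
// filter logs and traces by agent or session in the Portkey dashboard.
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    virtualKey: "YOUR_OPENAI_VIRTUAL_KEY",
    metadata: {
      agent_name: "research_agent",
      agent_type: "qa",
      session_id: "sess_42"
    }
  })
});
setDefaultOpenAIClient(portkey);
```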
Can I use my own API keys with Portkey?
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.