Learn to integrate OpenAI with Portkey, enabling seamless completions, prompt management, and advanced functionalities like streaming, function calling and fine-tuning.
Portkey has native integrations with the OpenAI SDKs for Node.js and Python, as well as its REST APIs. To integrate OpenAI through other frameworks, explore our partnerships with Langchain, LlamaIndex, and others.
```javascript
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  apiKey: 'xx',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "OPENAI_VIRTUAL_KEY"
  })
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4o',
  });

  console.log(completion.choices);
}

main();
```
This request is automatically logged by Portkey and appears in your logs dashboard. Portkey records the tokens used, execution time, and cost for each request, and you can drill into any entry to review the exact request and response data.
Portkey supports OpenAI’s new “developer” role in chat completions. With o1 models and newer, the developer role replaces the previous system role.
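As a rough sketch, you can migrate legacy `system` messages to the `developer` role before sending a request to an o1-series model. The helper below is hypothetical (not part of the Portkey SDK) and simply rewrites the role at the message level:

```python
# Hypothetical helper: upgrade legacy "system" messages to the
# "developer" role for o1-series models, leaving others unchanged.
def adapt_roles(messages, model):
    if model.startswith("o1"):
        return [
            {**m, "role": "developer"} if m["role"] == "system" else m
            for m in messages
        ]
    return messages

messages = [
    {"role": "system", "content": "You are a terse assistant."},
    {"role": "user", "content": "Say this is a test"},
]

# For an o1 model, the system message is sent as a developer message.
adapted = adapt_roles(messages, "o1-mini")
```

The adapted list can then be passed as `messages` in a normal `chat.completions.create` call.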
OpenAI has released a new Responses API that combines the best of both Chat Completions and Assistants APIs. Portkey fully supports this new API, enabling you to use it with both the Portkey SDK and OpenAI SDK.
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="OPENAI_VIRTUAL_KEY"
)

response = portkey.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)
```
```javascript
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "OPENAI_VIRTUAL_KEY"
});

async function main() {
  const response = await portkey.responses.create({
    model: "gpt-4.1",
    input: "Tell me a three sentence bedtime story about a unicorn."
  });
  console.log(response);
}

main();
```
```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY",
        virtual_key="OPENAI_VIRTUAL_KEY"
    )
)

response = client.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)
```
```javascript
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "OPENAI_VIRTUAL_KEY"
  })
});

async function main() {
  const response = await openai.responses.create({
    model: "gpt-4.1",
    input: "Tell me a three sentence bedtime story about a unicorn."
  });
  console.log(response);
}

main();
```
The Responses API provides a more flexible foundation for building agentic applications with built-in tools that execute automatically.
Portkey allows you to track user IDs passed with the user parameter in OpenAI requests, enabling you to monitor user-level costs, requests, and more.
```javascript
const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
  user: "user_12345",
});
```
```python
response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
    user="user_12345"
)
```
When you include the user parameter in your requests, Portkey logs will display the associated user ID, as shown in the image below:
In addition to the user parameter, Portkey allows you to send arbitrary custom metadata with your requests. This powerful feature enables you to associate additional context or information with each request, which can be useful for analysis, debugging, or other custom use cases.
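As a rough sketch, such metadata travels as a JSON-encoded header sent alongside the request. The header name `x-portkey-metadata` and the keys below are assumptions to verify against your Portkey SDK version:

```python
import json

# Hypothetical helper: serialize custom metadata into the
# x-portkey-metadata request header (assumed header name).
def metadata_headers(metadata: dict) -> dict:
    return {"x-portkey-metadata": json.dumps(metadata)}

headers = metadata_headers({
    "_user": "user_12345",       # ties the request to a user for analytics
    "environment": "staging",    # arbitrary custom key
    "session_id": "session-abc", # arbitrary custom key
})
```

These headers can then be merged into the `default_headers` you already pass when constructing the client.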
Portkey also supports creating and managing prompt templates in the prompt library. This enables the collaborative development of prompts directly through the user interface.
Create a prompt template with variables and set the hyperparameters.
Use this prompt in your codebase using the Portkey SDK.
```javascript
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
})

// Make the prompt completion call with the variables
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  }
})
```

```javascript
// We can also override the hyperparameters
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  },
  max_tokens: 250,
  presence_penalty: 0.2
})
```
```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",  # defaults to os.environ.get("PORTKEY_API_KEY")
)

prompt_completion = client.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    }
)

print(prompt_completion)

# We can also override the hyperparameters
prompt_completion = client.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    },
    max_tokens=250,
    presence_penalty=0.2
)

print(prompt_completion)
```
```sh
curl -X POST "https://api.portkey.ai/v1/prompts/:PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {},
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
```

Pass the variables specified in the prompt inside `variables`; `max_tokens` and `presence_penalty` are optional overrides.
Note how this improves code readability and lets you update prompts from the UI without changing your codebase.
Portkey supports OpenAI’s Realtime API with a seamless integration. This allows you to use Portkey’s logging, cost tracking, and guardrail features while using the Realtime API.
You can also stream responses from the Responses API:
```python
response = portkey.responses.create(
    model="gpt-4.1",
    instructions="You are a helpful assistant.",
    input="Hello!",
    stream=True
)

for event in response:
    print(event)
```
```javascript
const response = await portkey.responses.create({
  model: "gpt-4.1",
  instructions: "You are a helpful assistant.",
  input: "Hello!",
  stream: true
});

for await (const event of response) {
  console.log(event);
}
```
```python
response = client.responses.create(
    model="gpt-4.1",
    instructions="You are a helpful assistant.",
    input="Hello!",
    stream=True
)

for event in response:
    print(event)
```
```javascript
const response = await openai.responses.create({
  model: "gpt-4.1",
  instructions: "You are a helpful assistant.",
  input: "Hello!",
  stream: true
});

for await (const event of response) {
  console.log(event);
}
```
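When consuming a Responses API stream, the generated text arrives as incremental delta events that you concatenate on your side. The sketch below uses simulated events; the event type name `response.output_text.delta` and the `delta` field are assumptions about the event shape that you should verify against the actual stream:

```python
# Simulated stream events mimicking Responses API streaming output
# (event type and field names are assumptions, not live SDK objects).
events = [
    {"type": "response.output_text.delta", "delta": "Once upon "},
    {"type": "response.output_text.delta", "delta": "a time..."},
    {"type": "response.completed"},
]

# Accumulate only the text deltas into the final output string.
text = "".join(
    e["delta"] for e in events if e["type"] == "response.output_text.delta"
)
print(text)
```

In a real application, the same accumulation runs inside the `for event in response` loop shown above.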
Function calling works the same way through your OpenAI or Portkey SDK operations. The resulting logs appear in Portkey, highlighting the functions invoked and their outputs.
Additionally, you can define functions within your prompts and invoke the portkey.prompts.completions.create method as above.
The Responses API also supports function calling with the same powerful capabilities:
```python
tools = [
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
]

response = portkey.responses.create(
    model="gpt-4.1",
    tools=tools,
    input="What is the weather like in Boston today?",
    tool_choice="auto"
)

print(response)
```
```javascript
const tools = [
  {
    type: "function",
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA"
        },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      },
      required: ["location", "unit"]
    }
  }
];

const response = await portkey.responses.create({
  model: "gpt-4.1",
  tools: tools,
  input: "What is the weather like in Boston today?",
  tool_choice: "auto"
});

console.log(response);
```
```python
tools = [
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
]

response = client.responses.create(
    model="gpt-4.1",
    tools=tools,
    input="What is the weather like in Boston today?",
    tool_choice="auto"
)

print(response)
```
```javascript
const tools = [
  {
    type: "function",
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA"
        },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      },
      required: ["location", "unit"]
    }
  }
];

const response = await openai.responses.create({
  model: "gpt-4.1",
  tools: tools,
  input: "What is the weather like in Boston today?",
  tool_choice: "auto"
});

console.log(response);
```
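Once the model emits a function call, your code parses the arguments and runs the matching local function. The sketch below uses a stubbed weather lookup and a hand-written output item; the item shape with `type`, `name`, `call_id`, and a JSON-string `arguments` field is an assumption to check against the actual response object:

```python
import json

def get_current_weather(location: str, unit: str) -> dict:
    # Stub implementation; replace with a real weather lookup.
    return {"location": location, "temperature": 22, "unit": unit}

# Example output item mimicking a function call emitted by the model
# (hand-written here, not taken from a live response).
output_item = {
    "type": "function_call",
    "name": "get_current_weather",
    "call_id": "call_123",
    "arguments": '{"location": "Boston, MA", "unit": "celsius"}',
}

if output_item["type"] == "function_call":
    args = json.loads(output_item["arguments"])
    result = get_current_weather(**args)
    # Send `result` back to the model in a follow-up request,
    # referencing output_item["call_id"].
```

The follow-up request then lets the model compose its final answer from the function's result.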
Portkey supports multiple modalities for OpenAI; you can make image generation requests through Portkey's AI Gateway the same way you make completion calls.
```javascript
// Define the OpenAI client as shown above

const image = await openai.images.generate({
  model: "dall-e-3",
  prompt: "Lucy in the sky with diamonds",
  size: "1024x1024"
})
```
```python
# Define the OpenAI client as shown above

image = openai.images.generate(
    model="dall-e-3",
    prompt="Lucy in the sky with diamonds",
    size="1024x1024"
)
```
Portkey's fast AI gateway captures information about each request on your Portkey dashboard. On the logs screen, you can see this request along with its full request and response data.
Log view for an image generation request on OpenAI
More information on image generation is available in the API Reference.
File search enables quick retrieval from your knowledge base across multiple file types:
```python
response = portkey.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_1234567890"],
        "max_num_results": 20,
        "filters": {  # Optional - filter by metadata
            "type": "eq",
            "key": "document_type",
            "value": "report"
        }
    }],
    input="What are the attributes of an ancient brown dragon?"
)

print(response)
```
```javascript
const response = await portkey.responses.create({
  model: "gpt-4.1",
  tools: [{
    type: "file_search",
    vector_store_ids: ["vs_1234567890"],
    max_num_results: 20,
    filters: { // Optional - filter by metadata
      type: "eq",
      key: "document_type",
      value: "report"
    }
  }],
  input: "What are the attributes of an ancient brown dragon?"
});

console.log(response);
```
This tool requires you to first create a vector store and upload files to it. It supports various file formats, including PDF, DOCX, and TXT, and results include file citations in the response.
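To surface those citations, you walk the response's output items and collect the annotations attached to the text. The nested `output` / `content` / `annotations` shape below is an assumption (shown here on a hand-written payload) that you should verify against the actual response object in your SDK version:

```python
# Hand-written payload mimicking a Responses API result with a
# file_search citation (shape is an assumption, not a live response).
response = {
    "output": [{
        "type": "message",
        "content": [{
            "type": "output_text",
            "text": "Ancient brown dragons are burrowers...",
            "annotations": [
                {"type": "file_citation", "file_id": "file-abc",
                 "filename": "monsters.pdf"},
            ],
        }],
    }]
}

# Collect every file citation across all text parts of the output.
citations = [
    ann
    for item in response["output"] if item["type"] == "message"
    for part in item["content"] if part["type"] == "output_text"
    for ann in part.get("annotations", [])
    if ann["type"] == "file_citation"
]
```

Each citation carries the file it came from, which you can render alongside the answer.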
Portkey also supports the Computer Use Assistant (CUA) tool, which helps agents control computers or virtual machines through screenshots and actions. This feature is available for select developers as a research preview on premium tiers.
Managing OpenAI Projects & Organizations in Portkey
When integrating OpenAI with Portkey, you can specify your OpenAI organization and project IDs along with your API key. This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key.
Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs.
In Portkey, you can add your organization and project details while creating a virtual key: when you select OpenAI from the provider dropdown, Portkey automatically displays optional fields for the organization ID and project ID alongside the API key field.
Portkey takes budget management a step further than OpenAI. While OpenAI allows setting budget limits per project, Portkey enables you to set budget limits for each virtual key you create. For more information on budget limits, refer to this documentation: