Azure OpenAI lets you access models such as GPT-4 within your own private environment. Portkey provides complete support for Azure OpenAI.
With Portkey, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through a virtual key system.
Create a resource in the Azure portal here. (This will be your Resource Name)
Deploy a model in Azure OpenAI Studio here. (This will be your Deployment Name)
Select your Foundation Model from the dropdown in the modal.
Now, in Azure OpenAI Studio, open any playground (chat or completions) and click “View code”. Note down the API version and API key shown there. (These will be your Azure API Version & Azure API Key)
When you input these details, the foundation model will be auto-populated. More details in this guide.
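To see how the values you just collected relate, here is a minimal sketch of the standard Azure OpenAI endpoint URL they combine into. Portkey assembles this for you once the virtual key is configured; the resource, deployment, and version strings below are placeholders.

```python
def azure_openai_url(resource_name: str, deployment_name: str, api_version: str) -> str:
    """Build the standard Azure OpenAI chat-completions endpoint URL."""
    return (
        f"https://{resource_name}.openai.azure.com/openai/deployments/"
        f"{deployment_name}/chat/completions?api-version={api_version}"
    )

# Placeholder values for illustration only
print(azure_openai_url("my-resource", "my-gpt4-deployment", "2024-02-01"))
```

This is only to clarify where each value fits; you never need to construct this URL yourself when routing through Portkey.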
If you do not want to add your Azure details to Portkey, you can also directly pass them while instantiating the Portkey client. More on that here.
Set up Portkey with your virtual key as part of the initialization configuration. You can create a virtual key for Azure in the Portkey UI.
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
  virtualKey: "AZURE_VIRTUAL_KEY" // Your Azure Virtual Key
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="AZURE_VIRTUAL_KEY"  # Replace with your virtual key for Azure
)
```
Use the Portkey instance to send requests to your Azure deployments. You can also override the virtual key directly in the API call if needed.
```js
const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt4', // This would be your deployment or model name
});

console.log(chatCompletion.choices);
```
```python
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="custom_model_name"  # This would be your deployment or model name
)

print(completion.choices)
```
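If you need to override the virtual key for a specific request rather than at client creation, a sketch using the Python SDK might look like the following (assuming the SDK's `with_options` helper; the alternate virtual key name is a placeholder):

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_VIRTUAL_KEY"
)

# Override the client's virtual key for this one call
completion = portkey.with_options(
    virtual_key="ANOTHER_AZURE_VIRTUAL_KEY"  # placeholder for a different Azure virtual key
).chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="custom_model_name"
)

print(completion.choices)
```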
You can manage all prompts to Azure OpenAI in the Prompt Library. All current OpenAI models are supported, and you can easily start testing different prompts.
Once you’re ready with your prompt, you can use the portkey.prompts.completions.create interface to use the prompt in your application.
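A minimal sketch of that interface is shown below. It assumes a prompt already saved in the Prompt Library; the prompt ID and variable names are placeholders for your own.

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

# "YOUR_PROMPT_ID" and the variable name are placeholders for the prompt
# you created in the Prompt Library and its template variables.
completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"user_input": "Say this is a test"}
)

print(completion)
```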
Portkey supports multiple modalities for Azure OpenAI and you can make image generation requests through Portkey’s AI Gateway the same way as making completion calls.
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "DALL-E_VIRTUAL_KEY" // Referencing a Dall-E Azure deployment with Virtual Key
})

const image = await portkey.images.generate({
  prompt: "Lucy in the sky with diamonds",
  size: "1024x1024"
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="DALL-E_VIRTUAL_KEY"  # Referencing a Dall-E Azure deployment with Virtual Key
)

image = portkey.images.generate(
    prompt="Lucy in the sky with diamonds",
    size="1024x1024"
)
```
Portkey’s fast AI gateway captures information about each request on your Portkey Dashboard. On the logs screen, you can see this request along with its full request and response payloads.
Log view for an image generation request on Azure OpenAI
More information on image generation is available in the API Reference.
If you have configured fine-grained access for Azure OpenAI and need to use JSON web token (JWT) in the Authorization header instead of the regular API Key, you can use the forwardHeaders parameter to do this.
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  provider: "azure-openai",
  azureResourceName: "AZURE_RESOURCE_NAME",
  azureDeploymentId: "AZURE_DEPLOYMENT_NAME",
  azureApiVersion: "AZURE_API_VERSION",
  azureModelName: "AZURE_MODEL_NAME",
  Authorization: "Bearer JWT_KEY", // Pass your JWT here
  forwardHeaders: ["Authorization"]
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="azure-openai",
    azure_resource_name="AZURE_RESOURCE_NAME",
    azure_deployment_id="AZURE_DEPLOYMENT_NAME",
    azure_api_version="AZURE_API_VERSION",
    azure_model_name="AZURE_MODEL_NAME",
    Authorization="Bearer JWT_KEY",  # Pass your JWT here
    forward_headers=["Authorization"]
)
```
For further questions on custom Azure deployments or fine-grained access tokens, reach out to us at support@portkey.ai.