Learn how to integrate Azure AI Foundry with Portkey to access a wide range of AI models with enhanced observability and reliability features.
Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Portkey, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.
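As a sketch of one such feature, Portkey's gateway configs can declare automatic fallbacks between providers. The snippet below is illustrative only — `azure-foundry-key` and `openai-backup-key` are placeholder virtual key names, not values from this guide:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "azure-foundry-key" },
    { "virtual_key": "openai-backup-key" }
  ]
}
```

With a config like this attached to a request, Portkey tries the first target and falls back to the second on failure.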
To integrate Azure AI Foundry with Portkey, you’ll need to create a virtual key. Virtual keys securely store your Azure AI Foundry credentials in Portkey’s vault, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.
Navigate to the Virtual Keys section in Portkey and select “Azure AI Foundry” as your provider.
You can create a virtual key for Azure AI Foundry using one of three authentication methods. Each method requires different information from your Azure deployment:
The recommended authentication method, which uses API keys:
Required parameters:
API Key: Your Azure AI Foundry API key
Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
For AI Services: https://your-resource-name.services.ai.azure.com/models
For Managed: https://your-model-name.region.inference.ml.azure.com/score
For Serverless: https://your-model-name.region.models.ai.azure.com
Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required if your deployment URL includes an api-version query parameter. For example:
If your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview
Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
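If you are unsure how to split a full endpoint URL into the base URL and API version that the virtual key form asks for, the following standalone sketch (not part of the Portkey SDK) shows one way to do it with the standard `URL` class:

```javascript
// Split a full Azure AI Foundry endpoint URL into the base endpoint
// and the api-version value, which Portkey collects as separate fields.
function splitFoundryUrl(deploymentUrl) {
  const url = new URL(deploymentUrl);
  const apiVersion = url.searchParams.get("api-version"); // null if absent
  url.search = ""; // strip the query string, leaving the base endpoint
  return { baseUrl: url.toString(), apiVersion };
}

const { baseUrl, apiVersion } = splitFoundryUrl(
  "https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview"
);
// baseUrl    → "https://mycompany-ai.westus2.services.ai.azure.com/models"
// apiVersion → "2024-05-01-preview"
```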
Once you’ve created your virtual key, you can start making requests to Azure AI Foundry models through Portkey.
Install the Portkey SDK with npm
```sh
npm install portkey-ai
```
```js
import Portkey from 'portkey-ai';

const client = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  provider: '@AZURE_FOUNDRY_PROVIDER'
});

async function main() {
  const response = await client.chat.completions.create({
    messages: [{ role: "user", content: "Tell me about cloud computing" }],
    model: "DeepSeek-V3-0324", // Replace with your deployed model name
  });

  console.log(response.choices[0].message.content);
}

main();
```
Get consistent, parseable responses in specific formats:
```js
const response = await portkey.chat.completions.create({
  model: "cohere-command-a", // Use a model that supports response formats
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "List the top 3 cloud providers with their main services" }
  ],
  response_format: { type: "json_object" },
  temperature: 0
});

console.log(JSON.parse(response.choices[0].message.content));
```
You can manage all prompts to Azure AI Foundry in the Prompt Library. Once you’ve created and tested a prompt in the library, use the portkey.prompts.completions.create interface to use the prompt in your application.
```js
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  }
})
```