Learn how to integrate Azure AI Foundry with Portkey to access a wide range of AI models with enhanced observability and reliability features.
Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Portkey, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.
Understanding Azure AI Foundry Deployments
Azure AI Foundry offers three different ways to deploy models, each with unique endpoints and configurations:
- AI Services: Azure-managed models accessed through Azure AI Services endpoints
- Managed: User-managed deployments running on dedicated Azure compute resources
- Serverless: Seamless, scalable deployment without managing infrastructure
You can learn more about Azure AI Foundry deployments here.
Azure OpenAI
If you’re specifically looking to use OpenAI models on Azure, you might want to use Azure OpenAI instead, which is optimized for OpenAI models.
Integrate
To integrate Azure AI Foundry with Portkey, you’ll need to create a virtual key. Virtual keys securely store your Azure AI Foundry credentials in Portkey’s vault, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.
Navigate to the Virtual Keys section in Portkey and select “Azure AI Foundry” as your provider.
Creating Your Virtual Key
You can create a virtual key for Azure AI Foundry using one of three authentication methods. Each method requires different information from your Azure deployment:
The recommended authentication mode, using API keys:
Required parameters:
- API Key: Your Azure AI Foundry API key
- Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
  - AI Services: https://your-resource-name.services.ai.azure.com/models
  - Managed: https://your-model-name.region.inference.ml.azure.com/score
  - Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: The API version to use (e.g., "2024-05-01-preview"). This is required if your deployment URL includes an api-version parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
- Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
For managed Azure deployments:
Required parameters:
- Azure Managed ClientID: The client ID of your Azure managed identity
- Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
  - AI Services: https://your-resource-name.services.ai.azure.com/models
  - Managed: https://your-model-name.region.inference.ml.azure.com/score
  - Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: The API version to use (e.g., "2024-05-01-preview"). This is required if your deployment URL includes an api-version parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
- Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
To use this authentication method, your Azure application needs the Cognitive Services User role.
Enterprise-level authentication with Azure Entra ID:
Required parameters:
- Azure Entra ClientID: Your Azure Entra client ID
- Azure Entra Secret: Your client secret
- Azure Entra Tenant ID: Your tenant ID
- Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
  - AI Services: https://your-resource-name.services.ai.azure.com/models
  - Managed: https://your-model-name.region.inference.ml.azure.com/score
  - Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: The API version to use (e.g., "2024-05-01-preview"). This is required if your deployment URL includes an api-version parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
- Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments. Common in Managed deployments.
You can learn more about these Azure Entra resources here.
Sample Request
Once you’ve created your virtual key, you can start making requests to Azure AI Foundry models through Portkey.
Install the Portkey SDK with npm (Node.js) or pip (Python), then make your first request.
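Here's a minimal Node.js sketch; the virtual key and model name are placeholders you'd replace with your own values.

```typescript
// Install first: npm install portkey-ai
import Portkey from 'portkey-ai';

// The virtual key created above carries your Azure AI Foundry credentials
const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  virtualKey: 'AZURE_FOUNDRY_VIRTUAL_KEY',
});

const response = await portkey.chat.completions.create({
  model: 'your-deployed-model-name', // placeholder for your deployment's model
  messages: [{ role: 'user', content: 'Hello from Portkey!' }],
});

console.log(response.choices[0].message.content);
```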
Advanced Features
Function Calling
Azure AI Foundry supports function calling (tool calling) for compatible models. Here’s how to implement it with Portkey:
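A minimal sketch using the OpenAI-compatible tools parameter, assuming your deployed model supports tool calling; the get_weather tool and model name are illustrative placeholders:

```typescript
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  virtualKey: 'AZURE_FOUNDRY_VIRTUAL_KEY',
});

const response = await portkey.chat.completions.create({
  model: 'your-deployed-model-name', // placeholder
  messages: [{ role: 'user', content: "What's the weather in Boston?" }],
  // Describe the tool the model may call; get_weather is a hypothetical example
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
});

// If the model chose to call the tool, the call details appear here
console.log(response.choices[0].message.tool_calls);
```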
Vision Capabilities
Process images alongside text using Azure AI Foundry’s vision capabilities:
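A sketch assuming a vision-capable model is deployed on Azure AI Foundry; the model name and image URL are placeholders:

```typescript
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  virtualKey: 'AZURE_FOUNDRY_VIRTUAL_KEY',
});

// Multimodal messages combine text and image parts in one user turn
const response = await portkey.chat.completions.create({
  model: 'your-vision-model-name', // placeholder
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/image.jpg' } },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```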
Structured Outputs
Get consistent, parseable responses in specific formats:
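A sketch using the OpenAI-compatible response_format parameter, assuming your deployed model supports JSON mode; the model name is a placeholder:

```typescript
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  virtualKey: 'AZURE_FOUNDRY_VIRTUAL_KEY',
});

// json_object asks the model to return valid JSON (supported models only)
const response = await portkey.chat.completions.create({
  model: 'your-deployed-model-name', // placeholder
  messages: [
    { role: 'user', content: 'List three planets as a JSON array under the key "planets".' },
  ],
  response_format: { type: 'json_object' },
});

console.log(JSON.parse(response.choices[0].message.content ?? '{}'));
```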
Relationship with Azure OpenAI
For Azure OpenAI-specific models and deployments, we recommend using the existing Azure OpenAI provider in Portkey:
Azure OpenAI Integration
Learn how to integrate Azure OpenAI with Portkey for access to OpenAI models hosted on Azure.
Portkey Features with Azure AI Foundry
Setting Up Fallbacks
Create fallback configurations to ensure reliability when working with Azure AI Foundry models:
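A sketch of a fallback config passed inline to the client; the virtual key names are placeholders, and requests fall through to the next target when the previous one fails:

```typescript
import Portkey from 'portkey-ai';

// Gateway config with a fallback strategy
const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  config: {
    strategy: { mode: 'fallback' },
    targets: [
      { virtual_key: 'azure-foundry-virtual-key' }, // tried first
      { virtual_key: 'openai-virtual-key' },        // used if the first fails
    ],
  },
});

const response = await portkey.chat.completions.create({
  model: 'your-deployed-model-name', // placeholder
  messages: [{ role: 'user', content: 'Hello!' }],
});
```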
Load Balancing Between Models
Distribute requests across multiple models for optimal performance:
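A sketch of a loadbalance config, splitting traffic across two Azure AI Foundry virtual keys by weight; the key names and weights are placeholders:

```typescript
import Portkey from 'portkey-ai';

// Loadbalance strategy: requests are distributed across targets by weight
const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  config: {
    strategy: { mode: 'loadbalance' },
    targets: [
      { virtual_key: 'azure-foundry-deployment-a', weight: 0.7 }, // ~70% of traffic
      { virtual_key: 'azure-foundry-deployment-b', weight: 0.3 }, // ~30% of traffic
    ],
  },
});
```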
Conditional Routing
Route requests based on specific conditions like user type or content requirements:
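A sketch of a conditional routing config, assuming you send a user_tier value in the request metadata; the metadata key, tier values, and virtual keys are placeholders:

```typescript
import Portkey from 'portkey-ai';

// Conditional strategy: each query matches against request metadata and
// routes to the named target; unmatched requests go to the default
const portkey = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  config: {
    strategy: {
      mode: 'conditional',
      conditions: [
        { query: { 'metadata.user_tier': { $eq: 'premium' } }, then: 'large-model' },
        { query: { 'metadata.user_tier': { $eq: 'free' } }, then: 'small-model' },
      ],
      default: 'small-model',
    },
    targets: [
      { name: 'large-model', virtual_key: 'azure-foundry-large' },
      { name: 'small-model', virtual_key: 'azure-foundry-small' },
    ],
  },
});
```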
Managing Prompts with Azure AI Foundry
You can manage all prompts to Azure AI Foundry in the Prompt Library. Once you've created and tested a prompt in the library, use the portkey.prompts.completions.create interface to use the prompt in your application.
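A sketch of calling a saved prompt; the prompt ID and variables are placeholders for values defined in your Prompt Library:

```typescript
import Portkey from 'portkey-ai';

const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY' });

// promptID references a prompt saved in the Prompt Library; variables fill
// the template's placeholders at request time
const promptCompletion = await portkey.prompts.completions.create({
  promptID: 'your-prompt-id',
  variables: { user_input: 'Summarize the quarterly report.' },
});

console.log(promptCompletion);
```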
Next Steps
Explore these additional resources to make the most of your Azure AI Foundry integration with Portkey:
Add Metadata
Learn how to add custom metadata to your Azure AI Foundry requests.
Gateway Configs
Configure advanced gateway features for your Azure AI Foundry requests.
Request Tracing
Trace your Azure AI Foundry requests for better observability.
Setup Fallbacks
Create fallback configurations between different providers.