Use Portkey with CrewAI to take your AI Agents to production
CrewAI is a framework for orchestrating role-playing, autonomous AI agents designed to solve complex, open-ended tasks through collaboration. It provides a robust structure for agents to work together, leverage tools, and exchange insights to accomplish sophisticated objectives.
Portkey enhances CrewAI with production-readiness features, turning your experimental agent crews into robust systems by providing complete observability of every agent step, built-in reliability with fallbacks and retries, access to 1600+ LLMs through one interface, guardrails for safe agent behavior, and versioned prompt management.
Learn more about CrewAI’s core concepts and features
Install the required packages
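Assuming a Python environment, the two packages you need are CrewAI itself and the Portkey SDK:

```bash
pip install crewai portkey-ai
```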
Generate API Key
Create a Portkey API key with optional budget/rate limits from the Portkey dashboard. You can also attach configurations for reliability, caching, and more to this key. More on this later.
Configure CrewAI with Portkey
The integration is simple - you just need to update the LLM configuration in your CrewAI setup:
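A minimal sketch of that configuration, assuming you have a Portkey API key and a connected provider (the provider slug and agent details below are placeholders):

```python
from crewai import Agent, LLM
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Point CrewAI's LLM at Portkey's gateway instead of the provider directly
portkey_llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # the real provider key lives in Portkey; this placeholder satisfies the client
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_OPENAI_PROVIDER",  # provider slug from your Portkey integration
    ),
)

# Use it exactly like any other CrewAI LLM
researcher = Agent(
    role="Senior Researcher",
    goal="Find reliable sources on the assigned topic",
    backstory="A meticulous analyst who verifies every claim",
    llm=portkey_llm,
)
```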
Portkey provides comprehensive observability for your CrewAI agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your crew’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
When running crews in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
It’s simple to enable fallback in your CrewAI setup by using a Portkey Config:
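For example, a config like the following (with placeholder provider slugs) falls back from GPT-4o on OpenAI to Claude on Anthropic:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    {
      "provider": "@openai-provider",
      "override_params": { "model": "gpt-4o" }
    },
    {
      "provider": "@anthropic-provider",
      "override_params": { "model": "claude-3-5-sonnet-20241022" }
    }
  ]
}
```

Save this on the Configs page and reference it from your requests, for instance via `createHeaders(config="YOUR_CONFIG_ID")`, or attach it to your Portkey API key.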
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your crew can continue operating.
Automatic Retries: Handles temporary failures automatically. If an LLM call fails, Portkey will retry the same request the specified number of times - perfect for rate limits or network blips.
Request Timeouts: Prevent your agents from hanging. Set timeouts to ensure you get responses (or can fail gracefully) within your required timeframes.
Conditional Routing: Send different requests to different providers. Route complex reasoning to GPT-4, creative tasks to Claude, and quick responses to Gemini based on your needs.
Fallbacks: Keep running even if your primary provider fails. Automatically switch to backup providers to maintain availability.
Load Balancing: Spread requests across multiple API keys or providers. Great for high-volume crew operations and staying within rate limits.
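Several of these features can live in a single config. As an illustrative sketch (provider slugs and values are placeholders, not recommendations), the config below retries transient errors, enforces a 30-second timeout, and load-balances between two providers:

```json
{
  "retry": { "attempts": 3, "on_status_codes": [429, 502, 503] },
  "request_timeout": 30000,
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "@openai-primary", "weight": 0.7 },
    { "provider": "@openai-secondary", "weight": 0.3 }
  ]
}
```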
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your CrewAI agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
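A sketch of that flow using the Portkey SDK's prompt render endpoint - the prompt ID and variables are placeholders, and the response shape should be verified against your SDK version:

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

# Fetch a versioned prompt template and substitute variables at runtime
render = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"topic": "quantum computing", "audience": "executives"},
)

# The rendered messages can seed an agent's goal or backstory
# instead of hardcoding the text in your source
rendered_messages = render.data.messages
```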
Manage prompts in Portkey's Prompt Library
The Prompt Playground is where you compare, test, and deploy prompts for your AI application: experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production.
This visual environment makes it easier to craft effective prompts for each step in your CrewAI agents’ workflow.
Learn more about Portkey’s prompt management features
Guardrails ensure your CrewAI agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
CrewAI agents can experience various failure modes: generating harmful or off-topic content, leaking sensitive information such as PII, hallucinating incorrect facts, or producing outputs in the wrong format.
Portkey’s guardrails add protections for both inputs and outputs.
Implementing Guardrails
Portkey’s guardrails are applied through your config: create guardrail checks in the Portkey app, then reference their IDs as input or output guardrails so they run on every request and response.
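A config that applies one guardrail to inputs and another to outputs might look like this (the IDs are placeholders for guardrails you have created in the Portkey app):

```json
{
  "input_guardrails": ["guardrails-id-xxx"],
  "output_guardrails": ["guardrails-id-yyy"]
}
```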
Explore Portkey’s guardrail features to enhance agent safety
Track individual users through your CrewAI agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
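Metadata is attached through the same headers used for the LLM configuration - the field values here are illustrative:

```python
from crewai import LLM
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="@YOUR_PROVIDER",
        metadata={
            "_user": "user_123",          # special field for per-user analytics
            "crew_name": "research_crew",  # custom fields for filtering logs
            "session_id": "sess_456",
        },
    ),
)
```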
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
Filter analytics by user
This enables per-user cost attribution, usage-pattern analysis, and faster debugging of user-specific issues.
Explore how to use custom metadata to enhance your analytics
Implement caching to make your CrewAI agents more efficient and cost-effective:
Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
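Caching is enabled through your Portkey config; a sketch with an illustrative TTL:

```json
{
  "cache": {
    "mode": "simple",
    "max_age": 3600
  }
}
```

Portkey also supports "mode": "semantic", which serves cached responses for requests that are similar rather than strictly identical.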
CrewAI supports multiple LLM providers, and Portkey extends this capability by providing access to 1600+ LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
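As a sketch, a small helper makes switching a one-line change (the provider slugs are placeholders for your Portkey integrations):

```python
from crewai import LLM
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

def portkey_llm(model: str, provider_slug: str) -> LLM:
    """Build a CrewAI LLM routed through Portkey for any connected provider."""
    return LLM(
        model=model,
        base_url=PORTKEY_GATEWAY_URL,
        api_key="dummy",
        extra_headers=createHeaders(
            api_key="YOUR_PORTKEY_API_KEY",
            provider=provider_slug,
        ),
    )

# Same agent logic, different models per task
reasoning_llm = portkey_llm("gpt-4o", "@openai-provider")
writing_llm = portkey_llm("claude-3-5-sonnet-20241022", "@anthropic-provider")
```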
Portkey provides access to LLMs from providers including OpenAI, Anthropic, Google, Mistral AI, Cohere, AWS Bedrock, Azure OpenAI, and many more.
See the full list of LLM providers supported by Portkey
Why Enterprise Governance? If you are using CrewAI inside your organization, you need to consider several governance aspects: cost management across teams, access control over who can use which models, usage analytics and attribution, and data security and compliance.
Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Create LLM Integrations
Go to Integrations in the Portkey app, choose your LLM provider, and connect it using your provider credentials. Save, then copy the generated provider ID.
Create Default Config
Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries.
To create your config, go to the Configs page in the Portkey dashboard, define your routing rules (referencing the provider ID from Step 1), and save it. Note the config ID for the next step.
Configure Portkey API Key
Now create a Portkey API key and attach the config you created in Step 2. Every request made with this key will automatically follow that config’s routing rules and limits.
Connect to CrewAI
After setting up your Portkey API key with the attached config, connect it to your CrewAI agents:
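Because the config is attached to the API key, the client-side setup shrinks to just the key itself - a minimal sketch:

```python
from crewai import LLM
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# The config attached to this Portkey API key handles provider
# selection, fallbacks, and budget enforcement
llm = LLM(
    model="gpt-4o",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(api_key="YOUR_PORTKEY_API_KEY"),
)
```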
Step 1: Implement Budget Controls & Rate Limits
Enable granular control over LLM access at the team/department level. This helps you set spending limits per team, enforce rate limits to protect against runaway usage, and track consumption against allocations.
Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer, letting you restrict teams to approved models, apply guardrails for data protection, and layer on reliability features like fallbacks and retries.
Here’s a basic configuration to route requests to OpenAI, specifically using GPT-4o:
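A sketch of such a config, assuming an OpenAI integration with the placeholder slug below:

```json
{
  "strategy": { "mode": "single" },
  "targets": [
    {
      "provider": "@YOUR_OPENAI_PROVIDER",
      "override_params": { "model": "gpt-4o" }
    }
  ]
}
```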
Create your config on the Configs page in your Portkey dashboard.
Configs can be updated anytime to adjust controls without affecting running applications.
Step 3: Implement Access Controls
Create user-specific API keys that automatically apply the attached config, enforce the budget and rate limits you defined, and tag every request with metadata for attribution.
Create API keys through the Portkey dashboard or programmatically via the Admin API.
Example using Python SDK:
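A sketch based on Portkey's Admin API - the field values are placeholders, and the available parameters may vary by plan (see the API Keys documentation):

```python
from portkey_ai import Portkey

# Requires an admin-scoped Portkey API key
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

# Create a team-scoped key with an attached config and default metadata
api_key = portkey.api_keys.create(
    name="engineering-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        "config_id": "your-config-id",
        "metadata": {"environment": "production", "department": "engineering"},
    },
    scopes=["logs.view", "configs.read"],
)
```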
For detailed key management instructions, see our API Keys documentation.
Step 4: Deploy & Monitor
After distributing API keys to your team members, your enterprise-ready CrewAI setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Monitor usage in the Portkey dashboard: track costs, token consumption, latency, and error rates per team, model, and user.
Your CrewAI integration now has budget controls and rate limits, model access rules, user-level access controls, and complete usage monitoring.
How does Portkey enhance CrewAI?
Portkey adds production-readiness to CrewAI through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.
Can I use Portkey with existing CrewAI applications?
Yes! Portkey integrates seamlessly with existing CrewAI applications. You just need to update your LLM configuration code with the Portkey-enabled version. The rest of your agent and crew code remains unchanged.
Does Portkey work with all CrewAI features?
Portkey supports all CrewAI features, including agents, tools, human-in-the-loop workflows, and all task process types (sequential, hierarchical, etc.). It adds observability and reliability without limiting any of the framework’s functionality.
Can I track usage across multiple agents in a crew?
Yes, Portkey allows you to use a consistent trace_id across multiple agents in a crew to track the entire workflow. This is especially useful for complex crews where you want to understand the full execution path across multiple agents.
How do I filter logs and traces for specific crew runs?
Portkey allows you to add custom metadata to your LLM configuration, which you can then use for filtering. Add fields like crew_name, crew_type, or session_id to easily find and analyze specific crew executions.
Can I use my own API keys with Portkey?
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them, allowing you to easily manage and rotate keys without changing your code.
Official CrewAI documentation
Get personalized guidance on implementing this integration