OpenAI Agents (Python)

OpenAI Agents (TypeScript)

Pydantic AI

Autogen

CrewAI

Agno AI

LlamaIndex

LangChain

LangGraph

Langroid

OpenAI Swarm

Control Flow

Strands Agents

Bring Your Agent

Integrate Portkey with your agents with just 2 lines of code
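As a minimal sketch of what those two lines look like, the snippet below points an OpenAI-compatible client at Portkey's gateway and attaches routing headers. The header names follow Portkey's `x-portkey-*` convention and `portkey_headers` is a hypothetical stand-in for an SDK helper; check the Portkey docs for the exact names.

```python
# Assumed gateway endpoint; verify against the Portkey docs.
PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1"

def portkey_headers(api_key: str, provider: str) -> dict:
    """Hypothetical stand-in for an SDK helper that builds Portkey routing headers."""
    return {
        "x-portkey-api-key": api_key,   # your Portkey API key
        "x-portkey-provider": provider, # which upstream LLM provider to route to
    }

# The "2 lines": point your existing client at the gateway and pass the headers, e.g.
# client = OpenAI(base_url=PORTKEY_GATEWAY_URL,
#                 default_headers=portkey_headers("PORTKEY_API_KEY", "openai"))
headers = portkey_headers("PORTKEY_API_KEY", "openai")
```

The rest of your agent code stays unchanged; only the base URL and headers differ.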
Get Started with Portkey x Agent Cookbooks
Key Production Features
By routing your agent’s requests through Portkey, you make your agents production-grade with the following features.

1. Interoperability

Easily switch between LLM providers. Call LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the provider and API key in the LLM object.
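To illustrate, the sketch below swaps providers by changing only the provider name and API key; a plain dict stands in for your framework's LLM object, and the field names are illustrative rather than any framework's exact schema.

```python
# Illustrative only: in a real agent framework this would build the
# framework's LLM object; a dict stands in for it here.
def make_llm(provider: str, api_key: str, model: str) -> dict:
    return {"provider": provider, "api_key": api_key, "model": model}

# Two configurations that differ only in provider, key, and model name.
gpt_llm = make_llm("openai", "OPENAI_API_KEY", "gpt-4o")
claude_llm = make_llm("anthropic", "ANTHROPIC_API_KEY", "claude-3-5-sonnet-latest")
# Everything else in the agent code stays identical between the two.
```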
2. Caching
Improve performance and reduce costs on your agent’s LLM calls by storing past responses in the Portkey cache. Choose between Simple and Semantic cache modes in your Portkey gateway config.

3. Reliability
Set up fallbacks between different LLMs or providers, load balance your requests across multiple instances or API keys, set automatic retries, and configure request timeouts. Ensure your agents’ resilience with these advanced reliability features.

4. Observability
Portkey automatically logs key details about your agent runs, including cost, tokens used, and response time. For agent-specific observability, add a Trace ID to the request headers of each agent. You can then filter analytics by Trace ID for deeper monitoring and analysis.

5. Logs
Access a dedicated section to view records of action executions, including parameters, outcomes, and errors. Filter the logs of an agent run by multiple parameters such as Trace ID, model, tokens used, and metadata.
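Putting the pieces together, here is an illustrative gateway-config sketch combining caching, retries, and fallbacks, plus a per-agent trace header. The key names mirror Portkey's config style but are assumptions, not a guaranteed schema.

```python
# Illustrative gateway config; key names are assumed, not an exact schema.
gateway_config = {
    "cache": {"mode": "semantic", "max_age": 3600},  # or "simple" for exact-match
    "retry": {"attempts": 3},                        # automatic retries
    "strategy": {"mode": "fallback"},                # fall back across targets in order
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY"},
    ],
}

# Assumed header name for tagging an agent run with a Trace ID.
trace_headers = {"x-portkey-trace-id": "agent-run-001"}
```

Attached to requests routed through the gateway, a config like this gives each agent call caching, retries, fallbacks, and a filterable trace in one place.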