- Full-stack observability and tracing for all requests
- Interoperability across 250+ LLMs
- 50+ built-in SOTA guardrails
- Simple & semantic caching to save costs & time
- Route requests conditionally and make them robust with fallbacks, load-balancing, automatic retries, and more
- Continuous improvement based on user feedback
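The routing features above are driven by Portkey configs. As a minimal sketch, a config that falls back from one provider to another with automatic retries might look like the following (provider slugs follow Portkey's config schema; the API keys and model names are placeholders):

```json
{
  "strategy": { "mode": "fallback" },
  "retry": { "attempts": 3 },
  "targets": [
    {
      "provider": "openai",
      "api_key": "YOUR_OPENAI_API_KEY",
      "override_params": { "model": "gpt-4o" }
    },
    {
      "provider": "anthropic",
      "api_key": "YOUR_ANTHROPIC_API_KEY",
      "override_params": { "model": "claude-3-5-sonnet-20240620" }
    }
  ]
}
```

With `"mode": "fallback"`, requests go to the first target and only hit the second if the first fails after its retries.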
## Getting Started
### 1. Installation
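Assuming the provider package is `@portkey-ai/vercel-provider` (as in Portkey's docs), install it into your Vercel app:

```sh
npm install @portkey-ai/vercel-provider
```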
### 2. Import & Configure the Portkey Object
Sign up for Portkey, get your API key, and configure the Portkey provider in your Vercel app.

Portkey's configs are a powerful way to manage & govern your app's behaviour. Learn more about Configs here.
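A minimal setup sketch, assuming the package exports a `createPortkey` factory (per Portkey's docs); the keys and model below are placeholders:

```typescript
import { createPortkey } from '@portkey-ai/vercel-provider';

// A Portkey config can pin a provider, credentials, and default params.
// All values here are placeholders -- substitute your own keys.
const portkeyConfig = {
  provider: 'openai',
  api_key: 'YOUR_OPENAI_API_KEY',
  override_params: {
    model: 'gpt-4o',
  },
};

export const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: portkeyConfig,
});
```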
## Using Vercel Functions
The Portkey provider works with all Vercel AI SDK functions, including `generateText` and `streamText`. Here's how to use them with Portkey:
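A sketch of both functions through the Portkey provider, assuming a provider created with `createPortkey` from `@portkey-ai/vercel-provider`; the prompts and model names are illustrative:

```typescript
import { generateText, streamText } from 'ai';
import { createPortkey } from '@portkey-ai/vercel-provider';

const portkey = createPortkey({ apiKey: 'YOUR_PORTKEY_API_KEY' });

// generateText: a one-shot chat completion via portkey.chatModel
const { text } = await generateText({
  model: portkey.chatModel('gpt-4o'),
  prompt: 'What is Portkey?',
});
console.log(text);

// streamText: a streamed completion via portkey.completionModel
const result = await streamText({
  model: portkey.completionModel('gpt-3.5-turbo-instruct'),
  prompt: 'Write a haiku about gateways.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```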
Portkey supports `chatModel` and `completionModel` to easily handle chat or text completions. In the examples above, we used `portkey.chatModel` for `generateText` and `portkey.completionModel` for `streamText`.

## Tool Calling with Portkey
Portkey supports tool calling with the Vercel AI SDK. Here's how:
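A minimal tool-calling sketch through the Portkey provider, assuming the `ai` package's `tool` helper and `zod` for parameter schemas; the tool name and its stubbed `execute` function are purely illustrative:

```typescript
import { generateText, tool } from 'ai';
import { createPortkey } from '@portkey-ai/vercel-provider';
import { z } from 'zod';

const portkey = createPortkey({ apiKey: 'YOUR_PORTKEY_API_KEY' });

const { text, toolResults } = await generateText({
  model: portkey.chatModel('gpt-4o'),
  tools: {
    // Hypothetical tool: name, schema, and stubbed result are illustrative.
    getWeather: tool({
      description: 'Get the current weather for a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperatureC: 21 }),
    }),
  },
  prompt: 'What is the weather in Paris?',
});
```

The model decides when to invoke `getWeather`; the SDK runs `execute` and returns the results alongside the generated text.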
## Portkey Features

Portkey helps you make your Vercel app more robust and reliable. The Portkey config is a modular way to make it work for you in whatever way you want.
## Interoperability

Portkey lets you switch between 250+ AI models by simply changing the model name in your configuration. This flexibility enables you to adapt to the evolving AI landscape without significant code changes.
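As a sketch, switching providers is just a config change; the provider slug and model name below follow Portkey's config schema, with placeholder credentials:

```json
{
  "provider": "anthropic",
  "api_key": "YOUR_ANTHROPIC_API_KEY",
  "override_params": { "model": "claude-3-5-sonnet-20240620" }
}
```

Your application code calling `portkey.chatModel` stays unchanged; only the config determines which model serves the request.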
## Observability

Portkey's OpenTelemetry-compliant observability suite gives you complete control over all your requests, and its analytics dashboards surface 40+ key insights, including cost, tokens, and latency. Fast.