This guide covers the integration for the JavaScript / TypeScript flavour of Langchain. Docs for the Python Langchain integration are here.
Langchain is a framework for developing applications powered by language models. It enables applications that:
- Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
- Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
Quick Start Integration
Install the Portkey and Langchain SDKs to get started.

Since Portkey is fully compatible with the OpenAI signature, you can connect to the Portkey AI Gateway through the `ChatOpenAI` interface:

- Set the `baseURL` as `PORTKEY_GATEWAY_URL`
- Add `defaultHeaders` to consume the headers needed by Portkey, using the `createHeaders` helper method

Using Virtual Keys for Multiple Models
Portkey supports Virtual Keys, an easy way to store and manage API keys in a secure vault. Let's try using a Virtual Key to make LLM calls.

1. Create a Virtual Key in your Portkey account and copy its id

Let's try creating a new Virtual Key for Mistral like this.
2. Use Virtual Keys in the Portkey Headers
The `virtualKey` parameter sets the authentication and provider for the AI provider being used. In our case we're using the Mistral Virtual Key.

Notice that the `apiKey` can be left blank, as that authentication won't be used. The request still goes through the same `ChatOpenAI` class, making it a single interface to call any provider and any model.
Embeddings
Embeddings in Langchain through Portkey work the same way as the Chat Models, using the `OpenAIEmbeddings` class. Let's try to create an embedding using OpenAI's embedding model.
Chains & Prompts
Chains let you combine various Langchain components and execute them as a single sequence, while Prompt Templates construct the inputs for language models. Let's see how this would work with Portkey.
Using Advanced Routing

The Portkey AI Gateway brings capabilities like load-balancing, fallbacks, experimentation, and canary testing to Langchain through a configuration-first approach. Let's take an example where we want to split traffic between gpt-4 and claude-3-opus 50:50 to test the two large models. The gateway configuration for this would look like the following.
Attach this `config` to the requests made from Langchain, and the gateway will route them between `gpt-4` and `claude-3-opus-20240229` in the ratio of the defined weights.
You can find more config examples here.
Agents & Tracing
A powerful capability of Langchain is creating Agents. The challenge with agentic workflows is that prompts are often abstracted away, making it hard to get visibility into what the agent is doing; this also makes debugging harder. Connect the Portkey configuration to the `ChatOpenAI` model and we get all the benefits of the AI gateway shown above.

Also, Portkey captures the logs from the agent's API calls, giving us full visibility.
