Portkey provides a robust and secure gateway to seamlessly integrate open-source and fine-tuned LLMs from Predibase into your applications. With Portkey, you can leverage powerful features like fast AI gateway, caching, observability, prompt management, and more, while securely managing your LLM API keys through a virtual key system.
Using Portkey, you can call your Predibase models in the familiar OpenAI spec and try out your existing pipelines on Predibase fine-tuned models with just a 2-line code change.
To use Predibase with Portkey, get your Predibase API key from here, then add it to Portkey to create a virtual key.
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY" // Your Predibase Virtual Key
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"   # Replace with your virtual key for Predibase
)
```
```js
import OpenAI from "openai";
import { PORTKEY_GATEWAY_URL, createHeaders } from "portkey-ai";

const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "PREDIBASE_VIRTUAL_KEY",
  }),
});
```
Predibase expects your account tenant ID along with the API key in each request. With Portkey, you can send your tenant ID with the `user` param while making your request.
```js
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3-8b',
    user: 'PREDIBASE_TENANT_ID'
});

console.log(chatCompletion.choices);
```
```python
completion = portkey.chat.completions.create(
    messages=[{ "role": 'user', "content": 'Say this is a test' }],
    model='llama-3-8b',
    user="PREDIBASE_TENANT_ID"
)

print(completion)
```
You can enforce a JSON schema for all Predibase models: set `response_format` to `json_object` and pass the relevant schema while making your request. Portkey logs will show your JSON output separately.
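As a sketch of what this looks like in Python, the snippet below builds a schema-constrained request. The `character_schema` here is a made-up example, and the exact shape of the `response_format` payload (a `json_object` type with an attached `schema`) is an assumption based on the description above; check your provider's docs for the canonical format.

```python
# Hypothetical schema for illustration -- swap in your own JSON Schema.
character_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Arguments for a schema-constrained completion. The model name and
# tenant ID below are placeholders, as in the earlier examples.
request_args = dict(
    messages=[{"role": "user", "content": "Generate a fantasy character."}],
    model="llama-3-8b",
    user="PREDIBASE_TENANT_ID",
    response_format={"type": "json_object", "schema": character_schema},
)

# With the `portkey` client initialized earlier, the request would be:
# completion = portkey.chat.completions.create(**request_args)
# print(completion.choices[0].message.content)
```

The model's output is then constrained to valid JSON matching the schema, which Portkey logs separately.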