Provider slug: inference-net
Portkey SDK Integration with Inference.net
Portkey provides a consistent API to interact with models from various providers. To integrate Inference.net with Portkey:
1. Install the Portkey SDK
```sh
npm install --save portkey-ai
```
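The Python examples in the steps below use the `portkey_ai` SDK, which is published on PyPI under the same name as the npm package:

```shell
pip install portkey-ai
```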
2. Initialize Portkey with Inference.net Authorization
- Set the `provider` name to `inference-net`
- Pass your API key with the `Authorization` header
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    provider: "inference-net",
    Authorization: "Bearer INFERENCE-NET API KEY"
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    provider="inference-net",
    Authorization="Bearer INFERENCE-NET API KEY"
)
```
3. Invoke Chat Completions
```js
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3',
});

console.log(chatCompletion.choices);
```
```python
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="llama3"
)

print(completion)
```
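Because Portkey exposes an OpenAI-compatible interface, the call above boils down to a standard chat-completions payload plus Portkey's routing headers. A minimal sketch of what the SDK assembles (the `x-portkey-*` header names follow Portkey's gateway conventions but are assumptions here; verify them against the current REST docs):

```python
# Sketch of the OpenAI-compatible request behind
# portkey.chat.completions.create(...). Header names are assumed from
# Portkey's gateway conventions, not taken from this page.
import json


def build_request(portkey_api_key: str, provider_key: str,
                  model: str, user_message: str) -> tuple[dict, dict]:
    """Return (headers, body) for a chat-completions call routed via Portkey."""
    headers = {
        "Content-Type": "application/json",
        "x-portkey-api-key": portkey_api_key,    # your Portkey key
        "x-portkey-provider": "inference-net",   # routes to Inference.net
        "Authorization": "Bearer " + provider_key,  # Inference.net key
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, body


headers, body = build_request("PORTKEY_API_KEY", "INFERENCE_NET_API_KEY",
                              "llama3", "Say this is a test")
print(json.dumps(body))
```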
Supported Models
Find more information about the models supported by Inference.net on their website: Inference.net
Next Steps
The complete list of features supported in the SDK is available at the link below.
You’ll find more information in the relevant sections:
- Add metadata to your requests
- Add gateway configs to your Inference.net requests
- Tracing Inference.net requests
- Set up a fallback from OpenAI to Inference.net
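As a sketch of the last item, fallbacks are expressed declaratively through a gateway config. The `strategy`/`targets` shape below follows Portkey's config schema, but the key placeholders are illustrative; check the gateway-configs docs for the exact fields:

```python
# Sketch of a fallback gateway config: try OpenAI first, then fall back
# to Inference.net. Field names follow Portkey's config schema; the
# API-key placeholders are illustrative.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "inference-net", "api_key": "INFERENCE_NET_API_KEY"},
    ],
}

# The config is passed when constructing the client, e.g.:
# portkey = Portkey(api_key="PORTKEY_API_KEY", config=fallback_config)
print(fallback_config["strategy"]["mode"])
```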