Integrate vLLM-hosted custom models with Portkey and take them to production.
## 1. Expose your vLLM Server

Make sure your vLLM server is reachable from wherever Portkey will call it, for example through a public URL or a tunneling tool such as ngrok. vLLM serves an OpenAI-compatible API, which is what Portkey connects to. A quick reachability check is sketched below.
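As a sanity check (not part of the Portkey setup itself), you can confirm the server answers on its OpenAI-compatible routes. The URL below is a placeholder; substitute your own public endpoint:

```python
# Sanity check: vLLM exposes OpenAI-compatible routes such as GET /v1/models.
# "https://my-vllm.example.com/v1" is a placeholder for your public endpoint.
import requests

BASE_URL = "https://my-vllm.example.com/v1"

resp = requests.get(f"{BASE_URL}/models", timeout=10)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # ids of the models the server is currently serving
```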
## 2. Install the Portkey SDK

Install the Portkey SDK for your runtime: `pip install portkey-ai` for Python, or `npm install --save portkey-ai` for Node.js.
## 3. Initialize Portkey with the vLLM custom URL

Instantiate the Portkey client with your vLLM server's base URL in `customHost` (by default, vLLM listens on http://localhost:8000/v1) and set `provider` to `openai`, since the server follows the OpenAI API schema. In the Python SDK the same parameter is named `custom_host`.
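A minimal initialization sketch with the Python SDK, assuming a placeholder Portkey API key and a placeholder public vLLM URL:

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # your Portkey API key (placeholder)
    provider="openai",          # vLLM follows the OpenAI API schema
    custom_host="https://my-vllm.example.com/v1",  # your vLLM base URL, including /v1
)
```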
## 4. Invoke Chat Completions

Use the Portkey client to send chat completion requests; Portkey's gateway routes them to your vLLM deployment.
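A minimal request sketch, assuming the client initialized above. The model name is a placeholder; use an id reported by your server's `/v1/models` endpoint:

```python
# Send a chat completion through Portkey to the vLLM server.
completion = portkey.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(completion.choices[0].message.content)
```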
You can also add your vLLM deployment from the Portkey app: select OpenAI as the provider (again, because vLLM follows the OpenAI API schema) and enter your server's URL in the Custom Host field.