How does it work?
Let’s consider a use case where, given a candidate profile and a job description, the LLM is expected to output candidate notes in a specific JSON format. This is how our raw prompt looks:
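A sketch of what the raw prompt might look like: a JSON array of chat messages with `{{variable}}` placeholders. Variables are substituted before the JSON is parsed, which is why the bare `{{few_shot_examples}}` entry can sit directly in the array. The system instructions and JSON keys below are illustrative, not the exact ones from the original template:

```json
[
  {
    "role": "system",
    "content": "You are a recruiting assistant. Given a candidate profile and a job description, respond ONLY with candidate notes as JSON with the keys: strengths, gaps, overall_fit."
  },
  {{few_shot_examples}},
  {
    "role": "user",
    "content": "Candidate Profile: {{profile}}\nJob Description: {{jd}}"
  }
]
```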
Next, let’s define our variables. As you can see, the raw prompt above uses three: `few_shot_examples`, `profile`, and `jd`.
And now let’s add some examples with the expected JSON structure:
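The value supplied for `few_shot_examples` can be one or more prior user/assistant message pairs that demonstrate the target JSON. Here is a hypothetical pair (the profile, job description, and JSON fields are invented for illustration); note it is a JSON fragment, not a standalone document, since it slots into the messages array above:

```json
{
  "role": "user",
  "content": "Candidate Profile: 6 years of Python, led a team of 4 engineers.\nJob Description: Senior backend engineer for a payments platform."
},
{
  "role": "assistant",
  "content": "{\"strengths\": [\"solid backend experience\", \"team leadership\"], \"gaps\": [\"no payments-domain experience\"], \"overall_fit\": \"strong\"}"
}
```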
`{{few_shot_examples}}` is a placeholder for the few-shot learning examples, which are dynamically provided and can be updated as needed. This allows the LLM to adapt its responses to the provided examples, enabling versatile, context-aware outputs.
Putting it all together in Portkey’s prompt manager:
- Go to the “Prompts” page on https://app.portkey.ai/ and create a new prompt template with your preferred AI provider.
- Selecting Chat mode will enable the Raw Prompt feature:

- Click on it and paste the raw prompt code from above. And that’s it! You have your dynamically updatable few-shot prompt template ready to deploy.
Deploying the Prompt with Portkey
Deploying your prompt template as an API is extremely easy with Portkey. Use the Prompt Completions API to call the prompt we created, pass your examples through the `few_shot_examples` variable, and start using the prompt template in production!
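As a sketch, here is how calling the deployed template could look with the portkey-ai Python SDK. The prompt ID, variable values, and the exact fragment format expected for `few_shot_examples` are assumptions to adapt to your own template:

```python
# Minimal sketch using the portkey-ai Python SDK.
# Assumptions: the prompt ID, variable values, and the message-fragment
# format for few_shot_examples are placeholders, not Portkey specifics.
import json
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")  # your Portkey API key

# Build the few-shot examples at request time, so they can change
# per call without redeploying the prompt template.
example_pair = [
    {"role": "user",
     "content": "Candidate Profile: 6 years of Python, led a team of 4.\n"
                "Job Description: Senior backend engineer, payments platform."},
    {"role": "assistant",
     "content": json.dumps({
         "strengths": ["solid backend experience", "team leadership"],
         "gaps": ["no payments-domain experience"],
         "overall_fit": "strong",
     })},
]
# Serialize as a JSON fragment (no surrounding brackets) so it slots
# into the raw prompt's messages array where {{few_shot_examples}} sits.
few_shot_examples = json.dumps(example_pair)[1:-1]

completion = client.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",  # copied from the Portkey Prompts page
    variables={
        "few_shot_examples": few_shot_examples,
        "profile": "Jane Doe, senior backend engineer with 8 years of Python.",
        "jd": "We are hiring a platform engineer to own our CI/CD stack.",
    },
)
print(completion)
```

Because the examples travel with each request, swapping in a different set of few-shot pairs is just a matter of changing the `few_shot_examples` value on the next call.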