This feature is available on all plans.
You can access Prompt Engineering Studio directly at https://prompt.new
Portkey’s Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production.
When you first open the Playground, you’ll see a clean interface with a few key components:
The beauty of the Playground is its simplicity: write a prompt, click “Generate Completion”, and instantly see how the model responds.
Creating a prompt is straightforward:
You can continue the conversation by adding more messages, helping you simulate real-world interactions with your AI.
Once you save a prompt in the Playground, you’ll receive a prompt ID that you can use directly in your application code, making it easy to move from experimentation to production:
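As a minimal sketch, the handoff looks like this in Python. The prompt ID, variable name, and REST route below are illustrative assumptions; see the Prompt API documentation for the exact SDK and endpoint usage.

```python
import json

# Sketch of calling a saved prompt from application code.
# The prompt ID, variable name, and endpoint path are illustrative --
# consult the Prompt API documentation for the exact usage.
PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

def build_prompt_completion_request(prompt_id, variables):
    """Assemble the URL and JSON body for a prompt completion call."""
    url = f"{PORTKEY_BASE_URL}/prompts/{prompt_id}/completions"
    body = json.dumps({"variables": variables})
    return url, body

url, body = build_prompt_completion_request(
    "pp-example-prompt-id",  # the ID shown after saving in the Playground
    {"user_query": "How do I reset my password?"},
)
```

Because the prompt lives in Portkey, your application only supplies the ID and the runtime variables.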
This approach allows you to separate prompt engineering from your application code, making both easier to maintain. For more details on integrating prompts in your applications, check out our Prompt API documentation.
Wondering which model works best for your use case? The side-by-side comparison feature lets you see how different models handle the same prompt.
Click the “+ Compare” button to add another column, select a different model, and generate completions simultaneously. You’ll see how each model responds to the same prompt, along with crucial metrics like latency, total tokens, and throughput, helping you make an informed decision about which model to use in production.
You can run comparisons on the same prompt template across multiple models by selecting the template from the “New Template” dropdown in the UI, along with the versions button. Once you’ve found what works, click the “Update Prompt” button to save a new version of the prompt template. You can also compare different prompt versions by selecting a version from the UI.
The variables you define apply across all templates in the comparison, ensuring you’re testing against identical inputs.
Some models support function calling, allowing the AI to request specific information or take actions. The Playground makes it easy to experiment with these capabilities.
Click the “Add Tool” button to define functions the model can call. For example:
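A representative definition in the OpenAI-style function-calling schema (the function name and fields below are illustrative):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "City and country, e.g. San Francisco, USA"
        }
      },
      "required": ["location"]
    }
  }
}
```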
This tool definition teaches the model how to request weather information for a specific location.
You can add multiple tools from the tool library for a specific prompt template. You can also set the “tool_choice” parameter from the UI to control how the model uses the available tools.
Each model offers various parameters that affect its output. Access these by clicking the “Parameters” button:
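The exact set varies by model, but common sampling controls look like this (the values below are illustrative defaults, not recommendations):

```json
{
  "temperature": 0.7,
  "max_tokens": 512,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```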
And more… Experiment with these settings to find the perfect balance for your use case.
The Playground offers two interface modes for working with prompts:
Pretty Mode
The default user-friendly interface with formatted messages and simple controls. This is ideal for most prompt engineering tasks and provides an intuitive way to craft and test prompts.
JSON Mode
For advanced users who need granular control, you can toggle to JSON mode by clicking the “JSON” button. This reveals the raw JSON structure of your prompt, allowing for precise editing and advanced configurations.
JSON mode is particularly useful when:
You can switch between modes at any time using the toggle in the interface.
For multimodal models that support images, you can upload images directly in the Playground using the 🧷 icon on the message input box.
Alternatively, you can use JSON mode to incorporate images using variables. Toggle from PRETTY to JSON mode using the button on the dashboard, then structure your prompt like this:
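A message structured along these lines works, assuming the OpenAI-style multimodal content format (the variable name `image_url` is illustrative):

```json
{
  "role": "user",
  "content": [
    { "type": "text", "text": "Describe this image" },
    { "type": "image_url", "image_url": { "url": "{{image_url}}" } }
  ]
}
```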
Now you can pass the image URL as a variable in your prompt template, and the model will be able to analyze the image content.
Portkey uses Mustache under the hood to power the prompt templates.
Mustache is a commonly used logic-less templating engine that follows a simple schema for defining variables and more.
With Mustache, prompt templates become even more extensible by letting you incorporate various {{tags}} in your prompt template and easily pass your data.
The most common usage of Mustache templates is {{variables}}, used to pass a value at runtime.
Let’s look at the following template:
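A template along these lines (the surrounding wording is illustrative; the two variables are the ones the source discusses):

```
You are a helpful customer support assistant.

Customer context:
{{customer_data}}

Customer question:
{{chat_query}}
```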
As you can see, {{customer_data}} and {{chat_query}} are defined as variables in the template, and you can pass their values at runtime:
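For instance, with a data object like this (the values are illustrative):

```json
{
  "customer_data": "Plan: Pro, renewal date: 2025-06-01",
  "chat_query": "How do I update my billing details?"
}
```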
Using variables is just the start! Portkey supports multiple Mustache tags that let you extend the template functionality:
| Tag | Functionality | Example |
|---|---|---|
| `{{variable}}` | Variable substitution | Template: `Hi! My name is {{name}}. I work at {{company}}.` <br> Data: `{ "name": "Chris", "company": "GitHub" }` <br> Output: `Hi! My name is Chris. I work at GitHub.` |
| `{{#variable}} <string> {{/variable}}` | Render `<string>` only if the variable is true or non-empty | Template: `Hello I am Tesla bot.{{#chat_mode_pleasant}} Excited to chat with you! {{/chat_mode_pleasant}} What can I help you with?` <br> Data: `{ "chat_mode_pleasant": false }` <br> Output: `Hello I am Tesla bot. What can I help you with?` |
| `{{^variable}} <string> {{/variable}}` | Render `<string>` only if the variable is false or empty | Template: `Hello I am Tesla bot.{{^chat_mode_pleasant}} Excited to chat with you! {{/chat_mode_pleasant}} What can I help you with?` <br> Data: `{ "chat_mode_pleasant": false }` <br> Output: `Hello I am Tesla bot. Excited to chat with you! What can I help you with?` |
| `{{#variable}} {{sub_variable}} {{/variable}}` | Iteratively render all values of `sub_variable` if `variable` is true or non-empty | Template: `Give atomic symbols for the following: {{#variable}} - {{sub_variable}} {{/variable}}` <br> Data: `{ "variable": [ { "sub_variable": "Gold" }, { "sub_variable": "Carbon" }, { "sub_variable": "Zinc" } ] }` <br> Output: `Give atomic symbols for the following: - Gold - Carbon - Zinc` |
| `{{! Comment}}` | Comments that are ignored | Template: `Hello I am Tesla bot.{{! How do tags work?}} What can I help you with?` <br> Output: `Hello I am Tesla bot. What can I help you with?` |
| `{{>partial}}` | “Mini-templates” that can be called at runtime. On Portkey, you can save partials separately and call them in your prompt templates by typing `{{>` | Template: `Hello I am Tesla bot.{{>pp-tesla-template}} What can I help you with?` <br> Data in `pp-tesla-template`: `Take the context from {{context}}. And answer user questions.` <br> Output: `Hello I am Tesla bot. Take the context from {{context}}. And answer user questions. What can I help you with?` |
| `{{>>partial_variable}}` | Pass privately maintained partials to Portkey at runtime by creating tags with a double `>>`, like `{{>> }}`. This is helpful if you do not want to save your partials with Portkey but maintain them elsewhere | Template: `Hello I am Tesla bot.{{>>My Private Partial}} What can I help you with?` |
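To build intuition for how these tags resolve, here is a toy renderer in Python. This is a simplified sketch for illustration only, not the engine Portkey uses: it handles no nesting, no partials, and no HTML escaping.

```python
import re

def render(template, data):
    """Toy Mustache-style renderer covering {{var}}, {{#sec}}, {{^sec}}, {{! }}.
    A simplified sketch for illustration -- not a full Mustache implementation."""
    # Comments: drop entirely.
    template = re.sub(r"\{\{!.*?\}\}", "", template)

    # Inverted sections: keep the body only when the value is falsy/absent.
    def inverted(m):
        name, body = m.group(1), m.group(2)
        return body if not data.get(name) else ""
    template = re.sub(r"\{\{\^(\w+)\}\}(.*?)\{\{/\1\}\}", inverted, template, flags=re.S)

    # Normal sections: render once if truthy, or iterate over a list of dicts.
    def section(m):
        name, body = m.group(1), m.group(2)
        value = data.get(name)
        if not value:
            return ""
        if isinstance(value, list):
            return "".join(
                re.sub(r"\{\{(\w+)\}\}", lambda v: str(item.get(v.group(1), "")), body)
                for item in value
            )
        return body
    template = re.sub(r"\{\{#(\w+)\}\}(.*?)\{\{/\1\}\}", section, template, flags=re.S)

    # Plain variables.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(data.get(m.group(1), "")), template)

print(render("Hi! My name is {{name}}. I work at {{company}}.",
             {"name": "Chris", "company": "GitHub"}))
# → Hi! My name is Chris. I work at GitHub.
```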
You can directly pass your data object containing all the variable/tag values (as JSON) to Portkey’s prompts.completions method via the variables property.
For example, here’s a prompt partial containing the key instructions for an AI support bot:
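For illustration (the partial name `pp-support-instructions` and its wording below are hypothetical):

```
You are a support bot for Acme Inc.
Be concise, polite, and accurate.
Use the customer context in {{customer_data}} when answering.
```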
And the prompt template uses the partial like this:
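For example, a template that pulls in a saved partial (the partial name `pp-support-instructions` and variable names are illustrative):

```
{{>pp-support-instructions}}

Customer question: {{chat_query}}
```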
We can pass the data object inside the variables:
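Variables referenced inside the partial are supplied in the same object as the template’s own variables (the values below are illustrative):

```json
{
  "variables": {
    "customer_data": "Plan: Pro, seats: 5, renewal: 2025-06-01",
    "chat_query": "How do I add a new teammate?"
  }
}
```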
Once you’ve crafted the perfect prompt, save it with a click of the “Save Prompt” button. Your prompt will be versioned automatically, allowing you to track changes over time.
Saved prompts can be:
Now that you understand the basics of the Prompt Playground, you’re ready to create powerful, dynamic prompts for your AI applications. Start by experimenting with different models and prompts to see what works best for your use case.
Looking for more advanced techniques? Check out our guides on: