Qualifire offers a comprehensive suite of AI safety and quality guardrails that help ensure your AI applications are safe, compliant, and high-quality. Their platform provides 20+ different guardrail checks covering content safety, AI quality, and compliance requirements.
To get started with Qualifire, visit their website:
Get Started with Qualifire
Using Qualifire with Portkey
1. Add Qualifire Credentials to Portkey
- Click on the Admin Settings button in the sidebar
- Navigate to the Plugins tab under Organisation Settings
- Click the edit button for the Qualifire integration
- Add your Qualifire API key - obtain this from your Qualifire account at https://app.qualifire.ai/settings/api-keys/
2. Add Qualifire’s Guardrail Checks
- Navigate to the Guardrails page and click the Create button
- Search for any of the Qualifire guardrail checks and click Add
- Configure the specific parameters for your chosen guardrail
- Set any actions you want on your check, and create the Guardrail!
Guardrail Actions allow you to orchestrate your guardrails logic. You can learn more about them here
Available Guardrail Checks
Qualifire provides a comprehensive set of guardrail checks organized into five main categories:
Security
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| PII Check | Checks that neither the user input nor the model output contains PII | None | beforeRequestHook, afterRequestHook |
| Prompt Injections Check | Checks that the prompt does not contain any injection attempts against the model | None | beforeRequestHook |
Safety
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Content Moderation Check | Checks for harmful content including sexual content, harassment, hate speech, and dangerous content in the user input or model output | None | beforeRequestHook, afterRequestHook |
Reliability
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Instruction Following Check | Checks that the model followed the instructions provided in the prompt | None | afterRequestHook |
| Grounding Check | Checks that the model's response is grounded in the provided context | mode (optional) - see Configuration Examples | afterRequestHook |
| Hallucinations Check | Checks that the model did not hallucinate | mode (optional) - see Configuration Examples | afterRequestHook |
| Tool Use Quality Check | Checks the quality of the model's tool use, including correct tool selection, parameters, and values | mode (optional) - see Configuration Examples | afterRequestHook |
Policy
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Policy Violations Check | Checks that the prompt and response didn't violate any of the given policies | policies (array of strings), mode (optional), policy_target (optional) - see Configuration Examples | beforeRequestHook, afterRequestHook |
Configuration Examples
Mode Parameter
Several guardrail checks support a mode parameter that controls the trade-off between accuracy and speed:
- quality: Highest accuracy, slower processing
- balanced: Good balance between accuracy and speed (default)
- speed: Fastest processing, lower accuracy
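As a minimal sketch, a check that accepts the mode parameter can be configured with a parameter object like the following (only the mode field is shown; any other fields your chosen check requires would sit alongside it):

```json
{
  "mode": "balanced"
}
```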
Policy Violations Check
For the Policy Violations Check, you can specify custom policies to enforce, the mode, and the target:
```json
{
  "policies": [
    "The model cannot provide any discount to the user",
    "The model must not share internal company information",
    "The model must respond in a professional tone"
  ],
  "mode": "balanced",
  "policy_target": "both"
}
```
Parameters
- policies (required): Array of strings defining custom policies to enforce
- mode (optional): One of quality, balanced, or speed. Default: balanced
- policy_target (optional): One of input, output, or both. Specifies whether to run the policy check on the request, response, or both. This must match the configured hooks:
  - input: Only for beforeRequestHook
  - output: Only for afterRequestHook
  - both: For both beforeRequestHook and afterRequestHook
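The pairing between policy_target and the configured hooks can be sanity-checked locally before you save the guardrail. The helper below is a hypothetical illustration, not part of the Portkey or Qualifire SDKs; it simply encodes the mapping listed above:

```python
# Which hooks each policy_target value is valid for (from the mapping above).
ALLOWED_HOOKS = {
    "input": {"beforeRequestHook"},
    "output": {"afterRequestHook"},
    "both": {"beforeRequestHook", "afterRequestHook"},
}


def validate_policy_check(params: dict, hooks: list[str]) -> bool:
    """Return True if the parameter object is consistent with the configured hooks."""
    target = params.get("policy_target", "both")
    if target not in ALLOWED_HOOKS:
        raise ValueError(f"unknown policy_target: {target!r}")
    if not params.get("policies"):
        raise ValueError("policies is required and must be a non-empty array")
    return ALLOWED_HOOKS[target] == set(hooks)


params = {
    "policies": ["The model cannot provide any discount to the user"],
    "mode": "balanced",
    "policy_target": "output",
}
print(validate_policy_check(params, ["afterRequestHook"]))  # True
```

A mismatched pairing, such as policy_target "both" with only beforeRequestHook configured, would return False.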
Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you'll get an associated Guardrail ID - add this ID to the input_guardrails or output_guardrails params in your Portkey Config
- Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
Here’s an example configuration:
```json
{
  "input_guardrails": ["guardrails-id-xxx"],
  "output_guardrails": ["guardrails-id-yyy"]
}
```
NodeJS

```js
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  config: "pc-***" // Supports a string config id or a config object
});
```

Python

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-***"  # Supports a string config id or a config object
)
```

OpenAI NodeJS

```js
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    config: "CONFIG_ID"
  })
});
```

OpenAI Python

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY",  # defaults to os.environ.get("PORTKEY_API_KEY")
        config="CONFIG_ID"
    )
)
```

cURL

```sh
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-config: $CONFIG_ID" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{
      "role": "user",
      "content": "Hello!"
    }]
  }'
```
For more, refer to the Config documentation.
Your requests are now protected by Qualifire’s comprehensive guardrail system, and you can see the verdict and any actions taken directly in your Portkey logs!
Use Cases
Qualifire’s guardrails are particularly useful for:
- Content Moderation: Filtering harmful or inappropriate content in user inputs and AI responses
- Compliance: Ensuring AI responses adhere to company policies and regulatory requirements
- Quality Assurance: Detecting hallucinations, instruction violations, and poor tool usage
- Data Protection: Preventing PII exposure and ensuring data privacy
Get Support
If you face any issues with the Qualifire integration, join the Portkey community forum for assistance.
For Qualifire-specific support, visit their documentation or contact their support team.

Last modified on January 28, 2026