Partner Guardrails

Acuvity
- Scan prompts and responses for security threats
- Detect PII, toxicity, and prompt injections
- Real-time content analysis and filtering

Aporia
- Validate custom Aporia policies via project ID
- Define policies on your Aporia dashboard
- Seamless integration with Portkey checks

Azure
- Detect and redact sensitive PII data
- Apply Azure’s comprehensive content safety checks
- Enterprise-grade security compliance

AWS Bedrock
- Analyze and redact PII to prevent manipulation
- Integrate AWS Guardrails directly in Portkey
- Advanced security policy enforcement

Lasso Security
- Analyze content for security risks and jailbreaks
- Detect custom policy violations
- AI-powered threat detection and prevention

Mistral
- Detect and filter harmful content automatically
- Multi-dimensional content safety analysis
- Real-time moderation capabilities

Pangea
- Guard LLM inputs and outputs with Text Guard
- Detect malicious content and data transfers
- Prevent model manipulation attempts

Patronus
- Detect hallucinations and factual errors
- Assess quality: conciseness, helpfulness, tone
- Identify gender and racial bias in outputs

Pillar
- Scan prompts and responses comprehensively
- Detect PII, toxicity, and injection attacks
- Enterprise security and compliance features

Palo Alto Networks Prisma AIRS
- Real-time threat detection across all OSI layers (1-7)
- Block prompt injections, data leakage, and model DoS attacks

Prompt Security
- Scan prompts for security vulnerabilities
- Analyze responses for policy violations
- Advanced threat detection and mitigation

Bring Your Own Guardrail
We have built Guardrails in a very modular way, and support bringing your own Guardrail using a custom webhook! Learn more here.

Portkey's Guardrails
Along with the partner Guardrails, there are also deterministic as well as LLM-based Guardrails supported natively by Portkey.
BASIC Guardrails are available on all Portkey plans.
PRO Guardrails are available on Portkey Pro & Enterprise plans.
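
Whichever checks you enable, guardrails are attached to traffic through a gateway config, using the same input_guardrails and output_guardrails hooks listed against each check below. A minimal sketch, assuming the Python SDK accepts the config inline; the guardrail IDs are placeholders for IDs created in the Portkey app:

```python
# pip install portkey-ai
from portkey_ai import Portkey

# Placeholder guardrail IDs -- create guardrails in the Portkey app to get real ones.
guardrail_config = {
    "input_guardrails": ["pii-check-id", "prompt-injection-check-id"],
    "output_guardrails": ["json-schema-check-id"],
}

# Assumption: the config can be passed inline; a saved config ID string works too.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="OPENAI_VIRTUAL_KEY",
    config=guardrail_config,
)

completion = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(completion.choices[0].message.content)
```
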
BASIC — Deterministic Guardrails

Regex Match
Checks if the request or response text matches a regex pattern.
Parameters: rule: string
Supported On: input_guardrails, output_guardrails
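
The rule parameter is a regular expression evaluated against the request or response text. A rough sketch of the check's semantics (not Portkey's implementation), with a hypothetical ticket-ID pattern:

```python
import re

def regex_guardrail(text: str, rule: str) -> bool:
    # True when the configured pattern appears anywhere in the text.
    return re.search(rule, text) is not None

# Hypothetical rule: flag responses that leak an internal ticket ID like "JIRA-1234".
print(regex_guardrail("Resolved in JIRA-1234", r"JIRA-\d+"))  # True
print(regex_guardrail("Resolved upstream", r"JIRA-\d+"))      # False
```
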
Sentence Count
Checks if the content contains a certain number of sentences. Ranges allowed.
Parameters: minSentences: number, maxSentences: number
Supported On: input_guardrails, output_guardrails

Word Count
Checks if the content contains a certain number of words. Ranges allowed.
Parameters: minWords: number, maxWords: number
Supported On: input_guardrails, output_guardrails

Character Count
Checks if the content contains a certain number of characters. Ranges allowed.
Parameters: minCharacters: number, maxCharacters: number
Supported On: input_guardrails, output_guardrails
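
All three count checks share the same range semantics: the measured count must fall between the configured minimum and maximum. A small illustration (treating an unset bound as open-ended is an assumption):

```python
def within_range(count: int, minimum: int | None, maximum: int | None) -> bool:
    # Assumption: an unset bound is treated as open-ended.
    if minimum is not None and count < minimum:
        return False
    if maximum is not None and count > maximum:
        return False
    return True

text = "Portkey routes the request. The guardrail then returns a verdict."
print(within_range(len(text.split()), 5, 50))   # Word Count check
print(within_range(text.count("."), 1, 3))      # Sentence Count check (naive split on '.')
print(within_range(len(text), 20, 500))         # Character Count check
```
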
JSON Schema
Check if the response JSON matches a JSON schema.
Parameters: schema: json
Supported On: output_guardrails only
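
For intuition, this is the kind of validation the check performs, sketched here with the jsonschema library and a made-up response schema:

```python
# pip install jsonschema
import json
from jsonschema import ValidationError, validate

# Hypothetical schema for a structured sentiment reply.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
}

def json_schema_guardrail(response_text: str, schema: dict) -> bool:
    try:
        validate(instance=json.loads(response_text), schema=schema)
        return True
    except (json.JSONDecodeError, ValidationError):
        return False

print(json_schema_guardrail('{"sentiment": "positive", "confidence": 0.92}', schema))  # True
print(json_schema_guardrail('{"sentiment": "sideways"}', schema))                      # False
```
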
JSON Keys
Check if the response JSON contains any, all or none of the mentioned keys.
Parameters: keys: array, operator: string
Supported On: output_guardrails only

Contains
Checks if the content contains any, all or none of the words or phrases.
Parameters: words: array, operator: string
Supported On: output_guardrails only
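
JSON Keys and Contains share the any/all/none operator semantics. A sketch of how those operators resolve, using hypothetical keys and phrases:

```python
import json

def operator_check(hits: int, expected: int, operator: str) -> bool:
    # Shared semantics for the operator parameter: any / all / none.
    if operator == "any":
        return hits > 0
    if operator == "all":
        return hits == expected
    if operator == "none":
        return hits == 0
    raise ValueError(f"unknown operator: {operator}")

response = '{"answer": "Refunds take 3-5 business days.", "source": "policy.md"}'

# JSON Keys: how many of the configured keys exist in the parsed response?
keys = ["answer", "source"]
key_hits = sum(k in json.loads(response) for k in keys)
print(operator_check(key_hits, len(keys), "all"))     # True

# Contains: how many of the configured words/phrases appear in the raw text?
words = ["chargeback", "wire transfer"]
word_hits = sum(w.lower() in response.lower() for w in words)
print(operator_check(word_hits, len(words), "none"))  # True
```
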
Valid URLs
Checks if all the URLs mentioned in the content are valid.
Parameters: onlyDNS: boolean
Supported On: output_guardrails only
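
A rough sketch of the idea: extract URLs from the content and verify each one, with onlyDNS limiting the check to a DNS lookup. How Portkey verifies a URL when onlyDNS is off (for example, with a full HTTP request) is not shown here:

```python
import re
import socket
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://\S+")

def urls_are_valid(text: str, only_dns: bool = False) -> bool:
    for url in URL_PATTERN.findall(text):
        parts = urlparse(url)
        if parts.scheme not in ("http", "https") or not parts.netloc:
            return False
        if only_dns:
            try:
                # Settle for the hostname resolving instead of fetching the page.
                socket.getaddrinfo(parts.hostname, None)
            except socket.gaierror:
                return False
    return True

print(urls_are_valid("Docs live at https://portkey.ai/docs", only_dns=True))
```
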
Contains Code
Checks if the content contains code of format SQL, Python, TypeScript, etc.
Parameters: format: string
Supported On: output_guardrails only
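
One plausible signal for this check is a fenced code block tagged with the requested language; the sketch below uses only that signal and is not Portkey's detector:

```python
import re

FENCE = "`" * 3  # three backticks

def contains_code(text: str, fmt: str) -> bool:
    # Look for a fenced block tagged with the requested format, e.g. a SQL fence.
    pattern = re.compile(
        re.escape(FENCE) + re.escape(fmt.lower()) + r"\b.*?" + re.escape(FENCE),
        re.DOTALL,
    )
    return bool(pattern.search(text.lower()))

reply = f"Here you go:\n{FENCE}sql\nSELECT * FROM users;\n{FENCE}"
print(contains_code(reply, "SQL"))     # True
print(contains_code(reply, "Python"))  # False
```
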
Lowercase Detection
Check if the given string is lowercase or not.
Parameters: format: string
Supported On: input_guardrails, output_guardrails

Ends With
Check if the content ends with a specified string.
Parameters: Suffix: string
Supported On: input_guardrails, output_guardrails

Webhook
Makes a webhook request for custom guardrails.
Parameters: webhookURL: string, headers: json
Supported On: input_guardrails, output_guardrails
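
The Webhook check lets the gateway call your own service and use its answer as the guardrail verdict. A minimal receiver sketch, assuming the gateway POSTs JSON containing the text under check and accepts a boolean verdict field back; see the webhook guardrail docs for the exact payload contract:

```python
# pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

BLOCKED_TERMS = {"internal-api-key", "do not share"}

@app.post("/guardrail")
def guardrail():
    event = request.get_json(force=True)
    # Assumption: the text under check arrives in a "text"-like field;
    # check the webhook guardrail docs for the real payload shape.
    text = str(event.get("text", "")).lower()
    verdict = not any(term in text for term in BLOCKED_TERMS)
    return jsonify({"verdict": verdict})

if __name__ == "__main__":
    app.run(port=8000)
```
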
JWT Token Validator
Check if the JWT token is valid.
Parameters: JWKS URI: string, Header Key: string, Cache Max Age: number, Clock Tolerance: number, Max Token Age: number (in seconds)
Supported On: input_guardrails
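
A rough equivalent of this check using PyJWT, with the guardrail settings mapped onto constants; reading the token from a configurable header key is an assumption about how the gateway locates it:

```python
# pip install pyjwt[crypto]
import time

import jwt
from jwt import PyJWKClient

JWKS_URI = "https://auth.example.com/.well-known/jwks.json"  # JWKS URI
HEADER_KEY = "Authorization"                                 # Header Key (assumed location)
CLOCK_TOLERANCE = 5                                          # Clock Tolerance, seconds
MAX_TOKEN_AGE = 3600                                         # Max Token Age, seconds

jwks_client = PyJWKClient(JWKS_URI, cache_keys=True)         # key caching ~ Cache Max Age

def jwt_guardrail(headers: dict) -> bool:
    token = headers.get(HEADER_KEY, "").removeprefix("Bearer ").strip()
    try:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            leeway=CLOCK_TOLERANCE,
            options={"verify_aud": False},
        )
    except jwt.PyJWTError:
        return False
    # Reject tokens older than the configured maximum age.
    return time.time() - claims.get("iat", 0) <= MAX_TOKEN_AGE
```
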
Model Whitelist
Check if the inference model to be used is in the whitelist.
Parameters: Models: array, Inverse: boolean
Supported On: input_guardrails
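
The whitelist logic itself is simple; reading the Inverse flag as turning the allow-list into a block-list is an assumption:

```python
def model_whitelist(requested_model: str, models: list[str], inverse: bool = False) -> bool:
    # Assumption: Inverse=True flips the allow-list into a block-list.
    allowed = requested_model in models
    return not allowed if inverse else allowed

print(model_whitelist("gpt-4o", ["gpt-4o", "claude-3-5-sonnet-latest"]))  # True
print(model_whitelist("gpt-4o", ["gpt-4o"], inverse=True))                # False
```
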
PRO — LLM Guardrails

Moderate Content
Checks if the content passes the mentioned content moderation checks.
Parameters: categories: array
Supported On: input_guardrails only
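
The categories parameter selects which moderation dimensions (hate, violence, self-harm, and so on) should fail the check. Portkey runs this check for you; purely as an illustration of category-based moderation, here is what a call to OpenAI's moderation endpoint looks like (not necessarily the model Portkey uses):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

result = client.moderations.create(
    model="omni-moderation-latest",
    input="user prompt to screen goes here",
).results[0]

print("flagged:", result.flagged)
print("categories:", result.categories.model_dump(by_alias=True))
```
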
Check Language
Checks if the content is in the mentioned language.
Parameters: language: string
Supported On: input_guardrails only

Detect PII
Detects Personally Identifiable Information (PII) in the content.
Parameters: categories: array
Supported On: input_guardrails, output_guardrails

Detect Gibberish
Detects if the content is gibberish.
Parameters: boolean
Supported On: input_guardrails, output_guardrails

You can now have configurable timeouts for Partner & Pro Guardrails!