Prompt Security provides advanced protection for your AI applications against security threats such as prompt injection and sensitive data exposure, helping ensure safe interactions with LLMs.
To get started with Prompt Security, visit their website at https://prompt.security.
To set up the Prompt Security guardrail in Portkey:

1. Click the Admin Settings button on the Sidebar
2. Go to the Plugins tab under Organisation Settings
3. Navigate to the Guardrails page and click the Create button
4. Add the actions you want on your check, and create the Guardrail!

Guardrail Actions allow you to orchestrate your guardrails logic. You can learn more about them here.
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Protect Prompt | Protects a user prompt before it is sent to the LLM | None | `beforeRequestHook` |
| Protect Response | Protects an LLM response before it is returned to the user | None | `afterRequestHook` |
When you save the Guardrail, you'll get a Guardrail ID. Add that ID to the `before_request_hooks` or `after_request_hooks` params in your Portkey Config. Here's an example configuration:
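The following is a minimal sketch of such a Config; the `id` values are placeholders for the Guardrail IDs generated when you saved your checks:

```json
{
  "before_request_hooks": [
    { "id": "your-protect-prompt-guardrail-id" }
  ],
  "after_request_hooks": [
    { "id": "your-protect-response-guardrail-id" }
  ]
}
```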
For more, refer to the Config documentation.
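As a usage illustration, here's a hedged sketch of attaching a saved Config to requests with the Portkey Python SDK; the API key, config ID, and model name below are placeholders, not values from this integration:

```python
from portkey_ai import Portkey

# Placeholder values: replace with your own Portkey API key and
# the ID of the Config that contains your guardrail hooks.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-guardrail-config-id",
)

# Every request made through this client is now routed through the
# guardrail checks defined in the attached Config.
response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our Q3 results."}],
)
print(response.choices[0].message.content)
```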
Your requests are now guarded by Prompt Security’s protection mechanisms, and you can see the verdict and any actions taken directly in your Portkey logs!
If you face any issues with the Prompt Security integration, join the Portkey community forum for assistance.