CrowdStrike AIDR provides AI Detection and Response capabilities for scanning LLM inputs and outputs. It can block or sanitize text depending on configured rules.
## Using CrowdStrike AIDR with Portkey
### 1. Add CrowdStrike Credentials to Portkey
- Navigate to the Integrations page in the sidebar
- Click the edit button for the CrowdStrike AIDR integration
- Add your CrowdStrike API credentials
### 2. Add CrowdStrike’s Guardrail Check
- Navigate to the Guardrails page and click the Create button
- Search for Guard Chat Completions and click Add
- Configure your guardrail settings
- Set any actions you want on your check, and create the Guardrail!
Guardrail Actions allow you to orchestrate your guardrail logic. You can learn more about them here.
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Guard Chat Completions | Pass LLM Input and Output to CrowdStrike’s guard_chat_completions endpoint. Able to block or sanitize text depending on configured rules. | Redact, Timeout | beforeRequestHook, afterRequestHook |
| Parameter | Type | Default | Description |
|---|---|---|---|
| redact | boolean | false | If true, detected harmful content will be redacted |
| timeout | number | 5000 | Timeout in milliseconds |
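As a rough sketch, the check's two parameters from the table above could be set like this when building the guardrail. The check identifier and the surrounding structure are assumptions for illustration, not Portkey's exact schema:

```python
import json

# Illustrative guardrail check configuration for CrowdStrike's
# Guard Chat Completions check. Parameter names come from the table
# above; the "id" value and overall shape are hypothetical.
guardrail_check = {
    "id": "crowdstrike.guardChatCompletions",  # assumed check identifier
    "parameters": {
        "redact": True,   # redact detected harmful content rather than passing it through
        "timeout": 5000,  # abandon the CrowdStrike call after 5 seconds
    },
}

print(json.dumps(guardrail_check, indent=2))
```

In practice you set these values in the Portkey UI when configuring the check in step 2.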
### 3. Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you’ll get an associated Guardrail ID - add this ID to the `input_guardrails` or `output_guardrails` params in your Portkey Config
- Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
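A minimal sketch of such a Config, assuming a placeholder guardrail ID `gr-abc123` (yours will differ). Attaching the same guardrail to both `input_guardrails` and `output_guardrails` runs the check before the request and on the LLM's response:

```python
import json

# Illustrative Portkey Config referencing a saved guardrail.
# "gr-abc123" is a placeholder for the Guardrail ID you get in step 2.
portkey_config = {
    "input_guardrails": ["gr-abc123"],   # run the check on the request (beforeRequestHook)
    "output_guardrails": ["gr-abc123"],  # run the check on the response (afterRequestHook)
}

print(json.dumps(portkey_config, indent=2))
```

Save a Config like this in the Portkey UI to get a Config ID, which you then attach to your requests.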
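As one example of making a request with the Config attached, here is a hedged Python sketch. It assumes the `portkey-ai` SDK is installed, a `PORTKEY_API_KEY` environment variable is set, and the model name is just an example of whatever your Config routes to:

```python
def chat_with_guardrails(config_id: str, user_message: str) -> str:
    """Send a chat completion through Portkey with a guardrail Config attached.

    Assumes the portkey-ai SDK (pip install portkey-ai) and that the
    PORTKEY_API_KEY environment variable holds your Portkey API key;
    config_id is the Config ID you saved in the Portkey UI.
    """
    from portkey_ai import Portkey

    client = Portkey(config=config_id)  # API key read from the environment
    completion = client.chat.completions.create(
        model="gpt-4o",  # example model; use whatever your Config targets
        messages=[{"role": "user", "content": user_message}],
    )
    return completion.choices[0].message.content
```

If the guardrail's checks fail and you configured a blocking action, the request is denied; with `redact` enabled, flagged content is sanitized instead.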

