Detect and block the most critical risks that come with deployed LLMs, including:
- PII or sensitive data leakage
- Toxic, offensive, or problematic language generation
- Hallucinations
- Malicious user prompts
- Prompt injection
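As a rough illustration of what such detection can look like, here is a minimal sketch of a rule-based guardrail covering two of the risks above: PII leakage (via regex) and prompt injection (via marker phrases). All names, patterns, and marker strings are hypothetical assumptions for this sketch, not the API of any specific guardrails library; production systems typically combine rules like these with ML-based classifiers.

```python
import re

# Hypothetical PII patterns -- illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

# Hypothetical prompt-injection marker phrases (assumed examples).
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
)

def scan_text(text: str) -> dict:
    """Report which risk categories a prompt or response triggers."""
    # Collect all PII matches, keeping only categories that fired.
    pii = {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    }
    # Simple substring check for known injection phrasing.
    lowered = text.lower()
    injection = any(marker in lowered for marker in INJECTION_MARKERS)
    return {"pii": pii, "prompt_injection": injection}

def is_blocked(text: str) -> bool:
    """Block if any PII was found or an injection marker matched."""
    result = scan_text(text)
    return bool(result["pii"]) or result["prompt_injection"]
```

For example, `is_blocked("My SSN is 123-45-6789")` returns `True`, while a benign question with no PII or injection markers passes through. Real deployments would extend this with toxicity and hallucination checks, which generally require model-based scoring rather than rules.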