Summary

Product page for Azure AI Content Safety, featuring tools for detecting harmful content, preventing prompt injections, mitigating hallucinations, and detecting protected material in AI applications.

Key quotes

The system monitors content across four harm categories: hate, sexual, violence, and self-harm.
Prompt shields enhance the security of generative AI systems by defending against prompt injection attacks.
Groundedness detection identifies and corrects the ungrounded outputs of generative AI models, ensuring they’re based on provided source materials.
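The severity-based filtering described above can be sketched locally. This is an illustrative assumption, not the service's actual API: the response shape (`category`/`severity` entries) and the threshold values are hypothetical, though the four category names match those the page lists.

```python
# Hypothetical sketch: applying per-category severity thresholds to a
# content-analysis result. The response shape and thresholds here are
# illustrative assumptions, not Azure AI Content Safety's actual API.

HARM_CATEGORIES = ("Hate", "Sexual", "Violence", "SelfHarm")

def flagged_categories(analysis, thresholds):
    """Return categories whose severity meets or exceeds the caller's
    threshold for that category (default threshold 4 if unspecified)."""
    flagged = []
    for item in analysis:
        if item["severity"] >= thresholds.get(item["category"], 4):
            flagged.append(item["category"])
    return flagged

# Example: block on severity >= 4 for every category.
sample = [
    {"category": "Hate", "severity": 2},
    {"category": "Violence", "severity": 6},
]
print(flagged_categories(sample, {c: 4 for c in HARM_CATEGORIES}))
```

A real integration would obtain `analysis` from the service and choose thresholds per application policy; only the decision logic is shown here.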

This page details the capabilities of Azure’s AI content moderation tools, including custom category filters and security measures like prompt shields. It also describes the integration of Content Safety within the Azure OpenAI content filtering system.
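The custom category filtering mentioned above can be approximated, at its simplest, by term matching against a user-defined list. This is a naive local sketch of the idea only, not how the service implements custom categories; the function name and matching rules are assumptions.

```python
import re

def match_blocklist(text, blocklist):
    """Naive custom-filter sketch: return the blocklist terms found in
    text, using case-insensitive whole-word matching. Illustrative only;
    the actual service applies far richer classification."""
    hits = []
    for term in blocklist:
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append(term)
    return hits

print(match_blocklist("Please contact me Offline.", ["offline", "wire transfer"]))
```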