Summary

An overview of Microsoft's AI Red Team, detailing best practices for probing AI systems for both security vulnerabilities and responsible AI (RAI) failures to ensure safer deployments.

Key quotes

"AI red teaming is now an umbrella term for probing both security and RAI outcomes."
"Generative AI systems, on the other hand, are probabilistic. This means that running the same input twice may provide different outputs."

The post explains the distinction between traditional and AI red teaming, highlighting the need for defense-in-depth strategies and iterative testing: because generative AI is probabilistic, a single test run can miss a failure that only surfaces on some samplings of the same input.
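The case for iterative testing can be sketched with a toy example. The `mock_model` and `probe_repeatedly` names below are illustrative stand-ins, not anything from the post: the mock model returns a sampled output for a fixed prompt, so a single probe may see only the safe behavior, while repeated probes surface the rarer failure.

```python
import random

# Hypothetical stand-in for a generative model: the same prompt can
# yield different outputs because responses are sampled, not fixed.
def mock_model(prompt, rng):
    return rng.choice(["safe reply", "safe reply", "unsafe reply"])

def probe_repeatedly(model, prompt, runs=50, seed=0):
    """Send the same prompt many times and count each distinct output.

    A single run only observes one sample; repeating the probe is what
    exposes failure modes that occur with less-than-certain probability.
    """
    rng = random.Random(seed)  # seeded so the probe itself is repeatable
    counts = {}
    for _ in range(runs):
        out = model(prompt, rng)
        counts[out] = counts.get(out, 0) + 1
    return counts

counts = probe_repeatedly(mock_model, "same input every time")
print(counts)
```

With enough repetitions, the rarer "unsafe reply" is very likely to appear in the counts even though any individual run might return only the safe behavior, which is the core argument for iterating rather than testing once.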