Researchers from Microsoft, Northwestern University, and Witness have released the MNW deepfake detection benchmark, a diverse dataset of AI-generated images, audio, and video designed to help developers build more robust detection systems. Current detectors struggle in real-world settings because they are trained on narrow datasets drawn from only a few generators and fail to generalize. The MNW dataset addresses this by drawing from many generators and including post-processed media that mimics real-world manipulation. It will be updated twice a year, each spring and fall, to track evolving AI artifacts and evasion techniques, aiming to close the gap between lab performance and real-world detection accuracy.
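The generalization failure described above can be illustrated with a toy sketch (hypothetical data and feature, not the MNW benchmark or any real detector): a classifier whose threshold is fit to one generator's artifact signature detects that generator's fakes well but misses fakes from an unseen generator whose artifacts look different.

```python
# Toy sketch of the cross-generator generalization gap (all values hypothetical).
import random

random.seed(0)

def sample(artifact_mean, n=500):
    # Simulated 1-D "artifact score" for media from one source.
    return [random.gauss(artifact_mean, 1.0) for _ in range(n)]

real = sample(0.0)        # real media: low artifact score
fake_gen_a = sample(3.0)  # generator A leaves a strong, distinctive artifact
fake_gen_b = sample(0.8)  # generator B's artifact barely differs from real media

# "Train": pick a threshold midway between real and generator-A means.
threshold = (sum(real) / len(real) + sum(fake_gen_a) / len(fake_gen_a)) / 2

def detection_rate(fakes):
    # Fraction of fakes scoring above the learned threshold.
    return sum(x > threshold for x in fakes) / len(fakes)

acc_a = detection_rate(fake_gen_a)  # in-distribution generator
acc_b = detection_rate(fake_gen_b)  # unseen generator
print(f"generator A: {acc_a:.2f}, generator B: {acc_b:.2f}")
```

The detector performs well on the generator it was fit to and poorly on the unseen one, which is the lab-versus-real-world gap a multi-generator benchmark is meant to expose.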