Meta's Oversight Board is investigating how Instagram and Facebook handled explicit AI-generated images of women, taking up two separate cases: one reported in India and one in the US. In the Instagram case, Meta failed to remove the AI-generated images after initial user reports, though they were eventually taken down. The cases highlight broader concerns about deepfake porn and online gender-based violence, an area where few laws globally address the production and distribution of AI-generated sexual imagery. Meta relies on a mix of automated systems and human review to detect sexually suggestive content. The board is seeking public comments on the matter and is expected to issue a decision in the coming weeks.