Text-to-image AI models can be tricked into generating disturbing images

Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers managed to get both Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 to disregard their policies and create images of naked people, dismembered bodies, and other violent and sexual scenarios. Their work, which…

https://www.technologyreview.com/2023/11/17/1083593/text-to-image-ai-models-can-be-tricked-into-generating-disturbing-images/
