xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

For days, xAI has remained silent after its chatbot Grok [admitted][1] to generating sexualized AI images of minors, which could be classified as child sexual abuse material (CSAM) under US law.

According to Grok's ["apology"][2]—which was generated in response to a user's request, not posted by xAI—the chatbot's outputs may have been illegal:

> "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."

Ars could not reach xAI for comment, and a review of the feeds for Grok, xAI, X Safety, and Elon Musk shows no official acknowledgment of the issue.

[Read full article][3]

[Comments][4]

[1]: https://x.com/grok/status/2006525486021705785

[2]: https://x.com/grok/status/2006525486021705785

[3]: https://arstechnica.com/tech-policy/2026/01/xai-silent-after-grok-sexualized-images-of-kids-dril-mocks-groks-apology/

[4]: https://arstechnica.com/tech-policy/2026/01/xai-silent-after-grok-sexualized-images-of-kids-dril-mocks-groks-apology/#comments
