AI image tool on X faces scrutiny over sexualized deepfakes, privacy concerns

TOI GLOBAL DESK | TOI GLOBAL | Jan 06, 2026, 22:38 IST
The incident involving X’s AI chatbot Grok has drawn widespread attention following Ashley St. Clair’s claims that the tool repeatedly generated sexualized images of her without her consent, including one depicting her as a child. The situation has reopened debate over the risks of generative AI and the accountability of the platforms that deploy it.
TL;DR

Ashley St. Clair accuses X's AI bot Grok of creating sexualized images of her despite her objections. Regulators, campaigners, and specialists point to the case as illustrating broader flaws in AI safety and the moderation of objectionable content.

Ashley St. Clair, a conservative online commentator and the mother of one of Elon Musk’s children, says an artificial intelligence tool built into the social media platform X repeatedly generated sexualized images of her without consent, including images resembling her as a minor. The claims have intensified scrutiny of Grok, the generative AI chatbot developed by xAI and now integrated into X, as regulators and child protection advocates question the effectiveness of its safeguards.

Over the weekend, a friend alerted St. Clair to the images. The pictures, she said, were the result of other users prompting Grok to alter her original photos into suggestive versions; she had not created or requested them herself. She said some of the images appeared to be based on photographs taken when she was underage. In an interview with NBC News on Monday, St. Clair said Grok initially told her it would stop producing such images after she objected, but the images continued to circulate and, in some cases, escalated in explicitness.

The controversy follows a recent update to Grok that introduced an image editing feature, allowing users to modify photos using text prompts. While the tool can be used for nonsexual alterations, critics say it has been widely used to remove or alter clothing in ways that sexualize women and children. NBC News reported that it reviewed several of the images described by St. Clair, noting that some remained online as of Monday evening, though certain accounts were suspended.

Elon Musk addressed the issue indirectly in a post on X on Saturday, writing that anyone using Grok to create illegal content would face consequences comparable to uploading illegal material directly. X’s safety account said the platform would remove offending posts, suspend accounts, and cooperate with law enforcement when necessary. Neither Musk nor xAI responded directly to requests for comment regarding St. Clair’s allegations, according to NBC News.

Regulatory bodies have also taken notice. Ofcom, the United Kingdom’s communications regulator, said Monday that it had made urgent contact with X and xAI to understand how the companies are complying with their legal duties to protect users, particularly children. The regulator cited serious concerns about the production of sexualized images involving minors.

Child protection experts warn that the accessibility of such tools magnifies potential harm. Fallon McNulty, executive director of the exploited children division at the National Center for Missing and Exploited Children, told NBC News that the organization had received public reports of Grok-generated images circulating on X. McNulty added that although xAI routinely reports such material to the CyberTipline, the tool's ease of use and scale remain a cause for concern.

The episode has rekindled a broader debate over AI and consent. Advocacy groups contend that without strong safeguards, the technology can be abused for harassment and child exploitation. As governments in both Europe and the United States consider new regulations, the Grok controversy underscores the growing tension between rapid AI innovation and the need for effective oversight.