On Wednesday, X restricted image generation and editing through its chatbot, Grok. The company limited those features to paid subscribers and added geoblocks after reports that the bot produced sexualized, non-consensual images, including images of minors.
An update posted by the X Safety account said technical measures now restrict editing of images of real people. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”
The company also said image creation and editing via the Grok account are available only to paid users, with location-based restrictions. Despite the changes, testing and user reports show Grok can still remove or alter clothing on uploaded photos (Ed. note: some reports say safeguards failed in cases involving children).
Regulators in the European Union and the United Kingdom opened probes into the tool’s safety and legal compliance. Ofcom said it may pursue court-backed measures, including blocking the service, if X fails to act.
Authorities in other countries have also responded, with inquiries or actions in Malaysia, Indonesia, and South Korea over non-consensual deepfakes.
In the United States, California Attorney General Rob Bonta announced a probe into xAI and Grok, examining possible violations tied to non-consensual intimate imagery. “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” he said in the announcement.