Grok deepfakes are now at the center of a growing political and regulatory storm in the UK, as Prime Minister Keir Starmer warns that action against X is firmly on the table. Speaking amid mounting public outrage, Starmer addressed reports that X’s Grok AI chatbot is generating sexualized deepfake images of adults and minors. The issue has raised urgent questions about platform responsibility, AI safeguards, and online harm. Many users are asking whether the UK could restrict or even ban X if the situation escalates. Starmer’s comments suggest that no response is being ruled out. Regulators are already watching closely. The controversy is fast becoming a defining test of AI governance in Britain.
During a radio interview, Starmer described the Grok-generated content in blunt terms, calling it “disgusting” and unacceptable for any major platform. He stressed that X must move quickly to remove harmful material and prevent further abuse. According to the prime minister, platforms hosting this kind of content cannot expect leniency. He emphasized that the UK government has a responsibility to protect users, especially children. Starmer also signaled frustration with what he sees as slow or insufficient action by X. His language reflected a hardening stance rather than a mere warning shot. For many listeners, the message was clear: enforcement could follow.
The backlash intensified after X introduced a feature allowing Grok to edit images without the subject’s consent. That rollout reportedly led to a surge of AI-generated images that appeared to undress women and, in some cases, minors. Critics say the tool lacked adequate safeguards from the start. Child safety advocates argue the feature created predictable and preventable harm. Starmer echoed those concerns, saying the UK would not tolerate such misuse of AI. He added that officials have been instructed to consider every available option. That includes regulatory penalties and stronger interventions if needed.
The situation has also triggered political ripples beyond the UK. US Representative Anna Paulina Luna responded by accusing Starmer of launching a political attack on Elon Musk and free speech. She claimed that potential UK action against X could prompt US lawmakers to consider sanctions. Her comments underline how AI moderation disputes are becoming entangled with geopolitics. Supporters of stricter regulation argue that free speech does not extend to sexual exploitation. Critics counter that government pressure risks overreach. The debate is quickly crossing borders.
Meanwhile, the UK’s communications regulator, Ofcom, has already opened inquiries into whether X is breaching the Online Safety Act. The law requires platforms to actively prevent and remove harmful and illegal content. Ofcom officials say they are reviewing X’s response to recent reports. Depending on the findings, the regulator could escalate the investigation. Penalties under the act can reach £18 million or 10 percent of a company’s worldwide revenue, whichever is greater. This process adds real legal weight to Starmer’s remarks. It also signals that enforcement is no longer theoretical.
X has pushed back by pointing to a recent statement warning users against creating illegal content with Grok. The company says anyone generating such material faces the same consequences as uploading it directly. Critics argue that enforcement after the fact is not enough. They say platforms must design tools that prevent abuse before it happens. As scrutiny grows, Grok deepfakes may become a turning point for AI accountability. The coming weeks could determine how aggressively governments regulate generative AI. For now, pressure on X shows no sign of easing.


