UK regulators are considering drastic action against Elon Musk’s X following a surge of AI-generated sexualized images produced by Grok. These images, which digitally undress women and children without consent, have sparked widespread condemnation. Concerned about online safety and legal compliance, the UK’s media watchdog, Ofcom, is fast-tracking an investigation to determine whether X has violated the nation’s Online Safety Act. Governments worldwide are now closely watching how the platform responds.
Ofcom has already contacted X and demanded an explanation by Friday, marking the start of an “expedited assessment” of the platform’s practices. This move underscores the urgency with which UK authorities are addressing the proliferation of non-consensual AI content online. According to insiders, regulators are evaluating whether X’s policies and enforcement measures meet legal standards designed to protect vulnerable individuals from exploitation.
UK Technology Secretary Liz Kendall has publicly stated her support for Ofcom if it chooses to block X. She described the creation of sexualized AI images of women and children as “despicable and abhorrent.” Kendall emphasized that the public expects decisive action, saying updates should come within days, not weeks. Officials are signaling that failure to comply with the law could trigger far-reaching repercussions for X’s operations.
Under the Online Safety Act, Ofcom has the authority to seek court orders that could prevent X from being accessed in the UK. This could also block payment processors, advertisers, and other business partners from working with the platform, potentially impacting its revenue streams. While these powers exist, they have rarely been applied in practice, making this case a potentially landmark moment in digital law enforcement.
Despite the legal tools available, experts suggest that major financial consequences may be limited. Andrew Murray, a professor at the London School of Economics, told Cybernews that fines or business disruptions are unlikely to severely impact X. However, the reputational risk of being blocked or publicly scrutinized could carry long-term implications for the platform’s credibility and user trust.
The controversy surrounding Grok has attracted international attention, as governments and tech watchdogs increasingly grapple with the challenges of AI-generated content. The incident highlights a broader debate over the ethical responsibilities of social media platforms and AI developers. Regulators are under pressure to create enforceable standards that protect users while balancing technological innovation.
X has responded to Ofcom’s inquiries, but the details of its compliance measures remain under review. Authorities are expected to announce their findings and potential next steps soon. With public outrage mounting, X faces intense scrutiny over its AI moderation policies and the real-world consequences of allowing sexually explicit AI content to spread.