Apple and Google are under pressure from U.S. lawmakers to remove X’s AI chatbot after reports surfaced that it has been creating nonconsensual images of women and children. Senators Ron Wyden, Ben Ray Lujan, and Ed Markey argue that the app’s behavior violates the app stores’ own terms of service. Users have flagged cases where X’s AI, Grok, generated images that undress or sexualize minors, sparking widespread outrage. The senators’ letter asks the tech giants to uphold their content policies and remove X before further harm occurs.
The senators cited Google’s and Apple’s rules to justify their demand. Google prohibits apps that allow content “facilitating the exploitation or abuse of children,” while Apple forbids “offensive” or “creepy” apps. Lawmakers argue that Grok’s AI content clearly breaches these guidelines, pointing to multiple instances where the app reportedly targeted minors. These incidents raise serious questions about how X monitors AI-generated content and whether existing moderation measures are sufficient.
Beyond policy violations, the letter emphasizes accountability. The senators note that both Apple and Google have previously removed apps under government pressure, such as ICEBlock and Red Dot, which tracked ICE agents. Failing to remove X now could appear as a double standard, undermining the companies’ claims of maintaining strict control over their platforms. Lawmakers insist that tech giants cannot pick and choose which apps face scrutiny.
Public reaction to Grok’s AI deepfakes has been swift and harsh. Social media users, privacy advocates, and digital rights organizations have condemned the app’s content, calling it unethical and potentially illegal. Experts warn that allowing AI to produce sexualized images of minors could have long-term consequences, including psychological harm and legal liability for both the app developer and the platforms distributing it.
Neither Apple nor Google has publicly commented on whether X will be removed. The companies face growing scrutiny from regulators, media outlets, and the public. Analysts note that swift action could reinforce trust in the app stores’ content moderation, while delay or inaction risks reputational damage. Meanwhile, developers of other AI tools are watching closely, as this case could set a precedent for how deepfake apps are handled.
The X app controversy underscores broader questions about AI accountability. As AI-generated content becomes more realistic, platforms must balance innovation with ethical responsibility. Experts stress that companies need stronger safeguards, proactive monitoring, and clear reporting mechanisms to prevent abuse. Lawmakers’ intervention could accelerate regulatory efforts aimed at ensuring AI tools are safe and compliant with legal standards.
The coming weeks may determine X’s fate on major app stores. Lawmakers have made it clear that Grok must comply with platform policies or face removal. For Apple and Google, this represents a high-profile test of content moderation standards, with implications for future AI applications. Consumers, regulators, and tech watchers will be closely monitoring the situation, highlighting the ongoing tension between AI innovation and ethical oversight.
X App Faces Removal Over Nonconsensual Deepfa... 0 0 0 9 2
2 photos


Array