Meta’s current systems for identifying deepfakes are struggling to keep up with the speed and scale of misinformation online. The Meta Oversight Board has now issued a strong warning, urging the company to strengthen AI content labeling across Facebook, Instagram, and Threads. Users, experts, and regulators alike are increasingly concerned about the role AI-generated content plays in spreading false narratives, particularly during crises or armed conflicts.
The Board emphasizes that accurate content labeling is not just a technical concern—it’s a matter of safety. With deepfake videos and AI-generated misinformation circulating rapidly, platforms that fail to act risk amplifying harm for millions of users.
According to the Board, Meta’s current moderation relies too heavily on self-disclosure from creators and on escalated reviews after content is flagged. This approach is insufficient in today’s environment, where deepfake content can spread across multiple platforms within hours. One investigation focused on a fake AI-generated video showing alleged damage in Israel, which first surfaced on another platform before proliferating across Meta’s social apps.
“The Board’s findings show that Meta’s system to label AI content is not robust or comprehensive enough,” the Oversight Board stated. “The reliance on self-reporting and cross-platform tracking does not match the realities of modern misinformation.”
This gap is particularly concerning during periods of geopolitical tension or military escalation. The Board warns that when people cannot distinguish real from fake content quickly, it can affect decision-making and safety.
The Oversight Board has outlined several steps for Meta to address the issue. First, the company should improve its existing misinformation policies to specifically address deceptive AI-generated content. Second, it should create a new, dedicated community standard for AI content, ensuring clearer rules and enforcement across all its platforms.
The Board also recommends broader adoption of content provenance technologies such as the C2PA (Coalition for Content Provenance and Authenticity) standard, which attaches cryptographically signed metadata describing how a piece of media was created and edited. These tools could help users verify the authenticity of media, increasing transparency and trust in Meta’s ecosystem.
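To make the C2PA idea concrete, the sketch below is a minimal, hypothetical heuristic for spotting whether a JPEG *might* carry C2PA provenance data. It assumes only the standard layout of JPEG marker segments and the fact that C2PA manifests in JPEG are embedded in APP11 segments as JUMBF boxes labeled "c2pa". It is emphatically not a real verifier: actual C2PA validation requires parsing the full manifest and checking its signature chain, typically with a dedicated library.

```python
import struct


def has_c2pa_hint(jpeg_bytes: bytes) -> bool:
    """Heuristic: scan JPEG APP11 (0xFFEB) segments for a 'c2pa' JUMBF label.

    This is NOT cryptographic verification -- real C2PA validation must
    parse the JUMBF manifest and verify its signature chain.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a valid marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        # Segment length is big-endian and includes the two length bytes.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        payload = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A label-and-display pipeline could use a check like this merely to decide which uploads deserve full provenance verification, keeping the expensive signature checks off the hot path.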
Experts argue that these steps are urgent. Without proactive labeling and enforcement, deepfakes can quickly distort public perception and exacerbate the spread of harmful misinformation.
Deepfakes rarely remain confined to a single platform. The Oversight Board notes that content often migrates from one network to another, complicating detection and moderation efforts. Videos or images originating elsewhere may appear on Facebook, Instagram, or Threads without proper labeling, giving users little context about their authenticity.
Addressing these cross-platform challenges will require both technological solutions and stricter policies. Meta must balance user safety with freedom of expression, ensuring that AI content labeling is accurate, timely, and visible.
For everyday users, these developments highlight the importance of media literacy and caution online. Even a single deepfake can fuel misinformation, erode trust, and provoke real-world consequences. By implementing stronger AI labeling systems, Meta could provide users with clearer information about what they are seeing and interacting with, reducing the risk of manipulation.
The Oversight Board’s recommendations are a call to action: Meta must evolve its content moderation practices to meet the demands of an AI-driven digital landscape. Without significant changes, users remain exposed to a growing flood of misleading content with potentially serious consequences.