Meta’s Oversight Board: Manipulated Political Video Should Have Been Labeled ‘High-Risk’
Meta’s Oversight Board has upheld the company’s decision to leave a manipulated video on Facebook, but delivered a sharp rebuke to the social media giant for failing to apply appropriate warning labels and fact-checking protocols during a politically sensitive moment.
The case centered on a post shared during the extradition of former Philippine president Rodrigo Duterte to the International Criminal Court. The video spliced Serbian protest footage with pro-Duterte chants, creating a false narrative during a critical political event. While Meta’s automated systems initially flagged the content as potential misinformation and reduced its visibility, the video was never fact-checked due to what Meta described as the high volume of posts requiring review.
The Board’s Findings: Inconsistent Enforcement
Though the board agreed that the post did not violate Meta’s core misinformation policy, which bars misinformation about voting locations and candidate eligibility, it found significant lapses in how Meta handled the content. The board determined the video should have received a “High-Risk” label because it was digitally altered, photorealistic, and liable to deceive the public during a significant public event.
The decision marks the second time since last year that the board has criticized Meta’s approach to manipulated media as “incoherent and unjustifiable.” It expressed particular concern that Meta cannot automatically identify identical copies of the same video or audio content and label them consistently, a limitation the company does not face with static images.
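The contrast the board draws is easiest to see in code. For static images, matching re-uploads of already fact-checked content is a well-understood problem, often addressed with perceptual hashing. The sketch below is a minimal illustration under that assumption, using the open-source Python `imagehash` library and hypothetical file names; it is not a description of Meta’s actual systems.

```python
# Illustrative sketch only (not Meta's pipeline): propagating a label from a
# fact-checked static image to identical or near-identical re-uploads using
# a perceptual hash. Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Perceptual hash of an image a fact-checker has already labeled
# ("labeled_post.jpg" is a hypothetical file name).
labeled_hash = imagehash.phash(Image.open("labeled_post.jpg"))

def should_inherit_label(candidate_path: str, max_distance: int = 4) -> bool:
    """Return True if the candidate image matches the labeled one;
    the Hamming-distance threshold absorbs recompression artifacts."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return (labeled_hash - candidate_hash) <= max_distance
```

Video and audio offer no equally cheap matching primitive: trimming, re-encoding, and dubbed audio alter the signal frame by frame, which is the gap the board wants Meta to close with dedicated investment.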
A Broader Problem: Over-Reliance on Third Parties
The board’s critique extends beyond this single case. Meta acknowledged to the board that it relies heavily on third-party fact-checkers and media outlets to identify AI-manipulated video and audio, rather than developing internal technological capacity to do so at scale. For a company of Meta’s technical resources and global reach, the board found this approach insufficient.
“Given that Meta is one of the leading technology and AI companies in the world, with its resources and the wide usage of Meta’s platforms, the Board reiterates that Meta should prioritize investing in technology to identify and label manipulated video and audio at scale,” according to the decision.
Recommendations for Change
The board issued several recommendations to Meta:
- Publicly describe the different informative labels used for manipulated media and when they are applied
- Create a separate fact-checking queue for content similar to previously fact-checked material in specific markets (a rough sketch of this idea follows the list)
- Provide fact-checkers with better tools to rapidly identify viral misleading content
- Develop consistent processes for labeling identical or similar content when “High-Risk” labels are applied
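To make the second recommendation concrete, here is a rough, hypothetical sketch of a similarity-based triage queue, again built on perceptual hashes; the file names, thresholds, and verdict labels are illustrative assumptions, not Meta’s tooling.

```python
# Hypothetical sketch of a "similar to previously fact-checked" queue.
# New posts whose perceptual hash lands near any already-reviewed item are
# routed to a dedicated review queue. Requires: pip install pillow imagehash
from collections import deque
from PIL import Image
import imagehash

# Hashes of posts fact-checkers have already reviewed, paired with the
# verdict that was applied (hypothetical file names and labels).
fact_checked = [
    (imagehash.phash(Image.open("checked_1.jpg")), "High-Risk"),
    (imagehash.phash(Image.open("checked_2.jpg")), "False"),
]

similar_queue: deque = deque()  # dedicated queue for near-matches

def triage(post_path: str, max_distance: int = 8) -> None:
    """Route the post to the priority queue if it resembles any
    previously fact-checked item."""
    post_hash = imagehash.phash(Image.open(post_path))
    for known_hash, prior_verdict in fact_checked:
        if (post_hash - known_hash) <= max_distance:
            similar_queue.append((post_path, prior_verdict))
            return
```

A looser distance threshold here reflects the board’s point about subtle variations: posts edited just enough to evade exact matching can still land close in perceptual-hash space.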
Meta has 60 days to respond to these recommendations. The company did not respond to requests for comment about the decision.
The Larger Context
The ruling arrives as Meta faces mounting criticism over its handling of AI-generated and manipulated content across its platforms. Earlier concerns about deepfake celebrity scams have similarly highlighted the company’s struggles with consistent enforcement at scale.
The board emphasized that manipulated videos often form part of coordinated misinformation campaigns, with subtle variations posted to evade fact-checking efforts. Without robust technological systems in place, Meta remains vulnerable to sophisticated manipulation operations during politically sensitive moments.