How will Google and Meta respond to political deepfakes in the 2024 European Parliament elections?
THURSDAY, 30 NOV, 2023.
We examine how major digital platforms plan to respond to synthetic videos and images.
Deepfakes have already impacted elections in countries like Slovakia, Argentina, and Poland. Let's examine the codes of conduct of five major digital platforms: Google, Meta, Microsoft, TikTok, and X/Twitter.
- Google/YouTube/Display ads: Starting in November 2023, political ads on Google and YouTube that use AI to alter voice or imagery will require disclosure. The policy does not cover ads where AI is used only for editing techniques like image resizing, cropping, color or defect correction, or background edits.
- Meta is moving more slowly: it will begin requiring political ads to disclose any use of artificial intelligence "sometime next year". For now, the company does not allow political campaigns to use its new generative AI advertising products. The rules will still allow unpaid posts that are satire or parody.
- Microsoft is taking the cybersecurity route, promising a tool that will let users certify content and attach provenance information to an image or video's metadata. The tool will be made available to political campaigns in Q2 2024. In the US, the company will also endorse the Protect Elections from Deceptive AI Act, which seeks to ban the use of AI to create "deceptive content falsely depicting federal candidates."
- On TikTok, all realistic deepfakes must be clearly disclosed. Videos using synthetic images of public figures, including government officials and politicians, will be taken down if the avatars are used to endorse a product or otherwise violate the app's policies. TikTok does not allow political advertising at all.
- X/Twitter's moderation team faces the difficult task of interpreting a fairly vague set of rules. The platform will take down or flag misleading media that may lead to "mass violence or widespread civil unrest" or "voter suppression or intimidation". Memes, satire, and animations are allowed.
It seems that these limitations will mainly affect paid advertising, while AI content distributed organically is here to stay. The mega-election year of 2024 will serve as a real-life test of whether the proposed measures are sufficient to protect election integrity.