The AI videos supercharging Russia’s online disinformation campaigns

For years, social media users have brushed off the fleeting deepfakes that appeared in their feeds. Prof Alan Read of King’s College London, a theatre expert with no political ties, was no exception. He occasionally flagged such synthetic videos but mostly ignored them. That changed when an unknown account sent him a link to a clip in which his face had been used to deliver a heated critique of French President Emmanuel Macron. The voice in the video, eerily similar to his own, accused Western leaders of steering the EU “like a sinking ship.”

“Almost everything in that video is shocking and disturbing,” Read said. “It feels completely foreign to me.”

Russian-linked disinformation campaigns built on AI tools have surged in recent months, spreading synthetic content that undermines EU institutions and casts doubt on Kyiv’s governance as it seeks Western aid for its war effort. Security analysts warn that this marks a shift in how influence is wielded, with AI now enabling cheap persuasion at scale. “We’re witnessing a revolution in political manipulation,” said Chris Kremidas-Courtney of the European Policy Centre. “Our existing rules aren’t equipped to handle it.”

The rapid growth of these campaigns coincided with the release of Sora 2, OpenAI’s latest video-generation tool. Its hyper-realistic output has prompted rival platforms to cut prices or skip safeguards such as the watermarks used to identify AI-generated content. Arman Tuganbaev, a Russian AI specialist, noted that third-party apps let users create targeted deepfakes even where OpenAI tries to block such misuse.

TikTok, meanwhile, became a battleground in late December, when viral videos showed Polish women advocating “Polexit.” Polish officials linked the clips to Russian disinformation, pointing to linguistic clues. “There’s no doubt this is Russian work,” said Adam Szlapka, a government spokesperson. “Even a close look reveals their fingerprints.” The platform says it removed more than 75 covert influence operations worldwide in 2025.

Concerns are rising in the UK, where MPs have debated the threat Russian deepfakes pose to the May elections. Vijay Rangarajan, head of the Electoral Commission, emphasized that such content has already shaped elections around the world. “Britain can’t be an exception,” he argued. But the Online Safety Act does not treat disinformation as a direct harm, so platforms can remove content only after proving it amounts to foreign influence, a process often too slow to stop a viral video.

Researchers highlight commonalities across these AI-driven efforts, such as shared stylistic elements and distribution networks. The Matryoshka campaign, also known as Operation Overload, is believed to have targeted Moldova’s president during her 2025 election campaign, and NewsGuard identified similar patterns in the video featuring Read, suggesting a shared disinformation network. The campaign takes its name from matryoshkas, the Russian nesting dolls, a nod to the layered deception these operations employ.