Operation Overload: An AI-fuelled escalation of the Kremlin-linked propaganda effort

The Russian propaganda operation targeting media organisations and fact-checkers is still going strong. Operation Overload, which we first documented in June 2024, is now leveraging AI-generated content and impersonation techniques, and is expanding to more platforms such as TikTok and Bluesky. Telegram and direct emails to newsrooms remain daily dissemination techniques, used to create a sense of urgency among their targets. Since our last update on the operation in September, legitimate outlets have regularly fallen into the trap.

This latest report is the third in a series published by CheckFirst and Reset Tech, offering a deeper, sharper analysis of one of the most sophisticated propaganda operations currently targeting Western democracies. Building on findings from our previous investigations, this edition reveals an alarming surge in both the volume and complexity of coordinated false content.

What’s New?

Since September 2024, we’ve recorded over 700 targeted emails and nearly 600 unique pieces of falsified content disseminated across platforms including Telegram, X, Bluesky, and most recently TikTok. This material, often AI-generated or deceptively edited, impersonated renowned individuals and media brands, using the identities of more than 180 people and institutions to sow confusion, manipulate debate, and overload fact-checkers.

Our latest findings further document techniques faking the voices and identities of journalists, public figures, and respected institutions, complete with counterfeit logos and branding. Telegram continues to serve as the campaign’s central distribution hub, but the disinformation now circulates more widely through hired amplification networks on X, fake media personas on Bluesky, and viral engagement-farming content on TikTok.

At the heart of the campaign lies a focused effort to interfere in elections and the wider political landscape in Ukraine, France, Germany, and most recently Poland and Moldova. The growing use of AI-generated content shows the operatives adapting to a wider set of available tools in an effort to sow even more confusion.

Despite previous warnings and growing evidence, platforms’ responses remain worryingly uneven. Bluesky has taken action against the majority of the accounts involved, while X continues to underperform in enforcement, risking non-compliance with the EU’s Digital Services Act (DSA).

Our Recommendations

We call for urgent platform accountability—especially from X, which is legally bound under the DSA to mitigate systemic risks, yet continues to host clearly illegal content. We also encourage impersonated individuals and organisations to exercise their rights and demand action via formal reporting mechanisms.

We urge journalists and fact-checkers to be wary of inadvertently amplifying falsehoods by reporting on isolated fakes. When covering misleading content linked to Operation Overload, we encourage them to always provide clear context and to flag the broader campaign behind it.

Without decisive intervention from platforms, regulators, and civil society, the integrity of public information—and of our elections—remains under threat.

Read the full report here.

About us

CheckFirst is a leading Finnish software and methodologies company, spearheading adversarial research techniques. We believe that everyone should be able to understand how and why content is presented to them. We advocate for online clarity and accountability, building solutions to achieve this goal. Partnering with leading institutions, regulators, NGOs and educators, we aim to curb the spread of disinformation and foreign influence manipulation.
