They will never attend a rally, knock on a door, or cast a vote. But coordinated groups of AI systems, which researchers now call "AI personas," could shape the future of democratic governance more than any human campaign. These synthetic people imitate real ones so convincingly that they can infiltrate online communities, steer conversations, and even swing election outcomes, all at machine speed and at a fraction of the cost of traditional influence operations.
In January 2026, an international group of researchers published a landmark policy forum paper in Science warning of what they called an emerging and deeply disruptive threat. The lead author was Daniel Thilo Schroeder, a research scientist at Norway's SINTEF, joined by a roster of prominent scholars: Nick Bostrom, Nicholas Christakis, Nobel laureate Maria Ressa, Taiwan's former digital minister Audrey Tang, and computer scientists from UC Berkeley, Oxford, Cambridge, NYU, and the University of British Columbia. Their conclusion was blunt: the combination of large language models and multi-agent architectures has created a new class of weapon in information warfare, one that can manufacture grassroots support, fracture shared reality, and erode institutional trust at scale.
How AI Persona Swarms Work
From Botnets to Hive Intelligence
What separates today's AI persona swarms from the simple botnets of the past decade is their capacity to coordinate and adapt. Earlier social media manipulation relied on repetitive "copy-paste" tactics: networks of accounts posted identical messages, pushed the same hashtags, and followed behavioral patterns so obvious that platform detection systems could find and remove them with ease.
AI persona swarms operate on an entirely different principle. Rather than a central server issuing identical commands, thousands of AI-driven personas coordinate like a hive. Each agent maintains its own memory, identity, posting history, and conversational style. The swarm can adapt narratives in real time, mimic the social norms of the communities it infiltrates, and respond nimbly to pushback or fact-checking. When individual accounts are detected and deleted, the swarm persists: new personas appear seamlessly and pick up the storylines their predecessors started.
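To make that architecture concrete, here is a minimal Python sketch of what a single persona agent might look like. Everything in it is an illustrative assumption rather than the design of any documented swarm: the `Persona` class, its fields, and the `llm` callable (a stand-in for any text-generation API). The point is only that identity, style, and memory travel with every reply, which is what lets thousands of agents stay individually consistent.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One synthetic identity in a swarm, with its own voice and history."""
    name: str
    backstory: str                   # stable identity the agent stays consistent with
    style: str                       # dialect, slang, and tone for the target community
    memory: list[str] = field(default_factory=list)   # running record of its own posts

    def build_prompt(self, thread: str) -> str:
        # Identity and memory ride along in every prompt, so replies stay in character.
        recent = "\n".join(self.memory[-10:])
        return (
            f"You are {self.name}. {self.backstory}\n"
            f"Write in this style: {self.style}\n"
            f"Your recent posts:\n{recent}\n"
            f"Reply to this thread:\n{thread}"
        )

    def reply(self, thread: str, llm) -> str:
        post = llm(self.build_prompt(thread))   # llm: any prompt -> text callable
        self.memory.append(post)                # the persona remembers what it said
        return post
```

Scaling this to a swarm is then just a matter of instantiating thousands of such objects with distinct backstories and letting a coordinator assign them conversation threads.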
Synthetic Consensus on a Large Scale
Researchers call the most potent feature of these systems "synthetic consensus": the false impression that a given opinion, grievance, or political position enjoys broad, organic public support. The greatest danger, Schroeder argued, is not false information per se but the manufactured appearance of universal agreement. It is achieved by testing many variants of a message on audiences, measuring their responses in real time, and amplifying whichever versions prove most persuasive.
With current large language model technology and multi-agent frameworks, a single operator can deploy thousands of AI "voices" that sound authentic and local. Each persona can use community-specific slang, reference local news events, and sustain long conversations that build trust over days or weeks before pivoting to politically targeted messaging. The swarm can run millions of micro-tests simultaneously, in effect performing continuous A/B testing on entire populations, to discover which emotional triggers, framing devices, and narrative structures move opinion most effectively.
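The micro-testing loop described above can be sketched as a simple epsilon-greedy bandit: post variants, measure responses, and shift volume toward whatever persuades best. This is a hedged illustration, not a documented attack tool; `variants` and the `measure_engagement` callback are hypothetical placeholders for message texts and an engagement signal such as likes or replies.

```python
import random

def run_micro_tests(variants, measure_engagement, rounds=10_000, eps=0.1):
    """Epsilon-greedy message testing: mostly amplify the best-performing
    variant so far, occasionally explore the others."""
    stats = {v: {"posts": 0, "score": 0.0} for v in variants}
    for _ in range(rounds):
        if random.random() < eps:
            v = random.choice(variants)          # explore a random variant
        else:                                    # exploit the current winner
            v = max(variants,
                    key=lambda x: stats[x]["score"] / max(stats[x]["posts"], 1))
        stats[v]["posts"] += 1
        stats[v]["score"] += measure_engagement(v)   # e.g. likes, replies, shares
    return stats
```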
The result is a form of astroturfing that was previously unattainable. Even when individual claims are debunked, the constant chorus of seemingly independent voices can normalize fringe ideas and deepen divisions in communities that were once united.
The Economics of Deception
What makes AI persona swarms especially dangerous is how cheap they are to run. Traditional influence operations such as Russia's Internet Research Agency required hundreds of employees working in shifts, each managing a handful of accounts, at a cost of millions of dollars a year. AI swarms can achieve the same reach or more for a fraction of that money. Open-source large language models, cloud computing infrastructure, and social media APIs are all commercially available and getting cheaper every year.
Russia spends more than $150 billion a year on its military but only about $1 billion on information warfare, and that comparatively small investment already pays outsized dividends. AI persona swarms sharpen this asymmetry further, allowing even modestly resourced actors (smaller states, political parties, private interests) to mount influence campaigns that would have been logistically impossible a few years ago.
Evidence Already Present
The "Fox8" Botnet: A First Look
The researchers note that full-scale AI swarms have not yet been definitively documented in the wild, but the evidence trail is growing fast. In mid-2023, Professor Filippo Menczer of Indiana University and his colleagues at the Observatory on Social Media uncovered a network of more than 1,000 AI-powered bot accounts on Twitter (now X) working together to promote cryptocurrency scams. They named it the "Fox8" botnet after one of the fake news sites it was built to amplify.
The researchers found these accounts only because the operators were careless: they failed to scrub posts in which ChatGPT had refused a prompt that violated its terms of service, leaving telltale self-disclosure text visible in public posts. The implication was sobering: a slightly more careful operator would have evaded conventional detection entirely.
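For illustration, catching that kind of carelessness can be as simple as a keyword scan over public posts. The patterns below are assumptions based on commonly reported refusal phrasing, a sketch rather than the Observatory's actual pipeline:

```python
import re

# Telltale phrases a careless LLM-driven account can leak into public posts
# (illustrative patterns; real pipelines combine many broader signals).
SELF_REVEALING = re.compile(
    r"as an ai language model"
    r"|i cannot fulfill (this|that) request"
    r"|i('m| am) sorry, but as an ai",
    re.IGNORECASE,
)

def flag_llm_leakage(posts):
    """Return the posts that contain telltale refusal or disclaimer text."""
    return [p for p in posts if SELF_REVEALING.search(p)]
```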
Menczer, a co-author of the Science paper, offered a blunt assessment in February 2026: malicious AI swarms are no longer hypothetical; there is evidence these methods are already in use. Policymakers and technologists, he argues, must move quickly to make this kind of manipulation more expensive, more risky, and more visible.
A Case Study of Romania's Annulled Election
The 2024 Romanian presidential election may be the most dramatic real-world example of AI-assisted influence operations to date. Călin Georgescu, a far-right, pro-Russian candidate polling in the low single digits just weeks before the vote, won the first round on November 24, 2024, with 23% of the vote. His campaign ran almost entirely on TikTok, where his videos accumulated roughly 150 million views in two months.
Romanian intelligence agencies later declassified documents describing a sophisticated manipulation campaign. Investigators found more than 85,000 cyberattacks on election-related IT systems; coordinated bot networks and troll farms on TikTok and Telegram; and a payment scheme in which influencers received about $100 per 20,000 followers to post videos urging viewers to vote for an "ideal candidate" without naming Georgescu. Fake accounts then flooded the comment sections of those videos with pro-Georgescu messages, simulating a genuine groundswell.
On December 6, 2024, Romania's Constitutional Court made history by annulling the election results outright, citing overwhelming evidence that the process had been compromised. It was the first time an EU member state had canceled a presidential election over foreign interference. The European Commission subsequently opened a formal investigation into TikTok under the Digital Services Act for failing to adequately protect electoral integrity.
The Romania case remains contested: TikTok said it found no evidence of a coordinated Russian campaign, and the U.S. House Judiciary Committee later reported similar findings. Whoever was behind it, the incident demonstrated how vulnerable democratic processes are to algorithmic manipulation.
Deepfakes in the Democratic World
Beyond coordinated bot campaigns, AI-generated deepfakes have already reached elections on several continents. According to research by Surfshark, 38 countries have experienced election-related deepfake incidents since 2021, affecting a combined 3.8 billion people.
During the January 2024 New Hampshire primary in the United States, between 5,000 and 25,000 voters received robocalls featuring an AI-generated imitation of President Biden's voice urging Democrats not to vote. The political consultant responsible was later charged, but the episode demonstrated how cheaply and easily voice-cloning technology can be turned to voter suppression.
In India's 2024 general election, the largest democratic exercise in the world, political parties spent an estimated $50 million on AI-generated content. Deepfakes impersonated politicians, celebrities, and even deceased leaders. One party had pioneered the technique during Delhi's 2020 assembly elections, sending 15 million voters deepfake videos of a party leader speaking in languages he did not actually speak.
In Taiwan's January 2024 presidential election, a wave of deepfake videos attacked the ruling Democratic Progressive Party, including fabricated audio of private conversations between candidates and AI-generated sexual content designed to destroy reputations. Microsoft described it as the first known case of a nation-state, which it identified as China, using AI-generated content to influence a foreign election.
In Indonesia, both domestic and foreign actors deployed AI. The political party Golkar used it to "resurrect" the late dictator Suharto in a campaign video for its candidate, while deepfake videos of opposition candidates making statements they never made spread rapidly on social media.
In Slovakia, a deepfake audio clip spreading false claims of election fraud surfaced just days before the September 2023 election, timed deliberately so it could not be debunked before the polls opened. Analysts called it one of only two cases worldwide in which deepfakes had a demonstrable effect on an election's outcome.
The Dimension of Data Poisoning
Russia's "Pravda Network" and LLM Grooming
The danger extends beyond real-time manipulation to something potentially worse: the corruption of AI systems themselves. Monitoring groups have documented pro-Russian operations flooding the internet with content designed to "poison" the training data that future AI models will learn from, a technique researchers have dubbed "LLM grooming."
At the heart of this effort is the Pravda network, a constellation of fake news sites active since at least 2022. The network now spans 182 internet domains and subdomains targeting at least 74 countries in 12 languages. In 2024 alone it published roughly 3.6 million articles pushing pro-Kremlin narratives, from claims of "secret U.S. biolabs in Ukraine" to corruption accusations against Ukrainian President Zelensky.
In March 2025, the watchdog group NewsGuard tested ten popular AI chatbots, including ChatGPT, Claude, Gemini, and Copilot, and found that they repeated false claims from the Pravda network about 33% of the time when asked related questions. Seven of the ten chatbots cited specific Pravda articles as sources.
The American Sunlight Project, a nonprofit research group, also warned that the more false stories there are online, the more likely it is that language models will start to see them as true and include them in their responses. For an AI system trained on internet-scale data, the sheer amount of repetition can stand in for credibility. When many sources agree, it looks like corroboration, even if those sources are only there to mess with the training signal.
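A toy example makes the mechanism plain. Any pipeline that treats distinct domains as independent corroboration is trivially inflated by a coordinated network; the function and domain names below are hypothetical:

```python
def naive_corroboration(sources):
    """Naive credibility signal: how many distinct domains assert a claim.
    A coordinated network of sites inflates this number for free."""
    return len(set(sources))

# One operator running 182 domains makes a single claim look
# 182-fold "independently" corroborated to any counter like this.
pravda_like = [f"site{i}.example" for i in range(182)]
print(naive_corroboration(pravda_like))   # -> 182
```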
The scale and effectiveness of LLM grooming remain vigorously debated, however. A study from Harvard Kennedy School's Misinformation Review found that chatbot references to Kremlin-linked sources were rare and surfaced only in response to highly specific questions about topics that mainstream media covered thinly, suggesting that "data voids" rather than deliberate infiltration may explain much of the disinformation found in chatbot outputs. The researchers warned against alarmist framings that could themselves become instruments of information warfare by portraying foreign powers as all-powerful.
What This Means for Elections and Trust in Democracy
The Decline of Shared Reality
The authors of the Science paper argue that if AI persona swarms are allowed to proliferate unchecked, the consequences will cascade: manufactured grassroots consensus that shifts public norms; the fragmentation of shared reality into incompatible information bubbles; mass harassment campaigns that drive vulnerable voices out of public discourse; and voter micro-suppression or micro-mobilization, in which targeted messages discourage or encourage turnout among specific demographic groups.
Dr. Kevin Leyton-Brown, a computer scientist at UBC and one of the paper's authors, cautions against assuming society will remain static as these systems spread. One likely outcome, he argues, is that people will simply trust unknown voices on social media less. That may sound like healthy skepticism, but it carries a corrosive side effect: it would concentrate influence in established celebrities and institutions while making it harder for genuine grassroots movements, whistleblowers, and citizen journalists to be heard. The very reflex meant to blunt manipulation, distrust of unfamiliar voices, could itself become an instrument for silencing people.
The "Liar's Dividend"
Researchers have also identified a second-order effect known as the "liar's dividend." As AI-generated content becomes indistinguishable from authentic material, public skepticism rises to the point where genuine evidence can be dismissed as fabricated. In India's 2024 election, a candidate dismissed a genuine audio clip of him criticizing his own party as a deepfake, and independent fact-checkers could not readily disprove the claim in real time. Similarly, when President Biden announced his withdrawal from the 2024 presidential race, conspiracy theories immediately circulated claiming his address had been generated by AI.
The more indistinguishable authentic and synthetic content become, the harder such claims are to refute, and the larger the liar's dividend grows.
Dismantling the Defenses
These threats are emerging at a moment of particular vulnerability. The current U.S. administration has shut down several federal programs designed to counter foreign influence operations and has cut funding for research into them. Platforms such as X (formerly Twitter) have ended the free researcher access to platform data that once made it possible to detect and monitor online manipulation.
The Centre for International Governance Innovation found that in 2024, more than 80% of countries holding elections saw AI used in ways that affected those elections. Content creation, including deepfakes, synthetic articles, and AI-generated social media posts, accounted for 90% of documented cases.
Proposed Defenses and the Way Forward
A Three-Part Plan
The authors of the Science paper propose a three-part defense plan. They concede it will not eliminate misuse, but they hope it will make manipulation so costly and so conspicuous that coordinated disinformation campaigns become impractical to run.
Platform-side defenses would include always-on dashboards that detect statistically improbable patterns of consensus; high-fidelity swarm-simulation stress tests run before elections (essentially red-teaming elections before they happen); mandatory transparency audits; and optional client-side "AI shields" that flag potentially synthetic interactions for individual users. A rough sketch of the first idea appears below.
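As a sketch of what such a consensus-anomaly dashboard might compute, consider a Poisson burst check: how improbable is the observed volume of same-stance posts, given the topic's historical baseline rate? The function and all numbers below are illustrative assumptions, not any platform's actual detector:

```python
import math

def consensus_anomaly_score(observed, baseline_rate, hours):
    """Z-score of a burst of same-stance posts under a Poisson baseline:
    large values mean the apparent 'consensus' is statistically improbable."""
    expected = baseline_rate * hours
    return (observed - expected) / math.sqrt(expected)

# A topic that normally draws ~5 supportive posts/hour suddenly gets 400 in 2 hours.
print(consensus_anomaly_score(observed=400, baseline_rate=5.0, hours=2.0))  # ~123 sigma
```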
Model-side safeguards would require standardized pre-deployment testing of AI models for their capacity to persuade, digital passkeys that attest to a piece of content's origin, and watermarks on AI-generated text, audio, and video.
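Of these safeguards, text watermarking is the easiest to illustrate. The sketch below shows the detection side of a "green list" watermark in the style proposed by Kirchenbauer et al.: the generator biases its token choices toward a pseudorandom list seeded by each preceding token, so watermarked text shows a statistical excess of "green" tokens that a detector can measure. This is a simplified, assumption-laden sketch, not any deployed vendor scheme:

```python
import hashlib

def green_fraction(tokens, green_ratio=0.5):
    """Detection side of a 'green list' text watermark: each token's list
    membership is pseudorandomly determined by the preceding token, so
    generator-biased text shows an excess of green tokens over chance."""
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        seed = hashlib.sha256(prev.encode()).digest()
        h = hashlib.sha256(seed + tok.encode()).digest()
        if int.from_bytes(h[:8], "big") / 2**64 < green_ratio:
            hits += 1
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text hovers near green_ratio; watermarked text sits well above it.
```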
The most ambitious idea is system-level oversight: the creation of a UN-backed "AI Influence Observatory" made up of a network of academic groups, NGOs, and civil society organizations that can monitor AI and respond quickly without being controlled by any one government or business.
Simulating Attacks Before They Happen
One of the paper's most original ideas is to simulate AI swarm attacks on democratic processes regularly, not as thought experiments but as live, high-fidelity rehearsals. The logic mirrors cybersecurity practice: just as companies run penetration tests to find weaknesses before adversaries do, democracies should stress-test their information ecosystems against realistic AI swarm scenarios.
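Even a toy agent-based model conveys what such a rehearsal would measure: seed a population with diverse opinions, inject a fixed share of swarm accounts all pushing one position, and track how far average opinion drifts. Every parameter below is invented for illustration; a real stress test would model platform dynamics at far higher fidelity:

```python
import random

def stress_test(n_users=1000, n_swarm=100, steps=50, influence=0.05):
    """Toy rehearsal of a swarm attack: swarm accounts all push opinion +1.0
    while ordinary users drift toward whichever voice they happen to hear."""
    opinions = [random.uniform(-1, 1) for _ in range(n_users)]
    swarm_share = n_swarm / (n_users + n_swarm)
    for _ in range(steps):
        for i in range(n_users):
            heard = 1.0 if random.random() < swarm_share else random.choice(opinions)
            opinions[i] += influence * (heard - opinions[i])
    return sum(opinions) / n_users   # mean drift toward the swarm's position

print(stress_test())   # a clearly positive mean = a measurable vulnerability
```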
Schroeder said that the goal is to make manipulation so expensive and so quickly exposed that coordinated campaigns fail before they can reach their goals.
The Regulatory Landscape
The European Union has moved most aggressively on regulation. The Digital Services Act already requires major platforms to mitigate risks to electoral integrity, and the AI Act added further requirements for generative AI systems. The Romanian election crisis was the first major test of these frameworks in practice, and it exposed significant gaps in how quickly they can respond.
Regulatory responses in the United States remain fragmented. Several states have passed laws specifically targeting deepfakes in political campaigns, and the prosecution of the New Hampshire robocall operator may deter imitators. But no comprehensive federal legislation yet addresses AI-driven influence operations, and the policy trend has been toward less regulation, not more.
A Test Bed for Democratic Governance
The researchers are careful to note that the coming elections around the world may be the first real test of this technology at scale. The decisions now being made by platform companies, governments, AI developers, and a public navigating an information environment increasingly saturated with synthetic voices will determine whether those elections become a catalyst for reform or a cautionary tale of democratic failure.
This challenge is unlike previous ones in both scale and subtlety. A missile strike or a cyberattack on physical infrastructure leaves a crater and triggers an alarm. An AI persona swarm does neither. Its weapon is persuasion, its ammunition is narrative, and its battlefield is the human mind. The question is no longer whether these tools will be used to undermine democratic processes; the evidence suggests they already are. The question is whether democratic societies can adapt their defenses quickly enough to protect the integrity of collective decision-making in the age of AI.
The authors of the Science paper argue that success depends on fostering cooperation without stifling scientific research, while keeping the public sphere resilient and accountable. With a commitment now to rigorous measurement, proportionate safeguards, and shared oversight, the next elections could become a proving ground for democratic AI governance rather than a setback for it.