If, in 2026, you are still on social media – and, admit it, most of us still are – you have probably realized that not all the content you see can be trusted. There are people out there with what seem like crazy, or at least uninformed, ideas. Anonymous accounts allow for points of view people wouldn’t normally espouse publicly. And bots have been a pernicious influence for some time; Elon even bought Twitter (OK: X) supposedly to combat them.

Hint: they're not real. They just seem real. Credit: Microsoft Designer
Now, of course, we have AI chatbots to contend with, which can interact realistically enough that you may not realize they aren’t, in fact, human. But get ready for the next stage: AI “swarms” driving discourse on social media platforms.
This week Tiffany Hsu wrote in The New York Times about the flood of pro-Trump avatars showing up on social media platforms such as TikTok, Facebook, Instagram, and YouTube. She writes:
In the months leading up to the midterm elections, hundreds of accounts have emerged on social media featuring A.I.-generated pro-Trump influencers posting at a rapid pace about the “radical left” and “America First.” They tend to appear as ordinary — if very good-looking — men and women, gazing flirtatiously at the camera while pontificating about the war in Iran, abortion or Bad Bunny.
The Times’ analysis found some 304 accounts sharing the same content, driving over a half-million views. Ms. Hsu says it is not clear who created the accounts, but experts told her “that creating such avatars is becoming easier, especially for contractors and marketing companies that now specialize in developing and dispatching A.I. avatars in bulk for increasingly low prices.”
I suspect there are orders of magnitude more of these kinds of accounts.
“People gearing up for the midterms should expect that they might see some of this content on their accounts, that it might be crafted to be particularly engaging or exciting to them,” Kaylyn Jackson Schiff, a co-director of GRAIL (the Governance and Responsible A.I. Lab) at Purdue University, told her.
This should come as no surprise. It has been happening, and as AI advances, it’s going to happen more. In fact, last January researchers warned in Science, in a commentary titled “How malicious AI swarms can threaten democracy: The fusion of agentic AI and LLMs marks a new frontier in information warfare.”
The University of British Columbia press release about the commentary says: “Advances in large language models and multi-agent systems allow a single operator to deploy thousands of AI ‘voices’ that look authentic and talk like locals. They can run millions of micro-tests to find the most persuasive messages, creating a synthetic consensus that feels grassroots-driven but is engineered to manipulate democratic discourse.”
UBC computer scientist Dr. Kevin Leyton-Brown warns: “We shouldn’t imagine that society will remain unchanged as these systems emerge. A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through.”
Professor Kevin Leyton-Brown
That’s scary enough, but, even worse: “I think the more sophisticated these bots are, the less you actually need,” lead author Daniel Schroeder, a researcher at the technology research organization SINTEF in Norway, told Mr. Pester.
Similarly, in The Conversation, Filippo Menczer, Professor of Informatics and Computer Science at Indiana University, wrote:
Today, people and organizations with malicious intent have access to more powerful AI language models – including open-source ones – while social media platforms have relaxed or eliminated moderation efforts. They even provide financial incentives for engaging content, irrespective of whether it’s real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. For example, an AI-controlled bot swarm could create the false impression of widespread, bipartisan opposition to a political candidate.
He notes that, in addition to tech companies cutting back on moderation, the current Administration has dismantled federal programs intended to combat such efforts, leaving the door open. He and an interdisciplinary team of computer science, AI, cybersecurity, psychology, social science, journalism and policy researchers are sounding the alarm:
We believe that current AI technology allows organizations with malicious intent to deploy large numbers of autonomous, adaptive, coordinated agents to multiple social media platforms. These agents enable influence operations that are far more scalable, sophisticated and adaptive than simple scripted misinformation campaigns.
“Manufactured synthetic consensus,” he says, “is a very real threat to the public sphere, the mechanisms democratic societies use to form shared beliefs, make decisions and trust public discourse.”
The flood of AI avatars Ms. Hsu profiles suggests that, if we’re not already there, we’re dangerously close. Eric Nelson, a special investigations analyst from Alethea, a digital threat mitigation company, told her: “This really is the first time I have seen something like this.”
“They’re trying to spread political messages and give an illusion of a consensus,” Andrew Yoon, a member of the technical staff at CivAI, a nonprofit that educates people about A.I.’s capabilities and consequences, told Ms. Hsu. “Flooding the zone here with tons and tons of videos seems geared to give a false sense of a majority opinion.”
“Humans, generally speaking, are conformist,” Professor Kunst told Mr. Pester. “We often don’t want to agree with that, and people vary to a certain extent, but all things being equal, we do have a tendency to believe what most people do has certain value. That’s something that can relatively easily be hijacked by these swarms.”
Both Professor Kunst and Professor Menczer agree that the threat is real, the threat is severe, and, unfortunately, that there are no simple solutions. The AI won’t just cut-and-paste the same content and try to flood the zone. It will tailor messages to users and their reactions to them. The messages will seem authentic and plausible. They’ll try to make us feel that if we don’t agree with them, we’re in a distinct minority. Not many of us are good with that.
I’d been worrying about swarms of AI-driven drones overwhelming conventional military defenses, but even that may now be outdated: the attack will be coming from inside the house, via our phones and computers.

