Monday, April 20, 2026

Worried About AI? Try AI Swarms

If, in 2026, you are still on social media – and, admit it, most of us still are – you have probably realized that not all the content you see can be trusted. There are people out there with what seem like crazy, or at least uninformed, ideas. Anonymous accounts allow for points of view people wouldn’t normally espouse publicly. And bots have been a pernicious influence for some time; Elon even bought Twitter (OK: X) supposedly to combat them.

Hint: they're not real. They just seem real. Credit: Microsoft Designer

Now, of course, we have AI chatbots to contend with, which can interact realistically enough that you may not realize they aren’t, in fact, human. But get ready for the next stage: AI “swarms” driving discourse on social media platforms.

This week Tiffany Hsu wrote in The New York Times about the flood of pro-Trump avatars showing up on social media platforms, such as TikTok, Facebook, Instagram, and YouTube. She writes:

In the months leading up to the midterm elections, hundreds of accounts have emerged on social media featuring A.I.-generated pro-Trump influencers posting at a rapid pace about the “radical left” and “America First.” They tend to appear as ordinary — if very good-looking — men and women, gazing flirtatiously at the camera while pontificating about the war in Iran, abortion or Bad Bunny.

The Times’ analysis found some 304 accounts sharing the same content, driving over a half-million views. Ms. Hsu says it is not clear who created the accounts, but experts told her “that creating such avatars is becoming easier, especially for contractors and marketing companies that now specialize in developing and dispatching A.I. avatars in bulk for increasingly low prices.”

I suspect there are orders of magnitude more of these kinds of accounts.

“People gearing up for the midterms should expect that they might see some of this content on their accounts, that it might be crafted to be particularly engaging or exciting to them,” Kaylyn Jackson Schiff, a co-director of GRAIL (the Governance and Responsible A.I. Lab at Purdue University), told her.

This should come as no surprise. It has been happening, and as AI advances, it’s going to happen more. In fact, last January researchers warned, in Science: “How malicious AI swarms can threaten democracy: The fusion of agentic AI and LLMs marks a new frontier in information warfare.”

The University of British Columbia press release about the commentary says: “Advances in large language models and multi-agent systems allow a single operator to deploy thousands of AI ‘voices’ that look authentic and talk like locals. They can run millions of micro-tests to find the most persuasive messages, creating a synthetic consensus that feels grassroots-driven but is engineered to manipulate democratic discourse.”
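The “millions of micro-tests” the press release describes is, at its core, a multi-armed bandit problem: keep posting the message that is currently performing best, while occasionally trying alternatives. Here is a minimal epsilon-greedy sketch of that idea – all message names and engagement rates below are invented for illustration, not data from any real campaign:

```python
import random

# Hypothetical per-post engagement rates for three candidate messages.
# These names and numbers are illustrative assumptions only.
TRUE_ENGAGEMENT = {"msg_a": 0.02, "msg_b": 0.05, "msg_c": 0.11}

def micro_test(messages, trials=20000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: mostly post the best-performing message,
    occasionally explore alternatives, and update running estimates."""
    rng = random.Random(seed)
    counts = {m: 0 for m in messages}
    estimates = {m: 0.0 for m in messages}
    for _ in range(trials):
        if rng.random() < epsilon:
            msg = rng.choice(list(messages))          # explore a random message
        else:
            msg = max(estimates, key=estimates.get)   # exploit the current best
        # Simulate whether this post got engagement.
        reward = 1.0 if rng.random() < messages[msg] else 0.0
        counts[msg] += 1
        # Incremental mean update of the estimated engagement rate.
        estimates[msg] += (reward - estimates[msg]) / counts[msg]
    best = max(estimates, key=estimates.get)
    return best, estimates

best, estimates = micro_test(TRUE_ENGAGEMENT)
print(best)
```

After enough trials, the loop converges on the most engaging message – which is exactly why a single operator running thousands of such loops across thousands of personas can discover persuasive framings far faster than any human operation could.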

UBC computer scientist Dr. Kevin Leyton-Brown warns: “We shouldn’t imagine that society will remain unchanged as these systems emerge. A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through.”

Professor Kevin Leyton-Brown
Patrick Pester, in Live Science, explains: “With an LLM at the helm, a swarm will be sophisticated enough to adapt to the online communities it infiltrates, installing collections of different personas that retain memory and identity, according to the commentary.” Commentary co-author Jonas Kunst, a professor of communication at the BI Norwegian Business School in Norway added: “We talk about it as a kind of organism that is self-sufficient, that can coordinate itself, can learn, can adapt over time and, by that, specialize in exploiting human vulnerabilities."

That’s scary enough, but, even worse: "I think the more sophisticated these bots are, the less you actually need," lead author Daniel Schroeder, a researcher at the technology research organization SINTEF in Norway, told Mr. Pester.

Similarly, in The Conversation, Filippo Menczer, Professor of Informatics and Computer Science at Indiana University, wrote:

Today, people and organizations with malicious intent have access to more powerful AI language models – including open-source ones – while social media platforms have relaxed or eliminated moderation efforts. They even provide financial incentives for engaging content, irrespective of whether it’s real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. For example, an AI-controlled bot swarm could create the false impression of widespread, bipartisan opposition to a political candidate.

He notes that, in addition to tech companies cutting back on moderation, the current Administration has dismantled federal programs intended to combat such efforts, leaving the door open. He and an interdisciplinary team of computer science, AI, cybersecurity, psychology, social science, journalism and policy researchers are sounding the alarm:

We believe that current AI technology allows organizations with malicious intent to deploy large numbers of autonomous, adaptive, coordinated agents to multiple social media platforms. These agents enable influence operations that are far more scalable, sophisticated and adaptive than simple scripted misinformation campaigns.

“Manufactured synthetic consensus,” he says, “is a very real threat to the public sphere, the mechanisms democratic societies use to form shared beliefs, make decisions and trust public discourse.”

The flood of AI avatars Ms. Hsu profiles suggests that, if we’re not already there, we’re dangerously close. Eric Nelson, a special investigations analyst from Alethea, a digital threat mitigation company, told her: “This really is the first time I have seen something like this.”

“They’re trying to spread political messages and give an illusion of a consensus,” Andrew Yoon, a member of the technical staff at CivAI, a nonprofit that educates people about A.I.’s capabilities and consequences, told Ms. Hsu. “Flooding the zone here with tons and tons of videos seems geared to give a false sense of a majority opinion.”

"Humans, generally speaking, are conformist," Professor Kunst told Mr. Pester. "We often don't want to agree with that, and people vary to a certain extent, but all things being equal, we do have a tendency to believe what most people do has certain value. That's something that can relatively easily be hijacked by these swarms."

Both Professor Kunst and Professor Menczer agree that the threat is real, the threat is severe, and, unfortunately, that there are no simple solutions. The AI won’t just cut and paste the same content to flood the zone. It will tailor messages to users and to their reactions. The messages will seem authentic and plausible. They’ll try to make us feel that if we don’t agree with them, we’re in a distinct minority. Not many of us are good with that.

I’d been worrying about swarms of AI-driven drones overwhelming conventional military defenses, but even that may now be outdated: the attack will be coming from inside the house, via our phones and computers.

Monday, April 13, 2026

Chances are someone in your family is a gamer. Maybe you are a gamer yourself. After all, somewhere between two-thirds and three-fourths of Americans play video games, and if you just looked at young men, it’d be closer to 100%. Grumpy older people don’t get it, complaining that gaming is just a waste of time, but gamers believe it helps with their problem solving (although at a cost of sleep).

Does this qualify you to be an air traffic controller? Maybe. Credit: Microsoft Designer

Well, the good news is that if you are, indeed, a gamer, the Federal Aviation Administration (F.A.A.) is looking for you.

Last Friday Transportation Secretary Sean P. Duffy announced the F.A.A.’s campaign to attract “the next generation of air traffic controllers.” It is looking for people “who possess useful skills that are transferable to a career in air traffic control, including:

  • Demonstrated high cognitive functions
  • Multitasking
  • Spatial awareness
  • Strategy and problem-solving”

By all that, they mean gamers. The announcement goes on to add: “…this effort is focused on reaching talented young people pursuing alternative career paths, many of whom are active in gaming. Feedback from controller exit interviews reinforces this, with several controllers pointing to gaming as an influence on their ability to think quickly, stay focused, and manage complexity.”

There’s a slick YouTube ad too.

“When you bring on someone who has gaming experience, particularly with air traffic control, they have an edge up,” Michael O’Donnell, an aerospace consultant who previously worked as a senior F.A.A. official focused on air traffic safety, told Karoun Demirjian of The New York Times. “They’re coming in with a skill set. But it doesn’t replace aptitude, or discipline, or decision making under pressure.”

Surprisingly, the National Air Traffic Controllers Association supports the effort, with its president Nick Daniels telling BBC: “Our union welcomes innovative approaches to expanding the candidate pool, including outreach to individuals with high-level aptitude skills such as gamers, so long as all pathways maintain the rigorous standards required of this safety-critical profession."

To be fair, both the F.A.A. and the NATCA probably would welcome anything that might drive people to apply. The F.A.A. has only about 75% of its target number of controllers, leaving it several thousand short. Individual airports may be staffed even lower, as may certain times of day. It’s not a new problem, and it’s not one that will be quickly fixed; it is not as though you can play a video game today and be an air traffic controller tomorrow. There is definitely a learning curve.

It also doesn’t help that air traffic controllers aren’t usually paid during government shutdowns, which Congress seems to increasingly allow. "The failure to pay air traffic controllers for 44 days created uncertainty, drove many experienced controllers out of the profession and harmed the recruitment pipeline," a spokesperson from the Department of Transportation told CBS News in November.    

Nor does it help that air traffic controllers rely on technology that is likely to be older than they are. The F.A.A. is trying, for example, to replace its outdated radar system, but NBC reports: “The FAA has been spending most of its $3 billion equipment budget just maintaining the fragile old system that still relies on floppy discs in places. Some of the equipment is old and isn't manufactured anymore, so the FAA sometimes has to search for spare parts on eBay.”

The National Transportation Safety Board (NTSB) Chair Jennifer Homendy complained: “This is 2026. The secretary talks about upgrading our air traffic control system. We have an old air traffic control system. This is why he talks about that. We need to upgrade.”  

I was surprised to learn that gaming might not just be an asset to become an air traffic controller, but also an asset for air traffic controllers. Josh Jennings, a supervisor at the F.A.A.’s air traffic command center in Virginia, told Ms. Demirjian that gaming is both a way for controllers to stay sharp and a form of “social currency” among them. “I would say it’s probably tenfold on how fast this new generation is able to pick up on our physical tech, our radar scopes,” he said. Controllers apparently often play video games on their breaks.

In similar efforts to recruit from unconventional backgrounds, the Marines are looking at dirt bikers to become drone pilots, while Russia is recruiting university students as drone pilots.

I can see the argument for recruiting gamers to be air traffic controllers. Both are used to obsessively monitoring multiple screens with lots of activity, requiring quick reactions, and with lives on the line. The difference, of course, is that for air traffic controllers, those virtual images represent real things, and the lives that may be lost are real people’s lives.

Still, given a choice between a controller who was a gamer versus some middle-aged college grad who is used to looking at spreadsheets, give me the gamer every time.

I think about all this, oddly enough, in regards to health care. Some of you may also be fans of “The Pitt.” One of my favorite characters is head nurse Dana Evans, and I sometimes wonder if she would ever get tired enough of covering for ineffective/incompetent doctors that she might opt to become one.  You can’t tell me that she isn’t smart enough and you probably couldn’t convince me she didn’t have enough medical knowledge, but in our system if she wanted to make such a change, it would mean sending her to medical school, then internship and residency – years of her life and hundreds of thousands of dollars of debt.

Who, exactly, would that help?

You know she'd be a good doctor
Where is the “gamers, please apply” equivalent in medical training, where non-traditional but potentially applicable backgrounds count? Could, for example, people with exceptional pattern recognition skills but perhaps not so good in chemistry or biology become excellent radiologists? Might biologists do well as pathologists, without all the years of physician training?

For many decades a college degree was seen as the ticket to middle-class (or more) success, but we’re seeing that’s less true now. We’re living in a digital world, and people are gaining skills and knowledge from that world that we’re not fully recognizing.

So kudos to the F.A.A. for recognizing how gamers might be good candidates, and I can only hope the subsequent training program isn’t so tradition-bound that it scares them off. And I’m waiting to see how healthcare and other industries might learn from – not just copy – its approach.

 

P.S. If you are wondering, “1337” is gamer slang for “leet,” which is itself slang for “elite,” as in gaming prowess.

Monday, April 6, 2026

Let's Get Physical (AI)

In the U.S., we’re starting to worry more about AI and robots taking our jobs. It is, apparently, the “grimmest” job market in years for college grads, and AI often gets the blame. Whether that’s true is not so clear. Callum Borchers wrote in The Wall Street Journal about “AI washing” – using AI as an excuse for not hiring. “It’s a wonderful way of looking like a genius when job cuts are something you might have to do for other operational reasons,” Peter Bell, the founder of Gather.dev, told him. “It’s great smoke cover if you just need to goose your bottom line.”

Get ready for the robots. Credit: Microsoft Designer

Still, though, it’s not an unwarranted concern. “I don’t think A.I. has hit the labor market yet, and I don’t think it’s radically changed corporate productivity yet, either, but I think it’s coming,” Daniel Rock, a University of Pennsylvania economist, told Ben Casselman of The New York Times.

Mr. Casselman reports on a new working paper from a number of economists on forecasting the economic effects of AI, which reveals there is some divergence among economists about how much AI will improve the growth of the economy or its impact on the labor force. They do think there will be impacts but “experts do not forecast economic outcomes outside the range of historical experience.”

Take your pick about the forecasted AI impact. Credit: Karger, et al.

The experts might want to look at Japan for a glimpse of the future. In TechCrunch, Kate Park takes a long look at how Japan is prioritizing “Physical AI” not as something to fear but as an economic necessity. Its Ministry of Economy, Trade and Industry announced in March that it wants to bolster Japan’s domestic Physical AI sector, and capture a 30% global share by 2040.

Japan has a big demographic problem. It has never encouraged immigration, its population has been shrinking for 14 straight years, its senior population continues to grow, and its working-age population is declining. The demographic bomb is already going off.

“Physical AI is being bought as a continuity tool: how do you keep factories, warehouses, infrastructure, and service operations running with fewer people?” Hogil Doh, Global Brain general partner, told Ms. Park. “From what I’m seeing, labor shortages are the primary driver.”

“The driver has shifted from simple efficiency to industrial survival,” Sho Yamanaka, a principal with Salesforce Ventures, added. “Japan faces a physical supply constraint where essential services cannot be sustained due to a lack of labor. Given the shrinking working-age population, physical AI is a matter of national urgency to maintain industrial standards and social services.”

Justin Brown writes in Silicon Canals: “The framing matters. In the U.S., physical AI is a venture capital thesis. In China, it’s a geopolitical strategy. In Japan, it’s an answer to a structural question about whether the country can keep its industrial base running at all.”

It should be troubling that in the U.S. physical AI is neither a strategy nor a tactic, but just a “venture capital thesis.”

Ms. Park states that Japan has historically excelled in the physical building blocks of robotics, whereas China and the U.S. have focused on “full stack” systems that include hardware, software, and data. “Japan’s expertise in high-precision components – the critical physical interface between AI and the real world – is a strategic moat,” Mr. Yamanaka told her. “Controlling this touchpoint provides a significant competitive advantage in the global supply chain. The current priority is to accelerate system-level optimization by integrating AI models deeply with this hardware.”

Japan’s efforts are attracting attention. Tech Buzz reports:

The shift is attracting serious enterprise money. Salesforce Ventures is betting on Japanese physical AI startups, joined by Woven Capital, Toyota's venture arm, and local heavyweight Global Brain. These aren't speculative moonshot investments – they're backing companies deploying robots into warehouses, manufacturing lines, and service positions today.

As such, Tech Buzz concludes: “This pragmatic necessity is creating a real-world testing ground that Silicon Valley can only simulate. Japanese robotics companies are learning what works when physical AI meets messy human environments – unpredictable warehouse layouts, variable product packaging, and the constant adaptation required in actual operations. The feedback loop is accelerating development faster than any research lab could manage.”

I.e., if you want to see the future of Physical AI, look to Japan.

Mr. Brown offers a very practical example:

In construction, an industry where Japan’s worker shortage is especially severe and the average age of laborers now sits above 50, Shimizu Corporation and Obayashi have deployed autonomous welding robots, concrete-finishing machines, and AI-guided cranes on active building sites. Shimizu’s Robo-Welder system has demonstrated a roughly 70% reduction in required human welding hours on structural steel projects.

Affordable housing, anyone?

It’s not that Japan is investing so much in AI – it has approved a national AI plan with five-year funding of “only” US$6.3b – as it is that it is targeting it very effectively. As Franklin Templeton describes it: “The emphasis is not on chasing frontier models, but on embedding AI into sectors that already anchor Japan’s economy…Few countries are as comfortable integrating robotics into daily life and industrial production, and that long familiarity with automation shapes how AI is deployed.”

It should come as no surprise that a group of Japanese robotics developers and major electronics and semiconductor companies are collaborating to produce a humanoid robot, with the aim of mass production by 2027. Elon better get Optimus cranking.

Jensen Huang, founder and CEO of NVIDIA says: “Physical AI has arrived — every industrial company will become a robotics company,” but it’s not that simple. As Mr. Brown warns: “Japan’s regulatory willingness to permit autonomous systems in mixed environments like construction sites, farms, and retail stores is proving as important as the technology itself, and countries that wait until the labor crisis is acute before updating regulatory frameworks will find themselves a decade behind on deployment infrastructure.”

I especially liked Mr. Brown’s conclusion:

Western automation discourse treats robotics as something that happens to workers, a force that displaces and disrupts, and nearly every policy debate in the U.S. and Europe is still structured around that premise. Japan reveals how fundamentally parochial that framing is. When automation becomes a continuity tool rather than an optimization tool, the entire institutional posture shifts: political resistance dissolves, regulatory frameworks accelerate, and the relationship between human labor and machine capability stops being adversarial and starts being architectural. 

The U.S. already has critical shortages of farm and construction workers, we don’t produce enough engineers, and goodness knows we’re driving away our scientists, so if we wait until it’s clear that we need Physical AI, it will probably be too late.