Monday, April 13, 2026

Chances are someone in your family is a gamer. Maybe you are a gamer yourself. After all, somewhere between two-thirds and three-fourths of Americans play video games, and if you just looked at young men, it’d be closer to 100%. Grumpy older people don’t get it, complaining that gaming is just a waste of time, but gamers believe it helps with their problem solving (although at a cost of sleep).

Does this qualify you to be an air traffic controller? Maybe. Credit: Microsoft Designer

Well, the good news is that if you are, indeed, a gamer, the Federal Aviation Administration (F.A.A.) is looking for you.

Last Friday Transportation Secretary Sean P. Duffy announced the F.A.A.’s campaign to attract “the next generation of air traffic controllers.” It is looking for people “who possess useful skills that are transferable to a career in air traffic control, including:

  • Demonstrated high cognitive functions
  • Multitasking
  • Spatial awareness
  • Strategy and problem-solving”

By all that, they mean gamers. The announcement goes on to add: “…this effort is focused on reaching talented young people pursuing alternative career paths, many of whom are active in gaming. Feedback from controller exit interviews reinforces this, with several controllers pointing to gaming as an influence on their ability to think quickly, stay focused, and manage complexity.”

There’s a slick YouTube ad too.

“When you bring on someone who has gaming experience, particularly with air traffic control, they have an edge up,” Michael O’Donnell, an aerospace consultant who previously worked as a senior F.A.A. official focused on air traffic safety, told Karoun Demirjian of The New York Times. “They’re coming in with a skill set. But it doesn’t replace aptitude, or discipline, or decision making under pressure.”

Surprisingly, the National Air Traffic Controllers Association supports the effort, with its president Nick Daniels telling the BBC: “Our union welcomes innovative approaches to expanding the candidate pool, including outreach to individuals with high-level aptitude skills such as gamers, so long as all pathways maintain the rigorous standards required of this safety-critical profession."

To be fair, both the F.A.A. and the NATCA probably would welcome anything that might drive people to apply. The F.A.A. only has about 75% of the target number of controllers, leaving it several thousand short. Individual airports may be staffed at even lower levels, as may certain times of day. It’s not a new problem and it is not a problem that is going to be quickly fixed; it is not as though today you can play a video game and tomorrow you can be an air traffic controller. There is definitely a learning curve.

It also doesn’t help that air traffic controllers aren’t usually paid during government shutdowns, which Congress seems to increasingly allow. "The failure to pay air traffic controllers for 44 days created uncertainty, drove many experienced controllers out of the profession and harmed the recruitment pipeline," a spokesperson from the Department of Transportation told CBS News in November.    

Nor does it help that air traffic controllers rely on technology that is likely to be older than they are. The F.A.A. is trying, for example, to replace its outdated radar system, but NBC reports: “The FAA has been spending most of its $3 billion equipment budget just maintaining the fragile old system that still relies on floppy discs in places. Some of the equipment is old and isn't manufactured anymore, so the FAA sometimes has to search for spare parts on eBay.”

The National Transportation Safety Board (NTSB) Chair Jennifer Homendy complained: “This is 2026. The secretary talks about upgrading our air traffic control system. We have an old air traffic control system. This is why he talks about that. We need to upgrade.”  

I was surprised to learn that gaming might not just be an asset for becoming an air traffic controller, but also an asset for air traffic controllers. Josh Jennings, a supervisor at the F.A.A.’s air traffic command center in Virginia, told Ms. Demirjian that gaming is both a way for controllers to stay sharp and a form of “social currency” among them. “I would say it’s probably tenfold on how fast this new generation is able to pick up on our physical tech, our radar scopes,” he said. Controllers apparently often play video games on their breaks.

In similar efforts to recruit from unconventional backgrounds, the Marines are looking at dirt bikers to become drone pilots, while Russia is recruiting university students for its drone pilots.

I can see the argument for recruiting gamers to be air traffic controllers. Both are used to obsessively monitoring multiple screens with lots of activity, requiring quick reactions, and with lives on the line. The difference, of course, is that for air traffic controllers, those virtual images represent real things, and the lives that may be lost are real people’s lives.

Still, given a choice between a controller who was a gamer versus some middle-aged college grad who is used to looking at spreadsheets, give me the gamer every time.

I think about all this, oddly enough, in regard to health care. Some of you may also be fans of “The Pitt.” One of my favorite characters is head nurse Dana Evans, and I sometimes wonder if she would ever get tired enough of covering for ineffective/incompetent doctors that she might opt to become one.  You can’t tell me that she isn’t smart enough, and you probably couldn’t convince me she doesn’t have enough medical knowledge, but in our system if she wanted to make such a change, it would mean sending her to medical school, then internship and residency – years of her life and hundreds of thousands of dollars of debt.

Who, exactly, would that help?

You know she'd be a good doctor
Where is the “gamers, please apply” equivalent to medical training, where non-traditional but potentially applicable backgrounds count? Could, for example, people with exceptional pattern recognition skills but perhaps not so good in chemistry or biology become excellent radiologists? Might biologists do well as pathologists, without all the years of physician training?

For many decades a college degree was seen as the ticket to middle-class (or more) success, but we’re seeing that’s less true now. We’re living in a digital world, and people are gaining skills and knowledge from that world that we’re not fully recognizing.

So kudos to the F.A.A. for recognizing how gamers might be good candidates, and I can only hope the subsequent training program isn’t so tradition-bound that it scares them off. And I’m waiting to see how healthcare and other industries might learn from -- not just copy -- its approach.

 

P.S. If you are wondering, “1337” is gamer slang for “leet,” which is itself slang for “elite,” as in gaming prowess.

Monday, April 6, 2026

Let's Get Physical (AI)

In the U.S., we’re starting to worry more about AI and robots taking our jobs. It is, apparently, the “grimmest” job market in years for college grads, and AI often gets the blame. Whether that’s true is not so clear. Callum Borchers wrote in The Wall Street Journal about “AI washing” – using AI as an excuse for not hiring. “It’s a wonderful way of looking like a genius when job cuts are something you might have to do for other operational reasons,” Peter Bell, the founder of Gather.dev, told him. “It’s great smoke cover if you just need to goose your bottom line.”

Get ready for the robots. Credit: Microsoft Designer

Still, though, it’s not an unwarranted concern. “I don’t think A.I. has hit the labor market yet, and I don’t think it’s radically changed corporate productivity yet, either, but I think it’s coming,” Daniel Rock, a University of Pennsylvania economist, told Ben Casselman of The New York Times.

Mr. Casselman reports on a new working paper from a number of economists forecasting the economic effects of AI, which reveals some divergence about how much AI will boost economic growth or reshape the labor force. They do think there will be impacts but “experts do not forecast economic outcomes outside the range of historical experience.”

Take your pick about the forecasted AI impact. Credit: Karger, et al.

The experts might want to look at Japan for a glimpse of the future. In TechCrunch, Kate Park takes a long look at how Japan is prioritizing “Physical AI” not as something to fear but as an economic necessity. Its Ministry of Economy, Trade and Industry announced in March that it wants to bolster Japan’s domestic Physical AI sector, and capture a 30% global share by 2040.

Japan has a big demographic problem. It has never encouraged immigration, its population has been shrinking for 14 straight years, its senior population continues to grow, and its working age population is declining. The demographic bomb is already going off.

“Physical AI is being bought as a continuity tool: how do you keep factories, warehouses, infrastructure, and service operations running with fewer people?” Hogil Doh, Global Brain general partner, told Ms. Park. “From what I’m seeing, labor shortages are the primary driver.”

“The driver has shifted from simple efficiency to industrial survival,” Sho Yamanaka, a principal with Salesforce Ventures, added. “Japan faces a physical supply constraint where essential services cannot be sustained due to a lack of labor. Given the shrinking working-age population, physical AI is a matter of national urgency to maintain industrial standards and social services.”

Justin Brown writes in Silicon Canals: “The framing matters. In the U.S., physical AI is a venture capital thesis. In China, it’s a geopolitical strategy. In Japan, it’s an answer to a structural question about whether the country can keep its industrial base running at all.”

It should be troubling that in the U.S. physical AI is neither a strategy nor a tactic, but just a “venture capital thesis.”

Ms. Park states that Japan has historically excelled in the physical building blocks of robotics, whereas China and the U.S. have focused on “full stack” systems that include hardware, software, and data. “Japan’s expertise in high-precision components – the critical physical interface between AI and the real world – is a strategic moat,” Mr. Yamanaka told her. “Controlling this touchpoint provides a significant competitive advantage in the global supply chain. The current priority is to accelerate system-level optimization by integrating AI models deeply with this hardware.”

Japan’s efforts are attracting attention. Tech Buzz reports:

The shift is attracting serious enterprise money. Salesforce Ventures is betting on Japanese physical AI startups, joined by Woven Capital, Toyota's venture arm, and local heavyweight Global Brain. These aren't speculative moonshot investments – they're backing companies deploying robots into warehouses, manufacturing lines, and service positions today.

As such, Tech Buzz concludes: “This pragmatic necessity is creating a real-world testing ground that Silicon Valley can only simulate. Japanese robotics companies are learning what works when physical AI meets messy human environments – unpredictable warehouse layouts, variable product packaging, and the constant adaptation required in actual operations. The feedback loop is accelerating development faster than any research lab could manage.”

I.e., if you want to see the future of Physical AI, look to Japan.

Mr. Brown offers a very practical example:

In construction, an industry where Japan’s worker shortage is especially severe and the average age of laborers now sits above 50, Shimizu Corporation and Obayashi have deployed autonomous welding robots, concrete-finishing machines, and AI-guided cranes on active building sites. Shimizu’s Robo-Welder system has demonstrated a roughly 70% reduction in required human welding hours on structural steel projects.

Affordable housing, anyone?

It’s not that Japan is investing so much in AI – it has approved a national AI plan with five-year funding of “only” US$6.3b – as it is that it is targeting it very effectively. As Franklin Templeton describes it: “The emphasis is not on chasing frontier models, but on embedding AI into sectors that already anchor Japan’s economy…Few countries are as comfortable integrating robotics into daily life and industrial production, and that long familiarity with automation shapes how AI is deployed.”

It should come as no surprise that a group of Japanese robotics developers and major electronics and semiconductor companies are collaborating to produce a humanoid robot, with the aim of mass production by 2027. Elon better get Optimus cranking.

Jensen Huang, founder and CEO of NVIDIA says: “Physical AI has arrived — every industrial company will become a robotics company,” but it’s not that simple. As Mr. Brown warns: “Japan’s regulatory willingness to permit autonomous systems in mixed environments like construction sites, farms, and retail stores is proving as important as the technology itself, and countries that wait until the labor crisis is acute before updating regulatory frameworks will find themselves a decade behind on deployment infrastructure.”

I especially liked Mr. Brown’s conclusion:

Western automation discourse treats robotics as something that happens to workers, a force that displaces and disrupts, and nearly every policy debate in the U.S. and Europe is still structured around that premise. Japan reveals how fundamentally parochial that framing is. When automation becomes a continuity tool rather than an optimization tool, the entire institutional posture shifts: political resistance dissolves, regulatory frameworks accelerate, and the relationship between human labor and machine capability stops being adversarial and starts being architectural. 

The U.S. already has critical shortages of farm and construction workers, we don’t produce enough engineers, and goodness knows we’re driving away our scientists, so if we wait until it’s clear that we need Physical AI, it will probably be too late.

Monday, March 30, 2026

Oh. Another Moonshot

If all goes well, in a couple days NASA will be sending astronauts on their way to the moon, for the first time since – gulp – 1972. They’re not landing, mind you, they’re just doing a fly around, something Apollo 8 first did way back in 1968. Given the advances in microchips, computing power, AI, a robust private space industry, and Elon’s grand plans to inhabit Mars, it doesn’t really sound all that ambitious, hardly a “moonshot” in the sense that we’ve come to use that term, but I guess we should be glad that NASA hasn’t entirely conceded space to the billionaires.

Artemis II Space Launch System Credit: NASA/Jim Ross

The Artemis II mission will send four astronauts – including, if you are counting (and many are), the first person of color, the first woman, and the first Canadian to reach the moon -- on a ten-day, 230,000-mile trip that won’t actually orbit the moon but just loop around it, not getting closer than a few thousand miles. “Things are certainly starting to feel real,” Christina Koch, one of the four, said during a news conference Sunday morning.

Last week NASA unveiled its “Ignition” strategy that Artemis II is part of. It includes not just the fly-by, but also a follow-up mission in 2027, a manned landing in 2028, and a permanent moon base in the 2030s, committing $20b over the next seven years to accomplish the latter. “NASA is committed to achieving the near‑impossible once again, to return to the Moon before the end of President Trump’s term, build a Moon base, establish an enduring presence, and do the other things needed to ensure American leadership in space,” said NASA Administrator Jared Isaacman.

He added: “Today, we are providing a demand for frequent crewed missions well beyond (previously announced moon landings in 2028). We intend to work with no fewer than two launch providers with the aim of crewed landings every six months, with additional opportunities for new entrants in the years ahead. America will never again give up the moon.”

I knew Elon and Jeff were going to get something from all this.

I hope the mission goes according to plan. I hope I live long enough to see a successful manned landing on the moon and even that lunar base. Then again, President Obama launched the Cancer Moonshot in 2016, aiming to “end cancer as we know it,” and there still seems to be plenty of cancer around. Sure, much progress has been made, but we’re still seeing disturbing trends like “skyrocketing” increases in colorectal cancer rates in young adults.

You might call Operation Warp Speed a moonshot, developing effective vaccines against the global COVID pandemic in a matter of months, but it has had the paradoxical result of a new wave of vaccine hesitancy generally, aided and abetted by the MAHA team heading up HHS in the Trump Administration. You wouldn’t consider our measles outbreak as what we’d expect from a vaccine moonshot.   

Similarly, Alphabet has a whole “Moonshot Factory” aimed at big breakthroughs, but none of its successes have revolutionized society or even been the Next Big Thing for Alphabet. "We have a 2% hit rate," CEO Astro Teller told a conference last fall. "Most of the things we try don't work out, and that's okay." Waymo and Wing are considered its big successes, but, I don’t know about you, neither is in my market yet.

A couple weeks ago I wrote about how the U.S. military seems to have failed to learn the lessons of the war in Ukraine, continuing to rely on expensive weapons systems that are ill-equipped to deal with flights of AI-driven drones. A couple days ago Simon Shuster wrote in The Atlantic about his visit to Rheinmetall, the German arms manufacturer. He told his guide about how tanks in Ukraine had changed from being killing machines to being easy drone targets, and so had been modified to have nets and other anti-drone protections. His guide was abashed. “No,” he said. “We don’t have something like that.”

The Rheinmetall CEO was dismissive of Ukrainian innovation: “It’s Ukrainian housewives. They have 3-D printers in the kitchen, and they produce parts for drones. This is not innovation.”

I beg to differ.

I think of all this in the context of an updated KFF analysis of hospital concentration. The key takeaways:

  • “One or two health systems controlled the entire market for inpatient hospital care in nearly half (47%) of metropolitan areas in 2024.
  • In more than four of five metropolitan areas (83%), one or two health systems controlled more than 75 percent of the market.
  • Nearly all (97% of) metropolitan areas had highly concentrated markets for inpatient hospital care when applying HHI thresholds from antitrust guidelines to MSAs.
  • Most hospital markets in metropolitan areas (80%) became less competitive from 2015 to 2024 or were controlled by one health system over that entire period.”
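For readers unfamiliar with the HHI measure KFF applies, it is just the sum of the squared market shares of every competitor, ranging from near 0 (perfect competition) to 10,000 (monopoly). A minimal sketch in Python; the market shares below are purely illustrative, not KFF's data, and the 2,500 cutoff is the "highly concentrated" threshold from the 2010 federal merger guidelines:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(share ** 2 for share in market_shares)

# Hypothetical metro area where two health systems control 85% of inpatient care
shares = [60, 25, 10, 5]
index = hhi(shares)                   # 60^2 + 25^2 + 10^2 + 5^2 = 4350
highly_concentrated = index > 2500    # threshold per the 2010 merger guidelines
```

Note that the 2023 DOJ/FTC merger guidelines lowered the "highly concentrated" threshold to 1,800, so which cutoff an analysis applies matters; either way, a hypothetical market like the one above clears both bars easily.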

I first wrote in 2015 about how hospitals were the biggest source of health care spending – as they had been in 1960, and as they are today. KFF says they accounted for 40% of our national health care spending growth from 2022 to 2024. With such concentrated market share, it’s easy to see why.

This is not innovation. Those are not the result of any moonshots. That is not the future.

Hospitals, to use an overworked analogy, are the health care system’s tanks (or aircraft carriers). Powerful but hugely expensive, relatively slow, steeped in traditions of prior wars. They should not be the mainstays of 21st century medicine.

21st century healthcare should not be “fought” with big, expensive, slow-to-produce assets. Even aside from hospitals, I mean, how long does it take to train physicians, at what expense? And once they are practicing, how long does it take to bring the new clinical findings into their actual practice? It’s ridiculous, especially in an AI era.

Similarly, how many billions does it take to develop new drugs, leaving how many years of patent protection?  With genetic manipulation, AI-assistance, and 3D printing, why aren’t we in the era of inexpensive, more effective prescription drugs?

We need the kind of innovation that Ukraine has brought to 21st century warfare. Those are the kinds of moonshots we should be aiming for.

Monday, March 23, 2026

Calling BS

We are living, you’d have to say, in the age of bullshit. Our politicians can’t answer the simplest of questions without spouting word salad answers aimed at running out the clock until the next question. Our corporations spew endless platitudes about their lofty goals in an attempt to distract us from their mendacious profit-seeking. And now we have AI producing endless volumes of words, an unpredictable amount of which aren’t remotely true.

Despite what you might think, you may want to be that guy. Credit: Microsoft Designer

For better or worse (and, trust me, it has often been for worse), I’ve always been one to ask “why,” to probe vagueness -- whether it was a teacher, a boss, or a politician. Call me cynical, call me skeptical, call me inquisitive, but I have a low tolerance for bullshit, in its many forms. So I was thrilled to see that a new study suggests that employees who don’t fall for corporate bullshit may be better employees.

The study is from Shane Littrell, a postdoctoral researcher and cognitive psychologist at Cornell University, whose research “focuses primarily on how people evaluate and share knowledge, particularly the ways that misleading information (e.g., bullshit, conspiracy theories, corporate messaging) influence people’s beliefs, attitudes, and decisions.”

One wonders what he was like as a child.

His new research introduces a new tool called the Corporate Bullshit Receptivity Scale (CBSR), which was “designed to measure susceptibility to impressive-but-empty organizational rhetoric.”

His paper defines “bullshit” as “a type of semantically, logically, or epistemically dubious information that is misleadingly impressive, important, informative, or otherwise engaging,” and distinguishes it from other types of speech (such as jargon) in that “it is both functionally misleading and epistemically irresponsible.”  

“Corporate bullshit is a specific style of communication that uses confusing, abstract buzzwords in a functionally misleading way,” said Dr. Littrell. “Unlike technical jargon, which can sometimes make office communication a little easier, corporate bullshit confuses rather than clarifies. It may sound impressive, but it is semantically empty.”   

For the current research, he developed a “corporate bullshit generator” that mixes and matches phrases from actual Fortune 500 business leaders to produce “statements that were syntactically coherent but semantically empty (e.g., “Working at the intersection of cross-collateralization and blue-sky thinking, we will actualize a renewed level of cradle-to-grave credentialing and end-state vision”).” They sound like statements a real person might say and that should have meaning, but are neither.

Could you tell the real from the bullshit? Source: Littrell

He then had study participants evaluate those pseudo-statements versus actual statements, rating the “business savvy” they reflected. As the Cornell press release summarized:

The results revealed a troubling paradox. Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and “visionary,” but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making.

The study found that being more receptive to corporate bullshit was also positively linked to job satisfaction and feeling inspired by company mission statements. Moreover, those who were more likely to fall for corporate BS were also more likely to spread it.

I.e., the more gullible sheep probably aren’t the best workers.

Don't just follow the herd. Credit: Microsoft Designer

“This creates a concerning cycle,” Dr. Littrell said. “Employees who are more likely to fall for corporate bullshit may help elevate the types of dysfunctional leaders who are more likely to use it, creating a sort of negative feedback loop. Rather than a ‘rising tide lifting all boats,’ a higher level of corporate BS in an organization acts more like a clogged toilet of inefficiency.”

Dr. Littrell was quick to point out that falling for corporate bullshit is not a function of intelligence, education, or job functions, telling Michael Sainato of The Guardian: “This isn’t something that only affects people who are less intelligent. Anybody can fall for bullshit, and we all, depending on the situation, fall for bullshit when it is kind of packaged up to appeal to our biases.”

Similarly, he told Jessica Stillman, writing in Inc.: “Unfortunately, bullshit and bullshitting are unavoidable. It’s just part of human behavior, especially in competitive environments...If senior executives communicate in ‘bullshitty’ ways, then everyone else will too. They should normalize clearly defining their terms, focus on shorter, to-the-point sentences, and resist using ambiguous buzzwords.”

“Most of us, in the right situation, can get taken in by language that sounds sophisticated but isn’t,” Dr. Littrell said. “That’s why, whether you’re an employee or a consumer, it’s worth slowing down when you run into organizational messaging of any kind – leaders’ statements, public reports, ads – and ask yourself, ‘What, exactly, is the claim? Does it actually make sense?’ Because when a message leans heavily on buzzwords and jargon, it’s often a red flag that you’re being steered by rhetoric instead of reality.”

Ask. That. Question.

One of my favorite takes on the research was from Rupert Goodwins in The Register, who starts by saying:

Science is at its best when it makes manifest radical ideas that change our worldview. This is the flag all sane people salute, under which we march to war. Yet in our hearts, we know that the very tastiest science is that which confirms our prejudices and validates what we've known all along. Cornell University has just served up a plate of the finest yet. Tuck in.

He points out the long history of corporate bullshit, especially in tech and consulting, and now made much worse with AI as “prime slime.” Accordingly:

This is where we call upon the team at Cornell to expand and extend their science beyond the general skewering of business jargon and those who create and consume it, welcome and valuable as it is. The use of the stuff as a diagnostic is great – now use that as the basis for identifying and dissecting the stuff itself, and the mechanisms by which it affects choices and actions.

The Corporate Bullshit Receptivity Scale is a great start. Now we need the ABRC, the AI Bullshit Receptivity Scale.

Unfortunately, Dr. Littrell admitted to Ms. Stillman: “The scale is a promising tool for researchers, but it’s not quite ready yet to be used as a high-stakes screening instrument by private companies. We still need to investigate it more robustly first.”

In the meantime, if you’ve got troublesome employees who are always asking uncomfortable questions and seeking more clarity on goals, instead of sidelining or even firing them, you may want to consider promoting them. They may be your best employees.

Monday, March 16, 2026

Stuck in the Middle

Even before the war – oops: special operation, excursion, or whatever your preferred term is – with Iran started, people were complaining about how expensive things are. Home ownership for first-time buyers seems out of reach. Sure, egg prices may be down from the late stages of the Biden Administration (thank you so much, bird flu!), but most of us are still dismayed by our grocery bills. Health insurance costs what a house might have cost fifty years ago and what a new car might have cost twenty years ago.

Using a middleman to negotiate. Credit: Microsoft Designer

The latest findings from the West Health-Gallup Center on Healthcare in America show that a third of Americans have cut back on other expenses in order to pay for health care. We’re stringing out our prescriptions, borrowing money, even skipping meals to pay our health care bills. Even among those with health insurance 29% are cutting back; 62% of those without health insurance are making trade-offs, and I’m surprised the latter isn’t much higher.

Similarly, Kaiser Family Foundation found that 4 in 10 Americans have not taken their prescription medications due to costs, and 6 in 10 worry about being able to afford prescription drugs for themselves or their families. Even among those with insurance, a majority worry.  

Gallup also found that Americans are delaying major life events due to their health care costs, including taking vacations (29%), surgical or medical treatments (26%), or changing jobs (18%). Even a quarter of those with family incomes over $240,000 report such delays.



Meanwhile, the average cost for a new car hit $50,000 in December (although it declined slightly in January). Edmunds reports that 1 in 5 new car buyers have payments of $1,000+, a new record. The average new car payment was $772, also a new record. Even the percent of used car buyers with $1,000+ payments hit a new record. If you think you can still find entry-level cars under $20,000, Kelley Blue Book says forget it.

And, of course, even once you have a car you have to pay for gas, insurance, and maintenance, all of which are also going up noticeably. Navy Federal Credit Union’s Cost of Car Ownership (COVO) Index found that the cost of car ownership has gone up 42% since January 2020, rising at twice the rate of inflation.

“Americans are frustrated by Whac-a-Mole inflation,” said Heather Long, chief economist at the credit union. “It’s difficult to plan and leaves middle-class and moderate-income consumers constantly on edge about what will shoot up in price next.”

If you’re wondering why all the talk about cars, it’s because of a fascinating article by Imani Moise in The Wall Street Journal about a new way to buy – or, at least, to negotiate for – cars: hire a middleman. For a flat $1,000, 33-year-old Tomi Mikula will negotiate for you, using the expertise he gained from a decade of selling cars.

His company is called Delivrd, which now includes five other professionals. Its slogan is “Skip the Dealership, Not the Deal,” and it promises “A seamless, enjoyable car buying experience tailored to your busy lifestyle.” He even livestreams some of his negotiations.

Mr. Mikula pits dealers against dealers, looking for the best deal. Some have started to refuse to deal with him, while others relish the challenge. Even his expertise can’t always result in a good deal; for some popular models, he says, “You’re paying for me to find you one.”

Here’s the quote I loved: “You’re hiring a middleman to deal with the middleman to make the middleman more efficient,” Mr. Mikula said.

That sure brings me back to health care.

In the Republicans’ perfect health care world, consumers would control their own money, purchasing services wisely, with transparent pricing. It was a point of contention in the recent efforts to extend the expanded ACA premium tax credits, but the vision goes much further. President Trump recently amplified this in his State of the Union: “I want to stop all payments to big insurance companies and instead give that money directly to the people so they can buy their own healthcare, which will be better healthcare at a much lower cost.”

Of course, they always gloss over the huge differences in health care expenditures, where the top 5% of people account for half of all spending.



“Transparency” has been a rallying cry for conservatives for the last twenty years, with some progress but little impact. There are tens of thousands of “services,” each of which has prices that vary by payor, and few of which are meaningful unless you happen to have a medical degree (and, even then, not always).

Even prescriptions, which would seem like something that should be simple, are maddeningly opaque. Is it on formulary, is it in-network (or not only in-network but “preferred”), is it brand or generic?

Cars, on the other hand, are much simpler. A new car model from Dealer A is the same thing as that model from Dealer B. You can easily find the list price, the safety record, the consumer and expert ratings. Even for used cars, you can find suggested prices and vehicle history reports. All the data you should need to negotiate like Mr. Mikula should be there.

Yet, I daresay, few of us leave a car dealer feeling we’ve gotten the best deal, no matter how much homework we’ve done. The information asymmetry has been lessened, but not eliminated. Thus the opportunity for “a middleman to deal with the middleman to make the middleman more efficient.”

I suppose we could create an industry of such middlemen for healthcare. They’d have to deal with the problem that the service you buy from Hospital A is not the same service you might buy from Hospital B; in fact, the service you buy from Dr. Z at Hospital A is not the same as the service you might buy from Dr. Y at the same hospital. Health care is not a commodity, and we don’t really know how to quantify exactly what we’re buying.

Middlemen or not.

In theory, health insurers should be our middlemen, dealing with health care practitioners and organizations from a position of more volume and more expertise, but most of us view them as acting more in their own interests. And even those middlemen have hired their own middlemen, such as PBMs.

If we want to make things more affordable, we need more than transparency, and the presence of middlemen is a sign that a market isn’t working, not a way to make it work better.

Monday, March 9, 2026

While We Were Bombing

When it comes to the fight between Anthropic and the Pentagon, I’m on Team Claude. If asked to trust Anthropic CEO Dario Amodei or Secretary Pete Hegseth, I’m picking Dr. Amodei. The spat between Anthropic and the Pentagon may really be less about AI governance than about a personality clash between the two men, but it is still important. All that being said, I hate to break the news to Dr. Amodei, but there are going to be autonomous AI weapons – if there are not already – and AI is almost certainly already being used for mass surveillance, even of U.S. citizens.

Attack of the drones -- guided by AI. Credit: Microsoft Designer

Those were his supposed “red lines,” and they are good ones, but technology advances and current events have rendered them moot. Are they “lawful”? Well, they probably aren’t illegal, but that speaks more to how outdated our laws are when it comes to AI (or many other newer technologies). Meanwhile, of course, the U.S. and Israel unilaterally attacked Iran – take your pick of the many rationales offered – and Claude has been an integral part.

The future of war has arrived. It actually arrived in Ukraine a couple of years ago. A war that started out as a 20th century war, relying heavily on tanks, troops, and artillery, quickly evolved into something few had been expecting -- a war of drones, cell phones, GPS, AI, and anti-drone countermeasures. Ukraine has demonstrated startling (and desperately needed) innovation, in tactics, strategy, and especially drones. Despite the country being battered by Russian missile and drone attacks, Ukraine produces over 4 million drones a year, far more than the U.S. or, indeed, all NATO countries combined.

U.S.-supplied missile systems like the Patriot, Stinger, or Javelins have helped Ukraine fend off Russian attacks, but those systems are expensive and in short supply. And once Russia started using Iranian-designed drones in mass attacks, those systems became woefully inadequate, not to mention not cost-effective – a $1,000 drone versus a $1 million interceptor? The economics are clear.

Iranian Shahed drones. Credit: AP
They may be clear, but evidently not quite fully apparent to the U.S. military. Attacks on Iran look a lot like the Gulf War, although the aircraft and the “smart” munitions are better (and more expensive). When Iran retaliated, it was largely through its vaunted Shahed drones. The initial U.S. casualties were the result of a drone attack, as were attacks on U.S. radar systems. As Ukraine painfully learned, but the U.S. apparently did not, expensive missile systems are not well designed to counter massive drone attacks. The Hill reports that Pentagon officials admitted to Congressional leaders that Iranian drone attacks were getting through U.S. defenses, putting our troops and bases at risk.

Kelly Grieco, a senior fellow at the Stimson Center think tank, told The Hill: “It’s worth saying that the notion the U.S. military couldn’t have predicted this threat begs belief given that it was well known about Iranians’ Shahed threat. And we’ve had four years of watching Ukraine deal with Iranian drones and Russian-made variants of them in attacks, so this shouldn’t have come as a surprise.”

And yet…

It’s no wonder that, after years of having to beg for U.S. support, Ukraine’s President Zelensky has offered to share some of his country’s hard-won drone expertise. “Our military possesses the necessary capabilities,” President Zelensky said in a post on X. “Ukrainian experts will operate on-site, and teams are already coordinating these efforts.”

Let us hope that U.S. and Israeli officials are not too proud, or too stupid, to take such assistance.

The problem may boil down to what the Pentagon’s first AI chief, retired Air Force Lt. Gen. Jack Shanahan, told The Wall Street Journal: “The Department of Defense was built as a hardware company in the industrial age, and it has struggled to become a digital company in a software-centric era.” Weapons and weapons systems that take years to develop, more years to produce, while costing tens of millions or more, are going to struggle to keep up in a world where weapons can be 3D printed and guided by AI.

It should be noted that last fall President Zelensky warned the UN: “Dear leaders, we are now living through the most destructive arms race in human history because this time, it includes artificial intelligence. We need global rules now for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons.” Dr. Amodei was perhaps listening; Secretary Hegseth almost certainly was not.

Craig Jones, a political geographer at Newcastle University, UK, told Nicola Jones in Nature: “The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent.”

It’s here. Is AI being used in Iran? You bet: as Michael Daniels and Dov Lieber of The Wall Street Journal outline, it is being used for everything from logistics and intelligence analysis to targeting.

Unfortunately, as Steve Feldstein, a senior fellow at the Carnegie Endowment for International Peace think tank, told Rest of World: “Do we have the right rules in place and accountability norms to handle the exponential growing use of these tools? My answer would be no.”  

Similarly, Daniel Castro, a vice president at Information Technology and Innovation Foundation (ITIF), wrote in IEEE Spectrum regarding the Amodei/Hegseth dispute:

Reasonable people can disagree about where those lines should be drawn.
But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.
If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.

The Pentagon’s strategy seems to be bombs away, even if it costs $1 billion per day and will soon have diminishing impact. The Administration’s AI strategy seems to be that guardrails would only dampen innovation and leave us behind in the AI race. Our septuagenarian Congress still can’t figure out Facebook, and wants no part of tackling AI. None of these inspire confidence.

I think Dr. Amodei and President Zelensky have a much better grasp on the future – which is already happening -- than do Secretary Hegseth or President Trump, but I worry we’re going to have to go through a lot of scary things before we settle into that future.

Monday, March 2, 2026

Where Are We Going to Put All That?

Right now, the world is talking about the U.S. attacks on Iran, while the tech world is closely following the Pentagon-Anthropic-OpenAI spats about AI safety and guardrails. I’m going to let cooler, smarter heads opine about them. Instead, I want to return to the rather more dull, but equally important, topic of data storage.

A pretty piece of glass? Yes, but with LOTs of data on it. Credit: Microsoft Research

I first wrote about data storage almost ten years ago, focusing on the then-new ideas of using diamonds or DNA as ultra-dense storage mechanisms. Five years later I was surprised to find we were about to enter the Yottabyte Era of data, with DNA still a leading candidate to store all that data. Since then we’ve seen AI blossom and data centers become a top-of-mind topic of conversation for many Americans. We’re generating data faster than we can find places to store it, and still haven’t solved how that storage will last long term.

Well, I’m happy to report that advances in DNA storage continue, and that there is a new rival – glass! – that may prove even better.

Let’s start with DNA:

Rewritable DNA storage: This week researchers at the University of Missouri said that they’ve found a way not only to store data in DNA but to rewrite it as needed. Li-Qun “Andrew” Gu, a professor of chemical and biomedical engineering, said: “We wanted to see if we could store and rewrite information at the molecular level faster, simpler and more efficiently than ever before.”

The team is developing a compact electronic device paired with a molecular-scale detector called a nanopore sensor. As the DNA passes through the sensor, it creates subtle electrical changes that software translates back into zeros and ones and, ultimately, the original data file. This method allows data to be written, erased, and rewritten repeatedly, just like a hard drive might, except with much more storage and longevity.
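To get a feel for how bits can map to DNA at all, here is a minimal sketch of the classic textbook scheme: two bits per nucleotide. To be clear, this is a generic illustration, not the Missouri team’s actual method, and the function names are my own:

```python
# Illustrative only: map each 2-bit pair to one of the four DNA bases,
# then reverse the mapping to recover the original bytes. Real DNA
# storage systems add error correction and avoid problematic sequences
# (e.g., long runs of the same base); this sketch skips all of that.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def write_dna(data: bytes) -> str:
    """Encode bytes as a DNA sequence, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i+2]] for i in range(0, len(bits), 2))

def read_dna(seq: str) -> bytes:
    """Decode a DNA sequence back into the original bytes."""
    bits = "".join(BITS_FOR_BASE[base] for base in seq)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

message = b"Hi"
strand = write_dna(message)       # "CAGACGGC"
assert read_dna(strand) == message
```

The round trip is what a “rewritable” system has to preserve: erase the strand, synthesize a new one, and the same read-out machinery recovers the new file.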

“Think of it like a super-secure safe deposit box for your digital life,” Professor Gu said. “DNA storage could protect everything from personal memories and important documents to scientific data and corporate archives — without the added cybersecurity concerns.”

Synthetic biology, meet electronics: Last week, a team of researchers at Penn State University reported on getting DNA to work with electronics. The researchers developed a memory resistor, or “memristor,” that requires little energy to operate. Better yet, a memristor can allow current to flow even after its power source is turned off, and it can remember the direction of prior current flow.

The team had to create custom DNA sequences and integrate them with thin films of perovskite, which is commonly used in solar cells, lasers, and data storage devices. This made the DNA capable of conducting electricity.

“We can computationally determine exactly which sequences we need and how long they should be, and then we can rationally design them with synthetic DNA,” co-author Neela H. Yennawar, research professor and director of the Penn State Huck Institutes, said. “These structures can be systematically doped with silver and other ions and engineered to interface seamlessly with perovskites — transforming DNA from a biological macromolecule into a programmable, multifunctional nanomaterials platform.”

“Biology and electronics are different domains,” said Kavya S. Keremane, co-corresponding author and postdoctoral researcher in materials science and engineering. “Bridging these two fields required developing an entirely new materials platform that allows them to function seamlessly together. By combining the information storage capabilities of DNA with the exceptional electronic properties of perovskite semiconductors, we created a bio-hybrid system that fundamentally changes how low-power memory devices can be designed.”

Cheaper, Faster, More secure: In a pair of related studies released in late January, researchers at Arizona State University propose to approach DNA storage differently: “By treating DNA as an information platform rather than just a genetic material, we can begin to rethink how data is stored, read and secured at the nanoscale,” says Hao Yan, a Regents Professor in the School of Molecular Sciences and director of the Biodesign Center for Molecular Design and Biomimetics.

The approach centers less on the well-known letters DNA uses than on physical shape. They designed and constructed nanoscale DNA structures that act as physical letters. When those letters pass through a microscopic sensor, machine learning software records and analyzes subtle electrical signals, which the system can then translate back into readable words and short messages with high accuracy.

The approach greatly increases the number of possible molecular codes that can be created, making unauthorized decoding far more difficult. It also allows information to be packed into three-dimensional DNA structures, which adds even more complexity and security to each molecular key.
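The security argument is really just combinatorics. As a back-of-the-envelope illustration (the alphabet sizes here are hypothetical, not taken from the ASU papers): enlarging the molecular “alphabet” from the 4 natural bases to k distinct nanostructure shapes grows the code space from 4^n to k^n for an n-letter message.

```python
# Hypothetical numbers for illustration: compare the code space of the
# 4-letter natural DNA alphabet with an imagined 16-shape structural
# alphabet, for a 10-letter molecular message.

def code_space(alphabet_size: int, length: int) -> int:
    """Number of distinct messages of a given length over an alphabet."""
    return alphabet_size ** length

n = 10
print(code_space(4, n))   # 4 bases:  1,048,576 possible messages
print(code_space(16, n))  # 16 shapes: ~1.1 trillion possible messages
```

An eavesdropper guessing the encoding has exponentially more candidates to rule out, which is the sense in which shape-based letters make unauthorized decoding harder.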

“In these studies, our team brings together complementary approaches, including DNA nanotechnology, super-resolution optical imaging, high-speed electronic readout and machine learning, to interrogate DNA nanostructures across multiple spatial and temporal scales,” Chao Wang, associate professor in the School of Electrical, Computer and Energy Engineering, said.

Then, for something completely different:

Through a glass, clearly: In mid-February, Microsoft reported what it called a “breakthrough” in glass-based storage, under its Project Silica, the goal of which is to develop:  

…the world’s first storage technology designed and built from the media up to address humanity’s need for a long-term, sustainable storage technology. We store data in quartz glass: a low-cost, durable WORM media that is electromagnetic field-proof, and offers lifetimes of tens to hundreds of thousands of years. This has huge consequences for sustainability, as it means we can leave data in situ, and eliminate the costly cycle of periodically copying data to a new media generation.

The breakthrough entails writing not just to expensive silica but to ordinary borosilicate glass, the same material found in kitchen cookware and oven doors. Moreover, they’ve made both the reading and writing devices simpler, faster, and cheaper. The team wrote: “All steps, including writing, reading and decoding, are fully automated, supporting robust, low-effort operation.”

They believe that glass storage is resistant to water, heat, and dust (unlike DNA), and should preserve data for at least 10,000 years. “It has incredible durability and incredible longevity. So once the data is safely inside the glass, it’s good for a really long time,” said Richard Black, the research director of Project Silica. He cautions, though, “This is not a replacement for everyday storage like [solid state drives] or hard drives. It’s designed for data you want to write once and preserve for a very long time.”

----------

Obviously, I’m skipping lots of technical details, and, just as obviously, we’re not quite there yet with either. But that’s the thing about long-term solutions; we have to start developing them now, before the future overwhelms the present.