Monday, November 24, 2025

Revenge of (the Other) Nerds

Like it or not, ready for it or not, we appear to be living in the age of A.I. Anyone who isn’t worried about its impact on their job isn’t paying attention, and anyone who isn’t thinking about it for their portfolio risks a serious case of FOMO. If it weren’t for all the A.I. spending – both on its development and on the data centers that support it – we probably wouldn’t be seeing big stock market gains and might even be in a recession. The Wall Street Journal reports: “Growth has become so dependent on AI-related investment and wealth that if the boom turns to bust, it could take the broader economy with it.”

AI is going to have to get past the accountants and actuaries. Credit: Microsoft Designer

So it is kind of ironic that a revolution caused by the computer nerds could be facing big headwinds caused by the green eyeshade kind of nerds, like accountants and actuaries.

Let’s start with the accounting. Many questions have been raised about how circular some of the AI investments seem. Microsoft invests in OpenAI, which then buys cloud computing from Microsoft. Same with Oracle. NVIDIA invests in OpenAI, which then buys lots of NVIDIA chips and causes others to do the same. And the money goes round and round.

Sam Altman, CEO of OpenAI, recently said: “There is always a lot of focus on technological innovation. What really drives a lot of progress is when people also figure out how to innovate on the financial model.” Unfortunately, some of those innovations are starting to look like innovations Enron might have come up with.

Jonathan Weil of The Wall Street Journal analyzed how Meta was financing the building of a new $25b data center without it appearing on its balance sheet, and concluded: “The favorable accounting outcome hinges on some convenient assumptions. Some appear implausible, while others are in tension with one another, making the off-balance-sheet treatment look questionable.”

Mr. Weil concludes: “Artificial intelligence, meet artificial accounting.”

Then there is insurance. AI benefits carry AI liabilities. Consider AI toys. The U.S. Public Interest Research Group Education Fund (PIRG) issued its 40th Trouble in Toyland report, and one of the areas of focus was A.I. chatbots that interact with children. It’s scary: “We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls.” Privacy is also a concern.

An advisory from Fairplay was also blunt: “AI Toys Are NOT Safe for Kids.”

When AI products might tell kids where to find knives or draw them into sexually explicit talk, you can imagine that liability concerns come right to mind for actuaries and CFOs. That’s why the Financial Times is reporting that insurers want no part of AI exposure. Lee Harris and Christina Criddle found:

Major insurers are seeking to exclude artificial intelligence risks from corporate policies, as companies face multibillion-dollar claims that could emerge from the fast-developing technology.
AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools including chatbots and agents.

Dennis Bertram, head of cyber insurance for Europe at Mosaic, told them: “It’s too much of a black box.” Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing start-up, added: “Nobody knows who’s liable if things go wrong.”

The "black box" nature of AI is a liability problem. Credit: Microsoft Designer

We’ve already seen some AI-related losses. The Tech Buzz details:

Google's AI Overview falsely accused a solar company of legal troubles earlier this year, triggering a $110 million lawsuit. Air Canada got stuck honoring a discount its chatbot completely invented after a customer took the airline to small claims court. Most dramatically, fraudsters used a digitally cloned executive to steal $25 million from London engineering firm Arup during what appeared to be a legitimate video conference.

In addition to outright exclusions, insurers are adding amendments that limit liability to certain types of risks or certain amounts of payouts. Aon’s head of cyber Kevin Kalinich told FT that the industry could accept some AI-related losses, but: “What they can’t afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses — a systemic, correlated, aggregated risk.”

One way or another, the experts told FT, there will be some AI-related losses, and some of them will end up in court. Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, told FT: “It will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event.”

This isn’t unexpected. New technologies bring new benefits, and new risks, and it takes time to factor in both. “When we think about car insurance, for example, the broad adoption of the safety belt was really something which was driven by the demands of insurance,” Michael von Gablenz, who heads the AI insurance division of Munich Re, told NBC News. “When we’re looking at past technologies and their journey, insurance has played a major role in that, and I believe insurance can play the same role for AI.”

NBC News also cites a survey from the Geneva Association indicating that 90% of businesses want insurance protection against generative AI losses, and an Ernst & Young report that found 99% of the 975 firms it surveyed had suffered financial losses from AI-related risks – two-thirds of them more than $1 million.

Clearly there is a need for insurance against AI-related losses, and certainly there is an emerging market, but actuaries need data and predictability, and both are in short supply at the moment. Still, the potential is huge. Deloitte predicts AI insurance could be a $4.8b market by 2032, which seems remarkably low.

Martin Anderson, writing in Unite.AI, suggests we may need some sort of federal backstop, such as what happened with the nuclear industry or for vaccine development. “However, history suggests that forcing AI companies to insure themselves, devoid of government aid, is not the likely path ahead,” he says. On the other hand, “Those that object to the possibility of AI obtaining the same ‘bailout’ status as banks, are not likely to embrace heavily government-backed solutions to the insurance quandaries around AI.”  

Insurance quandaries abound.

----------

In one sense, it may frustrate AI advocates that mundane things like accounting and insurance might slow down its progress. On the other hand, the fact that they are is a sign that A.I. is truly becoming mainstream. So give the green eyeshade guys a break while they figure this out.

Monday, November 17, 2025

If You Could Read My Mind - Wait, You Can?

Over the years, one area of tech/health tech I have avoided writing about is brain-computer interfaces (B.C.I.). In part, it was because I thought they were kind of creepy, and, in larger part, because I was increasingly finding Elon Musk, whose Neuralink is one of the leaders in the field, even more creepy. But an article in The New York Times Magazine by Linda Kinstler rang alarm bells in my head – and I sure hope no one is listening to them.

This is your brain in fMRI. Credit: Max Planck Institute

Her article, Big Tech Wants Direct Access to Our Brains, doesn’t just discuss some of the technological advances in the field, which are, admittedly, quite impressive. No, what caught my attention was her larger point that it’s time – it’s past time – that we started taking the issue of the privacy of what goes on inside our heads very seriously.

Because we are at the point, or fast approaching it, when those private thoughts of ours are no longer private.

The ostensible purpose of B.C.I.s has usually been assistance for people with disabilities, such as people who are paralyzed. Being able to move a cursor or even a limb could change their lives. It might even allow some to speak or even see. All are great use cases, with some track record of successes.

B.C.I.s have tended to go down one of two paths. One uses external signals, such as through electroencephalography (EEG) and electrooculography (EOG), to try to decipher what your brain is doing. The other, which Neuralink uses, is an implant directly in your brain to sense and interpret activity. The latter approach has the advantage of more specific readings, but has the obvious drawback of requiring surgery and wires in your brain.

There’s a competition held every four years called Cybathlon, sponsored by ETH Zurich, that “acts as a platform that challenges teams from all over the world to develop assistive technologies suitable for everyday use with and for people with disabilities.” A profile of it in NYT quoted the second-place finisher, who uses the external signals approach but lost to a team using implants: “We weren’t in the same league as the Pittsburgh people. They’re playing chess and we’re playing checkers.” He’s now considering implants.

A Cybathlon 2024 competitor. Credit: Cybathlon ETH Zurich

Fine, you say. I can protect my mental privacy simply by not getting implants, right?  Not so fast. A new paper in Science Advances discusses progress in “mind captioning.” I.e.:

We successfully generated descriptive text representing visual content experienced during perception and mental imagery by aligning semantic features of text with those linearly decoded from human brain activity…Together, these factors facilitate the direct translation of brain representations into text, resulting in optimally aligned descriptions of visual semantic information decoded from the brain. These descriptions were well structured, accurately capturing individual components and their interrelations without using the language network, thus suggesting the existence of fine-grained semantic information outside this network. Our method enables the intelligible interpretation of internal thoughts, demonstrating the feasibility of nonverbal thought–based brain-to-text communication.
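The pipeline the authors describe is worth unpacking: linearly decode semantic features from brain activity, then find the text whose features align best with them. Here is a minimal sketch of that idea using synthetic data in place of real fMRI scans and text embeddings – the dimensions, the ridge decoder, and the candidate captions are all illustrative assumptions, not the paper’s actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 "scans" of 500 voxels, each paired with a
# 64-dimensional semantic feature vector for a caption of what was viewed.
X = rng.normal(size=(200, 500))                    # brain activity
W_true = rng.normal(size=(500, 64))
Y = X @ W_true + 0.1 * rng.normal(size=(200, 64))  # text features

# Linear decoder: ridge regression from voxels to semantic features.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(500), X.T @ Y)

def decode_caption(scan, candidates):
    """Pick the candidate caption whose feature vector best matches the
    features linearly decoded from a brain scan (cosine similarity)."""
    decoded = scan @ W
    sims = [decoded @ feat / (np.linalg.norm(decoded) * np.linalg.norm(feat))
            for _, feat in candidates]
    return candidates[int(np.argmax(sims))][0]
```

The real system iteratively optimizes full descriptive sentences rather than ranking a fixed candidate list, but the "align decoded brain features with text features" core is the same.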

The model predicts what a person is looking at “with a lot of detail”, says Alex Huth, a computational neuroscientist at the University of California, Berkeley, who has done related research. “This is hard to do. It’s surprising you can get that much detail.”

“Surprising” is one way to describe it. “Exciting” could be another.  For some people, though, “terrifying” might be what first comes to mind.

The technique uses fMRI and AI to do the mind captioning, and the participants were fully aware of what was going on. None of the researchers suggest that the technique can tell exactly what people are thinking. “Nobody has shown you can do that, yet,” says Professor Huth.

 It’s that “yet” that worries me.

Dr. Kinstler points out that’s not all we have to worry about: “Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to ‘write’ the brain as well, potentially altering human understanding and behavior.”

“What’s coming is A.I. and neurotechnology integrated with our everyday devices,” Nita Farahany, a professor of law and philosophy at Duke University who studies emerging technologies, told Dr. Kinstler. “Basically, what we are looking at is brain-to-A.I. direct interactions. These things are going to be ubiquitous. It could amount to your sense of self being essentially overwritten.” 

Now are you worried?

Dr. Kinstler notes that some countries – not including the U.S., of course – have passed neural privacy laws. California, Colorado, Montana and Connecticut have passed neural data privacy laws, but the Future of Privacy Forum details how each is different and that there is not even a common agreement on exactly what “neural data” is, much less how best to safeguard it. As is typical, the technology is way outpacing the regulation.

Credit: Future of Privacy Forum

“While many are concerned about technologies that can ‘read minds,’ such a tool does not currently exist per se, and in many cases nonneural data can reveal the same information,” writes Jameson Spivack, Deputy Director for Artificial Intelligence for FPF. “As such, focusing too narrowly on ‘thoughts’ or ‘brain activity’ could exclude some of the most sensitive and intimate personal characteristics that people want to protect. In finding the right balance, lawmakers should be clear about what potential uses or outcomes on which they would like to focus.”

I.e., we can’t even define the problem well enough yet.  

Dr. Kinstler describes how people have been talking about this issue literally for decades, with little progress on the legislative/regulatory front. We may be at the point where debate is no longer academic. Professor Farahany warns that having the ability to control one’s thoughts and feelings “is a precondition to any other concept of liberty, in that, if the very scaffolding of thought itself is manipulated, undermined, interfered with, then any other way in which you would exercise your liberties is meaningless, because you are no longer a self-determined human at that point.”

In 2025 America, this does not seem like an idle threat.

------------

In this digital world, we’ve gradually been losing our privacy. Our emails aren’t private? Oh, OK. Big tech is tracking our shopping? Well, we’ll get better offers. Social media mines our data to best manipulate us? Yes, but think of the followers we might gain. Surveillance cameras can track our every move? But we need them to fight crime!

We grumble but mostly have accepted these (and other) losses of privacy. But when it comes to the possibility of technology reading our thoughts, much less directly manipulating them, we cannot afford to keep dithering.

Monday, November 10, 2025

Support Your Neighborhood Scientist

These are, it must be said, grim times for American science. Between the Trump budget cuts, the Trump attacks on leading research universities, and the normalization of misinformation/disinformation, scientists are losing their jobs, fleeing to other countries, or just trying to keep their heads down in hopes of being able to just, you know, keep doing science.

Seriously: you should. Credit: Stand Up for Science

But some scientists are fighting back, and more power to them. Literally.

Lest you think I’m being Chicken Little, warning prematurely that the sky is falling, there continue to be warning signs. Virginia Gewin, writing in Nature, reports Insiders warn how dismantling federal agencies could put science at risk. A former EPA official told her: “It’s not just EPA. Science is being destroyed across many agencies.” Even worse, one former official warned: “Now they are starting to proffer misinformation and putting a government seal on it.”  

A third researcher added: “The damage to the next generation of scientists is what I worry the most about. I’ve been advising students to look for other jobs.”

It’s not just that students are looking for jobs outside of the government. Katrina Northrop and Rudy Lu write in The Washington Post about the brain drain going to China. “Over the past decade,” they say, “there has been a rush of scholars — many with some family connection to China — moving across the Pacific, drawn by Beijing’s full-throttle drive to become a scientific superpower.” They cite 50 tenure-track scholars of Chinese descent who have left U.S. universities for China. Most are in STEM fields.

“The U.S. is increasingly skeptical of science — whether it’s climate, health or other areas,” Jimmy Goodrich, an expert on Chinese science and technology at the University of California Institute on Global Conflict and Cooperation, told them. “While in China, science is being embraced as a key solution to move the country forward into the future.”

They note how four years ago the U.S. spent four times as much on R&D as China, whereas now the spending is basically even, at best.

I keep in mind the warning of Dan Wang, a research fellow at Stanford’s Hoover Institution:

Think about it this way: China is an engineering state, which treats construction projects and technological primacy as the solution to all of its problems, whereas the United States is a lawyerly society, obsessed with protecting wealth by making rules rather than producing material goods.

We’ve seen what a government of lawyers does, creating laws and regulations that protect big corporations and the ultra-rich, while making everything so complex that, voila, more lawyers are needed. Maybe it’s time to see what a government of scientists could do.

When scientists (or engineers) are in charge, we can put a man on the moon within a decade or create a pandemic vaccine in months. When lawyers are in charge we get Congresses that can’t even pass a budget.  

In The Atlantic, Katherine J. Wu discusses a new wave of scientists who are running for public office. Core to that effort is 314 Action, which claims it is “the only organization in the nation focused on recruiting, training, and electing Democrats with a background in science to public office.” Shaughnessy Naughton, the president of 314 Action, told Ms. Wu the organization had fielded 700 applications from scientists interested in becoming candidates just this year, which is seven times what it would normally expect.

Credit: 314 Action
Ms. Wu cites data from Rutgers University’s Eagleton Institute of Politics that only 3 percent of state legislators are scientists, engineers, or health-care professionals – and most of those are Republicans. 314 Action thinks it can help change that. Its website declares:

Bottom line: when candidates run on their science credentials and have the backing to get their message out there, they win. 314 Action candidates are scientists first, not politicians. We’re fighting to elect scientists who can tackle urgent shared challenges – like the climate crisis, reproductive rights, and healthcare access –  and secure a better future for us all. 

It claims to have raised some $8.6m and helped elect 400 endorsed candidates, including 4 U.S. Senators, 13 members of the U.S. House, 9 candidates for down-ballot statewide offices, and over 300 candidates at the state and municipal level. Ms. Wu reports that Hawaii’s Josh Green, the only Democratic physician currently serving in a state governorship, has partnered with 314 Action to launch a $25 million campaign to elect 100 new Democratic physicians to office by 2030.

“Politics came for us,” pediatrician Annie Andrews told Ms. Wu. “You can’t fight bad politics by staying apolitical.”

Running for office is only one way for scientists (or people who care about science) to fight back. Take Stand Up for Science, which believes in protesting loudly and proudly. Founded just this year in response to Trump Administration actions, Stand Up for Science describes itself as “a political activism organization dedicated to defending and advancing America’s scientific ecosystem, a cornerstone of democracy, freedom, and progress.”

Its mission:

We believe that science is the lifeblood of American democracy and freedom. With a bold strategy combining activism, messaging campaigns, grassroots organizing, and political advocacy, we’re mobilizing the fight for science and democracy, now and for generations to come.

SUFS was active in the No Kings protests, and is conducting an important – and amusing – effort to impeach HHS Secretary Robert Kennedy Jr. called “Impeach the Quack,” complete with toy ducks.

Founder and Chief Executive Officer Colette Delawalla, MA, MS, manages to run the organization while working on her Ph.D. (and apparently being a mom). She saw the need as soon as Trump was inaugurated. “You’ve got these legacy organizations that have simply not thought it was that important to communicate with the public in a meaningful way,” she told NOTUS. “Any of these organizations — and I know this because I’ve done it — on Jan. 21, 2025, could have stood up, in less than 24 hours, a 501(c)(4) nonprofit arm of what they’re already doing, and granted over some money, set up a little team, and gotten political.” Too few did, so she created her own organization.

Both 314 Action and Stand Up for Science deserve our support.

----------

Scientists are no angels (e.g., James Watson, William Shockley). Maybe putting them in charge isn’t the answer. But, really, could they do any worse than our current politicians? We’re quickly moving into an era of AI, quantum computing, synthetic biology, and a host of other advances, while battling climate change, microplastics, income inequality, and many other challenges. Who do you think will be best able to deal with them: lawyers, or scientists?

Monday, November 3, 2025

Life Is Geometry

In 2025, we’ve got DNA all figured out, right? It’s been over seventy years since Crick and Watson (and Franklin) discovered the double helix structure. We know that permutations of just four chemical bases (A, C, T, and G) allow the vast genetic complexity and diversity in the world. We’ve done the Human Genome Project. We can edit DNA using CRISPR. Heck, we’re even working on synthetic DNA. We’re busy finding other uses for DNA, like computing, storage, or robots. Yep, we’re on top of DNA.
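To appreciate just how much diversity four letters can encode, here’s a quick back-of-the-envelope sketch (the code and its helper names are purely illustrative, not from any of the research discussed):

```python
from itertools import product

BASES = "ACGT"

def possible_sequences(n: int) -> int:
    """How many distinct DNA sequences of length n? Exactly 4^n."""
    return len(BASES) ** n

# Even short stretches of DNA explode combinatorially:
for n in (3, 10, 100):
    print(f"length {n}: {possible_sequences(n)} possible sequences")

# All 64 three-base codons -- the 'words' the genetic code is read in:
codons = ["".join(p) for p in product(BASES, repeat=3)]
```

A sequence just 100 bases long already has more possible spellings (4^100, about 1.6 × 10^60) than there are atoms in the solar system, which is why combinations alone seemed like explanation enough – until now.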

Super-resolution imaging reveals the 3D geometry of the genome. Credit: Northwestern University

Not so fast. Researchers at Northwestern University say we’ve been missing something: a geometric code embedded in genomes that helps cells store and process information. It’s not just combinations of chemical bases that make DNA work; there is also a “geometric language” going on, one that we weren’t hearing.

Wait, what?

The research – Geometrically Encoded Positioning of Introns, Intergenic Segments, and Exons in the Human Genome – was led by Professor Vadim Backman, Sachs Family Professor of Biomedical Engineering and Medicine at Northwestern’s McCormick School of Engineering, and director of its Center for Physical Genomics and Engineering. The new research indicates, he says, that: “Rather than a predetermined script based on fixed genetic instruction sets, we humans are living, breathing computational systems that have been evolving in complexity and power for millions of years.”

The Northwestern press release elaborates:

The geometric code is the blueprint for how DNA forms nanoscale packing domains that create physical "memory nodes" — functional units that store and stabilize transcriptional states. In essence, it allows the genome to operate as a living computational system, adapting gene usage based on cellular history. These memory nodes are not random; geometry appears to have been selected over millions of years to optimize enzyme access, embedding biological computation directly into physical structure.

Somehow I don’t think Crick and Watson saw that coming, much less either Euclid or John von Neumann.

Coauthor Igal Szleifer, Christina Enroth-Cugell Professor of Biomedical Engineering at the McCormick School of Engineering, adds: “We are learning to read and write the language of cellular memories. These ‘memory nodes’ are living physical objects resembling microprocessors. They have precise rules based on their physical, chemical, and biological properties that encode cell behavior.”

“Living, breathing computational systems”? “Microprocessors”? This is DNA computing at a new level.

Electron microscopy resolves a 3D packing domain – the physical “memory node” of the human genome. Credit: Northwestern Engineering

The study suggests that evolution came about not just by finding new combinations of DNA but also from new ways to fold it, using those physical structures to store genetic information. Indeed, one of the researchers’ hypotheses is that development of the geometric code helped lead to the explosion of body types witnessed in the Cambrian Explosion, when life went from simple single-celled and multicellular organisms to a vast array of life forms.

Coauthor Kyle MacQuarrie, assistant professor of pediatrics at the Feinberg School of Medicine, points out that we shouldn’t be surprised it took this long to realize the geometric code: “We’ve spent 70 years learning to read the genetic code. Understanding this new geometric code became possible only through recent advances in globally-unique imaging, modeling, and computational science—developed right here at Northwestern.” (Nice extra plug there for Northwestern, Dr. MacQuarrie.)

Coauthor Luay Almassalha, also from the Feinberg School of Medicine, notes: “While the genetic code is much like the words in a dictionary, the newly discovered ‘geometric code’ turns words into a living language that all our cells speak. Pairing the words (genetic code) and the language (geometric code) may enable the ability to finally read and write cellular memory.”

I love the distinction between the words and the actual language. We’ve been using a dictionary and not realizing we need a phrase book.   

I recently read about, and was impressed by, something called MetaGraph, a tool developed at ETH Zurich to search DNA databases. "It's a kind of Google for DNA," as Professor Gunnar Rätsch, data scientist at the Department of Computer Science at ETH Zurich, puts it. This “DNA search engine” makes it much easier, faster, and cheaper to search for DNA sequences and compare them to other sequences. Cool as that is, the existence of the geometric code means that the ETH Zurich folks may have some additional work to do, as is true of lots of other people working with DNA.
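A toy way to see what a “DNA search engine” is doing under the hood is to index short substrings (k-mers) and intersect their hit lists – MetaGraph’s actual graph-based data structures are far more sophisticated, and the helper names and tiny “genomes” below are made up for illustration:

```python
from collections import defaultdict

def build_index(sequences, k=4):
    """Map every length-k substring (k-mer) to the set of sequences
    containing it -- roughly how tools index huge DNA collections."""
    index = defaultdict(set)
    for name, seq in sequences.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(name)
    return index

def search(index, query, k=4):
    """Return the sequences that share every k-mer of the query."""
    kmers = {query[i:i + k] for i in range(len(query) - k + 1)}
    hits = [index.get(km, set()) for km in kmers]
    return set.intersection(*hits) if hits else set()

genomes = {
    "sample_a": "ACGTACGGTCA",
    "sample_b": "TTGACCAGTAA",
}
index = build_index(genomes)
print(search(index, "ACGTAC"))  # which samples contain this fragment?
```

The index is built once, so each lookup touches only the handful of k-mers in the query rather than rescanning every genome – the same build-once, query-fast trade-off that makes searching petabytes of sequence data feasible.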

I hate to say it’s a whole new ball game, but there certainly are some important new rules.

The presence of this geometric code has implications for our health. It may not always be DNA mutations that cause problems; our DNA structures may sometimes be falling apart. Dr. Almassalha says: “Instead of a puzzle of genetic words, the geometric code lets cells build elaborate tissues, such as brains or skin. But with age, this language loses its fidelity. This decay results in neurodegeneration, cancer, or other diseases of aging.”

This opens up all sorts of new avenues for research, and, potentially, treatments. “The next step is to fully learn the engineering principles of the geometric code so we can repair dysregulated cell memories or create entirely new ones,” Professor Backman says. “Current approaches to aging try to reset cells back to a factory default state. The geometric code works differently. Cell memories are physical structures enhanced by experience. Revitalizing cells resembles restoring the clarity of a well-loved book — bringing back the stories our cells already know how to tell.”

This isn’t CRISPR. This isn’t mRNA. This is a new way of thinking about cells and our genome. This is a whole new step in computational biology, and it may be foundational in 22nd century medicine.

-------------

If you are a physics or cosmology buff, you may have heard the expression “The universe is geometry.” E.g., Einstein’s general theory of relativity indicates gravity is not a force but, rather, the result of distortions in spacetime. Similarly, whether the universe is flat (Euclidean), positively curved (spherical), or negatively curved (hyperbolic) has profound implications for the fate of the universe. In fact, some scientists believe that geometry may explain everything from the smallest particles to the universe itself.

So it pleases me to think that life itself may owe much to geometry as well.