Like it or not, ready for it or not, we appear to be living in the age of A.I. Anyone who isn’t worried about its impact on their job isn’t paying attention, and anyone who isn’t thinking about it for their portfolio is courting FOMO. If it weren’t for all the A.I. spending – both on its development and on the data centers that support it – we probably wouldn’t be seeing big stock market gains and might even be in a recession. The Wall Street Journal reports: “Growth has become so dependent on AI-related investment and wealth that if the boom turns to bust, it could take the broader economy with it.”
*AI is going to have to get past the accountants and actuaries. Credit: Microsoft Designer*
So it is kind of ironic that a revolution caused by the computer nerds could be facing big headwinds caused by the green eyeshade kind of nerds, like accountants and actuaries.
Let’s start with the accounting. Many questions have been raised about how circular some of the AI investments seem. Microsoft invests in OpenAI, which then buys cloud computing from Microsoft. Same with Oracle. NVIDIA invests in OpenAI, which then buys lots of NVIDIA chips and causes others to do the same. And the money goes round and round.
Sam Altman, CEO of OpenAI, recently said: “There is always a lot of focus on technological innovation. What really drives a lot of progress is when people also figure out how to innovate on the financial model.” Unfortunately, some of those innovations are starting to look like innovations Enron might have come up with.
Jonathan Weil of The Wall Street Journal analyzed how Meta was financing the building of a new $25b data center without it appearing on its balance sheet, and concluded: “The favorable accounting outcome hinges on some convenient assumptions. Some appear implausible, while others are in tension with one another, making the off-balance-sheet treatment look questionable.”

Mr. Weil’s verdict: “Artificial intelligence, meet artificial accounting.”
Then there is insurance. AI benefits carry AI liabilities. Consider AI toys. The U.S. Public Interest Research Group Education Fund (PIRG) issued its 40th Trouble in Toyland report, and one of the areas of focus was A.I. chatbots that interact with children. It’s scary: “We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls.” Privacy is also a concern.
An advisory from Fairplay was also blunt: “AI Toys Are NOT Safe for Kids.”
When AI products might tell kids where to find knives or engage in sexually explicit talk, you can imagine that liability concerns come right to mind for actuaries and CFOs. That’s why the Financial Times is reporting that insurers want no part of AI exposure. Lee Harris and Cristina Criddle found:
Major insurers are seeking to exclude artificial intelligence risks from corporate policies, as companies face multibillion-dollar claims that could emerge from the fast-developing technology.
AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools including chatbots and agents.
Dennis Bertram, head of cyber insurance for Europe at Mosaic, told them: “It’s too much of a black box.” Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing start-up, added: “Nobody knows who’s liable if things go wrong.”
*The "black box" nature of AI is a liability problem. Credit: Microsoft Designer*
We’ve already seen some AI-related losses. The Tech Buzz details:
Google's AI Overview falsely accused a solar company of legal troubles earlier this year, triggering a $110 million lawsuit. Air Canada got stuck honoring a discount its chatbot completely invented after a customer took the airline to small claims court. Most dramatically, fraudsters used a digitally cloned executive to steal $25 million from London engineering firm Arup during what appeared to be a legitimate video conference.
In addition to outright exclusions, insurers are adding amendments that limit liability to certain types of risks or certain amounts of payouts. Aon’s head of cyber, Kevin Kalinich, told the FT that the industry could accept some AI-related losses, but: “What they can’t afford is if an AI provider makes a mistake that ends up as a 1,000 or 10,000 losses — a systemic, correlated, aggregated risk.”
One way or another, the experts told the FT, there will be some AI-related losses, and some of them will end up in court. Aaron Le Marquer, head of the insurance disputes team at law firm Stewarts, told the FT: “It will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event.”
This isn’t unexpected. New technologies bring new benefits, and new risks, and it takes time to factor in both. “When we think about car insurance, for example, the broad adoption of the safety belt was really something which was driven by the demands of insurance,” Michael von Gablenz, who heads the AI insurance division of Munich Re, told NBC News. “When we’re looking at past technologies and their journey, insurance has played a major role in that, and I believe insurance can play the same role for AI.”
NBC News also cites a survey from the Geneva Association indicating that 90% of businesses want insurance protection against generative AI losses, and an Ernst & Young report that found 99% of the 975 firms it surveyed had suffered financial losses from AI-related risks – two-thirds of them more than $1 million.
Clearly there is a need for insurance against AI-related losses, and certainly there is an emerging market, but actuaries need data and predictability, and both of those are in short supply at the moment. Still, the potential is huge. Deloitte predicts AI insurance could be a $4.8b market by 2032, which seems remarkably low.
Martin Anderson, writing in Unite.AI, suggests we may need some sort of federal backstop, such as what happened with the nuclear industry or for vaccine development. “However, history suggests that forcing AI companies to insure themselves, devoid of government aid, is not the likely path ahead,” he says. On the other hand, “Those that object to the possibility of AI obtaining the same ‘bailout’ status as banks, are not likely to embrace heavily government-backed solutions to the insurance quandaries around AI.”
Insurance quandaries abound.
----------
In one sense, it may frustrate AI advocates that mundane things like accounting and insurance might slow down A.I.'s progress. On the other hand, the fact that they can is a sign that the technology is truly becoming mainstream. So give the green eye-shade guys a break while they figure this out.