Monday, July 31, 2023

How to Talk to a Doc

For better and for worse, our healthcare system is built around physicians. For the most part, they’re the ones we rely on for diagnoses, for prescribing medications, and for delivering care.  And, often, simply for being a comfort. 

Created by Bing

Unfortunately, in 2023, they’re still “only” human, and they’re not perfect. Despite best intentions, they sometimes miss things, make mistakes, or order ineffective or outdated care. The order of magnitude of these mistakes is not clear; one recent study estimated that 800,000 Americans suffer permanent disability or death annually.  Whatever the real number, we’d all agree it is too high.

Many, myself included, have high hopes that appropriate use of artificial intelligence (AI) might be able to help with this problem.  Two new studies offer some considerations for what it might take.

The first study, from a team of researchers led by Damon Centola, a professor at the Annenberg School for Communication at the University of Pennsylvania, looked at the impact of “structured information–sharing networks among clinicians.”  In other words, getting feedback from colleagues (which, of course, was once the premise behind group practices).

Long story short, they work, reducing diagnostic errors and improving treatment recommendations. 

Clinicians were given a case study and asked for their diagnosis and treatment recommendations. Those not in the control group got to see the diagnostic decisions of their peers (on an anonymous basis), and were, on average, twice as accurate as those making the decisions on their own.

Study co-author Elaine Khoong of UCSF says, “We are increasingly recognizing that clinical decision-making should be viewed as a team effort that includes multiple clinicians and the patient as well.”  The researchers made sure that the structured network included clinicians of various ages, specialties, expertise, and geographical locations, trying to ensure that it was not simply a top-down, hierarchical network.  

Professor Centola believes: “egalitarian online networks increase the diversity of voices influencing clinical decisions. As a result, we found that decision-making improves across the board for a wide variety of specialties.” Best of all, he notes:

The big risk with these information-sharing networks is that while some doctors may improve, there could be an averaging effect that would lead better doctors to make worse decisions. But, that’s not what happens. Instead of regressing to the mean, there is consistent improvement: The worst clinicians get better, while the best do not get worse.

The researchers think this approach could be easily adopted, building on existing e-consult technologies: “We anticipate, for instance, that instead of sending clinical cases to a single specialist, clinicians may instead submit cases to a network of specialists who participate in a structured information exchange process before providing a recommendation to the referring clinician.” Professor Centola points out that, while the networks need to be structured thoughtfully, they don’t have to be huge; in fact, 40 is ideal.  “The increasing returns above that - going, say, from 40 to 4,000 - are minimal,” he says.
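A structured exchange like this can be sketched in a few lines. To be clear, the plurality rule below is a hypothetical stand-in for illustration only; the study’s actual process was iterative, with clinicians revising their own answers after seeing anonymous peer decisions, not a simple vote.

```python
# Hypothetical sketch of pooling recommendations from a structured
# network of clinicians. The plurality rule is illustrative only;
# it is not the study's actual exchange protocol.
from collections import Counter

def network_recommendation(recommendations: list[str]) -> str:
    """Return the plurality recommendation from the network."""
    return Counter(recommendations).most_common(1)[0][0]

# A small anonymous network of peer recommendations:
peers = ["drug A", "drug B", "drug A", "drug A", "drug C"]
print(network_recommendation(peers))  # prints "drug A"
```

Even this crude version shows why the network’s size matters less than its diversity: once the pool is large enough for the plurality to be stable, adding more voices changes little.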

It's worth pointing out that the anonymous clinicians in the structured networks were, in this case, human; an interesting follow-up would be to see what happens when some or even all of the recommendations come from AI. 

Which leads to the second study, from a team of researchers from MIT and Harvard, which looked at what happens when radiologists get assistance from AI.  Long story short: not much. 

As study co-author Pranav Rajpurkar said in a lengthy Twitter thread: “Why? Radiologists implicitly discount AI predictions, favoring their own judgment - a bias we call ‘automation neglect.’”

The “automation neglect” comes from radiologists discounting the AI probabilities by around 30% relative to their own assessments. The radiologists also tended to view their recommendations and the AI predictions as independent, when, in fact, they are based on the same data.
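To make that 30% figure concrete, here is a toy model of how discounting shifts a combined estimate. The blending rule and the example numbers are my own hypothetical illustration, not the paper’s estimation model; only the ~30% discount comes from the study.

```python
# Toy model of "automation neglect": a clinician blends their own
# probability with the AI's, but underweights the AI's contribution.
# The blending rule and example numbers are hypothetical; only the
# ~30% discount figure comes from the study.

def combined_estimate(own_prob: float, ai_prob: float,
                      ai_weight: float = 0.5, discount: float = 0.30) -> float:
    """Weighted blend of the clinician's and AI's probabilities,
    with the AI's weight reduced by `discount`."""
    w_ai = ai_weight * (1 - discount)
    return (1 - w_ai) * own_prob + w_ai * ai_prob

# Clinician says 20%, AI says 80%:
with_neglect = combined_estimate(0.2, 0.8)                   # 0.41
without_neglect = combined_estimate(0.2, 0.8, discount=0.0)  # 0.50
```

The discounted estimate stays closer to the clinician’s own view, which is exactly the bias the researchers observed.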

The paper found: “We find that AI assistance does not improve human’s diagnostic quality even though the AI predictions are more accurate than almost two-thirds of the participants in our experiment.” To make things worse, “radiologists are slower when provided with AI assistance.” 

Slower but not more accurate is not a winning combination, and definitely not what we might have expected.

Created by Bing

Complicating things, the results were heterogeneous: “AI assistance improves performance for a large set of cases, but also decreases performances in many instances.”  The more “confident” the AI prediction was, the more it helped improve quality. But when the AI was less confident, radiologists’ performance also suffered.

The researchers are forced to conclude: “Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI.”  Professor Rajpurkar notes: “While AI holds promise, thoughtfully accounting for how humans actually use AI is critical. Our work provides concrete evidence on biases and costs that should inform system design.”

An open question the researchers posit is “whether the benefits from AI-specific training for radiologists and/or experience with AI are large.”  I.e., can humans learn to work better with AI? 

Given the results of the first study, I’d have been interested to see what would have happened if the second study had also tested getting recommendations not from AI but from a structured network of human physicians; did the radiologists discount just AI recommendations, or do they just not trust external recommendations generally? 

At the risk of giving it short shrift, a third study, from Fabrizio Dell’Acqua at the Harvard Business School, suggests that when AI is too good, humans tend to “fall asleep at the wheel,”  leading him to conclude: “maximizing human/AI performance may require lower quality AI, depending on the effort, learning, and skillset of the humans involved.”  There is a lot about human/AI interaction we do not yet understand.

-----------------

We’ve long looked at medicine as an “art,” allowing and even encouraging individual physicians to use their best judgment.  That has led to well-documented variability in care and outcomes, much of which is not in patients’ best interests.  There’s too much for physicians to know, there are too many extraneous factors influencing their decisions, and, at this point, there’s way too much money at stake.  They need help.

In 2023, clinical decision-making should be, as Professor Khoong noted, a team effort.  We have the ability now for that team to be human “egalitarian online networks,” as Professor Centola and his colleagues urge, and we increasingly will have the ability for such networks to include, or be replaced by, AI.  One way or the other, we need to “thoughtfully account” for how and when physicians use them.

Monday, July 24, 2023

Healthcare, Disagree Better

On one of the Sunday morning news programs, Governors Spencer Cox (UT) and Jared Polis (CO) promoted the National Governors Association initiative Disagree Better. The initiative urges that we practice more civility in our increasingly uncivil political discourse. It’s hard to argue the point (although one can question why NGA thinks two almost indistinguishable, middle-aged white men should be the faces of the effort), but I found myself thinking, hmm, we really need to do that in healthcare too.

Credit: Mohammed Hassan/Pixabay

No one seems happy with the U.S. healthcare system, and no one seems to have any real ideas about how to change that, so we spend a lot of time pointing fingers and deciding that certain parties are the “enemy.”  That might create convenient scapegoats and make good headlines, but it doesn’t do much to solve the very real problems that our healthcare system has. We need to figure out how to disagree better.

I’ll go through three cases in point:

Health Insurers versus Providers of Care

On one side, there are the health care professionals, institutions, and organizations involved in delivering care to patients; on the other, there are the health insurers that pay them.  Each side thinks the other is, essentially, trying to cheat it.

For example, prior authorizations have long been a source of complaint, with new reports about their overuse in Medicaid, Medicare Advantage, and commercial insurance.  Claim denials seem equally arbitrary and excessive.  Health insurers argue that such efforts are necessary to counter constantly rising costs and well-documented, widespread unnecessary care.

Both sides think the other is making too much money and has become too concentrated.

This is a pointless battle, one that mistakes the symptoms for the problem.  The problem is that we know too little about what care is appropriate for which patients, in what settings, by which professionals, much less about who in the system is incompetent or overly avaricious. Solving that is perhaps the most important thing everyone in the healthcare system should be focused on.  If we knew those things, irritants like prior authorizations or claim denials would cease to be issues, not to mention that patients would get better care.

Disagreeing better would mean that both sides stop blaming each other and start addressing the underlying problem.

Credit: kahll/Pixabay

Prescription Drug Prices

It’s no secret that the U.S. has long had the world’s highest prices for prescription drugs.  Pharmaceutical companies claim they need those prices to fund innovation, and to subsidize the discounted prices in the rest of the world. They’ve played tricks like extending patent protections, even on drugs like insulin that are decades old. Those tricks led the Lown Institute to create the Shkreli Awards, highlighting the year’s “most egregious examples of profiteering and dysfunction in health care,” naming the award after the disgraced pharmaceutical executive.

In 2022, Congress finally got around to allowing Medicare to negotiate drug prices – for a small number of drugs – and the drug companies are responding as one might expect, “throwing the kitchen sink” into their efforts to slow or negate such negotiations.  Negotiation would stifle innovation and take away their Constitutional rights, they argue.

We were all (well, most of us) happy when drug companies quickly developed COVID vaccines, but it took $32b of federal spending to accomplish that, and, speaking of greed, we’re seeing problematic shortages of critical drugs because generic drugs aren’t as profitable for the drug companies.

The reality is that Medicare is pretty much the only major health insurance program (public or private) that hasn’t negotiated prices, and it’s shortsighted to expect that could be allowed to persist indefinitely.  Meanwhile, we’re seeing drugs whose prices are in the millions of dollars range, and less than half of new drugs approved appear to be of substantial therapeutic value over existing treatments.  This is a state of affairs that cannot persist.

Some drug companies bit the bullet when Medicare capped insulin prices, applying the $35 out-of-pocket limit more broadly, and the pharmaceutical industry needs to be thinking similarly about its public image – and the public good – when it comes to the forthcoming Medicare negotiations.

Disagreeing better would mean acknowledging that there’s “reasonable rate of return” pricing and there’s price gouging, so let’s find that line.

Abortion

This is perhaps the best example of disagreeing badly, and deserves an entire article devoted to it, so I’ll have to try to make my points succinctly.  Look, I get that, for some people, the belief that life begins at conception is a moral or religious one that cannot be argued.  It’s like the 19th century abolitionists believing slavery was wrong; thank goodness now for their stubbornness then against the tides of society at that time. 

But pro-life advocates need to recognize that not all religions or moral frameworks agree with theirs, and in America one religion is not supposed to dictate to others. It’s also hard to understand a religious or moral point of view that values the life of an unborn child above the life of the mother, as some bans essentially do.

It should raise eyebrows that one consequence of abortion bans has been an increase in infant deaths. If we cared so much about life, our maternal and infant mortality rates would be much better.  We’d also do much better at postpartum care, including ensuring Medicaid and other coverage, and would ensure adoption and foster care are viable alternatives.  And we sure as hell would not be complacent about 11 million children living in poverty. It’s hard to see what religious or moral principles wouldn’t inspire as much fervor about these problems as about abortion.

Disagreeing better would mean understanding that trying to impose our beliefs on others requires at least acknowledging their views, and recognizing that preventing abortions creates consequences that moral people cannot ignore.

Credit: Planned Parenthood

-------------

Politics impacts all of our lives, but often does so at a distance that many of us don’t easily recognize.  Health care, though, impacts most of us directly and visibly, both in our health and in our pocketbook.  Cynical as I can be, I still believe that most people in healthcare are trying to do the right thing, although sometimes they get confused about what that may be.

We’ve got to stop trying to find enemies in healthcare and start making allies, so that we can solve its problems.  Disagreeing better is a way to start.

Monday, July 17, 2023

No, the Poor Don't Always Have to Be with Us

OK, for you amateur (or professional) epidemiologists among us: what are the leading causes of death in the U.S.?  Let’s see, most of us would probably cite heart disease and cancer.  After that, we might guess smoking, obesity, or, in recent years, COVID.  But a new study has a surprising contender: poverty.   

Illustration by Luis G. Rendon/The Daily Beast

It’s the kind of thing you might expect to find in developing countries, not in the world’s leading economy, the most prosperous country in the world. But amidst all that prosperity, the U.S. has the highest rates of poverty among developed countries, which accounts in no small part for our miserable health outcomes.  The new data on poverty’s mortality should come as no surprise.

The study, by University of California Riverside professor David Brady, along with Professors Ulrich Kohler and Hui Zheng, estimated that persistent poverty – 10 consecutive years of uninterrupted poverty – was the fourth leading cause of death, accounting for some 295,000 deaths (in 2019). Even a single year of poverty was deadly, accounting for 183,000 deaths. 

“Poverty kills as much as dementia, accidents, stroke, Alzheimer's, and diabetes,” said Professor Brady. “Poverty silently killed 10 times as many people as all the homicides in 2019. And yet, homicide, firearms, and suicide get vastly more attention.”

The study found that people living in poverty didn’t start showing increased mortality until their 40s, when the cumulative effects start catching up.  The authors note that these effects are not evenly distributed: “Because certain ethnic and racial minority groups are far more likely to be in poverty, our estimates can improve understanding of ethnic and racial inequalities in life expectancy.”

“We just let all these people die from poverty each year,” Dr. Brady told Oshan Jarow of Vox. “What motivated me to think about it in comparison to homicide or other causes of death in America is that people would have to agree that poverty is important if it’s actually associated with anywhere near this quantity of death.” 

Professor Brady believes: “We need a whole new scientific agenda on poverty and mortality.” 

Indeed, there is already an “anti-poverty medicine” movement, founded by Lucy E. Marcil, MD, MPH.  “I started this work about a decade ago,” Dr. Marcil told Mr. Jarow. “At the time, there was a lot of confusion when I would say that I try to get more people tax credits because it helps their health. Now it’s pretty well established at most major academic medical centers that trying to alleviate economic inequities is an important part of trying to promote health.”

She further explained: “anti-poverty medicine is one step further upstream to the root cause. Social determinants of health are important, but getting someone access to a food pantry doesn’t really address why they’re hungry in the first place.”

But let’s be clear: while the healthcare system needs to recognize the burdens of poverty, poverty is not a problem that the healthcare system is going to solve.  “No country in the history of capitalist democracies has ever accomplished sustainably low poverty without an above-average welfare state,” Dr. Brady told Mr. Jarow. “And so until you get serious about expanding the welfare state in all its forms, you’re not serious about reducing poverty.”

“Welfare state” is not a term that goes over well in today’s political environment. The right wing despises it (and the people who might need it), and the left wing is struggling to make the case that “progressive” is not a four letter word.  Oh, we spend a lot of money subsidizing people, but most of it doesn’t go to poor people. 

We overwhelmingly support the federal dollars spent on Social Security and Medicare, even though neither is means-tested, but fewer recognize that things like the tax preference for employer health insurance and the tax deduction for home mortgage interest are hugely expensive – and go primarily to middle and upper income people.

We’d rather subsidize someone’s second or even third home than ensure poor people have adequate housing or enough food.  

Even the money we supposedly target for poor people doesn’t usually get to them. Dr. Marcil estimates only one-third of those eligible successfully navigate the bureaucratic gauntlet to claim the benefits. “In my experience,” Dr. Marcil said, “most social policies are written in ways that make it challenging for those who have been historically marginalized to access them.”

Similarly, in his revelatory book Poverty, By America, sociologist Matthew Desmond points out that only a quarter of the families who qualify for Temporary Assistance for Needy Families (TANF) even apply for it, and, even worse, only 22% of the money budgeted for TANF actually went directly to poor families.

The central point of Professor Desmond’s book is that we accept poverty in America because we – the non-poor -- benefit from it. We like our tax preferences over directly helping poor people. We like to buy cheaper goods made possible because many employers don’t pay their employees a living wage. We don’t want affordable housing in our neighborhood because we fear it will hurt our property values. We don’t care if public school systems deteriorate as long as we can send our kids to private schools or move to even higher-income neighborhoods. And so on. It’s all about us.

As he quotes Tolstoy, “It is really so simple. If I want to aid the poor, that is, to help the poor not be poor, I ought to not make them poor.”

“If we had to boil it down to a single concept, we might just say that without poverty, we’d be more free,” Professor Desmond writes. “Why? Because poverty anywhere is a threat to prosperity everywhere.”

He estimates that a measly $177b annually could help end poverty, and that it could be funded simply by letting the IRS go after tax avoiders.  Yet the right wing hates the IRS and has worked for years to ensure it can’t fulfill its mission of collecting the taxes that are owed.

Professor Desmond coins the term “poverty abolition” and urges that we all become poverty abolitionists.  We give due credit to the 19th century abolitionists for helping bring about the end of slavery (even at the cost of a civil war), but somehow have relegated that kind of passion to history. But the 40 million Americans who live in poverty deserve better, as do the families of those 300,000 poor people who die every year. We may never cure cancer but we can end poverty.

Poverty, Professor Desmond reminds us, is a policy choice. If the poor are, as the saying goes, always with us, it is because we choose it.   

Monday, July 10, 2023

The Heat Is On

Attention must be paid: the world is now hotter than it has been in 125,000 years.

Credit: University of Maine Climate Reanalyzer

A week ago, we broke the record for average global temperature. That record was broken the next day.  Later in the week it was broken yet again.  Yeah, I know; weather records are broken all the time, so what’s the big deal? 

Well, it is a big deal, and we should all be worried. “It’s not a record to celebrate and it won’t be a record for long,” Friederike Otto, senior lecturer in climate science at the Grantham Institute for Climate Change and the Environment, told CNN. 

Bill McGuire, a professor at University College London, tweeted: “The global temperature record smashed again yesterday. The first four days of the week were the hottest recorded for Planet Earth. I would say welcome to the future – except the future will be much hotter.”

"Expect many more hottest days in the future," agrees Saleemul Huq, director of Bangladesh's International Centre for Climate Change and Development.

Some will shrug and say we’ll just have to get used to it, but tell that to the 61,000 people who died in Europe’s heat wave last summer, according to a new study.  Sixty-one thousand people dying of heat, in developed countries, in the 21st century.  And it’s going to get worse.

“In an ideal society, nobody should die because of heat,” Joan Ballester, a research professor at the Barcelona Institute for Global Health and the study’s lead author, told The New York Times.  Guess what: none of us are living in ideal societies.

Credit: HealthDay
Skeptics are quibbling about the 125,000 year estimate, but scientists are holding firm. “These data tell us that it hasn’t been this warm since at least 125,000 years ago, which was the previous interglacial,” Paulo Ceppi, also at the Grantham Institute, told The Washington Post.  Even if you don’t believe the data supporting the 125,000 figure, Peter Thorne, a professor at Maynooth University, also told The Post: “I’m pretty damn certain it’s the warmest day in the last 2,023 years.” 

And if you don’t accept any estimates and want to look at only recorded data, Princeton University climate scientist Gabriel Vecchi told AP: “The fact that we haven’t had a year colder than the 20th century average since the Ford administration (1976) is much more relevant.”

“It’s so far out of line of what’s been observed that it’s hard to wrap your head around,” Brian McNoldy, a senior research scientist at the University of Miami, told The New York Times. “It doesn’t seem real.”

But it is.  And to make things worse, it is not just the atmosphere that is warming; the oceans are as well.  Professor Chris Hewitt, director of climate services at the World Meteorological Organization, warns:

Global sea surface temperatures were at record high for the time of the year both in May and June. This comes with a cost. It will impact fisheries distribution and the ocean circulation in general, with knock-on effects on the climate. It is not only the surface temperature, but the whole ocean is becoming warmer and absorbing energy that will remain there for hundreds of years. Alarm bells are ringing especially loudly because of the unprecedented sea surface temperatures in the North Atlantic.

“We are in uncharted territory,” Professor Hewitt says. “This is worrying news for the planet.”



In the U.S., much of the South and Southwest is sitting under a “heat dome” with persistent record highs; Phoenix has had 10 consecutive days of 110+ degrees (F), with more to come. Even Canada is experiencing 100 degree temperatures, exacerbating the wildfires that have plagued not only there but much of the U.S.  Meanwhile, the Northeast is suffering from devastating flooding. Global warming isn’t just about heat, but about how that heat affects global weather patterns.   

Woods Hole Oceanographic Institution biogeochemist Jens Terhaar says:

While it is comforting to see that the models work, it is terrifying, of course, to see climate change happening in real life. We are in it and it is just the beginning…This wouldn't have happened without climate change, we are in a new climate state, extremes are the new normal.

“The issue of climate change doesn’t often get its 15 minutes of fame,” said George Mason University climate communications professor Ed Maibach. “Feeling the heat — and breathing the wildfire smoke, as so many of us in the Eastern U.S. and Canada have been doing for the past month — is a tangible shared public experience that can be used to focus the public conversation.”

One can only hope.

-----------

It’s all about carbon dioxide levels, of course. They’ve been increasing ever since the industrial revolution, and have skyrocketed in recent years, reaching levels the Earth hasn’t seen in millions of years.  Scientists, although not all politicians or most Americans, believe that human activity is causing the climate change, primarily through burning of fossil fuels.  

Credit: climate.nasa.gov/Earth.org

Skeptics say, oh, the climate always changes – no reason to think humans are causing it. Or they say, OK, the U.S. will start curtailing carbon emissions when countries like China or India do.  Those objections miss the point: whether or not humans are causing the levels to rise, such increased levels have been directly tied to several mass extinction events.  We might survive this particular heat wave or those wildfires or even some Saharan sand clouds, but if we don’t act, our descendants will find an uninhabitable Earth.

There are things we can do. “It just shows we have to stop burning fossil fuels—not in decades, now,” Professor Otto told CNN.  Professor Ceppi warns: “Looking to the future, we can expect global warming to continue and hence temperature records to be broken increasingly frequently, unless we rapidly act to reduce greenhouse gas emissions to net zero.”

Myles Allen, a professor of geosystem science at Oxford University, told WaPo: “The solution to the problem is actually rather simple.  Capturing carbon dioxide, either where it is generated or recapturing it from the atmosphere and disposing of it back underground. If we did this, we would definitely use much less fossil fuels.”

As climate scientist Katharine Hayhoe has said: "It's true some impacts are already here. Others are unavoidable. But my research, and that of hundreds of other scientists, clearly shows that our choices matter. It is not too late to avoid the worst impacts."

I’m not a climate scientist. I’m not an expert on carbon emissions or their effects. I can’t “prove” global warming or propose solutions.  But I do know this: these are not normal times, and we can’t do nothing.

Monday, July 3, 2023

The Business Reality of Healthcare AI

I was at the barbershop the other day and overheard one barber talking with his senior citizen customer about when – not if – robot AIs would become barbers. I kid you not.

Even in AI, follow the money Credit: 3M Inside Angle

Now, I don’t usually expect to hear conversations about technology at the barbershop, but it illustrates that I think we are at the point with AI that we were with the Internet in the late ‘90s/early ‘00s: people’s lives were just starting to change because of it, new companies were jumping in with ideas about how to use it, and existing companies knew they were going to have to figure out ways to incorporate it if they wanted to survive. Lots of missteps and false starts, but clearly a tidal wave that could only be ignored at one’s own risk. So now it is with AI.

I’ve been pleased that healthcare has been paying attention, probably sooner than it acknowledged the Internet. Every day, it seems, there are new developments about how various kinds of AI are showing usefulness/potential usefulness in healthcare, in a wide variety of ways.  There are lots of informed discussions about how it will be best used and where the limits will be, but as a long-time observer of our healthcare system, I think we’re not talking enough about two crucial questions. Namely:

  • Who will get paid?
  • Who will get sued?

Now, let me clarify that these questions are clearer in some cases than in others.  E.g., when AI assists in drug discovery, pharma can produce more drugs and make more money; when it assists health insurers with claims processing or prior authorizations, the administrative savings go straight to the bottom line. No, the tricky part is using AI in actual health care delivery, such as in a doctor’s office or a hospital.

AI in the doctor's office. Credit: Bruno Mangyoku/New Scientist

Payment

There has been some cautious optimism that AI can help with diagnosis and suggested treatments. It can analyze more data, read and understand more studies, and apply more uniform logic in making such decisions. It has shown its value, for example, in diagnosing dementia, heart attacks, lung cancer, and pancreatic cancer. Earlier and more accurate diagnoses should lead to better outcomes for patients. 

The trouble is, in our health care system, no one gets paid – at least, to any great extent -- for better outcomes or even for earlier diagnoses. Arguably, if those result in less care, some health care professional or institution is going to get less money.  Like it or not, when it comes to payment, our healthcare system is built around doing more, not doing better.

Well, maybe those quicker, more accurate diagnoses will lead to physicians being able to see more patients, increasing their throughput and thus revenue.  Again, though, no one that I know of is advocating that doctors see more patients; there’s pretty widespread agreement that doctors already see too many patients, which has adversely impacted the doctor-patient relationship.

So if a physician or health care organization evaluating how to apply AI does a cost/benefit analysis, it’s a little hard to see where the economic benefit comes in.

Well, wait; what about helping physicians with all the paperwork, all that “pajama time” they spend on administrative tasks?  Well, yes, there is some evidence that AI can help with this, but again, as Rod Tidwell told Jerry Maguire, show me the money.  Giving physicians back some of their personal time might help reduce burnout and improve their quality of life – both laudable goals – but that doesn’t directly lead to more revenue.  A good use of AI, but who is getting paid by implementing it? 

Payment will really become an issue when – as with barbers, not “if” – AIs start seeing patients directly. A single instance of an AI could see thousands, perhaps millions of patients simultaneously, delivering those earlier, more accurate diagnoses.  Perhaps they’ll just triage, but it will radically change the health care landscape.  But who will get paid for those visits, and how much?

Would the AI itself get the payment (which leads to a whole rabbit hole of personhood and licensure questions), the (presumably) healthcare organization that deployed it, or even the AI developer? In any event, if we base AI payment on what a human doctor might receive, we’d be grossly overpaying; at best the “costs” are marginal costs for an almost infinitesimal amount of the AI’s time.
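A back-of-the-envelope calculation shows the gap. Every number below is invented purely to illustrate the marginal-cost point:

```python
# Hypothetical amortized cost of an AI "visit": fixed deployment cost
# spread across all visits, plus marginal compute per visit.
# Every number here is invented for illustration.

def ai_cost_per_visit(fixed_cost: float, marginal_cost: float, visits: int) -> float:
    """Fixed cost amortized over all visits, plus per-visit compute."""
    return fixed_cost / visits + marginal_cost

# Say $10M to build and deploy, $0.05 of compute per visit, 5M visits/year:
cost = ai_cost_per_visit(10_000_000, 0.05, 5_000_000)  # $2.05 per visit
# Paying that AI a typical human office-visit fee would be a huge markup.
```

However rough the inputs, the shape of the answer is the same: the per-visit cost collapses toward the marginal compute cost as volume grows, which is why human-style fee schedules make no sense here.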

For all those reasons and more, we’ll need a new paradigm for payment.

Liability

Let’s concede right away that our current liability system in healthcare is terrible. It doesn’t identify most errors or incompetence, doesn’t reward most patients injured by the care they receive, doesn’t punish most of the healthcare professionals and institutions giving harmful care, and probably over-rewards some/many of the few patients it does help. Now throw AI into that mix.

As long as human doctors retain final say about care, even if assisted by AI, they’re probably going to be stuck with any resulting liability.  That quickly will become problematic as their ability to understand why an AI makes a recommendation becomes harder (the infamous “black box” problem).

They will quickly seek to push the liability onto the AI developers, much as they might for other software or for medical equipment, but that line will be hard to draw as the AI “learns” from its instantiation in a particular healthcare practice or organization.  Neither that organization nor the AI developer is going to be keen to accept the liability.

In the world I ultimately expect, where AI acts on its own, at least to some extent, one would expect the AI to bear liability for its actions, but that presumes the AI has assets and is an entity that can be sued, neither of which is likely to be true anytime soon.

So, if anything, as it stands AI is likely to further muddy an already muddled healthcare liability system. Boy, that should speed adoption, right? 

For all those reasons and more, we’ll need a new paradigm for liability.

Credit: Hiroshi Watanabe/Getty Images

----------

Healthcare is supposed to be about caring for people, making their lives better by improving their health (or, at least, reducing their suffering). Most healthcare professionals and institutions pay at least lip service to this, but the hard truth of it is that, especially in the U.S., healthcare is a business.  As such, AI is going to face slow going in healthcare until we grapple with key business issues like payment and liability.

AI is going to be ready for healthcare long before healthcare is going to be ready for AI.