Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare. With the WHO releasing its “first global report” on A.I. – Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.
My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t.
“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement. He’s right on both counts.
WHO’s proposed six principles are:
- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
All valid points, but, as we’re already learning, easier to propose than to ensure. Just ask Timnit Gebru. When it comes to using new technologies, we’re not very good at thinking through their implications, much less at ensuring that everyone benefits. We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future A.I. overlords don’t laugh too much about that.
As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins?...we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.” She further notes that “we haven’t even established what those benefits should be.”
Ms. Higginbotham isn’t specifically talking about healthcare, but she could be. We can’t really agree on what a healthcare system should and shouldn’t do, much less one augmented by A.I. It’s no wonder that our first generations of A.I. in healthcare are confused.
The example that I’ve been using for years is that we can’t even agree on how human physicians seeing patients in other states via telehealth should be licensed/regulated, so how are we going to decide how a cloud-based healthcare A.I. should be?
[Image: The FDA is paying attention]
It gets worse.
Christopher Mims just wrote about how A.I. is moving from the cloud to edge devices (like your phone or home appliances). Edge computing is going to be a big part of our future, including healthcare, but, as computer science professor Elisa Bertino pointed out to him, how can anyone certify/regulate A.I. that is evolving on its own, in the real world? It won’t necessarily resemble the A.I. it started out as; it’s going to depend on the data/inputs it receives.
Mr. Mims also warns: “Modern AI, which is primarily used to recognize patterns, can have difficulty coping with inputs outside of the data it was trained on.” Oh, boy – it’s going to run into a lot of that with health care. People are messy, so to speak, and a lot of that mess impacts their health. A.I. had better be ready to deal with it.
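To make that concrete, here’s a toy sketch (my own illustration, not from Mims’s piece) of one crude way a system can notice that an input looks nothing like its training data and defer to a human instead of guessing. Everything in it (the synthetic data, the nearest-neighbor heuristic, the threshold) is an assumption for illustration only:

```python
# Toy out-of-distribution check: if a new "patient" sits much farther
# from the training data than training points sit from each other,
# don't trust the model's prediction; route the case to a clinician.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training patients": 500 points in a 4-feature space.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

def mean_knn_distance(x, X, k=10):
    """Mean distance from x to its k nearest points in X."""
    dists = np.linalg.norm(X - x, axis=1)
    return np.sort(dists)[:k].mean()

# Baseline: how far training points typically are from their own
# nearest neighbors (skipping index 0, each point's distance to itself).
baseline = np.mean(
    [np.sort(np.linalg.norm(X_train - p, axis=1))[1:11].mean()
     for p in X_train[:50]]
)

def out_of_distribution(x, factor=3.0):
    """Flag x when it is unusually far from the training data."""
    return mean_knn_distance(x, X_train) > factor * baseline

print(out_of_distribution(np.zeros(4)))       # False: looks like training data
print(out_of_distribution(np.full(4, 10.0)))  # True: nothing like training data
```

Real systems use far more sophisticated detectors, but the design point stands: an A.I. that can say “this case is outside my experience” is safer than one that confidently pattern-matches everything.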
A.I. is going to evolve much more rapidly than other healthcare technologies, and our existing regulatory practices may not be sufficient, especially in a global market (as we’ve seen with CRISPR). Not to be facetious, but we may need A.I. regulators to oversee A.I. clinicians/clinical support, just as we may need A.I. lawyers to handle the inevitable A.I.-related malpractice suits. Only another black box may be able to understand what a black box is doing.
I worry that we’re thinking about how we can use A.I. to make our healthcare system do more of the same, just better. I think that’s the wrong approach. We should be going back to first principles: what do we want from our healthcare system? And then, how can A.I. help get us there?
For example, we should want everyone to have access to affordable health care – when they need it, where they prefer it. That health care should be tailored to the individual, including their genetics, environment, and socio-economic status, and should be based on solid evidence. That all sounds like a list of the usual platitudes, but none of it is currently true. How can A.I. help make it true, or, at least, truer?
If A.I. for healthcare ends up as just a better Siri or a new decision support tool in an EHR, we’ve failed. If we’re setting the bar for A.I. at merely supporting clinicians, or even replicating physicians’ current functions, we’ve failed. We should be expecting much more.
For instance: how can we use A.I. to democratize health care, putting advice and even treatment directly in people’s hands? How can we use it to make health care much more affordable? How can A.I. help diagnose issues sooner and deliver recommendations faster and more accurately?
In short, how can A.I. help us reorient health care away from the system that delivers it, and the people who work in it, and toward our health? If that means making some of those irrelevant, or at least greatly redefining their roles, so be it.
Right now, much A.I. work in healthcare seems to be focused primarily on granular problems, such as diagnosing specific diseases. That’s understandable, as data is most comparable/available around granular tools (e.g., imaging) or conditions (e.g., breast cancer). But our health is usually not confined within service lines. We need more macro A.I. approaches.
We might need A.I. to tell us how A.I. can not just improve our health care but also “fix” our healthcare system. And I’m OK with that.