Because, you know, always follow the money.
Last week Stanford's One Hundred Year Study on Artificial Intelligence released its 2016 report, looking at the progress and potential of AI and offering some recommendations for public policy. The report urged caution about both too little regulation and too much: the former could lead to undesirable consequences, while the latter could stifle innovation.
One of the key points is that there is no clear definition of AI, because "it isn't any one thing." It is already many things and will become many, many more. We need to prepare for its impacts.
This call is important but not new. For example, in 2015 a number of thought leaders issued an "open letter" on AI, stressing both its importance and the need to maximize its societal benefit. As they put it, "our AI systems must do what we want them to do," with "we" being society at large, not just AI inventors and investors.
The risks are real. Most experts downplay concerns that AI will supplant us, as Stephen Hawking famously warned it might, but that is not the only risk it poses. For example, mathematician Cathy O'Neil argues in Weapons of Math Destruction that algorithms and Big Data are already being used to target the poor, reinforce racism, and widen inequality. And that is while they are still largely overseen by humans. Think of the potential when AI is in charge.
With health care, deciding what we want AI to be able to do is literally a life-or-death decision.
Let's get to the heart of it: there will be an AI that knows as much as -- or more than -- any physician ever has. When you communicate with it, you will believe you are talking to a human, perhaps one smarter than any human you know. There will be an AI that can perform even complex procedures faster and more precisely than a human. And there will be AIs that can look you in the eye, shake your hand, and feel your skin, just like a human doctor. Whether they can also develop, or at least mimic, empathy remains to be seen.
What will we do with such AIs?
The role many people seem most comfortable with is AI serving as an aid to physicians: the best medical reference guide ever, able to instantly pull up any relevant statistics, studies, guidelines, and treatment options. No human can keep all that information in their head, no matter how good their education or how extensive their experience.
Some go further and envision AIs actually treating patients, but only with limited autonomy and under direct physician supervision, as with physician assistants.
But these roles only begin to tap AI's potential. If AIs can perform as well as physicians -- and that is an "if" about which physicians will fight fiercely -- why shouldn't their scope of practice be as wide as physicians'? In short, why shouldn't they be physicians?
Historically, the FDA has regulated health-related products. It has struggled with how to regulate health apps, which pose much less complicated questions than AI. With AI, regulators may not be able to ascertain exactly how it will behave in a specific situation, as its program may constantly evolve based on new information and learning. How is a regulator to say with any certainty that an AI's future behavior will be safe for consumers?
Perhaps AI will grow independent enough to be considered people, not products. After all, if corporations can be "people," why not AI? Indeed, specific instances of AI may evolve differently, based on their own learning. Each AI instance might be, in a sense, an individual, and would have to be treated accordingly.
If so, can we really see a medical licensing board granting a license to an AI? Would we make one go through the indentured servitude of an internship and residency? How would we evaluate its ability to give patients good care? After all, we don't do such a great job of that with humans.
Let's say we manage to get to AI physicians. They might become widely available but be seen as not as "good" as human physicians, so that only the wealthy can afford the latter. Or AIs could come to be seen as better, and the wealthy could ensure that only they benefit from them, with everyone else "settling" for old-fashioned human physicians.
These are the kinds of societal issues the Stanford report urged that we think about.
One of the problems we'll face is that AIs may expose how much unnecessary care patients now receive, as is widely believed to be the case. They may also expose that many of the findings which guide treatment decisions rest on faulty or outdated research, as has been charged. In short, AIs may reveal that the practice of medicine is, indeed, a very human activity, full of all sorts of human shortcomings.
Perhaps expecting AIs to be as good as physicians is setting too low a bar.
Back to the original question: who would be at fault if care given by an AI causes harm? Unlike a human, an AI is unlikely to err because it didn't remember what to do, or because it was tired or distracted. On the other hand, the self-generated algorithm it used to reach its decision may not be understandable to humans, so we may never know exactly what went "wrong."
Did it learn poorly, so that the AI's creator is at fault? Did it base its decisions on invalid data or faulty research, in which case should their originators be liable? Did it lack access to the right precedents, in which case can we attach blame to anyone at all? How would we even "punish" an AI?
Lawyers and judges, legislators and regulators will have plenty to work on. Some of them may be AIs too.
Still, the scariest thing about AI isn't the implications we can imagine, no matter how disruptive they seem, but the unexpected ones that such technological advances inevitably bring about. We may find that problems like licensing, malpractice, and job losses are the easy ones.
It will be interesting to watch how IBM's Watson evolves. It is one form of AI already being used on a limited basis today.