Tuesday, October 11, 2016

Will Anyone Notice?

There's an interesting verbal battle going on between two prominent tech venture capitalists over the future of AI in health care.  In an interview in Vox,  Marc Andreessen asserted that Vinod Khosla "has written all these stories about how doctors are going to go away...And I think he is completely wrong."  Mr. Khosla was quick to respond via Twitter:  "Maybe @pmarca [Mr. Andreessen] should read what I think before assuming what I said about doctors going away." He included a link to his detailed "speculations and musings" on the topic. 

It turns out that Mr. Khosla believes that AI will take away 80% of physicians' work, but not necessarily 80% of their jobs, leaving them more time to focus on the "human aspects of medical practice such as empathy and ethical choices."  That is not necessarily much different than Mr. Andreessen's prediction that "the job of a doctor shifts and becomes a higher-level, more important job that pays better as the doctor becomes augmented by smarter computers."

When AIs start replacing physicians, will we notice -- or care?

Personally, I think it is naive to expect that AI can take over 80% of physicians' work yet leave their jobs largely intact, or that AI will lead to physicians being paid even more.  The future may be closer than we realize, and "virtual visits" -- telehealth -- may illustrate why.

Recently, Fortune reported that over half of Kaiser Permanente's patient visits were done virtually, via smartphones, videoconferencing, kiosks, etc.  That's over 50 million such visits annually.  Just a year ago a research firm predicted 158 million virtual visits nationally -- by 2020.   At this rate, Kaiser may beat that projection by itself.
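To put "at this rate" in rough perspective, here's a back-of-the-envelope sketch.  It is only an illustration: it treats Fortune's "over 50 million" as exactly 50 million and asks what annual growth Kaiser alone would need to reach the 158 million visits projected for 2020.

```python
# Rough, illustrative math only: assumes Kaiser's 2016 virtual visits were
# exactly 50 million (the article says "over 50 million").
baseline_2016 = 50_000_000
projection_2020 = 158_000_000   # the research firm's national forecast for 2020
years = 2020 - 2016

# Compound annual growth rate Kaiser alone would need to hit that number.
required_growth = (projection_2020 / baseline_2016) ** (1 / years) - 1
print(f"Required annual growth: {required_growth:.0%}")   # roughly 33% per year
```

Steep, but given how quickly virtual visits have been growing, not implausible.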

Or take Sherpaa, a health start-up that is trying to replace fee-for-service, in-person doctor visits with virtual visits.  Available for a $40 monthly membership fee, the visits are delivered via their app, texts, or emails.  Their physicians can order lab work, prescribe, and make referrals if needed.

Sherpaa prides itself on offering more continuity to members by using a small number of full-time physicians (how and whether the Sherpaa model scales remains to be seen).   Sherpaa claims that 70% of members' health issues are handled via virtual visits.  Many concierge medicine and direct primary care practices also encourage members to at least start with virtual consults.

How many people would notice if virtual visits were with an AI, not an actual physician?

Companies in every industry are racing to create chatbots, using AI to provide human-like interactions without humans.  Google Assistant, Amazon's Echo, and Apple's Siri are leading examples.  And health care bots are on the way.

Digital Trends reported on two U.K.-based companies that are developing AI chatbots designed specifically for health care: Your.MD and Babylon Health.   Your.MD claims to have the "world's first Artificial Intelligence, Personal Health Assistant," able to both ask patients pertinent questions and respond to their questions "personalized according to your unique profile."

Babylon Health claims to have "the world's most accurate medical artificial intelligence," which they say can analyze "hundreds of millions of combinations of symptoms" in real time to determine a personalized diagnosis.  Both companies say they want to democratise health care by making health advice available to anyone with a smartphone.

Not everyone is convinced we're there yet.  A new study directly compared human physicians with 23 commonly used symptom checkers on diagnostic accuracy, and found that the latter's performance was "clearly inferior."  The symptom checkers listed the correct diagnosis in their top 3 possibilities 51% of the time, versus 84% for the physicians.  That would seem to throw some cold water on the prospect of using an AI to help with your health issues.

However, consider the following:

  • The study was done by researchers from Harvard Medical School.  One wonders if researchers at the MIT Computer Science and Artificial Intelligence Laboratory might have used different methodology and/or found different results.
  • The symptom checkers may be the most commonly used, but may not have been the most state-of-the-art.  And the real test is how the best of those checkers did against the average human physician.
  • Humans still got the diagnosis wrong in at least 16% of the cases.  They're not likely to get much better (at least, not without AI assistance).  AIs, on the other hand, are only going to get better.

It is only a matter of time until AIs equal or exceed human performance in many aspects of health care and elsewhere.

It used to be that physicians were sure that their patients would always rather wait in order to see them in their offices, until retail clinics proved them wrong.  It used to be that physicians were sure patients would always rather see them in person rather than use a virtual visit (possibly with another physician), until telehealth proved them wrong.  And it still is true that most physicians are sure that patients prefer them to AI, but they may soon be proved wrong about that too.

Over 50 years ago MIT computer scientist Joseph Weizenbaum created ELIZA, a computer program that mimicked a psychotherapist.   It would be considered rudimentary today, but by all accounts its users took it seriously, to the extent some refused to believe they weren't communicating with a person.
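For a sense of just how rudimentary: ELIZA's core trick was keyword matching plus pronoun "reflection," filled into canned response templates.  The sketch below is a toy illustration of that technique in Python; the rules are invented for the example and are not Weizenbaum's actual DOCTOR script.

```python
import random
import re

# Pronoun "reflections" so a matched fragment can be mirrored back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few illustrative keyword rules.  The real DOCTOR script had many more,
# with ranked keywords and more elaborate decomposition rules.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Build a response from the first rule whose pattern matches."""
    cleaned = statement.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(*[reflect(g) for g in match.groups()])
    return "Please go on."

print(respond("I am worried about my health"))
# e.g. "Why do you think you are worried about your health?"
```

Even a toy version like this produces replies that feel attentive, which goes a long way toward explaining why ELIZA's users took it so seriously.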

More recently, an AI named Ellie is serving a similar purpose.  Ellie comes with an avatar and can analyze over 60 features of the people with whom it is interacting, including body language and tone of voice.  It turns out that people open up to Ellie more when they are told they are dealing with an AI than when told it is controlled by a human -- but the really amazing thing is that the latter group did not seem to realize there was actually no human involved.

Score one for the Turing test.  

AI is going to play a major role in health care.  Rather than using physicians to focus more on empathy and ethical issues, as Mr. Khosla suggested (or paying them more for it, as Mr. Andreessen suggested), we might be better off using nurses and ethicists, respectively, for those purposes.  So what will physicians do?

The hardest part of using AI in health care may not be developing the AI, but figuring out what the uniquely human role in providing health care is.

1 comment:

  1. Baidu is another good AI example (http://www.theverge.com/2016/10/11/13240434/baidu-medical-chatbot-china-melody), as is Sense.ly
