The article -- "It's Only a Computer," by Lucas, Gratch, King, and Morency -- describes a study that tested participants' willingness to disclose information to a "virtual human" on a computer screen. When participants believed the virtual human was fully automated rather than controlled by a human, they reported lower fear of self-disclosure, were less likely to shade the truth to create a good impression ("impression management"), and were rated by observers as more willing to disclose information. The key to the behavior was their belief that no human was involved, whether or not a human was actually acting behind the scenes.
The authors note that, historically, computer-administered or self-administered questionnaires have lacked the critical sense of rapport that a personal interview can establish. They argue that virtual human programs can now establish similar levels of rapport without sacrificing the greater willingness to disclose personal information that comes from not fearing judgment or a negative impression. Their study focused on psychological rather than medical issues, but the authors believe the findings should apply in medical situations as well.
The results may not come as a surprise to anyone who remembers ELIZA, the computer program introduced in the mid-1960s by Joseph Weizenbaum at MIT. ELIZA mimicked a therapist using a simple statement/response approach, and was so successful that many people using it were convinced they were dealing with a human -- or that the computer "understood" them (both of which horrified Weizenbaum!). It was remarkable for its time, and you can still find numerous implementations of it online.
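For the curious, the statement/response technique is simple enough to sketch in a few lines: keyword rules map a matched phrase onto a response template, with first- and second-person words swapped so the echo reads naturally. The rules below are illustrative, in the spirit of Weizenbaum's DOCTOR script, not his original rules.

```python
import re

# Illustrative keyword rules (pattern -> response template),
# not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first/second person so the captured text reads naturally in the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock reply when no keyword matches

print(respond("I feel anxious about my health"))
# Why do you feel anxious about your health?
```

That a trick this shallow convinced people the machine understood them says a lot about how low the bar for perceived rapport may be.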
The virtual human idea is not pie-in-the-sky, good only for research studies. Versions of it are already being tested, such as by Sense.ly, whose digital health avatar was profiled by MIT Technology Review a year ago. It captures patient information via an avatar, which can respond to patient statements or data and can even answer questions. The Wall Street Journal showed an example of a virtual house call with a Sense.ly nurse avatar earlier this year, and it is pretty cool. They are also testing a physical therapy avatar named Molly that can coach patients through their PT exercises.
Clearly, we're entering a new world.
The kind of artificial intelligence that might power these avatars/virtual humans can also be used to assist physicians instead of competing with them. IBM, of course, has been touting Watson in health care for several years now. As Wired recently reported, there are a number of AI efforts out there to assist physicians. They warn that AI works very well with structured data, but not so well with unstructured data, which may be contradictory, rely on nuance, and require inference. Still, making AI better at that is only a matter of time.
Wired also notes that companies are trying to keep their products viewed as offering recommendations instead of making decisions, which would push them over into FDA approval and regulation. We probably will get there, but that step will be a big gulp.
One of the key places where virtual humans/avatars may be able to help is in managing the huge amount of data generated by wearable technology. There is a lot of work being done on figuring out how to monitor various kinds of health behaviors, and I'd say we're still primarily in the fitness monitoring stage but rapidly moving to health monitoring, such as with diabetes.
Some experts believe people will improve their health behaviors -- e.g., get more exercise or lose more weight -- if they know they are being monitored. Others fear people will end up forgetting about their trackers and will slide back to their previous behaviors. I suspect we're going to see some of each, and that the key to success will be what kinds of feedback/reinforcement users get.
The plethora of tracking devices poses issues not only with the sheer volume of data generated, but also with integrating the disparate data from multiple operating systems into a unified record. Apple, of course, wants to do everything within its own ecosystem, and is mapping out its mHealth strategy accordingly. They are already working with Epic and the Mayo Clinic to integrate their data into EHRs via their new HealthKit framework.
Google is trying to keep their users within their Android platform as well, via their new Google Fit service, based on open APIs. Their desire to make it accessible to all Android users is made somewhat more challenging by Samsung's own mHealth platform, since Samsung is the biggest manufacturer using Android.
These three companies are by no means the only ones working on wearables and other tracking devices, but how all the options get integrated into unified data records, including EHRs, is going to be very important. We haven't done too well on that front with EHRs themselves, after all.
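The integration problem boils down to something like the sketch below: each vendor exports readings under its own field names, and a unified record requires mapping them all onto one shared schema before merging. The field names and data here are hypothetical, not the actual HealthKit or Google Fit schemas.

```python
# Hypothetical step-count exports from two ecosystems; the field names
# differ by vendor, which is exactly the integration problem.
apple_readings = [{"day": "2014-09-01", "stepCount": 6500}]
google_readings = [{"date": "2014-09-01", "steps": 1200},
                   {"date": "2014-09-02", "steps": 8000}]

def normalize(reading, source):
    """Map vendor-specific fields onto one shared schema."""
    if source == "apple":
        return {"date": reading["day"], "steps": reading["stepCount"]}
    if source == "google":
        return {"date": reading["date"], "steps": reading["steps"]}
    raise ValueError("unknown source: " + source)

def unify(*tagged_batches):
    """Merge normalized readings into one per-day record, summing steps."""
    record = {}
    for source, batch in tagged_batches:
        for reading in batch:
            n = normalize(reading, source)
            record[n["date"]] = record.get(n["date"], 0) + n["steps"]
    return dict(sorted(record.items()))

print(unify(("apple", apple_readings), ("google", google_readings)))
# {'2014-09-01': 7700, '2014-09-02': 8000}
```

Trivial for two vendors; the hard part is that every new device adds another `normalize` branch, which is why open, shared APIs matter so much here.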
The idea that health information is only collected at a medical office or lab, and that patients should wait to act on it until a human can talk to them, is simply no longer viable. The data are increasingly going to be available 24/7, and when it means something important there have to be mechanisms to act upon it in real-time. Maybe that is through alerts to physicians, who then initiate contact with patients, or maybe the wearable ecosystem can trigger its own alerts and advise the user what is going on using avatars and other automated mechanisms.
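A real-time triage rule along those lines could be as simple as the sketch below: route each incoming reading to the right alert channel based on severity. The glucose thresholds are illustrative only, not clinical guidance, and the channel names are hypothetical.

```python
# Illustrative cutoffs in mg/dL -- NOT clinical guidance.
LOW, HIGH, CRITICAL = 70, 180, 250

def triage(reading_mg_dl):
    """Return which alert channel(s) a single glucose reading triggers."""
    if reading_mg_dl >= CRITICAL or reading_mg_dl < LOW:
        # Serious enough to act on in real time, human in the loop.
        return ["notify_user", "page_physician"]
    if reading_mg_dl > HIGH:
        # Avatar/app-level advice may be enough here.
        return ["notify_user"]
    return []  # in range: just log it

stream = [95, 120, 190, 260]
print({reading: triage(reading) for reading in stream})
# {95: [], 120: [], 190: ['notify_user'], 260: ['notify_user', 'page_physician']}
```

The interesting design question isn't the thresholds; it's who gets paged, how fast, and what the avatar is allowed to tell the user in the meantime.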
Author/physician Robin Cook, who has sold millions of techno-thriller books on medical technology themes, believes physician avatars are coming. He may just be hyping his latest novel (Cell) which features such an avatar, but he sees various health apps aggregated to not just pull data but also "to sift through billions of studies and records to make a diagnosis and offer a solution."
What I like most is his assertion that:
"It's going to democratize medicine. We have been held hostage by the stakeholders - the physicians, big pharma, device makers and medical labs. This is going to free us from that."The democratization of medicine, or at least the reduction of the information asymmetry, is one of the key trends I keep coming back to. Whether it is physician alternatives (Vive la Différence) or virtual/actual health assistants (Making Health Care More Personal (Again)), the physician is less likely to be the sole gateway to medical information and advice.
A recent op-ed by Dominic Basulto in The Washington Post stated that "Google and Apple want to be your doctor, and that's a good thing." Mr. Basulto concluded:
Companies like Apple and Google can help to break down the notion that health has to be something offered by a monolithic company with a confusing set of rules and terms. It might just be the case that mobile health care facilitated by wearable tech will turn out to be better than traditional doctors.
I think it is a stretch to say that mobile health will be "better" than traditional doctors, but I think these and other technological options can certainly radically change when, why and where people need to see physicians or other health care professionals. And that's good.