Monday, September 26, 2016

Health Care Needs Some Spectacles

I've never written about Snapchat.  I never really got the point of its namesake app, which was built around posting content that automatically disappears.  I knew it was wildly popular among teens and celebrities, both of whom undoubtedly had more content they wished wouldn't persist than an old fogey like me, but it just seemed purposely trivial.

With their recent introduction of Spectacles, though, I figured Snap Inc. (as the company renamed itself) deserves a closer look.

The Wall Street Journal broke the story (as Business Insider also did) with an in-depth look at Spectacles.  It is not a new app, nor some new service on its existing app (which continues to be called Snapchat), but rather a piece of hardware: a pair of sunglasses that can record short videos.  Users can record ten-to-thirty-second videos, taken from the wearer's perspective.  The videos can then, of course, be uploaded to Snapchat, where they also will self-destruct.

Lights on the inside of the glasses will alert users that they are recording, and -- unlike with Google Glass's similar capability -- an external light will let surrounding people know they are being recorded.


Snap believes Spectacles allow for a more natural experience than using a smartphone camera.  The recording is more like what one would see, since both are taken from the eye's POV and because it uses a 115-degree-angle lens to record a circular image.  More importantly, it frees your hands, much like GoPro does for adventure junkies.  As Snap's CEO Evan Spiegel points out, you're not holding your smartphone in front of you "like a wall in front of your face."

Snap has even gone so far as to label themselves a camera company, a curious move in an era where former camera titans like Kodak and Polaroid are trying to reinvent themselves out of that business.  As Mr. Spiegel described it to WSJ, "First it was make a photo [studio portraits].  Then it was take a photo [portable camera].  And finally it was give a photo [instant Polaroids evolving to smartphone selfies]."  He thinks this is a business with a future.

Spectacles are mounted on hipster sunglasses (available in three colors), are priced at $130, and will be offered in a limited rollout this fall.  Mr. Spiegel calls Spectacles a "toy," but plenty of people are taking them seriously, as the flurry of press they have received illustrates (e.g., Christian Science Monitor, Fast Company, Forbes, The New York Times, TechCrunch, and Wired).  The consensus seems to be that they bear watching, and won't share Google Glass's premature demise.

Snap isn't finished with Spectacles.  "We’re going to take a slow approach to rolling them out,” Mr. Spiegel told the WSJ. “It’s about us figuring out if it fits into people’s lives and seeing how they like it."

Snapchat is used by over 150 million people daily -- more than Twitter -- and more than 60% of 13-to-34-year-old smartphone users have it.  As the NYT reported, more than 35 million U.S. users watched portions of the Rio Olympics using a Snapchat channel -- "there was more Olympics footage and content on Snapchat than there was on NBC" -- and media companies are flocking to produce Snapchat content.

No wonder Facebook first tried to imitate Snapchat (Poke, anyone?), then buy it (a supposed $3b offer).  They're paying attention.

There are several lessons here:

1.  AR awaits:  Yes, right now Spectacles just take videos, but don't expect that to remain the limit of their capabilities.  Snapchat already offers various features (e.g., Lenses and Geofilters) to alter conventional smartphone photos, and adding augmented reality options makes sense.  Honestly, would you rather experience AR through your smartphone screen or in your field of vision (as Google Glass attempted)?  Not much of a contest.  I'm a big believer in how AR/VR will inform us and transform many of our experiences, and one of Spectacles' descendants could very well help deliver them.

2.  Goodbye Smartphone:  Yes, we increasingly love our smartphones.  They are the Swiss Army knives of personal electronic devices, offering features undreamed of just a couple decades ago, to the point that even using the term "phones" in the name no longer reflects a main purpose.  As multi-purpose and omnipresent as they have become, there still is that awkwardness of having to hold the device.

We still are not in an Internet of Things (IoT) environment, but we will be in less time than it took to go from cell phones to smartphones.  So apps and services that are built on smartphones had better start looking for other, less device-specific platforms.  I'm not suggesting that Spectacles are, in any way, that platform, but at least Snap Inc. understands the problem.

3. Define Your Industry -- Don't Be Defined By It:  Snapchat was, and still is, sitting pretty in the messaging industry.  Messaging is big.  Facebook is pumping lots of money into Messenger and Instagram, suitors are falling over themselves to try to buy Twitter, Google has high hopes for Allo, and WeChat still hopes to take over the world outside of China.

Meanwhile, Snapchat's parent company wants to be a camera company.  That might sound dangerously backward-looking, but when Mr. Spiegel says Snap Inc. is a camera company, he doesn't mean that in the traditional sense.  As he told the WSJ, "It’s about instant expression and who you are right now. Internet-connected photography is really a reinvention of the camera."  Snap Inc. is reinventing the industry they are in.

Look at it this way: Snap won't be dependent on some other company's camera sitting on some other company's device to generate content.  Maybe not so backward-looking after all.


In health care, bold thinking is for hospitals to relabel themselves as "health systems," or chiropractors to call their offices "wellness clinics."  That's nothing like a hugely successful messaging app company declaring they are in the camera business and producing hardware to support that vision.  Snap may succeed or they may fail (just ask Google), but what they are doing takes guts, and a vision of the future that doesn't just look like more of the same.  If any industry needs those, it is health care.

OK, health innovators: what is your parallel to Spectacles, and what industry do you think you're in?

Sunday, September 18, 2016

I Really Wish You Wouldn't Do That

Digital rectal exams (DREs) typify much of what's wrong with our health care system.  Men dread getting them, they're unpleasant, they vividly illustrate the physician-patient hierarchy, and -- oh, by the way -- they apparently don't actually provide much value.

By the same token, routine pelvic exams for healthy women don't have any proven value either.

The recent conclusions about DREs come from a new study.  One of the researchers, Dr. Ryan Terlecki, declared: "The evidence suggests that in most cases, it is time to abandon the digital rectal exam (DRE).  Our findings will likely be welcomed by patients and doctors alike."

No kidding.

The study actually questioned doing DREs when PSA tests were available, but it's not as if PSA tests themselves have unquestioned value.  Even the American Urological Association came out a few years ago against routine PSA tests, citing the number of false positives and resulting unnecessary treatments.
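The false-positive problem is, at bottom, base-rate arithmetic.  Here is a minimal sketch -- using made-up numbers, not actual PSA test statistics -- of why even a reasonably accurate screening test for a low-prevalence condition produces mostly false alarms:

```python
# Illustrative only: the prevalence, sensitivity, and specificity below are
# hypothetical, chosen to show the base-rate effect, not to describe PSA testing.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), via Bayes' rule."""
    true_pos = prevalence * sensitivity          # sick and correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# Suppose 4% of screened men have clinically significant cancer, and the
# test is 80% sensitive and 75% specific (again: made-up figures).
ppv = positive_predictive_value(prevalence=0.04, sensitivity=0.80, specificity=0.75)
print(f"Chance a positive result is a true positive: {ppv:.0%}")  # about 12%
```

Under these assumed numbers, roughly seven out of eight positive results are false positives -- which is why each one can cascade into biopsies and unnecessary treatment.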

Indeed, the value of even treating the cancer that DREs and PSAs are trying to detect -- prostate cancer -- has come under new scrutiny.  A new study tracked prostate cancer patients for ten years, and found "no significant difference" in mortality between those getting surgery, radiation, or simple active monitoring.

The surgery and radiation, on the other hand, had some unwelcome side effects.  Forty-six percent of men who had their prostate removed were wearing adult diapers six months later, and impotence was reported in 88% of surgical patients and 78% of radiation patients.  The chief medical officer of the American Cancer Society admitted, "Our aggressive approach to screening and treating has resulted in more than 1 million American men getting needless treatment."

"Needless" is perhaps the most benign description of what happened to those men.

As for the pelvic exam, about three-fourths of preventive visits to OB-GYNs include them, over 60 million visits annually.  They're not very good at either identifying or ruling out ovarian cancer, and the asymptomatic conditions they can detect don't have much data to indicate that treating them early offers any advantage to simply waiting for symptoms.

Or take mammograms.  Mammograms are uncomfortable, have significant false positive/over-diagnosis rates, and cost us something like $4b annually in unnecessary spending, yet remain the "gold standard."

Not many women like them.  It has been oft-stated that if men had to get them, there would be a better method.  Yet, according to the CDC, about two-thirds of women over 40 have had a mammogram within the past two years.  Maybe they shouldn't have.

Recommendations for how often, and at what ages, women should get mammograms vary widely, with the default often ending up being annual screenings.  However, new research has concluded that many women only need triennial screenings.  Lead author Amy Trentham-Dietz said: "Women at low risk and low breast density will experience more harms with little added benefit with annual and biennial screening compared to triennial screening."

Mammograms can find evidence of breast cancers or pre-cancers, which often leads to mastectomies.  It has been known for some time that mastectomy rates in the U.S. are much higher than other countries', but now we're seeing more mastectomies in earlier stages of breast cancer and a "perplexing" increase in bilateral mastectomies, even among women who neither have cancer in the second breast nor carry the BRCA risk mutation for it, according to an AHRQ brief earlier this year.

As AHRQ Director Rick Kronick observed: "This brief highlights changing patterns of care for breast cancer and the need for further evidence about the effects of choices women are making on their health, well-being and safety."

In less diplomatic terms: what the hell?

Then there is everyone's favorite test -- colonoscopies.  Only about two-thirds of us are getting them as often as recommended, and over a quarter of us have never had one.  There are other alternatives, including a "virtual" colonoscopy and now even a pill version of it, but neither has done much to displace the traditional colonoscopy.  And all of those options still require what many regard as the worst part of the procedure, the prep cleansing.

An option that avoids not only the procedure but also the prep hasn't taken root either.  It involves collecting a sample of one's stool to test for blood.  Options such as the fecal immunochemical test (FIT) and the fecal occult blood test (FOBT) have strong research support, to the point that the Canadian Task Force on Preventive Health Care says they, not colonoscopies, should be the first line of screening.  They are also much cheaper than a colonoscopy.  In the U.S., though, colonoscopies remain the preferred option for physicians.

The final example is what researchers recently called an "epidemic" of thyroid cancer, which they attributed to overdiagnosis.  In the U.S., for example, annual incidence tripled from 1975 to 2009.  They found that the rates of the cancer were tied to the increased availability of diagnostic tests like ultrasound and CT scans, which led to the discovery of more cancers.  The researchers believe that as many as 80% of the tumors discovered were small benign ones -- which did not stop them from being surgically treated.

In fact, according to the researchers: "The majority of the overdiagnosed thyroid cancer cases undergo total thyroidectomy and frequently other harmful treatments, like neck lymph node dissection and radiotherapy, without proven benefits in terms of improved survival."  Not only that, once they've had the surgery, most patients will have to take thyroid hormones the rest of their lives.  

All of these examples happen to relate to cancer, although there certainly are similar examples with other diseases/conditions (e.g., appendectomy versus antibiotics for uncomplicated appendicitis).

Two conclusions:

1.  If we're going to have unpleasant things done to us, they better be based on facts: As the above examples illustrate, some of our common treatments and tests are based on tradition and/or outdated science.  We deserve better than that.  We should demand the options and the evidence.

2.  We should do everything we can to make unpleasant things, well, less unpleasant:  Physicians can't just focus on reducing patients' medical complaints but also should seek to reduce other complaints about their care.  When patients dread having something done, and often use that as an excuse not to get services, that should be a tip-off that something needs to change.

Let's get right on those.

Thursday, September 8, 2016

AI Docs May Need Some Good AI Lawyers

A recent post highlighted how artificial intelligence (AI) is already playing important roles in health care, and concluded that expanded use of AI may be ready for us before we are ready for it.  One example of the kind of problem we'll face is: who would we sue if care that an AI recommended or performed went wrong?

Because, you know, always follow the money.

Last week Stanford's One Hundred Year Study On Artificial Intelligence released its 2016 report, looking at the progress and potential of AI, as well as some recommendations for public policy.  The report urged that we be cautious about both too little regulation and too much, as the former could lead to undesirable consequences and the latter could stifle innovation.

One of the key points is that there is no clear definition of AI, because "it isn't any one thing."  It is already many things and will become many, many more.  We need to prepare for its impacts.

This call is important but not new.  For example, in 2015 a number of thought leaders issued an "open letter" on AI, both stressing its importance and that we must maximize the societal benefit of AI.  As they said, "our AI systems must do what we want them to do," "we" being society at large, not just AI inventors/investors.

The risks are real.  Most experts downplay concerns that AI will supplant us, as Stephen Hawking famously warned, but that is not the only risk it poses.  For example, mathematician Cathy O'Neil argues in Weapons of Math Destruction that algorithms and Big Data are already being used to target the poor, reinforce racism, and make inequality worse.  And this is when they are still largely being overseen by humans.  Think of the potential when AI is in charge.

With health care, deciding what we want AI to be able to do is literally a life-or-death decision.  

Let's get to the heart of it: there will be an AI that knows as much as -- or more than -- any physician ever has.  When you communicate with it, you will believe you are talking to a human, perhaps smarter than any human you know.  There will be an AI that can perform even complex procedures faster and more precisely than a human.  And there will be AIs that can look you in the eye, shake your hand, feel your skin -- just like a human doctor.  Whether they can also develop, or at least mimic, empathy remains to be seen.

What will we do with such AIs?  

The role that many people seem most comfortable with is that they would serve as aids to physicians.  They could serve as the best medical reference guide ever, able to immediately pull up any relevant statistics, studies, guidelines, and treatment options.  No human can keep all that information in their head, no matter how good their education is or how much experience they've had.  

Some go further and envision AIs actually treating patients, but only with limited autonomy and under direct physician supervision, as with physician assistants.  

But these only tap AI's potential.  If they can perform as well as physicians -- and that is an "if" about which physicians will fight fiercely -- why shouldn't their scope of practice be as wide as physicians'?  In short, why shouldn't they be physicians?

Historically, the FDA has regulated health-related products.  It has struggled with how to regulate health apps, which pose much less complicated questions than AI.  With AI, regulators may not be able to ascertain exactly how it will behave in a specific situation, as its program may constantly evolve based on new information and learning.  How is a regulator to say with any certainty that an AI's future behavior will be safe for consumers?

Perhaps AI will grow independent enough to be considered people, not products.  After all, if corporations can be "people," why not AI?  Indeed, specific instances of AI may evolve differently, based on their own learning.  Each AI instance might be, in a sense, an individual, and would have to be treated accordingly.  

If so, can we really see a medical licensing board giving a license to an AI?  Would we want to make one go through the indentured servitude of an internship/residency?   How should we evaluate their ability to give good care to patients?  After all, we don't do such a great job about this with humans.  

Let's say we manage to get to AI physicians.  It's possible that they will become widely available, but not seen as "good" as human physicians, and it ends up that only the wealthy can afford the latter.  Or AIs could be seen as better, and the wealthy ensure that only they benefit  from them, with everyone else "settling" for old-fashioned human physicians.  

These are the kinds of societal issues the Stanford report urged that we think about.

One of the problems we'll face is that AIs may expose the amount of unnecessary care patients now get, as is widely believed.   They may also expose that many of the findings which guide treatment decisions are based on faulty or outdated research, as has been charged.  In short, AIs may reveal that the practice of medicine is, indeed, a very human activity, full of all sorts of human shortcomings.  

Perhaps expecting AIs to be as good as physicians is setting too low a bar.

Back to the original question: who would be at fault if care given by an AI causes harm?  Unlike with humans, an AI's mistakes are unlikely to be because they didn't remember what to do, or because they were tired or distracted.  On the other hand, the self-generated algorithm it used to reach its decision may not be understandable to humans, so we may never know exactly what went "wrong." 

Did it learn poorly, so the AI's creator is at fault?  Did it base its decisions on invalid data or faulty research, in which case their originators should be liable?  Did it not have access to the right precedents, in which case can we attach blame to anyone?  How would we even "punish" an AI?

Lawyers and judges, legislators and regulators will have plenty to work on.  Some of them may be AIs too.

Still, the scariest thing about AI isn't the implications we can imagine, no matter how disruptive they seem, but the unexpected ones that such technological advances inevitably bring about.  We may find that problems like licensing, malpractice, and job losses are the easy ones.  

Monday, September 5, 2016

To Err Is Human, To Diagnose AI?

A new study found that physicians have a surprisingly poor knowledge of the benefits and harms of common medical treatments.  Almost 80% overestimated the benefits, and two-thirds overestimated the harms.  And, as Aaron Carroll pointed out, it's not just that they were off, but "it's how off they often were."

Anyone out there who still doesn't think artificial intelligence (AI) is needed in health care?

The authors noted that previous studies have found that patients often overestimate benefits as well, but tend to minimize potential harms.  Not only do physicians overestimate harms, they "underestimate how often most treatments have no effects on patients -- either harmful or beneficial."  Perhaps this is, at least in part, because "physicians are poor at assessing treatment effect size and other aspects of numeracy."

The authors pointed out that "even when clinicians understand numeracy, expressing these terms in a way patients will understand is challenging."  When asked how often they discussed absolute or relative risk reduction, or the number needed to treat (NNT), with patients, 47% said "rarely" -- and a third said "never."

Dr. Carroll's reaction to this: "I’m screaming in my office because I feel like it’s all I talk about."
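For readers who don't talk about these statistics daily, they are simple arithmetic.  A minimal sketch, with made-up numbers (not drawn from any actual study), showing how the same treatment effect can sound modest or dramatic depending on which statistic is quoted:

```python
# Hypothetical example: a treatment cuts 5-year event risk from 4% to 3%.
# All figures are invented to illustrate the definitions.

control_risk = 0.04    # event rate without treatment
treated_risk = 0.03    # event rate with treatment

arr = control_risk - treated_risk    # absolute risk reduction
rrr = arr / control_risk             # relative risk reduction
nnt = 1 / arr                        # number needed to treat

print(f"ARR: {arr:.1%}")   # 1.0%  -- sounds modest
print(f"RRR: {rrr:.0%}")   # 25%   -- sounds impressive
print(f"NNT: {nnt:.0f}")   # treat 100 patients for one to benefit
```

Same treatment, same data: "cuts your risk by a quarter" and "helps one patient in a hundred" are both true, which is exactly why how the numbers are framed matters so much.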

An accompanying editorial called for more physician training, and also urged more use of visual representations of probabilities.  Better visual representation of data is certainly good, but one wonders why physicians should need more training in understanding what treatments have what kind of value to their patients.  Isn't that, in fact, the whole point of medical education?

So, who/what is good with numeracy, remembering statistics, and evaluating data?  There are math nerds, of course, but, instead of going to medical school, as they once might have, they're probably making fortunes working on Wall Street or trying to make billions with their tech start-ups.  Then, of course, there is AI.

An AI would know exactly the known benefits/risks of treatments, common or not, and maybe even produce a nifty graph to help illustrate them.

Probably the best-known AI in health care is IBM's Watson, but IBM definitely doesn't have the field to itself.  CB Insights recently profiled over 90 AI start-ups in healthcare, with over 55 equity funding rounds already this year (compared to 60 in 2015).  These run the gamut; CB Insights categorized them into: drug discovery, emergency room & hospital management, healthcare research, insights and risk management, lifestyle management & monitoring, medical imaging & diagnostics, mental health, nutrition, miscellaneous, wearables, and virtual assistants.

No wonder Frost & Sullivan projects this to be a $6.7b industry by 2025.

Take a look at some of AI's recent successes:

  • Researchers at Houston Methodist developed AI that improves breast cancer risk prediction, translating data "at 30 times human speed and with 99 percent accuracy."
  • Harvard researchers created AI that can differentiate breast cancer cells almost as well as pathologists (92% versus 96%) -- and, when used in tandem with humans, raised accuracy to 99.5%.
  • Stanford researchers developed AI that they believe beats humans in analyzing tissue cells for cancer, partly because it can identify far more traits that lead to a correct diagnosis than a human can.  No wonder some think AI may replace radiologists! 
  • A Belgium study found that AI can provide a "more accurate and automated" interpretation of tests for lung disease.
  • Watson recently diagnosed leukemia in a patient, a diagnosis that physicians had missed.  Watson had the benefit of comparing the patient's data to 20 million cancer records. 

AI is very good at looking at vast amounts of data very quickly, identifying patterns, and making predictions.  AI may lack humans' treasured intuition, but it has us beat when it comes to processing facts.  When it comes to something objective like "is this a cancer cell or not?" or "what are the documented risks/benefits of a treatment?", it's hard to argue against the value that AI can bring to health care.

Current iterations of AI are less truly "intelligent" than just really, really fast at what they do.  That is changing, though, as AI becomes less about what we program them to do than about an AI using "deep learning" to get smarter.  Deep learning essentially uses trial and error -- at almost incomprehensible speeds -- to figure out how to accomplish tasks.  It's how AI has gone from lousy at image recognition to comparable to -- or better than -- humans.
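That trial-and-error loop can be sketched in a few lines.  This is a toy illustration -- random hill-climbing on a single made-up weight, not how any real deep learning framework works -- but the try-measure-adjust cycle is the core idea:

```python
import random

# Toy "trial and error" learner: guess-and-adjust a single weight w so that
# w * x approximates y.  Real deep learning adjusts millions of weights using
# gradients, but the loop -- try, measure error, adjust, repeat -- is the same idea.

data = [(1, 2), (2, 4), (3, 6)]  # secretly y = 2 * x

def error(w):
    """Total squared error of the guess w over the data."""
    return sum((w * x - y) ** 2 for x, y in data)

random.seed(0)  # make the run repeatable
w = 0.0
for _ in range(10_000):
    candidate = w + random.uniform(-0.1, 0.1)  # small random tweak
    if error(candidate) < error(w):            # keep the tweak only if it helps
        w = candidate

print(f"learned w = {w:.2f}")  # ends up very close to 2
```

After ten thousand tiny experiments the weight lands on the right answer without ever being told the rule -- slowly here, but at machine speeds the same blind process scales to recognizing tumors in images.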

One of the dirty little secrets of health care is that much of our care is based on trial and error -- and that trial and error is often limited to our physician's personal training and experience.  If we're lucky, we have a physician who has seen lots of similar patients and is very well versed in the research literature, but he/she is still working with much less information than an AI could have access to -- even about one of the physician's own patients.
 
Our problem is going to be when we simply don't know what an AI did or why it reached the conclusion it did.  No human could have searched through the 20 million records Watson did to identify the leukemia, but at least it was a fairly objective result.  Someday soon AI will pull together seemingly unrelated facts to produce diagnoses or proposed treatments that we simply won't be able to follow, much less replicate.

We've had trouble getting state licensing boards to accept telemedicine, sometimes even when used by physicians they've licensed, much less when used by physicians from other states.  One can only imagine how they, or the FDA, are going to react to AIs that come up with diagnoses and treatments for reasons they can't explain to us.

Then again, many physicians might sometimes have the same problem.  How much of a higher standard should we hold AI to?  How much better do they have to be?

In the short term, AIs are likely to be a tool to help expand physicians' capabilities; as IBM's Kyu Rhee says, "as ubiquitous as the humble stethoscope."  In the mid-term, they may be partners with physicians, adding value on an equal basis.  And in the not-too-distant future, they may be alternatives to, or even replacements for, physicians.  With AI's capabilities growing exponentially and computing costs falling, human physicians may be a costly luxury many can't -- or won't want to -- afford.

One suspects that AIs will be ready for us before we're ready for them.  

Sunday, August 28, 2016

Octobot to the Rescue!

Acclaimed futurist Ray Kurzweil has a lot of bold predictions (including that computers will become smarter than us within a few decades), and some of his most interesting ones deal with how technology -- especially nanotechnology -- will soon totally revamp how we manage our health, leading to longer, healthier lives and hugely increased intelligence.  Sounds like science fiction, right?

Meet Octobot.


Harvard researchers have unveiled what they describe as the "first autonomous, entirely soft robot," which they call Octobot (it has eight arms, like an octopus).  It has no metal, no battery, no electronics of any sort, yet manages to move under its own power.  It uses a "microfluidic logic circuit" rather than a circuit board to control the movements of its arms and to power itself along, using gas reactions.

And, to make it even cooler, they 3D-printed it.

Octobot seems cute, almost cuddly, more like a child's bath toy rather than a glimpse into the future of robotics.  The researchers are careful to note that, right now, it is only a proof of concept.  It can't do much, and it runs out of power within a few minutes.  But they're already planning a next generation that can "crawl, swim, and interact with its environment," and hope their efforts inspire other researchers.  Some are already speculating about other uses, such as in marine environments -- or within the human body.

If Octobot doesn't -- yet -- quite sound like what Ray Kurzweil envisions, perhaps some work being done in Israel comes closer.  Researchers there used "DNA origami" to create nanobots, which they injected into cockroaches.  The nanobots contained drugs, which, amazingly, the nanobots released based on the brain activity of a volunteer (they had him do math).  He was hooked up to an EEG; his brain activity triggered an electromagnetic coil, which caused the nanobot to release the drugs.  When he stopped calculating, the nanobot stopped releasing the drug.

The researchers see great potential for people to trigger the release of drugs based on their own mental state, not just calculations but moods or feelings.  As they wrote,
"This technology enables the online switching of a bioactive molecule on and off in response to a subject's cognitive state, with potential implications to therapeutic control in disorders such as schizophrenia, depression, and attention deficits, which are among the most challenging conditions to diagnose and treat."  
Researchers in Canada see similar potential for using nanobots to attack cancer.  They loaded bacteria with cancer drugs, and used magnetic nanoparticles to steer the bacteria to tumors (in mice).  The nanobots detected the most oxygen-depleted zones -- which indicate the most rapidly growing tumor cells -- and released the drugs there.

The researchers believe their approach will allow much more targeted chemotherapy, improving effectiveness while minimizing or even eliminating harmful side effects.  Moreover, they say,
"This innovative use of nanotransporters will have an impact not only on creating more advanced engineering concepts and original intervention methods, but it also throws the door wide open to the synthesis of new vehicles for therapeutic, imaging and diagnostic agents."
So far they've just done tests in mice; the Israeli researchers are hoping to test their approach with terminally ill cancer patients very soon.

Another set of researchers, in Switzerland, are working on yet another version of nanobots.  They are also trying to imitate bacteria to deliver drugs to targeted locations.  They layer nanoparticles in "biocompatible hydrogel," line up the nanoparticles via electromagnetic fields, solidify the hydrogel, and insert it into a fluid.  They can make the particles move using magnetic fields, and can change its shape using heat.  These allow for a wide range of movement and behaviors.

The Swiss researchers see the use of their nanobots not just in delivering drugs with great precision but also for clearing arteries.

Victoria Webster, a Ph.D. candidate in engineering at Case Western Reserve University, discussed some of her team's work in building what she called "biobots" -- robots powered by living cells.  They're using sea slugs as a platform, both because the slugs have evolved to survive in a wide range of environments and because we already know much about their neural networks, potentially making it easier to program their neurons to do desired tasks.  She cited targeted drug delivery, cleaning up clots, or strengthening weak blood vessels to prevent aneurysms.

These are only a few examples of how nanotechnology is progressing rapidly.  There are plenty of others.  So far GlaxoSmithKline is the only major pharma company known to be working on nanobot treatments (which it calls "bioelectronics"), but if the field pans out others will have to follow suit, or become buggy manufacturers in an automobile world.

Ray Kurzweil predicts that nanobots will be assisting our immune systems by the 2020s, and that by 2029 they will annually add a year to our life expectancy.  As he describes it, "we're starting to reprogram the outdated software of life...we're programming them [genes] away from disease, away from aging."  By the 2030s the nanobots will be in our brain, giving us an additional neocortex that will make us much smarter, although he admits, "but the truth is, we don't know what it will look like."

If Dr. Kurzweil and the myriad of researchers working in the field are right, within a generation the practice of medicine will start to be unrecognizably different.  Our current surgeries, prescription drugs, chemotherapies, radiation therapies, and other interventions will start to seem crude and, in some cases, as misguided as, say, bloodletting.

That's all great news, assuming everyone has access to and can equally afford the enhancements, but it could also vastly exacerbate differences between socioeconomic classes, as John Koetsier fears.  The technology is going to make huge changes not just in medicine and health care but in society more broadly.  Hopefully it will help us not just be smarter but also wiser about how we use our new capabilities.

Octobot doesn't seem quite so cuddly now, does it?

Sunday, August 21, 2016

Pardon Me, Your Interface Is Showing

In a great post, "Doctor as Designer" Joyce Lee laments the "sad state of product and design in healthcare," and asks "when will device and drug companies create user-centered innovations that actually improve the lives of patients instead of their bottom line?"

I heartily agree with Dr. Lee's point, and think the question can be extended to the rest of the health care system.

Dr. Lee uses two examples to compare health care to consumer goods.  Heinz took a product design -- the glass ketchup bottle -- that had been around for over a hundred years, and greatly improved the user experience by changing to a squeezable "upside down" bottle.  This not only kept the ketchup from concentrating at the bottom but also avoided the need to hold the bottle at a special angle or to tap at a particular spot just to get the ketchup out.

She contrasts this with the Epi-pen.  It is not only hard to use correctly, but its manufacturer has used the recalls of competitors' medications to jack up its price by several hundred percent (from $100 to over $600).  Dr. Lee notes that some consumers are simply buying their own epinephrine and needles to create a DIY version, for about $5, "which means that we are paying $600 for a hunk of badly designed plastic!"

Bad design for more money; it sure sounds like health care, doesn't it?

When I think about health care's lack of user-centered design, though, I think less about Epi-pens or medical devices and more about common patient interactions, like those in doctors' offices or hospitals.

Kaiser Health News recently published two articles on the experience of elderly patients in the hospital.  The first noted that, ironically, elderly patients often are admitted sick but leave disabled.  It is important -- but uncommon -- for hospitals to focus on how to get the patient back home living as independently as possible.  Bed rest, catheters, IVs, interrupted sleep, and unappetizing food all can work against that goal.

The second KHN article stressed the need to keep hospitalized elderly patients moving.  A 2009 study found that such patients spend 83% of their stay in bed, being out of bed a median of only 43 minutes per day.  One nurse warns patients, "the bed is not your friend."

Part of the problem is that hospitals are protecting themselves from lawsuits.  A geriatrician explained that families won't sue if their parent gets weaker while in the hospital, but may sue if he/she falls, so preventing falls trumps preparing patients to go home independently.

As another geriatrician noted: "The older you are, the worse the hospital is for you."  Still another physician likened current approaches to a "smart bomb."  "We blow away the disease," he said, "but we leave a lot of collateral damage."

If that isn't a good description of our "health care" system, I don't know what is.

Design matters.  KHN cites examples of hospitals that have created special units that pay more attention to helping patients be more mobile -- through changes in room design, assuring that walkers are widely available, and focused care processes.  It can be done.

Certainly hospitals are much different than a generation ago, with semi-private rooms on their way out (who ever thought that was a good idea in the first place?) and amenities like WiFi more common.  Hospitals are said to be borrowing from the hotel industry to improve patient experience, but this may be aimed more at marketing and revenue-enhancement opportunities than at improving patient care.

Still, I suspect that the next time a patient confuses a hospital for a hotel will be the first.

The health care system is recognizing that it needs to engage people differently.  Such engagement is seen as essential to getting them more involved in their health, especially in managing chronic conditions.  It is potentially big business, with the patient engagement market expected to grow from $7.4b in 2015 to $39b by 2024, according to Grand View Research.  

MobiHealthNews sees a big role for consumer health tech companies in this, particularly on the B2B side.  They cite numerous examples of alliances, acquisitions, and partnerships along those lines.  When it comes to improving patient experience, it asks, "What better place to turn than devices and apps that have already proven themselves engaging and delightful in the direct-to-consumer world?"

The problem may be that we're still not quite sure who the "customer" is.  According to Xerox, nearly 50% of consumers say they take "complete responsibility" for their health, but only 6% of health professionals think that is true.  Nearly 40% of providers and payors think consumers don't even know how to take charge of their health.

It's hard to design for a health care system when we don't even agree who is "in charge" of our health.

If you have to think about the interface, it's bad design, creating friction.  The health care system is full of this kind of friction.  Think of selecting a health plan, understanding health coverage, finding a provider, getting an appointment, waiting to receive care, or understanding a diagnosis and treatment options.  And don't get me started on EHRs.

Martin Legowiecki, writing in TechCrunch, thinks UI should be "invisible" and that AI is the way to get to that.  The world, in his view, should be as easy as walking into your favorite bar and having the bartender have your favorite drink ready as soon as you sit down.  As he says, "that's a lot of interaction, without any 'interaction.'"

Or, as he puts it more pithily, "the ultimate UI is no UI."

In an Internet of Things world, we could use normal language to talk to our environment, with the omnipresent AI able to understand and apply conceptual awareness to accommodate our needs.  Picture a hospital bed that not only warns you when you've been immobile too long but also "helps" you get up, or a doctor's office that pulls together all the necessary information on you before you even arrive.

Design starts with making something functional, and good design tries to make it easier to use, or at least more attractive.  Really good design doesn't make us think about how clever the designers are but, rather, allows us to forget that they did anything at all.

Health care could use some really good design.

Monday, August 15, 2016

Out With the Old...Wait, Not in Health Care

The last company still manufacturing VCRs has announced it is ceasing production.  VCRs had a good run; most households had one, but their time has passed.  Meanwhile, the stethoscope is celebrating its 200th birthday, and is still virtually the universal symbol for health care professionals.

There has got to be a moral in there somewhere.

VCRs revolutionized our TV viewing experience.  We could not only record television shows to watch at our own convenience, we could also fast forward through the commercials!  We could watch the movies we wanted, when we wanted, in the comfort of our own homes.  Video rental outlets popped up everywhere, from boutique neighborhood stores to wildly successful chains like Blockbuster.

Alas, technology moves along.  DVRs came along in the 1990's, especially TiVo.  Suddenly those VCRs seemed old-fashioned.  As broadband has become more common, streaming services are now threatening to render DVRs obsolete as well.  Blockbuster is gone, while Netflix has been nimble enough to remake itself primarily as a streaming service.  

VCRs are a classic example of how technology (usually) moves on.  Except in health care.

Like stethoscopes.  Digital advocate Dr. Eric Topol recently tweeted: "The stethoscope's 200th birthday should be its funeral."  Jagat Narula, a dean at the Icahn School of Medicine at Mt. Sinai, flatly says, "The stethoscope is dead.  The time for the stethoscope is gone."  

That's all well and good, but -- to paraphrase Mark Twain -- reports of its death are greatly exaggerated.

The well-known story is that René Laënnec invented the stethoscope to avoid putting his ear directly to a female patient's chest, a technique he considered improper.  Laënnec's crude tube was gradually improved upon over the years.  Stethoscopes became a de facto symbol of being a physician, along with white coats (which have their own baggage).  Google "physician" and almost all the resulting images show physicians with stethoscopes.

It's not like stethoscopes do all that good a job, or, perhaps, that physicians use them all that well.  A 2014 study found that participants only detected all tested sounds 69% of the time.  As the authors diplomatically concluded, "a clear opportunity for improving basic auscultation skills in our health care professionals continues to exist."

Similarly, a 1997 study found that: "Both internal medicine and family practice trainees had a disturbingly low identification rate for 12 important and commonly encountered cardiac events," while a 2006 study found that stethoscope skills did not improve after the third year of medical school and "...may decline after years in practice."  Whoops.

Oh, and stethoscopes also help carry germs.

And it's not like there aren't alternatives.  As one might expect in the 21st century, there are electronic/digital stethoscopes.  These allow for amplification of body sounds, and even for the transmission and recording of those sounds.   Their output can be converted into graphic representation and compared, either historically for the same patient or to established parameters.  

There are also handheld ultrasounds that provide another strong alternative.  Ultrasound has been a diagnostic tool for decades, but handheld units only became available in the late 1990's.  The question of whether they would make stethoscopes obsolete was soon being asked, and there is plenty of research supporting the assertion that they are as good or better than stethoscopes.   

And now, of course, there are smartphone apps for stethoscopes.  Apple was claiming 3 million doctors had downloaded its $0.99 stethoscope app as long ago as 2010, with Android versions also available.  HealthBud, a new device that uses smartphones, has research to back up its claim that it is at least as good as stethoscopes, and is seeking FDA approval.  Its developer claims, "This device is much less expensive to produce and offers a safer alternative to both traditional and disposable models without sacrificing sound quality." 

And yet stethoscopes hang in there.  

We might like to think that physicians continue to use traditional stethoscopes because they are simply being thrifty, since electronic stethoscopes and handheld ultrasounds are much more expensive, but that seems a reach.  They've certainly not been reluctant to adopt other types of newer, more expensive technology -- at least, not as long as they can charge more for it.

It is a conundrum that has bedeviled economists: why, in health care, does new technology almost always increase costs, unlike in most other industries?  DVRs, for example, were much better than VCRs, but quickly became comparably priced.  Professor Kentaro Toyama cites what he calls technology's Law of Amplification: "Technology's primary effect is to amplify, not necessarily to improve upon, underlying human inclinations."

And in health care, those underlying inclinations don't drive towards greater value.

When it comes to stethoscopes, it's not about the money.  Many physicians believe that the stethoscope helps foster the patient-physician relationship.  In a recent article in The Atlantic, Andrew Bomback admitted that, "Indeed, for many doctors (myself included), the stethoscope exam has become more ceremony than utility."  He cites the case of a colleague who borrowed a stethoscope -- even though it was only a low-end model -- before examining a patient, explaining, "Patients expect you to have one of these things."

Physician/engineer Elazer Edelman argues that a stethoscope exam can help to create a bond between patients and physicians.  He worries that technology may be fraying the "tether" between doctors and patients. Still, if the relationship depends on which device a physician uses to listen to our chest, that relationship is in bigger trouble than we think.

The stethoscope illustrates that health care can be anything but rational.  Its use -- like that of those white coats -- persists because both patients and physicians expect it.  It is a form of status worship.  Honestly, it's not dissimilar to the talismans that more primitive cultures expect from their medicine men, their shamans, their witch doctors.

Given what we know about the power of placebos, we may not be as different from those primitive societies as we like to think.  

So, R.I.P. VCRs, and thanks for the memories.  As for stethoscopes, and for health care more generally, though, maybe the moral is that we should focus less on status symbols and more on what is best for patients.