Tuesday, February 21, 2017

The Good, the Bad, and the Ugly in Health Care

I hate being a patient.

I have to admit that, although I write about health care, I am typically what can be described as a care-avoider.  My exposure to the health care system has mostly been through my professional life or through the experiences of friends and family.  The last few days, though, I unexpectedly had an up-close-and-personal experience as a hospital inpatient.

I want to share some thoughts from that experience.

Now, granted, any perceptions I gained are those of one person, in one hospital, in one medium-sized Midwestern city.  Nonetheless, I offer what I consider the Good, the Bad, and the Ugly of the experience.

The Good:  The People

The various people involved in my care, from the most highly trained physician to the person who delivered meals, were great.

I loved my nurses; they fit all those great stereotypes people have about the profession.  Attentive, caring, cheerful, knowledgeable, hard-working -- the list goes on and on (full disclosure: I'm married to a nurse, so none of this came as a surprise).

I liked my doctors a lot.  Each of them spent literally hours with me -- answering my (many!) questions, discussing what they thought was going on with me, describing the various tests or procedures, developing care plans to fit me.  They were super-smart and a pleasure to talk with.

The aides, the lab techs, the imaging tech, the transportation specialists -- all of them doing jobs that I wouldn't be able to do -- were each friendly and helpful, taking pride in what they did and how it helped my care.

Whatever you might say about our health care system, you cannot say that it isn't filled with people who care about the patients in it.

The Bad: The Processes

On the other hand, many of our health care system's rules and processes truly do deserve a place on the list of criticisms about it.  They're like part of an arcane game no one really understands.
I'll offer three examples:

  • Check-in: I was literally on a table in a procedure lab -- still wondering how the hell I'd ended up there and not quite sure what was about to happen to me -- when I was asked to electronically sign several forms (Privacy Policy, Consent to Treat, Consent to Bill) that I could neither see nor question.  No court of law would call that informed consent, but that's what the process required before I could actually receive care.
  • NPO:  At one point it was thought that, on the following day, I might have a procedure, so I had to be NPO (no food or water allowed) for at least 4 hours -- but starting at midnight.  I pointed out that it was highly unlikely that they'd be doing anything at 4 am, and even mid-morning was unlikely since nothing was yet scheduled, but that was not persuasive.  As it turned out, I'd gone something like 16 hours NPO when they finally addressed my concerns -- by putting me on a saline IV.  I think they understood the physical problem but not the human one.  (It ended up I didn't have the procedure anyway.)
  • Discharge:  On my final day, the doctor told me around 1 pm that I was being discharged.  Around 3 pm his nurse practitioner told me she'd personally written the discharge orders.  Around 5:30 pm my nurse gave me all my discharge papers, but told me I had to wait for Transport to escort me out in a wheelchair (even though I was perfectly capable of walking).  Finally, around 6:30 pm my wife simply commandeered a wheelchair and we made a break for it.    
The rules and processes are all undoubtedly in place for good reasons, but we need to un-handcuff all those great people when rules and processes get in the way of better patient care.  

The Ugly: The Technology

Oh, health care technology.  It is as capable of delighting as it is of frustrating.  It is truly remarkable that the doctor could go up my arm to perform a procedure in my chest, just as the detail an MRI provides is simply astonishing.

On the other hand, those gowns...

Let's start with the perennial whipping boy, EHRs.  All of the staff used them, seemed to accept them, and even (grudgingly) acknowledged their value.  But no one liked them.  Even the youngest users, to whom technology is a given in their personal lives, were frustrated by the interface.  And, on many occasions, the EHR notwithstanding, people still had to drag in other electronic equipment or even paper in order to do their jobs.

EHRs could be better, should be better -- and better get better.

MRIs are a wonderful technology, but as I was lying in that claustrophobic tube getting imaged, I kept thinking: what the heck are all those clanging noises?  We can make stealth submarines, but we can't make an MRI that is quiet, so that anxious patients don't have more to worry about?

I was on various forms of monitoring devices, the smallest of which was the size of a 1980's cell phone and still required countless wires attached to numerous leads.  I kept looking at the set-up and wondering, hmm, have these people heard of Bluetooth?  Do they know about wearables?
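For what it's worth, the wireless part is not exotic.  Here is a minimal sketch -- Python, using the open-source bleak library -- of subscribing to the standard Bluetooth LE heart rate characteristic that consumer chest straps already expose.  The device address is hypothetical, and real bedside telemetry has reliability and regulatory requirements this toy ignores:

    import asyncio
    from bleak import BleakClient

    # Standard Bluetooth SIG Heart Rate Measurement characteristic UUID
    HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

    def on_heart_rate(sender, data: bytearray):
        # Per the Heart Rate Profile, byte 0 is a flags field; if bit 0 is
        # clear, the rate is a uint8 in byte 1, otherwise a uint16 follows.
        bpm = data[1] if not (data[0] & 0x01) else int.from_bytes(data[1:3], "little")
        print(f"Heart rate: {bpm} bpm")

    async def main(address: str):
        async with BleakClient(address) as client:
            await client.start_notify(HR_MEASUREMENT, on_heart_rate)
            await asyncio.sleep(30)  # stream readings for 30 seconds
            await client.stop_notify(HR_MEASUREMENT)

    asyncio.run(main("AA:BB:CC:DD:EE:FF"))  # hypothetical device address

No wires, no 1980's brick -- just a subscription to notifications.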

My favorite example of ugly technology, though, came when I had to fill out a form (which looked like it dated from the 1970's), so that it could be faxed to the appropriate department.  That's right, faxed.  To a department in the same institution, in the same building.  I couldn't fill it out online?  A paper form couldn't be scanned and sent securely to the other department?

I'd love to be the boss of the guy who has to request a new fax machine, just so I could look at him with my best "you've-got-to-be-kidding-me" expression.


No health care system is perfect.  Every system has its own version of the Good, the Bad, and the Ugly.  No one wants to have health problems, and no one wants to need to be in health care settings.  When we do, and when we have to be, though -- well, our system can do better.  Let's give all those great people working in health care a better chance to help us.

If any of the above strikes home for you, perhaps you'll Like/Recommend, tweet, or otherwise share with your circle.

Monday, February 13, 2017

Ask Better Questions

I've been thinking about questions.

A few things I read helped spur this.  The first was a blog post entitled "Asking the Wrong Questions" by Benedict Evans, of VC firm Andreessen Horowitz.  Mr. Evans looked at a couple of long-range technology forecasts (from as long ago as 1964 and as recently as 1990), and pointed out how they both managed to miss several key developments.  He attributed this to "this tendency to ask the wrong questions, or questions based on the wrong framework."

And we're still at it.  Mr. Evans, whose background is mobile technologies, said that people are now doing a lot of speculating about what comes "after mobile," such as AR and VR.  There are lots of good questions being asked, he noted, "But every time I think about these, I try to think what questions I'm not asking."

That, my friends, sounds like some pretty good advice, especially if you fancy yourself an innovator.

Then there was an interview with Warren Berger in Singularity Hub.  Mr. Berger labels himself a "questionologist" -- a line of work I wish I'd thought of! -- and wrote a 2014 book A More Beautiful Question.

You have to admire his ability to turn a phrase; I love the notion of a "beautiful" question.

Mr. Berger defined a beautiful question as "an ambitious yet actionable question that can shift the way we think about something and may serve as a catalyst for change."  As he further explained:

  • “Ambitious” because we have to ask bold questions to innovate
  • “Actionable” because big questions we can’t do anything about don’t lead to change.
  • Critically, the question has to cause a mental shift—it makes you step back and say, “Hmmm, that’s interesting. I hadn’t really thought about this question before, and I want to explore it further.”

He sees these kinds of questions as important not just for technological innovation, but even for basic questions like "what business am I in" (something, for example, the folks at Snap have recently been asking, with some surprising answers).  He further suggests organizations should turn mission statements into mission questions, to remind people to keep questioning.  And, of course, he urges that leaders foster a culture of inquiry, without demanding immediate answers to every question.

One of Mr. Berger's favorite examples is how the Polaroid instant camera came about because founder/CEO Edwin Land's three-year-old daughter asked why they had to wait to see the picture he'd just taken.  As Land later recounted, “I thought, ‘Why not? Why not design a picture that can be developed right away?’”

One could argue that Polaroid's downfall came because it stopped asking "beautiful" questions.

In 2015, Tom Pohlmann and Neethi Mary Thomas, of decision science firm Mu Sigma, wrote in Harvard Business Review that we need to "relearn the art of asking questions."  They claim that "proper questioning" has become a "lost art."  They lament that, while most of small children's conversations consist of questions, the opposite is true for adults, and blame this on educational and workplace environments that reward answers, not questions.

They categorize questions into a 2x2 matrix of four types: clarifying, adjoining, funneling, and elevating.

I suspect that too many questions in most organizations would be considered "clarifying," and not very many at all would be classified in any of the other three quadrants.  The authors agree with Mr. Berger that leaders need to encourage people to ask more questions, because: "In order to make the right decisions, people need to start asking the questions that really matter."

Let's turn this to health care.

Patient engagement is one of the hot topics in health care.  Improve patient engagement, the theory goes, and all sorts of good things will happen.  Patient compliance with instructions would improve, we'd do a better job managing chronic conditions, and patients would pay better ongoing attention to their health.  The questions being asked often revolve around how we can use technology to improve patient engagement.

Certainly technology could improve patient engagement, but let's start from a different point.  I would be willing to bet that all or almost all providers have mechanisms to track payments, have statistics on late payments to which they pay close attention, and have procedures in place to reach out to patients whose payments are considered late.

On the other hand, I'd similarly bet that very few providers have mechanisms to track patient status post-visit/procedure/prescription, other than perhaps a simple follow-up phone call.  If the patient doesn't contact them to complain, all is considered good.  As a result, there is no tracking mechanism, no statistics, no procedure to escalate anything if a patient's status is not progressing as expected.
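To make the contrast concrete, here is a minimal sketch of what even a crude post-visit tracking mechanism might look like -- no more sophisticated than the payment trackers providers already run.  Every field name, status, and threshold here is made up for illustration:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class FollowUp:
        """Tracks a patient's post-discharge check-ins (illustrative only)."""
        patient_id: str
        discharged: date
        checkins: list = field(default_factory=list)  # (date, status) pairs

        def record(self, when: date, status: str):
            self.checkins.append((when, status))

        def needs_escalation(self, today: date, max_silence_days: int = 3) -> bool:
            # Escalate if we've heard nothing for too long...
            last = max((d for d, _ in self.checkins), default=self.discharged)
            if (today - last).days > max_silence_days:
                return True
            # ...or if the most recent report was worse than expected.
            return bool(self.checkins) and self.checkins[-1][1] == "worse"

    f = FollowUp("pt-001", discharged=date(2017, 2, 20))
    f.record(date(2017, 2, 21), "as-expected")
    print(f.needs_escalation(today=date(2017, 2, 26)))  # True: five days of silence

Roll those flags up across patients and you have statistics; route them to a nurse and you have an escalation procedure.  None of it is technically hard.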

So we have a form of patient engagement already, but it is built around money, not patient well-being.  Putting in a patient portal or an app doesn't change that underlying focus.  It's addressing the wrong question.

Or let's look at the big question that confounds everyone -- why does the U.S. spend so much more than every other country on health care (and yet only has mediocre health results)?  Maybe we're simply counting the wrong things.

Most of us have, by now, probably seen a version of this chart, detailing the drivers of our health care spending.

Basically, medical care gets the lion's share of the money, but is in itself not a major driver of health.

We're nearly unique in this.  We invest in medical care, while most other developed countries spend more of their money on "social care" -- better housing, education, income support, etc.

The AHPA has charted the comparison.

When you combine OECD's health spending figures and their social expenditures, the gap in "health" spending between the U.S. and other developed countries narrows dramatically (we're still higher, most likely due to our insanely high prices).

We're not asking the right questions -- or even looking at the right problems.

I'll close with two applicable quotes:

Tuesday, February 7, 2017

Dr. Leopard, Your Spots Are Showing

Just a few years ago, things were looking up for the American health care system.  We were going to start finding better ways to pay for care: call it pay-for-performance (P4P), value-based purchasing (VBP), or similar terms.  We were going to nudge -- or, rather, push -- providers into more clinically integrated systems (e.g., ACOs) to help improve outcomes and to control costs.  And, of course, with wider use of electronic health records (EHR), we'd be able to better coordinate care and make decisions based on actual data.  It all sounded very promising.

Now, though -- what's that old expression about the leopard not being able to change its spots?

Let's start with EHRs.  As Dave Lareau of Medicomp Systems told Healthcare IT News, "the concrete has already been poured."  For better or for worse, we've got the widespread diffusion of EHRs that we were hoping for.  Unfortunately, it seems more for worse.

They're not considered user-friendly, interoperability is as much of a barrier as ever, and the sense is that they take away more from patient care than they bring to it.  Moreover, they're woefully unprepared for the flood of data that wearables and other mobile tracking devices are already starting to generate.

Mr. Lareau further noted that "their main purpose was for reimbursement -- to get it over to billing."  Jon Melling, of Pivot Point Consulting agreed: "As we move to value-based reimbursement, we have a variety of venues to select, including value-based care and fee-for-value, which are incompatible in the system."

Oh, yes, about those new payment mechanisms.

Harvard's Ashish Jha, MD, MPH, says that: "the evidence on P4P in general is largely mixed, and the evidence on Hospital Value-Based Purchasing (VBP), the national hospital P4P program, is discouraging."

According to Dr. Jha, VBP has had no positive effect on either mortality or patient experience, and this should come as no surprise.  He'd noted several years ago that successful P4P programs must have three design features:

  • incentives large enough to "motivate" investments in improving patient care;
  • focus on a small number of high-value measures to drive practice changes;
  • a simple design, so that participants can know how they are doing.
VBP failed all of these, in his opinion.  

Dr. Jha acknowledges that some critics believe P4P and VBP "fundamentally cannot succeed," because our quality measurement abilities are "woefully inadequate" and resulting performance measures are so flawed that it is easier to game them than to use them to actually improve care.  He is more sanguine, believing that these programs can work if designed properly, but admits that "none of these changes will be easy."

Meanwhile, professors Stephen B. Soumerai and Ross Koppel, writing in Vox, flatly assert that physician P4P "doesn't work."  People believe in it, they say, because Econ 101 would predict that performance will improve if we pay for outcomes, and because several studies claimed to show a positive impact -- studies they believe have "fatal flaws."

They cite studies that either don't take into account improvements that were already happening prior to P4P, or ones where there simply was no difference in performance between P4P and a control group (see figure below).  In their words, "when you single out the most rigorous systematic reviews, empirical support for pay for performance evaporates."

Effect of financial incentives to physicians, patients, or both on lipid levels.  JAMA 2015.

It's worse than that.  With these programs, they point out, we're adding some $15b in regulatory burdens on physicians alone, and may also be discouraging physicians from treating sicker patients, out of concern over how those patients might affect their statistics.

Not exactly what we were hoping for.

Like Dr. Jha, Dr. Soumerai and Dr. Koppel aren't entirely discouraged, having faith that providers simply want concrete information -- based on better research about the reasons for poor performance -- that will help improve care.  They cite the ever-quotable Uwe Reinhardt:
The idea that everyone’s professionalism and everyone’s good will has to be bought with tips is bizarre. 
I don't think I've heard P4P ever called "tips" before, but that's not far wrong.

Then there are ACOs.  Their number has skyrocketed since the passage of the ACA, to nearly 1,000 nationwide (Muhlestein & McClellan 2016).  Whether they've been effective in controlling costs or improving quality is less clear; at best the jury is still out, at worst the answer has been no.

What we have seen, though, is a provider consolidation spree in recent years, with no end in sight.  The argument for it is that such consolidation is necessary for the kind of clinical integration that ACOs and P4P require.  This is despite the fact that such consolidation has not delivered lower costs or better quality; if anything, costs have increased with it.

As it turns out, though, the consolidation bears little relationship to ACO penetration or physician participation in them, according to research by Neprash et al.  The post-ACA consolidation simply continues previous trends, although it may now be "defensive consolidation in response to new payment models."

Which, it would seem, may not really work anyway.  

So, it would seem, our health care system can't quite seem to change its spots.  It's taken every reform we've thrown at it -- every new delivery approach, payment mechanism, regulatory oversight, new competitors -- and come out virtually unscathed.  Costs keep going up, unnecessary care continues to be delivered, and thousands of lives are damaged or lost that didn't need to be.

You can't blame the health system, or the people in it.  For the most part, everyone in it is just doing what they think is their job.  It's not going to change, not on its own.  Why would it?

Maybe it is us who have to change our spots.


Tuesday, January 31, 2017

Failure to Communicate

Quick: turn on the TV (no, streaming doesn't count!).  You won't have to wait too long before an ad for some prescription drug comes on.  Watch long enough and pretty soon you'll suspect that you have a variety of conditions that you may have never realized before and need to do something about immediately.  Fortunately for you, of course, the pharmaceutical industry has solutions for you.  It's all there in those ads.

Whether we really understand them or not is another question.

Direct-to-consumer (DTC) ads for prescription drugs are booming.  After a brief respite during the most recent recession, they're back up, with spending estimated at some $5.2b in 2015 (amazingly, the DTC ads are less than 20% of pharma's overall marketing budget, with the majority of that going to face-to-face "educational" efforts with physicians).

DTC ads have been controversial since they were first allowed in the 1980's (although broadcast ads didn't really take off until 1997, due to more relaxed regulations).  Indeed, New Zealand is the only other country that allows them, and the AMA has called for a ban on them in the U.S.  There's a concern that, well, the ads work -- the pharmaceutical companies persuade consumers to want their drugs, whether or not they are more effective or cheaper than existing options.

New research from market intelligence firm InCrowd helps illustrate the problem.   They surveyed physicians about DTC ads, and found that they get three times as many questions about them as they did five years ago (although I was shocked that they report only six such questions a week).

Unfortunately, 65% say they do not believe their patients understood the information in the ads.  Only 13% of physicians thought that most of their patients could understand/interpret the ads, 43% thought that maybe at least some of them could, and the rest thought that few or none of their patients could.

Equally important, physicians do not believe the ads help.  In fact, almost half felt that the ads actually impair patients' understanding of their conditions or treatments.

Not surprisingly, 35% of the physicians would ban DTC ads entirely, while 31% want to at least provide additional patient information, and 17% call for simplifying the message.  

As evidence of the lack of understanding, a study in Annals of Family Medicine found "substantial discordance between patient and physician evaluations of drug adherence and drug importance."  Nearly 20% of drugs considered important by physicians were not taken correctly by patients, and nearly half of that non-adherence was intentional.

I guess the ads weren't quite enough.

This is serious stuff.  Harvard professor Donald Light points out that:
  • New prescription drugs have a 1 in 5 chance of causing serious reactions even after they get FDA approval;
  • Reactions from "properly" prescribed prescription drugs (e.g., for intended uses) cause some 1.9 million hospitalizations annually, with another 840,000 hospitalizations coming from other adverse drug reactions.  
  • There are some 81 million adverse reactions suffered annually by the 170 million Americans taking prescription drugs.
  • 128,000 people die annually from reactions to prescription drugs, tying it for the 4th highest cause of death.
  • At most only 15% of new drugs offer significant clinical advantages over existing drugs.
You don't really get any of that from those (fine print) side effect warnings, do you?

Let's be fair about the problem, though.  We can't put the blame on the pharmaceutical companies, at least not entirely.  We just don't understand our health care generally.  The Institute of Medicine estimates that nearly half of adults have trouble understanding what their doctor is telling them about their conditions and treatments.  Most patients discharged from the hospital don't understand their discharge instructions.  

The National Assessment of Adult Literacy found only 12% of us have "proficient" health literacy.  Low health literacy is associated with a host of health woes, including less use of preventive services, more chronic conditions, lower health status, and higher spending.  

No wonder we want to take action when we see those DTC ads, even if that may just compound our problem.

Part of the literacy problem is a societal one.  According to the Literacy Project Foundation, 50% of us can't read a book at an eighth grade level (which may help explain why 44% of adults have not read a book in the last year).  Forty-five million Americans are functionally illiterate and read below a 5th grade level.  Blame it on the schools, blame it on the parents, blame it on our culture, but wherever the blame lies, it makes communicating any complex issue difficult.

But much of the health literacy problem is health care specific.  Every industry has its own jargon, but few of them use the jargon with the consumers to whom they sell.   Names for drugs, conditions, treatments, even insurance features -- these are not ones that are easy for consumers to understand, remember, and make decisions about.  Health care is designed around health care professionals dealing with other health care professionals, not consumers, and its language reflects that.

In the unforgettable words of the sadistic warden in Cool Hand Luke: "What we've got here is failure to communicate."

Moreover, health care is notoriously imprecise -- try asking your doctor for the effectiveness statistics of a proposed treatment or prescription.  The statistics may not exist at all, may be contradicted by other statistics, or your physician may not know them or be able to communicate them to you.  It's hard to make good decisions with bad facts.
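One concrete statistic worth asking for is the number needed to treat (NNT): how many patients must receive a treatment for one additional patient to benefit.  It falls straight out of the absolute risk reduction.  A toy calculation, with event rates invented purely for illustration:

    # Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
    # The event rates below are made up for illustration.
    control_event_rate = 0.12   # 12% of untreated patients have the bad outcome
    treated_event_rate = 0.09   # 9% of treated patients do

    arr = control_event_rate - treated_event_rate   # absolute risk reduction
    nnt = 1 / arr
    print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")      # ARR = 3%, NNT = 33

An ad that said "33 people have to take this drug for one to benefit" would communicate rather more than a montage of people flying kites.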

So, yes, DTC prescription drug ads may be confusing, even misleading, but, honestly, I have more trouble with how pharmaceutical companies try to influence physicians than with how they try to influence us.

I'll own my bad decisions if I at least get unbiased advice, and with many aspects of health care I'm not always sure I do, given hidden financial incentives (e.g., prescription drug rebates, or payments by pharma/medical device companies to physicians) and the that's-how-we-do-it-here syndrome.

We can do better.  Start with simpler language -- not talking to us like we're dumb but talking to us like it is important we actually understand -- and back it up with facts instead of marketing promises.  
That's an ad I would watch, or advice I would take.

Tuesday, January 24, 2017

Living in a Retro Health Care System

Living in the 21st century is cool, right?  We've got smartphones, ultra-thin tablets, the Internet, wearables, Uber, self-driving cars, virtual/augmented reality (VR/AR), drones, digital currency, and all the TV/movies/music you could want available for streaming anytime, anywhere.  It makes Back to the Future II's 2015 look drab by comparison (except maybe for the hoverboards!).   

So why does it seem like so many people are entranced with the 1980's?

Take, for example, the resurgence of vinyl.  Vinyl was replaced by cassettes in the 1980's, which were superseded by CDs in the 1990's, which fell to digital music in the 2000's.  But not so fast.  Vinyl is back, set to become a billion dollar industry (again).  

Sales of vinyl in the U.S. rose some 26% in 2016, and brought in more revenue than YouTube Music, VEVO, SoundCloud, and Free Spotify combined.  It actually outsold digital music in the UK.

There's even an annual Record Store Day to help celebrate vinyl's resurgence.

It's not just vinyl.  People are falling in love with cassette tapes again.  Their sales rose 74% in 2016, although the number of units was still well short of vinyl or CDs.  Indie bands like them as a cheap means of exposure, although major labels are exploiting them too, such as the release of the Guardians of the Galaxy soundtrack cassette (the movie featured a main character toting around a beloved mix tape on his Walkman -- I kid you not -- as he battles evildoers throughout the universe).

There's a Cassette Store Day too.  

People are even inventing new ways to listen to old formats.  The Verge reports on Love, "the first intelligent turntable," which is supposed to bridge old-tech with new tech, as well as Rokblok.  

Retro isn't confined to music.  One of the hottest Christmas presents was the Nintendo NES Classic, a modern, miniaturized update of the original console, complete with 30 (mostly) vintage games.  Released in November 2016, it sold out almost immediately, and continues to have supply issues.  The president of Nintendo of America told Wired that they had assumed it would appeal to 30 to 40 year-olds who had played it as children, but its appeal proved much broader (some see intentional reasons behind the scarcity, but that's another story).


Hey, we've got the Today show doing a 1970s retro show, the NFL going crazy with throwback uniforms, and the predicted reemergence of flip phones.  People even want retro computers, for gaming and for pure nostalgia.  And if Snap's Spectacles aren't intended as a cool mixture of retro and modern, I don't know what is.

If any industry would keep its eye relentlessly on the future, you might expect it would be health care.  Better understanding of causes of diseases and underlying risk factors, more and better treatment options, slick new technology like wearables and nanotechnology.  Few of us would want to go back to what health care was like in the 1980's, and none of us would accept the health care of the 1950's (except maybe those house calls).  

No, in health care we expect the kind of futuristic -- or, at least, modern -- experience that tech-based start-ups like Oscar Health, Zoom Health, and the newest of all, Forward, are promising.  They all want to offer an "Apple Store" experience for health care (although, I don't know about you, but I usually end up waiting a lot in Apple Stores).

If health care went retro, why, we'd usually make appointments to see our doctors in their offices instead of seeing them on-demand 24/7 (as two-thirds of us say we want), wait long periods in their bland waiting rooms, fill out lots of paperwork, have our white-coated doctor listen to us with their stethoscope, have lots of unnecessary or even harmful tests and procedures, even have our information sent by fax.  No one would want to go back to all that.

Oh, wait -- that is our health care system, for the most part.  It hasn't gone retro because we haven't yet moved past retro.  

Get this: fax machines remain the predominant form of communication in health care, with fax volume hitting new records.  That's not retro, that is insanity.  

Get this: physicians hate their EHRs so much that they are cited as a leading reason for physician burnout, and in their frustration with them physicians are turning to medical scribes to do the inputting.  

Get this: after seeing a consumer revolt in the 1990's against managed care's capitation, small provider networks, and restrictive medical management, they're all back in vogue, in one form or another.

I get retro.  I'm a Baby Boomer, after all, and very partial to the music, movies, television shows, cars, and other cultural aspects of the era in which I grew up.  We all are nostalgic about things from our formative years.   

But I do not want to get care in a retro health care system.  

EHRs are a perfect example of how we took something that should revolutionize health care and turned it into something that no one is happy with, and that many feel often impedes care -- to the point that some want to go back to paper records.  That's not retro, that's just stupid.  We didn't do the wrong thing with EHRs, we just are doing it wrong.

As I've written before, we should be thinking big and bold about how we want our health care system to work in the 21st century.  We should be setting tough goals for how effectively it works for us -- and expecting to achieve them.  We should be looking forward, not backward.

We have all the technology we need to make our health care experience, well, if not like magic, then certainly more like 21st century health care should seem.  Let's get there first -- then maybe we can think about how we can do some cute retro to it.

Tuesday, January 17, 2017

A Little Knowledge Could Be a Dangerous Thing

One day soon, we'll have real-time or near real-time information about our health.  Not just how we are at the moment but also whether and for what we are at risk.  I'm fairly certain about this.

I'm less certain that this will be necessarily a good thing.

The topic has received a lot of press recently due to a study by Stanford researchers (Snyder et al.) in PLOS Biology, which concluded that:
these results indicate that the information provided by wearable sensors is physiologically meaningful and actionable. Wearable sensors are likely to play an important role in managing health.

The study collected nearly 2 billion measurements (!) on 60 participants, who wore up to 7 tracking devices.  It focused especially on identifying early signs of Lyme disease and inflammation, and risk for Type 2 diabetes, but the authors expect that its implications will go much further.

Dr. Snyder told Scientific American: "Too much of the time we spend time measuring people when they’re sick.  What we really want to understand is what does it mean to define a healthy state, then quickly identify deviations from that state."  The trouble will be that we won't always know what those deviations mean to our health.  It will take a long time to figure out what our baseline is, and which deviations mean what.
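The basic arithmetic of "define a healthy state, then identify deviations" is simple enough; the hard part is knowing what the deviations mean.  Here is a minimal sketch of the idea, flagging readings that sit several standard deviations from a personal rolling baseline -- the window size and threshold are arbitrary assumptions, not clinical values:

    from statistics import mean, stdev

    def flag_deviations(readings, window=50, threshold=3.0):
        # Flag readings more than `threshold` standard deviations away from
        # a baseline built from the trailing `window` readings.
        flags = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
                flags.append(i)
        return flags

    # e.g., resting heart rate: a stretch of normal readings, then a spike
    hr = [62, 64, 61, 63, 65] * 10 + [91]
    print(flag_deviations(hr))  # -> [50], the anomalous final reading

Whether that flagged reading is Lyme disease, a hard workout, or a flaky sensor is exactly the question the data alone can't answer.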

Dr. Snyder further noted, "We have more sensors on our cars than we have on human beings," a situation he believes soon will change, as the current wave of mainly activity trackers evolve to more directly track health measures.

The reference to sensors in cars is valid.  A Reuters article profiled how insurers are betting big on sensors.  Thirty percent of North American auto insurers are using sensors in cars to track driving behavior of their insureds in order to more accurately price their policies, and this is expected to grow to 70% by 2020.  Health care and health insurance will quickly follow.
 
Tracking isn't just limited to wearables.  A new urine test can determine within five minutes how healthy your diet is, and an MIT-backed start-up is developing a "smart toilet device" that can measure, in its first iteration, glucose and hydration levels.

A new breathalyzer claims to be able to diagnose 17 diseases with one breath, including several types of cancer and kidney disease.  The researchers hope to incorporate the technology into smartphones.

That would just add to the ever-growing capabilities of smartphones.  Just within the past few weeks there have been announcements about them tracking heartbeats, diagnosing malaria, diagnosing and managing respiratory diseases, identifying genetic conditions, even sequencing DNA.  

There seems to be no foreseeable limit on what we will be able to track and even diagnose with the various ubiquitous technologies being developed.  We'll need AI to sift through all the data that will be generated about us, and to synthesize it into actionable information.  If we think EHR alert fatigue is an issue now, just imagine what it will be like when we have billions or trillions more data points on our health, and more of that information is directed towards us, not just to our physicians.

We just may not want to believe everything they tell us.

For example, a new study found that a third of patients who had been diagnosed by a physician as having asthma did not, in fact, have it.  The lead author diplomatically cautioned that: "It's impossible to say how many of these patients were originally misdiagnosed with asthma, and how many have asthma that is no longer active," but it is sobering that the "gold standard" of a physician diagnosis can be that fallible.  Why then should we believe a wearable or smartphone?

Of course, one could argue that wearables or other sensors would have picked up the diagnosis sooner and/or more definitively, and could better have determined when it was no longer active.

The problem is that a lot of what is considered the state-of-the-art in medical beliefs is subject to change.  Aaron Carroll recently urged that we view such beliefs with a "healthy skepticism."  He detailed several examples of where what we "knew" to be true turned out to be, well, not so much.

As he pointed out, "Sometimes it’s hard to separate what’s truly a medical certainty from what is merely solid scientific conjecture."  Sometimes even those certainties turn out to be not quite so certain, and sometimes "solid scientific conjectures" prove neither solid nor scientific (e.g., the appendix is important after all).

All this is going to make it hard for us to turn all that data we're going to be collecting into meaningful advice.

The Mayo Clinic recently published The Promise and Perils of Precision Medicine, warning that so-called precision medicine, based on genetic testing, may not be all that precise.  It can lead to misdiagnosis, as well as unnecessary or even harmful treatment.  As the article concluded: "Although the technological advances in genetic sequencing have been exponential, our ability to interpret the results has not kept pace."

For "genetic sequencing," we could equally substitute a host of other new types of data.  For example, full body imaging was supposed to provide peace of mind, catching cancer and other issues sooner, but is more likely to result in unnecessary tests and procedures than in helping.

Our ability to be more precise does not mean we'll always be more accurate.

We will track more about our health.  The data will eventually tell us more about our health than we know now.  In the meantime, though, we're going to have to take what it says with a rather large grain of salt, rather than always rushing into action.  That will not be easy.

If tracking can help teach us to listen better to our body and to take appropriate action only when necessary, that's great.  If we end up relying on it to manage our health, though, then we've taken one more step away from our health, and from ourselves.

What I hope most is that all that data are training wheels rather than crutches.

Wednesday, January 11, 2017

At Least I'm Virtually Healthy

Virtual reality (VR) is hot.  It was one of the headliners at this year's CES (as it was at last year's...).  One report predicted that VR "will change everyday human experience in the coming decade," just as smartphones have in the past decade.   We're not just talking about much, much more immersive games, although that industry has been an early adopter.  Every industry is going to have to figure out how to best make use of it (and its cousin, augmented reality).

Including health care.

What interests me most is whether VR proves to be a path towards improving our health, or if it will end up making us care even less about it.

VR is already making inroads in health care.  One of the ways that VR is being used is to help people manage pain, whether that is for people undergoing painful procedures, people with chronic pain such as amputees' "phantom limb" pain, even women in childbirth.

The theory is that the brain can only absorb so much information at a time, and the VR experience can essentially crowd out the information stream that is carrying the pain signals.  You are still hurt, your body is still sending out pain signals, but, if the VR is done right, those pain signals are just getting lower priority.  Our brain would rather be in VR.

VR has also been proposed as a powerful tool in addressing a variety of mental health issues, including stress, anxiety disorders, or PTSD.  As with pain relief, some of the traditional alternatives include a variety of pharmaceutical remedies, some of which can carry risks of addiction, so VR can be a boon.

People are using VR for their health too, not just their health care.  Many people find exercise boring, especially extended sessions on treadmills, exercise bikes, or ellipticals.  For several years, many gyms have allowed users to pretend they were elsewhere while they exercised, tying activity on the exercise machines to images of more scenic locales playing out on flat screen TVs in front of them.

VR takes this to the next level.  Instead of essentially watching images on television, the VR is almost as if you are actually there.  VirZoom, for example, claims to straddle esports and exercise, as its exercise bike connects to a number of leading VR headsets.  Users can compete with other players as they work out.  Fitbit has already partnered with them.  

There is no shortage of other entrants trying to make VR part of fitness efforts.  For example, Blue Goji and Holofit take approaches similar to VirZoom's, while Black Box VR offers a virtual gym, complete with a virtual personal trainer.

The VR fitness program I want to see, though, would help remind people why they should try to develop better health habits.  Many people have gotten used to their current health status, even if that status includes excess weight, a poor cardiovascular system, and weakening muscles.  We often slide from good health to fair health to poor health without fully realizing it, and that can be a pit that is hard to climb out of.  Watching TV is easy, junk food tastes good, while exercise is hard and eating better requires some discipline.  So many don't make the effort.

What if VR not only took us to other places, but also helped show us how we could feel?  Want to see how your body would look and feel if you walked a mile a day and lost ten pounds?  If you ran 20 miles a week and lost 30 pounds?  Actually experiencing the fruits of your efforts before you undertook them, in order to better understand the effort/reward trade-offs, might serve as a powerful motivator for those who have had a hard time making those trade-offs.

VR could similarly help people make more informed decisions about proposed treatments that can have both positive and negative trade-offs, such as knee or hip replacements.

The better VR becomes, though, the greater the danger that, well, the VR version of us might be preferable to the "real" us.  People have gotten used to the concept of avatars in games, and invest a lot of emotional energy into what that avatar is and how they can "improve" it.  Our avatar in VR may increasingly be us, only a new-and-improved us.  Once robots have taken over our jobs and the government pays us a universal basic income (as Elon Musk and others have suggested will both happen), there may be less reason to be in reality and all-the-more reason to spend our time in VR.

We're already worried about the impact of excessive screen time on the health habits of teens, and that is without VR as a common option.  Per capita time spent playing games continues to steadily increase -- again, without VR.  Think of the time we'll soon be spending in VR.

Once VR is ubiquitous, inexpensive, and as nearly lifelike as we can perceive -- all of which are in our near future -- why wouldn't we want to be in VR?  

It sounds a little like The Matrix, except that we might be voluntarily making the choice to live in VR instead of having it imposed upon us by our AI overlords.  We might like to think we're Neo, the hero of our own lives, but many of us might opt to be like Cypher, who found reality bland, difficult, and dangerous, and chose a virtual steak over helping his flesh-and-blood fellow humans.

We're barely scratching the surface of what VR is and what it can do.  VR headsets are clunky and expensive, and still have limited options for what they let us experience.  They're like early PCs or early smartphones.  Not many in 1987 imagined what their PCs would be able to do in 2017, and not many in 2007 saw all that smartphones of 2017 would offer.  The gap between VR of today and VR of 2027 will be wider than the gap between the first iPhone and today's iPhone 7.

Virtual reality is going to do wonderful, amazing things.  It will change how we play games, how we do business, how we socialize, how we get health care -- in short, how we live our lives.  The question is, will it help us live better, more productive lives -- or will it become our lives?