Sunday, September 14, 2014

Can We (Not) Talk?

I read an article on the so-called "Internet of Things" recently and was struck by the reporter's realization that "...what’s equally hard to escape is the fact that, for now at least, the new devices being rolled out are mainly designed to connect with other new devices made by the same company."

Boy, that sounds a lot like interoperability in health care, doesn't it?

EHR use has skyrocketed due to the HITECH incentive payments (more than $24b paid to date), but the end goal of "meaningful use," especially interoperability, remains elusive.  Market leader Epic has been a frequent target of criticism.  They've responded in the time-honored American manner: hiring a lobbyist, supposedly their first, to “educate members of Congress on the interoperability of Epic's healthcare information technology.”

Yeah, that's it; I'm sure it's lack of education that is the problem.

ONC has just released its final rule for 2014 EHR certification.  It makes the criteria "more flexible" and backs off a proposed set of voluntary 2015 certification criteria.  Some critics believe that, once again, ONC is giving in to pressure from the EHR industry, and it would be hard to argue otherwise.  Several other key MU deadlines had already been pushed back, with Stage 3 not starting until 2017.  HIMSS even opposes a full-year 2015 reporting requirement.

Stage 2 attestation rates remain low, and only 25% of ambulatory providers think they'll be able to meet Stage 3 requirements -- down from 33% a year earlier.

Yes, yes, I know there are promising developments like FHIR standards, but on the whole it's easy to be discouraged about the prospects for interoperability. 

In Forbes, John Graham called health care interoperability the "$30 billion dollar unicorn hunt," which I think is a wonderful line.  I'm beginning to think geneticists would have an easier time engineering an actual unicorn than ONC will have leading us to interoperability.

There is no shortage of opportunities for improvement in health care.  It's maddening that each provider seems to collect the same administrative information and personal/family history, it's wasteful that they often repeat tests, and it can be life-threatening that they don't communicate with each other better.

Yet we're still in our silos, even if now those are more likely to be electronic silos.
 
How did we get here?  The Boston Globe quoted UCSF professor Dr. Robert Wachter: “Computerization in health care was a market failure.  The idea that you would need a federal incentive program to get United Airlines to computerize or Walmart to computerize is laughable.”

It is laughable.  I've long wondered why it is that other industries had to painfully convert to digital records and information exchange on their own, while health care not only stubbornly resisted but actually expected federal handouts to convert.  And we not only fell for it, we allowed ONC to "certify" acceptable products.

Somehow I cannot believe that federal certification of any product is a path towards excellence. 

If you're old enough, you may remember ATM cards were initially bank- and ATM vendor-specific.  Now you can use an ATM pretty much anywhere in the world.  That didn't happen out of the goodness of the banks' or ATM vendors' hearts -- or from federal certification.  There's no reason the equivalent couldn't happen in health care.

There was a market failure with EHRs, but I suspect it was more like the well-known Betamax versus VHS market failure, where the experts loved Betamax's quality of recording but those pesky customers opted for VHS's longer recording time.  With EHRs, experts wanted interoperability, decision support, electronic prescribing and the like, but providers wanted something that would speed their patient throughput, which existing EHRs don't seem to do (a recent study estimated they added a startling 48 minutes to a clinician's day).

No wonder that more than 25% of ambulatory practices are looking to replace their EHR vendor, according to KLAS Research.

Back in July, The Boston Globe ran a provocative article on how we're spending $30b to spur EHR adoption, but have failed to require any error/adverse event reporting -- even when caused by the EHR.  As The Globe said:
But critics say the government’s hands-off approach is wrong. It’s as though, they say, jetliner pilots were flying in poorly designed cockpits with malfunctioning equipment, and repeatedly slamming into mountains, while the Federal Aviation Administration and the National Transportation Safety Board decided not to regulate — or even keep a list of the crashes and near misses.
Pressure from vendors was cited as a key reason for this "hands-off" attitude.  You have to wonder how many deaths or adverse events could have been avoided.

I'm much less interested in interoperability, or even EHRs per se, than in better reporting from providers -- not just errors and adverse events, but also provider performance and patient outcomes.  It's shameful that we not only haven't properly defined how we should evaluate these but, for the most part, still lack the means to collect the data.

If I were a provider, I can't imagine not wanting to have at my fingertips statistics about my patient population -- what I've seen, how I've treated, and what the outcomes were.  If I ran a practice or a hospital, I can't imagine not being able to get the same for the providers who work there.

Try doing any of that with paper records.  If that kind of detailed reporting were required, providers would be flocking to EHRs on their own dime.
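
To make that concrete, here's a minimal sketch in Python of the kind of panel-level rollup I have in mind.  The field names (diagnosis, treatment, outcome) are hypothetical placeholders; a real EHR export would use coded values.  The point is simply that, once encounters are structured data, this reporting is a few lines of work.

```python
# Minimal sketch: roll up outcomes by (diagnosis, treatment) across a
# provider's patient panel. Field names here are hypothetical; real data
# would use coded terminologies (e.g., ICD/SNOMED) rather than free text.
from collections import Counter, defaultdict

def summarize_encounters(encounters):
    """Count outcomes per (diagnosis, treatment) pair."""
    summary = defaultdict(Counter)
    for e in encounters:
        summary[(e["diagnosis"], e["treatment"])][e["outcome"]] += 1
    return summary

# Toy example: three encounters from one provider's panel.
encounters = [
    {"diagnosis": "hypertension", "treatment": "lisinopril", "outcome": "controlled"},
    {"diagnosis": "hypertension", "treatment": "lisinopril", "outcome": "uncontrolled"},
    {"diagnosis": "hypertension", "treatment": "amlodipine", "outcome": "controlled"},
]

for (dx, tx), outcomes in summarize_encounters(encounters).items():
    print(f"{dx} / {tx}: {dict(outcomes)}")
```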

I think much of the problem relates to how most EHRs are designed: the same platform serves as both the data layer and the presentation layer.  Tying the two together is a large part of what makes interoperability challenging, and it's what makes switching vendors difficult -- who wants to risk losing patient information?

I'm no information architect, but it makes much more sense to me to have a separate, patient-centric data structure that is agnostic about which platform it gets data from or provides data to.  Let vendors innovate on how they interface with the provider (and the consumer), but use standard APIs and data formats to transmit the data.  Some vendors are moving in this direction -- e.g., Zoeticx and my friends at Datuit -- but it's hardly the mainstream.

At least, not yet.
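
For concreteness, here's a rough sketch of what that separation could look like from a developer's perspective, assuming a FHIR-style REST interface.  The endpoint and patient ID below are hypothetical, and real servers add authentication and vary in which resources they support; the idea is just that any front end -- a vendor UI, a patient app, an analytics tool -- reads the same patient-centric data the same way.

```python
# Sketch of a presentation-agnostic data layer: clients fetch patient-centric
# resources over a standard REST API instead of reaching into a vendor's
# proprietary database. The base URL and ID are hypothetical.
import requests

FHIR_BASE = "https://fhir.example.org"  # hypothetical FHIR endpoint

def get_resource(resource_type, resource_id):
    """Fetch a single FHIR resource as JSON, independent of any UI."""
    resp = requests.get(
        f"{FHIR_BASE}/{resource_type}/{resource_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Any presentation layer could make the same call against the same data:
patient = get_resource("Patient", "12345")  # hypothetical patient ID
print(patient.get("name", "no name on record"))
```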

In retrospect, HITECH made a classic mistake, incenting the means instead of the desired behavior.  Want providers to collaborate better about patients?  Move faster to bundled payments, value-based purchasing, and ACOs, and let providers and their vendors figure out the means by which to do so.  Want to reduce duplicate tests and imaging?  Deny claims for the duplicate set, and watch how fast providers and their vendors scramble to reduce the duplication. 

Maybe, though, interoperability isn't the right goal.  After all, as Mr. Graham also said:
Nobody would expect The U.S. Department of Transportation to set up a fund to incentivize car-makers to exchange data with each other, or the U.S. Department of Agriculture to set up a fund to incentivize grocery stores to exchange data with each other.
In fact, the emerging paradigm outside health care seems to be not interoperability but rather Apple versus Android (sorry, Microsoft and BlackBerry!).  Consumers are increasingly ending up in one ecosystem or the other, but shouldn't expect information or apps to cross over.  That may or may not be a good thing, but consumers seem to accept it.

"Intraoperability" can be a competitive advantage, and perhaps that is enough.

We have to remember that interoperability is not a goal in itself.  It is only important if it leads to better patient care and/or improves the patient experience.  And it is not only OK but desirable if some vendors find ways to accomplish that better or sooner than others.

My fear is not that the $24b has been superfluous, but rather that it is actually stymieing the very innovation we're hoping for.
