Tuesday, April 25, 2017

Clicks-and-Mortar: Health Care's Future

The woes of the retail industry are well known, and are usually blamed on the impact of the Internet.  Credit Suisse projects that 8,600 brick-and-mortar stores will close in 2017, which would beat the record set in 2008, at the height of the last recession.  There are "zombie malls," full of empty stores but not yet shuttered.

And then there's health care, where the retail business is booming.

In a recent Wall Street Journal article, Christopher Mims set forth "Three Hard Lessons the Internet Is Teaching Traditional Stores."  The lessons are:
  1. Data is King
  2. Personalization + Automation = Profits
  3. Legacy Tech Won't Cut It
It's easy to see how all those also apply to health care.

But health care is different, right?  Patients want to see their physician.  That physical touch, that personal interaction, is a key part of the process.  It's not something that can be replicated over a computer screen.  

Yeah, well, the retail industry has been through all that.  Retail once primarily meant local mom-and-pop stores.  They knew their customers and made choices on their behalf.  Customers had little say in the choice of products, nor much ability to compare prices.  But it was all very personal.

The 19th century saw innovations like mail order catalogs (e.g., Montgomery Ward, Sears Roebuck) and department stores (e.g., Macy's, Wanamaker's, Marshall Field's), while the 20th century added shopping malls.  Each helped distance consumers from their local merchants.  

Still, though, when Amazon came along, booksellers were adamant: no one wants to buy books sight unseen!  When that truism was proven false, other sectors of retail had their turn in the Internet spotlight, and the last twenty years of results haven't been pretty for them.  

It turns out that the personal touch isn't quite as important as retailers liked to think.

As for health care, it seems to be surviving the Internet onslaught pretty well.  WebMD alone gets more monthly visits than all the doctors in the U.S. combined, but any declines in doctor visits are more likely due to economic factors than to internet searches.  Telemedicine has been touted as one of the next great health care innovations, but research suggests that, while it may substitute for some in-person visits, it more than offsets that with new ones.  

Health care is following some historical retail trends.  Independent physician practices are quickly being purchased by health systems, which are becoming the department stores of health care -- down to their sprawling suburban campuses and their vertical integration of services.  Even independent practices increasingly rely on impersonal billing and practice management companies.  Everyone has computers, especially for handling the money, but few have really changed their processes to take full advantage of them.

It's very 1960s in the health care retail world.

As for that "personal touch," well, a Harvard study found that the average doctor visit takes 121 minutes of patients' time, only 15-20 minutes of which is actually spent with the doctor.  That's after being able to actually get an appointment, which can take weeks.  


That's not the kind of personal experience that consumers really want.

So why hasn't health care been more disrupted by the Internet?  Well, for one thing, when you buy a book online, your state doesn't require that you buy it from a bookstore that is licensed by its not-so-friendly licensing board, as is true with seeing doctors over the internet.  The state medical licensing boards are ostensibly there for our protection, but time and time again have acted as though protecting physicians' livelihoods is their main concern.

Strike one for disruption.

For another thing, we (usually) trust our doctors.  Then again, we used to trust recommendations from bookstore staff too.  That is, when they had time for us, if they seemed knowledgeable, and if they were making recommendations that fit us rather than just their own preferences.  AI-based recommendations from Amazon may not be as good as those from a really good bookstore employee, but are probably a lot better than those from the mediocre workers you were more likely to encounter.

Think the same thing won't happen when AI gets better at diagnoses?

Let's go back to Mr. Mims' three lessons and see how they apply to health care:

  • Data is King: Health care collects a lot of data, and will get even more with all the new sensors.  Not all of that data is meaningful, much less actionable.  Health care providers sometimes share your data, but not always with your consent and rarely to your direct benefit.  All of that will change.  For example, Google's Verily has started Project Baseline to do in-depth tracking of 10,000 volunteers.  Their motto: "We mapped the world.  Now let's map human health."  The big tech companies know their customers very well and tailor interactions accordingly; health care must as well.
  • Personalization + Automation = Profits: Mr. Mims cited Amazon Go as an example of how these two features could boost margins, and Information Age similarly described retail experiences based on more automation and better knowledge of customers.  Meanwhile, we're stuck in waiting rooms, filling out forms we've already filled out elsewhere.  That is not a personal experience that can survive in the 21st century.  It has to be smoother, faster, and frictionless.  
  • Legacy Tech Won't Cut It: EHRs that no one likes.  Claims systems that take weeks to process a claim.  Billing processes that produce bills no one can understand.  Records that are siloed when we want them shared, yet all too open to being hacked.  The list could go on almost indefinitely.  All too often, health care's tech is not ready for prime time.  
The question is, are health care's leaders learning these lessons?

The future of retail appears to be in "clicks-and-mortar" (or "bricks-and-clicks").  Amazon is opening physical stores, while Wal-Mart is beefing up its online credentials.  They and other retailers know that consumers want things fast, sometimes in person and sometimes not, that they like options, and that they always pay attention to cost.  It takes both an online and an in-person presence.

Health care can act like B. Dalton or Borders, assuming until it is too late that its customers will keep visiting in person because they always have.  Or it can act now and jump to the data-driven "clicks-and-mortar" approach that other retail businesses are moving toward.  

Health care organizations that get this right will be the ones to survive.  The rest are zombies, dead but not aware of it.

Tuesday, April 18, 2017

Think Bigger. Fail Often.

Alan Kay recently outlined some of the principles that he thought made Xerox's PARC so successful (if you don't know who Alan Kay is or why PARC was so special, you should try to find out).  One was: "'It's baseball,' not 'golf'...Not getting a hit is not failure but the overhead for getting hits."

That doesn't quite square with my impression of golf, but I take the point.  It's about the price of success.

As psychologist Dean Simonton pointed out in Origins of Genius: "The more successes there are, the more failures there are as well."  "Quality," he wrote, "is a probabilistic function of quantity."    

We talk a lot about innovation these days, especially "disruptive innovation."  Why not?  It sounds cool, it allows people to think they're on the cutting edge, and it often excites investors.  But perhaps we've lost sight of what it is supposed to actually be.  

Vuki Vujasinovic recently wrote in Forbes that: "Almost every use of the phrase ‘disruptive innovation’ as we see it today is wrong."  He cited several such examples, including Porsche and even Uber, and reminded us that the phrase doesn't just mean "change" or a new entrant in a market.  

Mr. Vujasinovic urged would-be innovators to be more precise with their claims, reserving "disruption" for true disruption.  Instead, he suggested: "Say you are doing something different, say you are changing the way something is done, but don’t say you’re disrupting something just because it’s a nice word you want people to repeat."

To go back to the baseball analogy, in health care these days we don't have a lot of home run hitters.  We have a lot of companies that are singles hitters -- or maybe are just trying to bunt.  There are too few people swinging for the fences.

And even fewer trying to invent a brand-new game, one better suited for the 21st century.

Some examples may help illustrate why.

Wired published an excerpt of Rutger Bregman's new book Utopia for Realists, in which he gives several examples (such as the 2008 financial crisis) where "cognitive dissonance" kept well-educated, intelligent people from seeing what should have been obvious problems.  We're so set in our ways that we keep going down the same track even as it should become increasingly obvious that it is a dead-end.  As a result:
When reality clashes with our deepest convictions, we’d rather recalibrate reality than amend our worldview. Not only that, we become even more rigid in our beliefs than before.
I.e., no one likes our health care system, and it demonstrably does not do a very good job -- at a very high cost -- but, hey, let's just tinker at the edges.

Mr. Bregman does believe that new ideas can change the world, but it may take some sudden shocks and persistent objectors to get people to change their mindsets.  Our trouble, he warns, is that "we inhabit a world of managers and technocrats," who focus on the problems and solutions at hand.

In other words, singles hitters.  Never fear, though, he reminds us: "Ideas, however outrageous, have changed the world, and they will again."

John Nosta similarly warns in Psychology Today that innovation is too often throttled by "the mushy middle," usually in the name of collaboration.  Innovation is not about collaboration and certainly not about consensus, because: "Innovation is not an intellectual average."

He asserts that we need those "high performers" and their sometimes outrageous ideas, instead of "having their fragile voice crushed by generic consensus."

Lastly, in Fast Company, a trio of researchers noted the subtle power of default choices.  How choices are presented -- like opt-in versus opt-out -- has a strong impact on decisions.  Organ donation is a classic example, where the percentage of people agreeing to being an organ donor is significantly higher (like 80% higher) when it is presented as the default choice.

Disclosure about the options doesn't help as much as we'd like to think, because, "Research shows that making an option the default leads people to focus on reasons to accept the default and reject the alternative first and foremost."

We may not always realize the default options we're being given, especially when confronting a highly complex, inter-dependent system like health care.  We think we're changing something, but usually we're only doing so within the default options the existing system gives us.

John Nosta also writes about how Apple and Google's recent forays into health care should be a wake-up call for the life science industry, "which oftentimes has relied on the snooze function of line extensions and extended-release drugs as the source of income and innovation."  Those two companies' "expectedly unexpected" innovation in this area is welcome, but even they may be too entrenched into the existing approaches to have truly disruptive impact.

So, all you would-be health care innovators: are you prepared to fail, lots of times, before you succeed?  Are your ideas truly disruptive, or simply twists on what we've been doing?  Is yours a bold vision of what could be, or is it of just slightly further down the road that we're already on?

For example, when I read about Better, an interesting start-up that seeks to help consumers with their health insurance claims -- fighting with insurers and providers to ensure consumers only pay what they should -- I have two conflicting thoughts:

  1. Consumers certainly need help like this;
  2. I wish they'd focused instead on making the underlying problem(s) go away.
I love telehealth.  I love digital health.  I love direct primary care.  I love having AI help doctors.  They -- and numerous other examples -- are all important developments that, arguably, will help make our health care system better.

But they are not disruptive innovations.  They are not swinging for the fences.

Our health care system is so inefficient and so wasteful that it's almost too easy for innovators to pick a problem and make it at least less bad.  It certainly needs that.  If that's all they are looking for (and to get their piece of the $3 trillion pie), well, you can hardly blame them.  

Me, though, I'm rooting for the innovators who are swinging big and are willing to miss a lot.  They're the ones who will eventually get us to the health care system of the future.

Tuesday, April 11, 2017

Losing the Doctor Lottery

Donna Jackson Nakazawa's insightful Health Affairs article "How to Win the Doctor Lottery" is, in turn, sad, frightening, wise, and hopeful.  She recounts some of her personal travails in finding the right doctors, the ones who will truly listen and become "a partner on my path to healing," and offers several suggestions about what has to happen for us to have more chance to "win."  

The real question, though, is not how to win the doctor lottery we find ourselves in, but why we're playing it at all.

Getting the right doctor is hard.  Consider the following:
  • It's easy enough to find out where a physician went to medical school and did their residency.  It's not as easy to know what the best medical schools or best teaching hospitals are, other than by reputations (that may or may not be deserved).  Maybe your doctor went to Harvard and did their residency at Johns Hopkins, but, otherwise, you may not be so sure about how good their training was.
  • Even if you did know how good their place of training was, you still wouldn't know how your doctor did there.  They might have been last in their class.  Even if you did find this out, you don't know if it is better to have done worse at a "better" school or well at a lesser school.  
  • In fact, it's not really clear that where one went to medical school or did their residency, or how well one did in those, has any measurable impact on actual competence as a physician.
  • Being board certified has become an accepted measure of basic competence in a specialty, but there is fierce debate between physicians and the specialty boards as to whether the process -- particularly the ongoing maintenance of certification (MOC) -- does anything of the sort.  
  • It would be good to know if a physician has had drug or alcohol impairment issues, has been charged with sexual improprieties with patients, or has a large number of malpractice suits, but don't expect to be able to find any of that out.  The medical licensing boards that should know aren't likely to tell you.   
  • There are many measures for "quality" when it comes to physicians, but none that are considered definitive, many of which are not meaningful to consumers, and all-too-few of which focus on what we should care most about: patient outcomes.   
  • Even data that should be readily quantifiable -- e.g., how many of these procedures has Dr. X done?  How many patients die under Dr. X's care?  How many patients with my diagnosis does Dr. X treat? -- is rarely actually discoverable.  
  • There are some patient satisfaction scores and physician ratings, but most of those are viewed dubiously, due to low reporting volumes and the likelihood of being skewed by non-clinical factors (like wait times or how quickly prescriptions were given).
  • When your physician recommends a treatment or a drug, you don't know if the physician is doing so because the latest research solidly demonstrates its efficacy.  The physician may be paid on the side by a drug/pharma company, may be influenced by the most recent drug rep visit, may not have kept current on the research, or may simply not accept it because it wasn't the way he/she was trained.  
  • As skilled as you may be at researching doctors, you may still find yourself in an emergency or other rapidly developing situation, in which you end up being treated by doctors you haven't had time to research and have never heard of. 
It's a wonder any of us ever find the "right" doctor.

Calling the choice of a physician a "lottery" may actually be unfair to lotteries.  At least lotteries disclose the odds of winning, low though they might be, and it is usually clear very quickly whether there is a winner and who it is.  

In health care, you may never really know if you've won or lost, or may only find out much too late to do anything about it.  You may have gone through unnecessary pain and suffering, you may have lost years of better health, or you may actually have died.  

And, of course, you'll get billed for everything all along the way.

The even sadder thing is that it's like this throughout health care.  It may be marginally better for hospitals, at least in terms of more available data, but the usefulness of even that is not entirely clear.  For other types of health care professionals or institutions, information is even less available than it is for physicians.

Similarly, data on efficacy of treatments, procedures, or drugs is highly variable, often not disclosed to or discussed with patients, and usually not easily understood by them.     


In the end, most of us select a doctor based on the recommendations of friends and family, or another doctor, all of which are likely to be subjective as well.  And if it is true that most of us have confidence in our current doctor, that may be because we've switched from doctors in whom we lacked it (a phenomenon that seems little tracked).

It's still a lottery.

Most people who play the lottery know their odds of winning are low, and aren't betting their financial future or their lives on it.  For most of us, most of the time, picking the right physician is not a life-or-death decision either.  But when it is, we'd all like the decision to be more than random luck.

It is like the scene in WarGames where the computer concludes that the only way to win is not to play.


We don't have a data-driven health care system.  We don't have a performance/outcomes-driven system.  We tolerate it because we usually don't realize it, because most of the time it doesn't impact most of us.  But those are poor excuses.  We can, and should, demand better.

Playing the lottery is not a sound financial strategy, and it shouldn't be our strategy for getting health care either.  

Wednesday, April 5, 2017

That Does Not Compute

OK, you use your smartphone all the time: you use the latest and greatest apps, you can text or tweet with the best of them, you have the knack for selfies, and so on.  You probably also have a computer, tablet, and a gaming system, each of which you are also very proficient with.  No question: you are a whiz with electronic devices.

But, if you're like most of us, you don't really know how or why they work.

Maybe that's OK.  Most of us don't know how our cars work either, couldn't explain how heavier-than-air flight is possible, have no idea what the periodic table means to our daily lives, and would be in trouble if our lives depended on us making, say, bricks or glass.

Still, though, as Captain Kirk once said (in a very different context), you have to know why things work.

We're not all going to become computer programmers.  Not everyone needs to major in computer science.  But we could all stand to have a better understanding of how and why computers -- or, at least, computer programs -- work.

Welcome to computational thinking.

The concept was introduced by Jeannette Wing in a seminal paper in 2006.  She suggested that it was a fundamental skill that should be considered akin to reading, writing, and arithmetic -- learning how to solve problems by "reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation."

She further clarified that it has the following characteristics:
  • Conceptualizing, not programming
  • Fundamental, not rote skill
  • A way that humans, not computers, think
  • Complements and combines mathematical and engineering thinking
  • Ideas, not artifacts
  • For everyone, everywhere.
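To make that "reduction" idea a bit more concrete, here is a tiny, purely illustrative sketch (in Python; the scheduling scenario and the function name are mine, not Dr. Wing's).  The question "do any two appointments overlap?" gets reformulated as a sorting problem, which we already know how to solve: sort by start time, then compare each appointment with the next one.

    # Illustrative only: reduce "do any two appointments overlap?" to sorting.
    def any_overlap(appointments):
        """appointments: list of (start, end) pairs, e.g., minutes since midnight."""
        ordered = sorted(appointments)                # sort by start time
        for (s1, e1), (s2, e2) in zip(ordered, ordered[1:]):
            if s2 < e1:                               # next one starts before this one ends
                return True
        return False

    # 9:00-9:30 overlaps 9:20-9:50, so this prints True.
    print(any_overlap([(540, 570), (560, 590), (600, 630)]))

The point isn't the code; it's the habit of recasting an unfamiliar problem as one with a known solution.
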
Dr. Wing believes we've come a long way since her manifesto, and she may be right.  For example, Microsoft sponsors Carnegie Mellon's Center for Computational Thinking, and Google offers Exploring Computational Thinking, "a curated collection of lesson plans, videos, and other resources on computational thinking (CT)."  It includes an online course for educators.

A new initiative, Ignite My Future, wants to train 20,000 teachers to help make computational thinking a fundamental skill, hoping to engage a million students over the next five years.  One of the last initiatives President Obama announced was the Computer Science for All Initiative, providing $4 billion to improve K-12 computer science education (how it survives the new Administration remains to be seen).

A recent New York Times article notes that, while the number of computer science majors has doubled since 2011, there is growing interest among non-CS majors in learning more about computer science: "Between 2005 and 2015, enrollment of non-majors in introductory, mid- and upper-level computer science courses grew by 177 percent, 251 percent and 143 percent, respectively."

There is now an Advanced Placement course, Computer Science Principles, that "introduces students to the foundational concepts of computer science and challenges them to explore how computing and technology can impact the world."


The Times also profiled a number of ways that "non-techies" can learn elements of computational thinking, because "Code, it seems, is the lingua franca of the modern economy." The options include CS+X initiatives in college, a number of intensive "boot camps," and an increasing number of online courses, such as through Coursera, edX, and Udacity.   

Sebastian Thrun, co-founder and chairman of Udacity, argues that this kind of thinking is important for everyone because: "It’s a people skill, getting your brain inside the computer, to think like it does and understand that it’s just a little device that can only do what it’s told."

Still, computational thinking is not a panacea; as Shriram Krishnamurthi, a computer science professor at Brown, warned The Times, in our current culture, "we are just overly intoxicated with computer science.”

One of my favorite approaches to demystifying programming, Raspberry Pi, has sold some 12.5 million of its ultra-cheap, ultra-simple computers, making it the third best selling "general purpose computer" ever.  

Do a search for Raspberry Pi and you'll find thousands of examples of things people are doing with them, from simple tasks that children can do to sophisticated hacks.  Heck, someone has made a LEGO Macintosh Classic using a Raspberry Pi.
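
For a sense of just how low the barrier to entry is, here is roughly the kind of first program a beginner might run on one -- a minimal sketch using the gpiozero library to blink an LED (the pin number and wiring here are my assumptions for illustration, not from any particular project):

    # Blink an LED -- a classic first Raspberry Pi project.
    # Assumes an LED (with a resistor) is wired to GPIO pin 17.
    from time import sleep
    from gpiozero import LED

    led = LED(17)
    while True:
        led.on()       # light the LED
        sleep(1)       # wait one second
        led.off()      # turn it off
        sleep(1)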

There's a new industry in toys that help teach children coding, as The New York Times also reported, including Cubetto (which, at $225, is a lot more expensive than a Raspberry Pi).  

All of which is to say, there are fewer and fewer reasons for people not to learn computational thinking.

And health care sure could use some more of it.

Health care likes to think of itself as a science, and it has many trappings of science, but even in the 21st century it remains much more of an art.  After all, this is the industry in which it was just reported that 20% of patients with serious conditions have been misdiagnosed -- in fact, most people are likely to experience at least one diagnostic error in their lifetime -- and in which we have an "epidemic" of unnecessary care.

It is an industry in which the technology often frustrates both the providers and the patients (e.g., EHRs and mammograms, respectively), and where design is confusing at best and harmful to patients at worst.  It is an industry in which the coding has gone beyond arcane to incomprehensible.

And it is an industry where there is surprisingly little data on efficacy, even less agreement about how to measure quality or value, and little training to help clinicians interpret or explain the data that does exist.  It is an industry that is bracing for its era of Big Data, and may not be at all ready.

So, yes, some computational thinking in health care certainly seems like it would be in order.