Friday, September 26, 2014

Dying May Be Killing Us

The IOM got a lot of press recently with its report Dying in America -- which I'll get to shortly -- but I thought the more provocative end-of-life discussion was Ezekiel Emanuel's Why I Hope to Die at 75, published in The Atlantic.

Dr. Emanuel, whose eventual obituary will no doubt cite his role as one of the key architects of the Affordable Care Act, has a lot to say about death and, more importantly, dying.  He says he won't actively end his life at 75, but at that point he won't take active measures to prolong it either: "I will stop getting any regular preventive tests, screenings, or interventions.  I will accept only palliative -- not curative -- treatments if I am suffering pain or other disability."  No cancer treatments, no heart valve replacements, no pacemakers, not even any flu shots or antibiotics.

Pretty bold claim.

Dr. Emanuel notes that the hope for longer lifespans has been for more years of good health, but he cites various statistics to the contrary.  Instead, there are more years of functional loss, disability, or disease.  He quotes USC researcher Eileen Crimmins as saying that over the past 50 years, health care hasn't so much slowed the aging process as it has slowed the dying process, which is very scary.

He admits that many people over 75 remain active, productive, even creative -- but asserts that even they are no longer what they once were.  In his own case, he doesn't want family and friends' memories of him to be colored by years of an older, sicker, more feeble version.  

I'm only a couple years older than Dr. Emanuel, and I have a lot of sympathy for his position.  I'm not what I was at 25, and it's not encouraging to think about 75.  Then again, I'm not what I was ten years ago either, and I'm not quite ready to sign off yet.  Perhaps the most surprising thing to me about aging has been not the limitations one encounters but the acceptance of them.  That may be wisdom or that may be rationalization.

We'll have to see how Dr. Emanuel feels once he actually reaches 75 and has a medical need.

It's no surprise why this is an issue.  Medicare devotes 32% of its overall spending to patients with chronic illnesses in their last two years of life.  Most of that spending -- 28% of the overall total -- happens in the last six months.  This wouldn't be so bad, except that there is a strong feeling that all that spending isn't delivering what patients would actually prefer.  As David Walker, co-chair of the IOM panel, told The New York Times: "The current system is geared towards doing more, more, more, and that system by definition is not necessarily consistent with what patients want, and is also more costly."

I think "more, more, more" is a pretty good description of our health -- should I say "medical"? -- system, at all ages.

I encourage you to read the IOM report, but I'll take the liberty of summarizing some of its key findings:
  • Providers are neither properly trained for nor comfortable with end-of-life discussions, nor oriented towards a palliative-first approach in those situations.
  • Patients and their loved ones generally do a poor job of preparing for end-of-life situations, or of making sure patient preferences are known and observed.
  • The health care system and its payment approaches are poorly structured for the family-oriented, social support palliative approach that many patients would prefer.
It's no wonder that too often our loved ones end up spending their last days getting futile care from strangers in large institutions, when what they want is to die at home with dignity.  We'd rather pay hospitals to operate a Gamma Knife or a surgical robot than pay home aides to help people stay in their homes.

The New York Times recently profiled one woman's struggle to let her father die in peace at home, fighting against the various forces that sought to put him in a nursing home or hospital.  One hospice physician summed up the absurdity of our system by asking: “Why can I get a $100,000 drug but I can’t get supper?”

Clearly, something is wrong.

In a companion piece to Dr. Emanuel's, The Atlantic also featured a piece by Gregg Easterbrook, What Happens When We All Live to 100  (evidently Easterbrook isn't expecting many people to adopt Dr. Emanuel's plan).  The article focuses on increasing life expectancy, and the implications of an aging population.  One point I found most fascinating is the belief of some scientists that many chronic diseases are not just associated with aging but may, in fact, be caused by it.

As one researcher I know puts it, many medical problems are really "aging early."

Easterbrook warns of the fiscal implications of increasing lifespans for Social Security, retirement plans, and the overall economy, with the real wild card being health care costs.  If those years of disability don't shrink, health care costs will make today's spending problems look tame.  As Easterbrook says: "Absent progress against aging, the number of people with Alzheimer’s could treble by 2050, with society paying as much for Alzheimer’s care as for the current defense budget."  Ouch!

So when I read about "breakthrough" cancer drugs like Merck's Keytruda, which costs $12,500 per patient per month but only extends survival by a few months, I think more about Dr. Emanuel's point of view.  Cures for cancer or Alzheimer's?  Great, that's worth spending money on.  Treatments that simply prolong dying, with more months of side effects?  I'm not so keen on that.

Kaiser CEO Bernard Tyson spoke recently at Health 2.0's Fall Conference, and one of his predictions was for "lifelong, holistic care."  He believes "...there is no question that the healthcare system is going to evolve from its current state of a ‘fix me’ system, to its future state as a total health system."  In particular, he cited the amount of spending done in the last few months of life, and expects that we'll find ways to "move and shift resources toward maximizing the healthy life years of individuals.”

Kaiser must have a different kind of health plan contract than I'm used to.

Few health plans can assume they will keep a member for more than a year at a time; the societal desire for members to be able to periodically "vote with their feet" outweighs plans' ability to make longer-term investments in their health.  This problem was made very explicit recently with the Hepatitis C drug Sovaldi, as I wrote about in The New War on Drugs.  Payors are reluctant to pay the huge upfront costs even though the drug almost certainly will return the investment over time... unfortunately, over a long enough period that another health plan may reap the benefit.

Dr. Emanuel likes to make bold-but-long-range predictions (e.g., that employment-based coverage will die by 2025), so I'm not holding my breath to see whether he really lives up to his vow.  I've seen a lot of criticism of Dr. Emanuel's position, but I think much of it misses the point.  It's less about aging than about value, or values.

The beauty of his position is that it doesn't matter what he ends up doing, and there's nothing magic about 75.  We can each make up our own minds about how we want to be treated -- do we always want "more, more, more," or will we insist on a more thoughtful approach to costs and benefits, at every age?
 
We talk a lot about being patient-centered, empowering patients, consumer-directed, and similar terms, but we as patients are not doing our part as long as we continue to let care be done to us instead of for us.

Sunday, September 21, 2014

Put Your Money Where Your Scalpel Is

We need to take value-based purchasing to the next level.


I propose taking value-based purchasing from the payor-provider contractual backroom and putting it in the health plan benefit design, where consumers directly see and are impacted by it.

One of the most troubling things about our health care system is the lack of accountability.  Providers get paid pretty much regardless of how patients actually fare under their care, and often even if demonstrable errors are committed (unless they happen to lose in the almost random lottery that is our medical malpractice system).

Patients don't get a pass when it comes to blame either.  They often don't take good care of themselves, they don't always follow instructions, and they sometimes opt for high-risk and/or unproven procedures with limited chance of success.  I blame this in large part on television, where heroic physicians battle insurance companies and their own administrators to save their dying patients, almost always with success.  Call this the CPR distortion.

I probably wouldn't get much argument about the lack of accountability; just in the past couple days, for example, we've learned that 28% of hospital physicians order unnecessary tests and procedures, and that the vast majority of surgeries for ovarian tumors may be unnecessary.  Those are just the tip of the iceberg, and that iceberg is sinking us.

    The mantra to combat all this is "value-based purchasing," a phrase whose meaning, like beauty, is largely in the eye of the beholder.  In theory, it involves adding performance-based financial incentives to payment arrangements, and it may also include bundled payments, shared savings programs, pay-for-performance, or even penalties.


    Frankly, I think none of these go far enough, nor do they adequately involve the patients.

    I want to accomplish a few things with my proposed plan design approach.  First, I want to more directly relate provider payment to patient outcome -- not in the aggregate, as many incentive programs try to do, but at the individual patient level.  Second, I want to reduce how much other health plan subscribers have to subsidize care that is of little benefit.  And third, I want to stop rewarding providers for care that has little or no positive impact.

    The following chart outlines how these might be accomplished (assume the "base" plan design was 80/20):

                                   Percent of Allowable Charges     Estimated
                                   Insurer   Patient   Provider     Prevalence
    Condition much improved          100        25         0           50%
    Condition a little better         80        20         0           25%
    Condition no better               60        15         0           10%
    Condition a little worse          40        10         0           10%
    Condition much worse               0         0      -100            5%
                                                                      100%

    Total weighted costs              80        20        -5

    In other words, a surgical procedure whose allowable charges were $10,000 would pay the provider $12,500 (125%) if things went really well for the patient, only $7,500 (75%) if the patient was no better after it -- and the provider would actually owe the patient $10,000 if he/she ended up much worse after the surgery.  Providers would not be able to balance bill patients for any of the reductions.

    If I've done my math right, with the assumed prevalence rates shown above, the payouts are revenue neutral for payors (weighted cost of 80) and patients (weighted cost of 20), prior to the provider payback.  So it ends up costing 5% less overall, which doesn't bother me, because the reduction is all coming from surgeries where the patient ended up much worse.  Ideally, this approach would result in fewer of those surgeries being done at all.
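
    For anyone who wants to check the arithmetic, here is a minimal sketch in Python that recomputes the weighted costs from the chart above; the payment percentages and prevalences are the same illustrative assumptions, not real data.

```python
# Illustrative payout schedule from the chart above: for each outcome,
# the share of allowable charges borne by insurer / patient / provider,
# plus the assumed prevalence of that outcome.
schedule = [
    # (outcome,                   insurer, patient, provider, prevalence)
    ("Condition much improved",     100,     25,       0,      0.50),
    ("Condition a little better",    80,     20,       0,      0.25),
    ("Condition no better",          60,     15,       0,      0.10),
    ("Condition a little worse",     40,     10,       0,      0.10),
    ("Condition much worse",          0,      0,    -100,      0.05),
]

def weighted_costs(rows):
    """Prevalence-weighted share of allowable charges for each party."""
    insurer = sum(ins * prev for _, ins, _, _, prev in rows)
    patient = sum(pat * prev for _, _, pat, _, prev in rows)
    provider = sum(prov * prev for _, _, _, prov, prev in rows)
    return insurer, patient, provider

ins, pat, prov = weighted_costs(schedule)
# Expect 80 / 20 / -5: revenue neutral for payors and patients before
# the provider payback, and 5% cheaper overall.
print(f"Insurer {ins:.0f}, Patient {pat:.0f}, Provider {prov:.0f}")

# Worked example for a $10,000 surgery that goes very well:
allowable = 10_000
print("Provider receives:", allowable * (100 + 25) / 100)  # 12500.0
```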

    I'd initially wanted to also incorporate what the patient's expectations were prior to the surgery versus the actual outcome.  E.g., I'm more sympathetic when they were told to expect to be much better, yet ended up no better, than I am when they were told upfront not to expect any improvement yet still had the surgery.  In the latter case, it's hard to see why the health plan subscribers should subsidize the (low-value) choice.  However, I decided that building in the pre-surgery expectation would make an already complicated approach even more complex.  Perhaps later...

    Much depends, of course, on those prevalences.  I don't really know what they are.  More to the point, I don't think anyone does, and it's scary that we don't have good stats on what outcomes patients should expect.  Sure, there's lots of general or anecdotal information, but little, if any, actual data.

    That's all too typical for health care.

    Health plans and providers who want to test this approach would probably want to do at least a year of data collection so they can fine-tune the final payment levels for the different stages, based on the measured prevalences.  I think we might be surprised by what we'd learn.

    Some readers may be wondering how we'd collect the outcomes data.  After all, physician-specific patient satisfaction has been notoriously expensive to obtain.  That was in the old world.  All those providers are supposed to have EHRs, and part of Meaningful Use is getting patients to access their patient portals (although less than a third of patients report having been offered such access, according to ONC).  Patients should report their outcomes in the EHR.

    There is good evidence that direct engagement by physicians can boost patient use of portals, and I can't think of anything that would give physicians more incentive to do so than directly tying their payments to such use.  Tracking outcomes at periodic intervals following a surgery seems like a practice that responsible surgeons should want to follow in any event, and I have no doubt that, once the data is in the EHR, physicians and health plans could figure out how to transmit it.
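
    As a thought experiment, the patient-reported outcome itself could be captured as just a handful of fields in the portal at set intervals after the procedure.  Here is a minimal sketch; the field names, codes, and intervals are my own assumptions, not any existing standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Outcome(Enum):
    # The five outcome bands from the benefit-design chart above.
    MUCH_IMPROVED = "much improved"
    LITTLE_BETTER = "a little better"
    NO_BETTER = "no better"
    LITTLE_WORSE = "a little worse"
    MUCH_WORSE = "much worse"

@dataclass
class OutcomeReport:
    """One patient-reported outcome, entered through the patient portal."""
    patient_id: str       # identifier within the EHR (hypothetical)
    procedure_code: str   # whatever coding the plan and provider agree on
    surgery_date: date
    report_date: date     # e.g., 30, 90, or 180 days post-op
    outcome: Outcome      # the patient's own assessment
    notes: str = ""       # optional free-text detail

# Example: a report filed 90 days after a hypothetical procedure.
report = OutcomeReport(
    patient_id="patient-123",
    procedure_code="knee-arthroscopy",
    surgery_date=date(2014, 6, 10),
    report_date=date(2014, 9, 10),
    outcome=Outcome.LITTLE_BETTER,
)
print(report.outcome.value)
```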

    Ideally, I'd like to see this approach applied not just to the surgeon's fees, but to bundled payments including the hospital/facility and any ancillary providers.  The more providers who have a direct financial stake in the actual outcome, the better.

    Bigger brains than mine would have to determine for which surgeries this approach would make the most sense, at what duration the outcomes should be measured (probably type-of-surgery-specific), and exactly what the patient would be reporting.  All of those seem eminently solvable.  And I don't see any reason why it would need to be limited to surgeries; those bigger brains could also think about which other procedures and medical treatments this approach could be applied to.

    Providers may complain that predicting outcomes is impossible, that patients always vary in their results.  That may be true, but measuring is better than not measuring, and expecting to be paid the same for patients who end up worse as for patients who are actually healed is simply indefensible.  We should be paying for results, not effort.

    After reading the damning exposé about gaming out-of-network charges in The New York Times, it's hard not to conclude that it's high time we ended our outdated provider network approach and started focusing on more transparent approaches that encourage patients to shop and that reward performance.  I think this benefit design approach is one way to accomplish that.

    What we need is a surgical practice and/or health system that has enough confidence in its outcomes to bet on them, and a health plan (or self-funded employer plan) that is willing to take not just the financial risk but also the risk of figuring out how to communicate the approach to members.

    The question is -- is anyone bold enough to try?

    Sunday, September 14, 2014

    Can We (Not) Talk?

    I read an article on the so-called "Internet of Things" recently and was struck by the reporter's realization that "...what’s equally hard to escape is the fact that, for now at least, the new devices being rolled out are mainly designed to connect with other new devices made by the same company."

    Boy, that sounds a lot like interoperability in health care, doesn't it?

    EHR use has skyrocketed due to the HITECH incentive payments (more than $24b paid to date), but the end goal of "meaningful use," especially interoperability, remains elusive.  Market leader Epic has been a frequent target of criticism.  They've responded in the time-honored American manner: they've hired a lobbyist, supposedly their first, in order to “educate members of Congress on the interoperability of Epic's healthcare information technology.”

    Yeah, that's it; I'm sure it's lack of education that is the problem.

    ONC has just released its final rule for 2014 EHR certification.  It makes the criteria "more flexible," and backs off a proposed set of 2015 voluntary certification criteria.  Some critics believe that, once again, ONC is giving in to pressure from the EHR industry, and it would be hard to argue otherwise.  Several other key MU deadlines had already been pushed back, with the start of Stage 3 delayed until 2017.  HIMSS even opposes a full-year 2015 reporting requirement.

    Stage 2 attestation rates remain low, and only 25% of ambulatory providers think they'll be able to meet stage 3 requirements -- down from 33% a year earlier.

    Yes, yes, I know there are promising developments like FHIR standards, but on the whole it's easy to be discouraged about the prospects for interoperability. 

    In Forbes, John Graham called health care interoperability the "$30 billion dollar unicorn hunt," which I think is a wonderful line.  I'm beginning to think geneticists would have an easier time engineering an actual unicorn than ONC leading us to interoperability.

    There is no shortage of opportunities in health care.  It's maddening that each provider seems to collect the same administrative information and personal/family history, it's wasteful that they often repeat tests, and it can be life-threatening that they don't communicate with each other better.

    Yet we're still in our silos, even if now those are more likely to be electronic silos.
     
    How did we get here?  The Boston Globe quoted UCSF professor Dr. Robert Wachter: “Computerization in health care was a market failure.  The idea that you would need a federal incentive program to get United Airlines to computerize or Walmart to computerize is laughable.”

    It is laughable.  I've long wondered why it is that other industries had to painfully convert to digital records and information exchange on their own, while health care not only stubbornly resisted but actually expected federal handouts to convert.  And we not only fell for it, we allowed ONC to "certify" acceptable products.

    Somehow I cannot believe that federal certification of any product is a path towards excellence. 

    If you're old enough, you may remember ATM cards were initially bank- and ATM vendor-specific.  Now you can use an ATM pretty much anywhere in the world.  That didn't happen out of the goodness of the banks' or ATM vendors' hearts -- or from federal certification.  There's no reason the equivalent couldn't happen in health care.

    There was a market failure with EHRs, but I suspect it was more like the well-known Betamax versus VHS market failure, where the experts loved Betamax's quality of recording but those pesky customers opted for VHS's longer recording time.  With EHRs, experts wanted interoperability, decision support, electronic prescribing and the like, but providers wanted something that would speed their patient throughput, which existing EHRs don't seem to do (a recent study estimated they added a startling 48 minutes to a clinician's day).

    No wonder that more than 25% of ambulatory practices are looking to replace their EHR vendor, according to KLAS research.

    Back in July, The Boston Globe ran a provocative article on how we're spending $30b to spur EHR adoption, but have failed to require any error/adverse event reporting -- even when caused by the EHR.  As The Globe said:
    But critics say the government’s hands-off approach is wrong. It’s as though, they say, jetliner pilots were flying in poorly designed cockpits with malfunctioning equipment, and repeatedly slamming into mountains, while the Federal Aviation Administration and the National Transportation Safety Board decided not to regulate — or even keep a list of the crashes and near misses.
    Pressure from vendors was cited as a key reason for this "hands-off" attitude.  You have to wonder how many deaths or adverse events could have been avoided.

    I'm much less interested in interoperability or even EHRs per se than in better reporting from providers -- not just errors and adverse events, but also provider performance and patient outcomes.  It's shameful that we not only haven't defined how we should evaluate these, but for the most part still lack the means to collect the data.

    If I were a provider, I can't imagine not wanting to have at my fingertips statistics about my patient population -- what I've seen, how I've treated, and what the outcomes were.  If I ran a practice or a hospital, I can't imagine not being able to get these for the providers who work there.

    Try doing any of that with paper records.  If that kind of detailed reporting was required, providers would be flocking to EHRs on their own dime.

    I think much of the problem relates to how most EHRs are designed: the same platform serves as both the data layer and the presentation layer.  Tying those together is a large part of what makes interoperability challenging, and it makes switching vendors difficult -- who wants to risk losing patient information?

    I'm no information architect, but it makes much more sense to me to have a separate, patient-centric data structure that is agnostic as to which platform it is getting data from or providing data to.  Let vendors innovate on how they interface with the provider (and the consumer), but use open standards and APIs to transmit the data.  There are some vendors moving in this direction -- e.g., Zoeticx and my friends at Datuit -- but it's hardly the mainstream.
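
    To make that separation concrete, here is a rough sketch of the kind of split I have in mind: a vendor-neutral, patient-centric data layer with any number of competing presentation layers on top.  Every name and method below is illustrative; it is not Zoeticx's, Datuit's, or any other vendor's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class PatientRecordStore(ABC):
    """Hypothetical vendor-neutral, patient-centric data layer.

    The store doesn't care which EHR, portal, or app is reading or
    writing; it only speaks in terms of patients and observations.
    """

    @abstractmethod
    def get_observations(self, patient_id: str, kind: str) -> List[Dict[str, Any]]:
        ...

    @abstractmethod
    def add_observation(self, patient_id: str, kind: str, payload: Dict[str, Any]) -> None:
        ...

class VendorFrontEnd:
    """One of many possible presentation layers over the same data."""

    def __init__(self, store: PatientRecordStore):
        # The front end competes on user experience, not on data lock-in.
        self.store = store

    def show_blood_pressure_history(self, patient_id: str) -> None:
        for obs in self.store.get_observations(patient_id, "blood_pressure"):
            print(obs.get("date"), obs.get("systolic"), "/", obs.get("diastolic"))
```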

    At least, not yet.

    In retrospect, HITECH made a classic mistake, incenting the means instead of the desired behavior.  Want providers to collaborate better about patients?  Move faster to bundled payments, value-based purchasing, and ACOs, and let providers and their vendors figure out the means by which to do so.  Want to reduce duplicate tests and imaging?  Deny claims for the duplicate set, and watch how fast providers and their vendors scramble to reduce the duplication. 
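
    Here's how crude that second incentive could be and still change behavior: a payor-side rule that flags a claim as a duplicate whenever the same test shows up for the same patient within a lookback window.  The window length and the idea of keying on patient plus procedure code are purely my assumptions, for illustration.

```python
from datetime import date, timedelta

def find_duplicate_claims(claims, window_days=30):
    """Flag claims that repeat the same test for the same patient within
    `window_days` of an earlier claim (an illustrative rule, nothing more).

    `claims` is an iterable of (patient_id, procedure_code, service_date)
    tuples, assumed to be sorted by service_date.
    """
    last_seen = {}    # (patient_id, procedure_code) -> most recent service date
    duplicates = []
    for patient_id, procedure_code, service_date in claims:
        key = (patient_id, procedure_code)
        prior = last_seen.get(key)
        if prior is not None and service_date - prior <= timedelta(days=window_days):
            duplicates.append((patient_id, procedure_code, service_date))
        last_seen[key] = service_date
    return duplicates

# Example: the second chest X-ray, ten days after the first, gets flagged.
claims = [
    ("patient-1", "chest-xray", date(2014, 9, 1)),
    ("patient-1", "chest-xray", date(2014, 9, 11)),
]
print(find_duplicate_claims(claims))
```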

    Maybe, though, interoperability isn't the right goal.  After all, as Mr. Graham also said:
    Nobody would expect The U.S. Department of Transportation to set up a fund to incentivize car-makers to exchange data with each other, or the U.S. Department of Agriculture to set up a fund to incentivize grocery stores to exchange data with each other.
    In fact, the emerging paradigm outside health care seems to be not interoperability but rather Apple versus Android (sorry, Microsoft and Blackberry!).  Consumers are increasingly ending up in one of the ecosystems, but don't expect information or apps to cross over.  That may or may not be a good thing, but consumers seem to accept it.

    "Intraoperability" can be a competitive advantage, and perhaps that is enough.

    We have to remember that interoperability is not a goal in itself.  It is only important if it leads to better patient care and/or helps improve the patient experience.  And it is not only OK but desirable if some vendors find ways to accomplish that better or sooner than others.

    My fear is not that the $24b has been superfluous, but rather that it actually has the effect of stymieing the very innovation we're hoping for.

    Sunday, September 7, 2014

    I Hate Apps

    Please don't infer from the title that I am some sort of Luddite, just another old geezer who hates newfangled technology.  Quite the contrary; I love what apps can do.  I just hate how they're doing it.

    I was astonished recently to count 71 apps on my phone (almost half of which came preloaded).  It takes 4 screens just to scroll through them all.  I don't like to think about how many permissions for how much of my data I've had to give to the various developers, ostensibly for my benefit but somehow, I suspect, at least as much for theirs.  It worries me.

    Two years ago Nielsen reported that the average smartphone user had downloaded 41 apps.  Just recently, they found that we spend over 30 hours each month using, on average, 27 apps.  New research from comScore suggests that mobile now accounts for 60% of our digital time, and app use is 52% of all digital time. Still, maybe we've reached our limit for apps; 65% haven't downloaded a new app in the past month.

    Any way you look at it, that's a lot of apps and a lot of time spent on apps.

    Think about how you'd feel if, instead of using your favorite word processing program, you had to use one program to write memos, another to write letters, a third to create faxes, a fourth to address envelopes, and so on.  Those programs would have to be pretty spectacular to offset the all-in-one convenience of Word and its competitors.  Apps are the mobile equivalent of those individual programs.

    Let's put this in the context of health.  Take a relatively simple case: a man with a small weight problem and Type 2 diabetes, both of which he is trying to manage through diet and exercise.  He tracks those using apps, plus his glucose levels and his (related) high blood pressure.  Oh, and it happens he has bad allergies, especially a wicked peanut allergy, so he uses apps to monitor allergen levels and which foods/restaurants are safe.  His doctor has an EHR with an app, plus he uses an app to track his appointments.  He also has the app for his doctor's health system, a health content app, and his health plan's app.

    So that's 11 apps just to track his health. The poor guy is just trying to do one thing -- take care of his health -- and it takes 11 apps to do it.  And this is not a particularly complex example.

    Health care knows all about information being siloed.  The app experience is perpetuating the silos and, arguably, adding more.

    The thing is, while managing health is an important part of anyone's life, it isn't anyone's entire life.  Most people have families, a job, their finances, and hobbies/interests, and there are lots of apps for all of those.  That's how we end up with dozens of apps.

    I fear the problem with apps will get worse due to the increasing array of mobile devices.  Apps used to be just for smartphones; then tablets came along, which usually require a second version.  Now we have "phablets" that sit between smartphones and tablets, and "2-in-1" laptops that are a combination of tablet and laptop.  More versions of apps will be needed.  People have to remember which apps they have on which device and hope that they have access to the one they want when they need it.

    Personally, I wonder why it is that I can access Evernote or Yelp, for example, just fine on my laptop, but I need an app to get the best experience from them on my phone or tablet.  After all, I don't have 71 website icons on my computer screen, nor 71 unique programs downloaded, to correspond to those 71 apps on my phone.  The browser works just fine.

    Why don't websites optimize for the mobile experience automatically, rather than requiring apps?

    The greatest thing about apps is how they have spurred innovation by lowering the bar to entry.  Everyone from teenagers working in their bedrooms to seasoned developers at the biggest companies is writing apps.  The range of problems being addressed, the unique approaches used to solve them, and the array of user experiences being offered are all wonderful to see.  Maybe if we used the app approach for word processing we wouldn't have to wait for Microsoft to incrementally improve Word, nor have one program dominate the market so much.

    The innovation is highly desirable, but it comes at a cost.  There are over a million apps in both the iTunes App Store and the Google Play Store -- and apparently almost none of the developers are making any money.  It is estimated that 2% of developers take in 50% of all app revenue, and 47% make virtually no money from their apps.  There are just too many apps to choose from.

    If I were diagnosed with some new health condition, I'd rather spend my time researching the condition and its treatment than researching apps to help me with the condition.  I'd like to have apps to do that, mind you; it's just that I'm going to have other things to worry about when it actually comes time that I might need one.  It's a dilemma.

    I've written before (Take One App and Text Me in the Morning) that I suspect apps are a transitional phase.  I wasn't smart enough to predict the emergence of apps, so I won't pretend to know what will take their place, but I'll offer a few thoughts on what I think the post-app future might look like:
    •  Platform Independent: As differences between devices get less distinct and as the Internet of Everything takes off, the notion of device-specific apps just can't hold up.   Users will want to be able to access what they want whenever they want, rather than needing a particular device.  Indeed, I'd argue that in twenty years the concept of our own device will have become old-fashioned.  The service -- i.e., website or app -- should figure out what UI to present using the best available device option.
    • Context-driven: Device-specific apps are really just a crude first step.  What we really want to get to is something that recognizes not only what device is being used but also the context in which it is being used.  I.e., reading alone is different than watching videos in a crowded room and different still from showing photos to a small group.  The UI should vary based on what the user is doing, where, and with whom.
    • Integrated:  Tracking health with 11 apps, as in my example above, may be no better, and possibly worse, than not tracking at all.  This is the dilemma of Big Data: what to do with all the information?  We will either have to have single-purpose apps add functions, or use aggregator apps (like Apple's HealthKit hopes to be) to pull things together; a rough sketch of that kind of aggregation follows this list.  Or -- and this is my hope -- we will develop some AI intermediary that helps us make sense of all the various data.  Apple's Siri, Google Now, and Microsoft's Cortana are all early versions of this approach.  The more we recognize how inter-related health is with other aspects of one's life -- e.g., family & friends' health habits, job stress, environment -- the more this kind of broad-based, smart approach will be necessary.
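
    To make the "integrated" idea a bit more concrete, here is a minimal sketch of an aggregator pulling several single-purpose apps' data into one timeline; every app name and reading is made up for illustration.

```python
def aggregate_readings(sources):
    """Merge readings from many single-purpose apps into one timeline.

    `sources` maps an app name to a list of (timestamp, metric, value)
    tuples; the result is a single time-ordered list, so one view (or
    one AI intermediary) can reason across all of it.
    """
    timeline = []
    for app_name, readings in sources.items():
        for timestamp, metric, value in readings:
            timeline.append({"time": timestamp, "app": app_name,
                             "metric": metric, "value": value})
    return sorted(timeline, key=lambda entry: entry["time"])

# Made-up data from three of the hypothetical apps in the earlier example:
sources = {
    "glucose_tracker": [("2014-09-21T07:30", "glucose_mg_dl", 112)],
    "bp_monitor":      [("2014-09-21T08:00", "systolic_mm_hg", 138)],
    "diet_log":        [("2014-09-21T12:15", "calories", 650)],
}
for entry in aggregate_readings(sources):
    print(entry["time"], entry["app"], entry["metric"], entry["value"])
```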
    I'm not saying that any of these will be easy.  I recognize that they are far from where we are, and may require a conceptual leap that would make current web browsers look like what the Internet looked like before Mosaic made it truly the World Wide Web.  Apps are a very useful and fun intermediate step, but they are just that.

    It's not really that I hate apps; it's that I can't wait to see the next stage of their evolution.

    Tuesday, September 2, 2014

    Twitch Should Make You Twitch

    The most interesting article -- make that multiple articles -- I've read in the last week dealt with a technology development that, on the face of it, has nothing to do with health care.  It's Amazon's $970 million acquisition of an online service called Twitch.  If you're like me, you may not have previously heard of Twitch, but you may want to read on to try to avoid being left behind.

    Bear with me and I'll even try to loop it back to health care.

    I wrote last year (Games (Some) People Play) about both how big the videogame industry is and how "gamification" is coming to play a more important role in health care, such as in wellness programs.  Twitch is another world entirely: it doesn't offer games for people to play, but rather lets people watch others play them.

    We were worried about kids becoming couch potatoes due to their insatiable appetite for playing video games, but now we have people who don't even play the games, just watch others do so.

    The NPD Group estimates that there are 34 million "core" gamers in the U.S., averaging 22 hours of gaming a week.  Twitch has garnered as many as 32 million viewers to watch single events -- e.g., the championship of something called the League of Legends -- and has over 55 million unique viewers.  Those viewers average over 20 hours per week watching other gamers.  TV and cable channels would drool for those kinds of numbers; not much on television except the Super Bowl pulls more viewers.

    Watching videogames is so popular that viewers don't just watch on their computer; they actually show up in person to watch, filling arenas.  The New York Times wrote a long article on what it calls "e-sports," highlighting competitions that have as much as $11 million in prize money at stake and draw up to 73,000 spectators for an event.  The NFL should be looking over its shoulder.

    Oh, and Twitch is fourth in U.S. Internet traffic, after Netflix, Google, and Apple.  Still think this is just fun & games?

    Google had originally tried to purchase Twitch -- for even more than Amazon ended up paying -- but the deal fell through.  Amazon is drooling over Twitch not just for those millions of eyeballs but also for the content and the array of shopping synergies it offers.  Google may try to combat Twitch with YouTube and/or Google TV, but experts say it won't be easy for them -- or other players -- to succeed.

    It's not so hard to see the appeal of videogames -- not just how interactive they are but also the production values and attention to detail -- but what non-gamers (like me) may not realize is the social connectivity they generate.  Gamer generations thrive on social interaction; "Sharknado" wasn't a hit (by SyFy Channel standards, anyway) because of its plot or acting, but because of the social media buzz it generated -- and encouraged.  That is increasingly becoming true of any TV show, movie, or other event.

    Why is any of this important for health care?  Let's say that you're in charge of marketing for your health care organization.  You probably do some print ads, maybe send out some fliers, and have radio/TV ads if you have a large enough budget.  You're proud that you've moved some of your efforts to online ads and/or email blasts, have a Facebook page, and maybe even are on Twitter.  But, seriously, do you think Twitch users will even see any of those efforts, much less form a positive impression of your organization?

    I'll posit this: if you didn't know what Twitch was, or are still puzzled at its appeal, your organization is going to have a tough time competing over the next couple of decades.

    For better or worse, our health care system is used to dealing with people from the so-called "Greatest Generation" and the Baby Boomers.  The former grew up with radio and movies at the local theater, and were delighted to get broadcast TV in their homes.  In any case, they were used to having not much content, and to having that content dictated to them.  The Baby Boomers (of whom I am one) pride ourselves on being rebellious, but we spent our formative years watching those limited broadcast TV options, at the specified times.  We were as delighted to get cable options as our parents were to get broadcast ones.

    It wasn't until Gen X/Gen Y that cable, with its much wider range of content options, was prevalent, and that VCRs allowed viewers some control over when they could watch what.  Gaming also started to become important as a media option.  But it was not until the Millennials that TiVo/DVRs plus streaming really put viewers in charge.  Equally as important was that channels like YouTube made user-generated content easy, and gaming went from isolated gaming consoles to Massively Multiplayer Online Games (MMOG).

    And here we are at Twitch.

    A health care system that has been built around people who have been brought up to passively accept content when it happens to be delivered is not going to fit a gamer generation.  I'm not just talking about content; I'm talking more generally about expectations.

    I wrote in Always Fighting the Last War that health care has to adapt to these new expectations: content and services delivered when and where consumers want, on the platform of their choice, and in the manner they are used to -- i.e., highly interactive and capable of being supremely social.  That doesn't sound much like health care, does it?

    The auto industry is already terrified that Millennials don't care about cars, and is desperately trying to turn cars into yet another mobile gadget for them.  The housing, apparel, and -- oddly enough -- golf industries, to name a few, are also at a loss as to how to appeal to them.  Health care needs to figure it out too, not just for the Millennials but also for the different perspectives that Baby Boomers, Gen X, and Gen Y are bringing.

    It's not so much about how to gamify health care further (although I'm convinced that a videogame that accurately portrayed the various challenges consumers can face in our health care system would be very eye-opening!) as it is about identifying what the playing and watching of videogames reveal about the characteristics of younger generations.  We'd better get it right, because they're the ones who will increasingly be using and paying for health care (not to mention funding Medicare for Baby Boomers like me!)

    My crystal ball isn't good enough to predict exactly how health care will have to change to suit these Twitch users and their peers, but I'm willing to bet that the organizations that do not will end up like the DuMont Television Network.  Haven't heard of them?  Exactly.