Sunday, December 27, 2015

Better Think Again

Usually this time of year people like either to look back at significant events of the year just ending or to prognosticate about what might happen in the new year.  Well, neither my rear view mirror nor my crystal ball is quite that good, so I'll use my last post of the year to cite some examples of the kind of innovations that most fascinate me, ones that suggest the future may come sooner, and/or be quite different, than we expect.

Or maybe they'll prove to be red herrings.  It's hard to say.

I'll give three examples.  How and even whether any of them relate to health care, we'll get to later.  In no particular order:

Tell me your password:  If you go online with any regularity, chances are good that you've got a password.  Probably, in fact, a whole bunch of them.  With online security becoming ever-more important, more sites require passwords, and tougher/harder-to-remember ones at that.  The trouble is keeping track of them all.

People use different strategies to deal with this ever-growing plethora.  Some people have good enough memories to recall them all, although I don't know any such people.  Others keep lists of all their passwords, or use apps to store and even create passwords for them.   Ironically, those apps themselves require a password, which creates kind of a cat-chasing-its-own-tail scenario.

But passwords are so 1990's.

Google and Yahoo, separately, are testing getting rid of passwords, replacing that step with a message sent to your smartphone, which you can use to authenticate the log-in attempt.  Of course, if you have failed to lock your phone, or have forgotten its password, you're out of luck.
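To make the idea concrete, here's a minimal sketch of what a phone-based, password-free login could look like.  Everything specific in it -- the six-digit code, the two-minute expiration, the send_to_phone channel -- is my own illustrative assumption, not a description of what Google or Yahoo are actually testing:

```python
import secrets
import time

CODE_TTL_SECONDS = 120  # hypothetical: codes expire after two minutes

def start_login(username, send_to_phone):
    """Generate a short-lived one-time code and push it to the user's registered phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"   # e.g. "402913"
    send_to_phone(username, code)                  # hypothetical push/SMS channel
    return {"code": code, "expires": time.time() + CODE_TTL_SECONDS}

def verify_login(pending, entered_code):
    """The code replaces the password: it just has to match before it expires."""
    return time.time() < pending["expires"] and secrets.compare_digest(
        pending["code"], entered_code
    )
```

The point isn't the particulars; it's that the thing you have (your phone) takes over the job the thing you know (your password) used to do.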

Other new approaches include using fingerprints, which have the drawback that they, too, are now digital and thus can be stolen, or facial recognition, as Windows 10 now allows.  Hey, as if recognizing your face isn't enough, UK-based start-up AimBrain claims its software can recognize you by how you use your device, making passwords unnecessary.

I don't know what approach will win out, but, given how much people hate them, how poorly they use them, and how easy they are to hack, I'm willing to bet that in five or ten years we won't be using passwords.

Give me the cash:  We love our credit and debit cards.  Not all that long ago, it seems, we mostly used credit cards only for larger purchases, and debit cards not at all, but now you can pretty much use them about everywhere, for any amount.  Still, you probably go out with some cash on you, just in case.

Unless you live in Sweden, that is.

Sweden appears to be closer to a cash-free economy than just about anywhere.  According to The New York Times, cash is only used for twenty percent of consumer payments in Sweden, versus around seventy-five percent in the rest of the world (surely that is a typo, right?  75%?).   Some Swedish banks don't even keep cash on hand.  Seriously.  As one student said, "No one uses cash.  I think our generation can live without it."

In a cashless society, people use credit/debit cards, or smartphone-based approaches like Apple Pay or Google Wallet.  Those approaches still are primarily based on card networks, but don't require you to give your card info to a merchant.

Money is, after all, notational, even actual currency.  It only works because we all agree it has value, and whether it is cash or digital isn't fundamentally important.  As Bitcoin is slowly proving, "money" doesn't even need to be something issued by governments or central banks; just a bunch of people agreeing to accept it allows it to have value.

I don't know if the winner is going to be cards, electronic transfers, Bitcoins, or something else, but  in ten or twenty years you may have trouble getting a merchant to accept your cash.

Cutting the cords:  This is a hot topic.  I've written on it myself.  Instead of using a landline for telephone service, we figuratively cut that landline and rely on our mobile phone.  Instead of being forced to take whatever array of channels our cable company forces upon us, we choose our own shows or our own packages of shows, usually streaming via the Internet.

The cable companies prepared for the future by becoming ISPs, so they love broadband, which they can charge more for (despite our abysmal speeds).  The landline telephone companies are either out of business now or have become mobile carriers.  But neither may be really ready for the future.  Here's the fact that makes me think so: home broadband use is actually declining.

According to the Pew Research Center, broadband use declined from 70% in 2013 to 67% in 2015.  That doesn't sound like much, but it is statistically significant, and it is a shocking reversal of prior trends; remember, at the beginning of the century broadband use was essentially zero and was still below 50% as late as 2006.  Fifteen percent have dropped cable or satellite service; a third of younger Americans have dropped or never had pay TV.

The reversal is attributed to more people thinking that kind of connectivity is too expensive, especially when they can get much of the content on, you guessed it, their smartphones.  

All this sounds bad for cable but surely good for the mobile telephone companies, yet they shouldn't get too cocky.  We may not need them either.  Google has launched Loon -- "balloon-powered Internet for all" -- while Facebook is using drones to accomplish the same.  Right now both giants are testing these approaches in rural or third-world areas, but to the extent they succeed they will certainly help change the paradigm.  

Cable and mobile phone companies should remember that consumers' using the Internet through them was, in some ways, a fortuitous happenstance, and that if they are too greedy -- how much are those packages and data plans? -- or too shortsighted, the future may no longer include them.

In 20 or 25 years, cables and mobile phone networks may be as outdated as analog broadcasting and television antennas are now.

OK, so these examples may not be about health care, or may only impact health care in the same way they impact other industries.  Their importance, to me, lies less in their direct applicability than in their reminder that the world -- even for health care -- doesn't always evolve incrementally or even predictably.

And I love that.

Wednesday, December 16, 2015

Oh, And It Is Also An EHR

You wouldn't -- I hope -- still drive your car while trying to read a paper map.  Hopefully you're not holding up your phone to follow directions on its screen either.  Chances are if you need directions while you are driving, you'll be listening to them via Bluetooth, glancing at an embedded screen on your dashboard, or maybe looking at a heads-up display on your windshield that doesn't even make you take your eyes from the road.  Or maybe you're just riding in a self-driving car.

But when it comes to your doctor examining you, he's usually pretty much trying to do so while fumbling with a map, namely, your health record.  And we don't like it.

A study in JAMA Internal Medicine found that patients were much more likely to rate their care as excellent when their physician didn't spend much time looking at their EHR while with them; 83% rated it as excellent, versus only 48% for patients whose doctors spent more time looking at their device's screen.  The study's authors speculate that patients may feel slighted when their doctor looks too much at the screen, or that the doctors may actually be missing important visual cues.

Indeed, a 2014 study found that physicians using EHRs during exams spent about a third of the time during patient exams looking at their screen instead of at the patient.  It is a dilemma; the records hold important information, and inputting new information is generally thought to be more accurate when done at point-of-care rather than at some point after the exam, so doctors are damned if they do and damned if they don't.

As one physician told the WSJ, "I have a love-hate relationship with the computer, with the hate maybe being stronger than the love." 

No wonder that the president of the American Academy of Family Physicians says: "We've taken this technology and we've embraced it, but I think a lot of us don't believe it's ready for prime-time. We've got this interloper in the exam room, but it's not there to help with the medical side as much as it's there to check boxes for insurers."

I might quibble that the familiar physician shibboleth about EHRs being there to serve insurers' purposes rather than to improve care perhaps is one reason why they are not ready for prime time, but I certainly don't dispute the fact that they are not.  After we've spent the past several years and over $30b of federal incentives to persuade physicians to adopt EHRs, physician satisfaction with them appears to be declining.

Know any health care professionals who rave about their EHR like they do their iPhone?

The problem is that we forget that the record is not the point.  It wasn't the point when it was on paper, and putting it in an electronic format doesn't make it the point.  The information in it is a tool -- just a tool.  It is supposed to help the physician diagnose the patient, and record what happens to the patient, so he/she can be better diagnosed in the future.  Figuring out what is wrong with a patient and what to do about it is the point. 

Paper records were siloed and made the physician draw his/her own conclusions without providing any assistance.  EHRs have the potential to draw data from larger patient populations, even if they don't yet do so very effectively, and can also give some assistance to physicians, like warning about drug interactions.   But working with them still involves looking at too many screens and having to populate too many boxes.  No wonder physicians are employing scribes.

Don't get me started on medical scribes.

Let's picture a different approach, one that doesn't start with paper records as its premise.  Let's start with the premise that we're trying to help the physician improve patient care by giving him/her the information they need at point of care, when they need it, but without getting in the way of the physician/patient interaction.

Let's talk virtual reality.

Picture the physician walking into the office not holding a clipboard or a computer or even a tablet.  Instead, the physician might be wearing something that looks like Google Glass or OrCam -- not a conspicuous headset like Oculus but something unobtrusive (a concept that investors are already pouring money into developing).  There might be an earbud.  And there will be the health version of Siri, Cortana or OK Google, AI assistants that can pull up information based on oral requests or self-generated algorithms, transcribe oral inputs, and present information either orally or visually.

When the physician looks at the patient, he/she sees a summary of key information -- such as diabetic, pacemaker, recent knee surgery -- overlaid on the corresponding portion of the patient's body.  Any significant changes in blood pressure, weight, and other vitals are highlighted.  The physician can call up more information by making an oral request to the AI or by using a hand gesture over a particular body part.  List of meds?  Date of that last surgery?  Immunization record?  No problem.

The physician can indicate, via voice command or hand gesture, what should be recorded.  It shouldn't take too long before an AI can recognize on its own what needs to be captured; the advances in AI learning capabilities -- like now recognizing handwriting -- are coming so quickly that this is surely feasible.  Keeping an EHR up-to-date should be child's play compared to, say, beating Ken Jennings at Jeopardy! or Garry Kasparov at chess.

In short, the AI would act as the medical scribe, without the patient even realizing it or the physician having to worry about it.

More importantly, the AI could quickly pull up/synthesize any pertinent literature, or assist the physician in coming up with a diagnosis and/or treatment plan -- as Watson is already doing for cancer.  Maintaining and presenting the EHR are the finger exercises, if you will; helping the physician deliver better care is the main function.  And without intruding on the physician/patient relationship.

Building better EHRs is certainly possible.  Improving how physicians use them, especially when with patients, is also possible.  But it's a little like trying to make a map you can fold better while driving.  It misses the point. 

We need a whole different technology that subsumes what EHRs do while getting to the real goal: helping deliver better care to patients.

Friday, December 11, 2015

It's a Doc's Life

There is an old expression "it's a dog's life" used to describe a life that is hard and unpleasant.  That expression is probably outdated; most dogs seem to live pretty comfortable lives.  Based on recent research, though, maybe we should be saying "it's a doc's life" instead.

It has kind of a certain ring to it, don't you think?

Let's look at some of the new research, starting with a study by the Mayo Clinic.  They updated a survey they did in 2011, and found a number of disturbing issues, including:

  • More than half (54%) of physicians report at least one symptom of burnout.  That compares to 45.5% in 2011.
  • Only 41% of physicians reported being happy with their work-life balance, compared to 48.5% in 2011.
  • Physicians fared worse on both burnout and happiness with work-life balance than the overall population, even adjusting for age, gender, relationship status and hours worked.
  • Pretty much every specialty fared worse on both burnout and satisfaction with work-life balance.
The authors believe that American medicine is at a "tipping point" due to the burnout and lack of work-life balance, and that there is an urgent need to address the underlying causes.  It'd be hard to disagree with them.

The study would be disturbing in its own right, but it is not the only such study that showed up just this month.  A study in JAMA by Mata et al. found that almost 30% of resident physicians reported depression or depressive symptoms.  As one of the authors said:  "What we found is that more physicians in almost every specialty are feeling this way and that's not good for them, their families, the medical profession or patients."

No kidding.

The study was a meta-study, analyzing results of other studies, and the prevalence ranged from 21% to 43% in the various studies, so the 30% may be conservative.  As with the Mayo study, the problem appears to be getting worse, with the results showing a slight but statistically significant increase over the five decades analyzed.

An accompanying editorial called resident depression "the tip of a graduate medical education iceberg."  It notes that training itself has changed little from the 1950's or 1960's, while "the actual delivery of medical care in 2015 would be unrecognizable to those same physicians."  New care options and resulting ethical dilemmas, more pressures on reimbursement and on demonstrating value, malpractice concerns, EHRs, and increased patient demands create a world that the graduate medical education system leaves residents ill-equipped to deal with.

The editorial calls for a fundamental rethinking of our approach to the graduate medical education system.  Again, it'd be hard to argue with that conclusion (and the rethinking shouldn't be limited to graduate medical education).

If any further evidence of a problem was needed, the 2015 Commonwealth Fund International Health Policy Survey of Primary Care Physicians provides some.  I'll leave the international comparisons for another day, but I was struck by a few of the findings for U.S. primary care physicians.  Twenty-four percent report not being well prepared to manage patients with multiple chronic conditions.  Less than half are well prepared to handle patients needing palliative care or long term home care, or patients with dementia.  And less than a third are prepared for patients needing social services, those with mental health issues, or those with substance use issues.

Given that one in four American adults have multiple chronic conditions, one in five have mental health issues, and about one in ten have substance use issues, well, I'd say primary care physicians should be pretty worried.

No wonder that only 16% of those U.S. primary care docs think our health care system works well (which was, by the way, by far the lowest across the 10 countries), or that 43% report their job is very or extremely stressful.  No wonder they're getting burned out.

Despite the above findings, another JAMA study found that at least one kind of primary care physician -- family practice residents -- still had high hopes.  Family practice residents reported that they planned to provide a broader scope of services than practicing family practice physicians, such as prenatal care and inpatient care management.  Whether that is recognition of a changing role or simply naive expectations remains to be seen.  As one of the authors told Reuters, "it may be that the previous generations have had these same intentions and for numerous reasons are not able to practice the way they intended."  

In other words, real world, meet residents.  Residents, real world.  Try to get along.

Look, I get it.  Being a physician is not what it once was.  No more physician-as-God, no more white coat mystique.  Their business model has radically changed from largely independent artisans to more typically being employees with productivity expectations, whose judgement is constantly challenged by patients, payors, administrators, and/or lawyers.  That must be hard to accept.

But, then again, I don't know many people who think their job hasn't changed significantly in the past twenty years, with more pressure, higher expectations, 24/7 demands, and more reliance on technology.    Physicians can rightfully argue that their role is different in that people's lives depend on their decisions, but other professions -- e.g., police officers, air traffic controllers, even civil engineers -- could claim the same.  

It's tough all over.

We should be worried that physicians are depressed.  We should be worried that they feel burnt out.  We should be worried that they don't feel ready to manage the kind of complex patients they are seeing more of.  These are problems that need to be recognized and addressed.  But the life of the physician isn't going back to what it was in the 1960's, and that is not a bad thing.  

It should be a great time to be a physician.  We've never known as much about what causes various health issues, never had as many diagnostic tools, never had as many treatment options, and never had as much potential for people to be educated about their health and to be an active participant in their care.  If all that isn't exciting to someone, perhaps being a physician isn't the right profession.

For what it is worth, both medical school applicants and enrollees have reached record levels.  Let's hope they're not in for a big disappointment when they find out what a doc's life is really like.

Wednesday, December 2, 2015

The White Coats Are Coming! The White Coats Are Coming!

Let's say you were in a social setting, or even some business settings, and you introduced yourself to someone using your first name but that person's response was to introduce himself/herself using their last name and an honorific.  You might think they were oddly formal.  If, in those same settings, someone greeted you by your first name while introducing himself/herself using an honorific and his/her last name, well, you might think he/she was stuffy, if not a jerk.

Yet this happens all the time in health care settings.

Now, in the past, I've been critical of the use of the term "patients" to describe us laypeople in the health care system, arguing that it connotes a certain passive, secondary status about us.  Ashley Graham Kennedy, a philosophy professor at Florida Atlantic University, goes me one further: in a BMJ opinion piece, she asserts that "the title 'doctor' is an anachronism that disrespects patients."

How about that?

Professor Kennedy cites situations where doctors introduce themselves as doctors while not taking into account their patients' own professional titles.  How many of us have had a physician casually use our first name while expecting us to use their title?  If we happen to be sitting on an exam table or in a hospital bed wearing a gown that leaves us half exposed, the asymmetry is even more pronounced.

She notes that we don't need titles or even white coats -- more on that in a bit -- to figure out who our caregivers are or what their role in our care is.  More to the point, she argues that the title carries an explicit expectation that we are to treat them with respect, due to the training that the title signifies, whereas respect is something that should be earned, for instance by how we are treated.

It is the 21st century after all.  We know that not all physicians are equal, that not all medical education and training is the same, and that not even physicians know everything, even within their specialty.  If we're supposed to automatically respect all physicians, it better work both ways.

Personally, I don't mind calling a physician "doctor," although if he/she calls me by my first name (which I'd prefer) I'd expect that to be reciprocal.  What I wonder is what the title really means anyway.  There are a lot of "doctors" out there.  If someone introduces themselves as "Dr. X," you don't know if that means an M.D./D.O., or if it means DDS, DMD, DC, DPM, Pharm.D., DVM, OD, Au.D, Ph.D or ScD.  I'm sure that list isn't even complete, even within health care.  So as a means of automatically signifying respect for our physician, it's a pretty poor marker.

Some of the reactions to Professor Kennedy's argument are even more interesting.  While she believes that the deference the title expects is incompatible with patients being equal partners with their physicians, some respondents -- who usually seem to be physicians -- argued that the supposed partnership is not, in fact, equal, since physicians' training and experience makes them experts in a way patients can never equal, no matter how much Internet research they do.

I think those kinds of responses pretty much make Professor Kennedy's point.

The doctor/patient relationship is at its most asymmetrical when there is some acute event -- e.g., we have a heart attack, we need our appendix out, we need chemo.  But with more of our health care spending going to chronic conditions that, in many cases, are linked to lifestyle choices, the asymmetry is greatly reduced, and physicians should think twice about assuming they know more about maintaining our health, especially if they can't demonstrate that they "practice what they preach" when it comes to those kinds of healthy choices.

If the title "doctor" is a verbal indicator of expected respect, the white lab coat is a tangible one.  Almost all U.S. medical schools bestow one in a white coat ceremony as students begin their training (although this tradition is, surprisingly, relatively new).  The fashion of physicians wearing them had to do with the (belated) acceptance by the medical establishment in the latter part of the 19th century that, yes, germs mattered; the coat was to suggest they kept their environment as sterile as in a lab.

Ironically, of course, the white coat itself may (or may not) be a carrier for germs, which has led the NHS to adopt a "bare-below-the-elbows" policy.

This is, apparently, a hot topic.  There are more issues than one might have imagined, including what physicians think patients want and -- my favorite -- how cartoons would portray physicians without a white coat.  Another opinion piece in BMJ bemoaned how the NHS "bare-below-the-elbows" policy has led to "scruffy doctors," urging them to "put your ties back on."

Not everyone agrees that the more casual attire leads patients to view doctors as scruffy and thus possibly lacking in hygiene.  (Dr.) Phillip Lederer wrote an excellent article on the controversy recently, reminding physicians that they'd still be doctors even without the white coat.  He concluded: "There is no harm in avoiding white coats, but there could be danger in wearing one."

That would seem like the killer argument, but apparently it is not.

I mean, really, I can see wearing a white coat if the physician actually works in a lab, such as a pathologist, but it is hard to see it as much else other than a status symbol if they are actually seeing patients.  Health care is full of status symbols, including not just the white coats and automatically calling physicians "doctor" but also those nice parking spaces reserved for physicians that patients and their families often have to walk past, or, for that matter, major donors getting their names on health care buildings.

We shouldn't take any of them more seriously than if, say, all physicians started wearing monocles to further model those 19th century physicians.  The point is, it's not supposed to be about their status, but about our health.

Paul Revere may have never actually shouted "The British are coming!  The British are coming!" but he did help herald a revolution.  Maybe by rethinking some of the traditional status symbols in health care we can signal a revolution of our own, fighting for a health care system in which we are more responsible for our own health and are expected to be more equal partners with the people who help us with that.

Or we could try the monocles.

Friday, November 27, 2015

Hoping for Some Health Care Pi

Raspberry Pi thinks even $20 or $35 is way too much to pay for a computer.  That's why they just announced the Raspberry Pi Zero, which they're happy to sell for a startling $5.  That's right, $5.  Not $5 on some sort of monthly installment plan or as a cloud-based monthly subscription service, but a straight up purchase of a programmable computer for only $5.

I wish health care had a Raspberry Pi to drive down its costs in a similar fashion.

For those of you who aren't familiar with Raspberry Pi, their mission is to get inexpensive computers in the hands of more people, especially children.  They recognized that many families can't afford the hundreds or thousands of dollars a computer normally costs at Best Buy or Amazon.  And they believe it is imperative to let more people -- again, especially children -- have the opportunity to experiment with programming.

Their first computer, the Raspberry Pi 1 Model B, was introduced in 2012 for $35, and was followed over the next few years with other models that were as low as $20.  They made them cheaper and/or more powerful, but never more expensive.  Not content with selling some 6 million of these low cost computers, and following the advice of Alphabet Inc's Chairman Eric Schmidt that it was "hard to compete with cheap," they developed and rolled out the Zero, figuring $5 was as low as they could go.  No kidding.

The initial batch of the Zeros sold out within a day.    

Sure, for the $5 you're not getting an Apple MacBook or a Microsoft Surface Book, but it is powerful enough to play games, connect to the Internet, and use its simple programming language to connect it to home devices, create your own games, maybe even build a robot or two.   I'll probably be sticking to my PC -- I'm not likely to be building any robots anyway, much as they fascinate me -- but I'll bet there are a bunch of teens or even preteens who could do some pretty cool things with a Zero.

For $5, why not?  It might be a great stocking stuffer or Hanukkah gift.  

Meanwhile, in health care we're not surprised to read headlines like "Cost of Skin Drugs Rising Rapidly," reporting on a study that found prices of 19 brand name dermatologic drugs have risen fivefold, on average, over the past six years, with most of those increases happening more recently.  It'd almost be funny if we weren't the ones paying these huge increases, which of course we are, either directly or through higher health premiums.

I'm not going to pile on any further about prescription drug increases, especially not after reading the analysis in Health Affairs by Kenneth Thorpe and Jason Hockenberry that suggests prescription drugs may not be quite the culprits we're getting used to thinking they are.  No, there are plenty of parties in the health care system that make the health care equivalent of a $5 computer almost impossible to imagine.

We have, after all, a health care system in which heart patients apparently do better when the top cardiologists aren't around, because fewer things are done to them, and in which "inaccurate and unreliable tests" are resulting in unnecessary care, raising costs, and putting patients at risk, according to the FDA.

Sure, we have organizations like Diagnostics For All, which works on developing "low-cost, easy-to-use, point-of-care diagnostics," with particular interest in their use in developing countries.  (A couple of months ago I might have cited lab pioneer Theranos as another example, but that's probably not such a good idea right now).  Many people think mHealth is a particularly potent way to introduce low-cost care options, again mostly for developing nations.  UNICEF is sponsoring The Wearables for Good campaign to help spearhead this kind of effort.

But let's face it: when it comes to health care, we are a developing nation; we just pay more for it than anyone else.  

Even after the ACA, we still have some 32 million non-elderly uninsured; some 44 million insured people avoid getting care due to concerns about costs; only half of Americans rate the quality of our care as at least good; and by more objective measures our health care system is closer to the bottom than to even the middle when compared to other "developed" countries.

So where is, say, our $5 EHR?  We can't even get EHRs that physicians like.  Where is our $5 MRI?  The range in MRI prices is eye-opening, but it's safe to say even the cheapest would make a full price computer look cheap.  One could argue that we have $5 drugs, what with close to 80% of prescriptions filled being generic, but I don't see many people jumping up and down with excitement about how cheap their prescriptions are.

For that matter, where is our Mozilla, our Wikipedia, or our Linux, offering widely used free services of tremendous value?

In our health care system, we think we're winning if we slow the rate of growth, although it remains above overall inflation.  Even new entrants seem to be more interested in getting their piece of the $3 trillion pie than in doing what Raspberry Pi is doing for computers.  

Who is going to be our Eben Upton, our Jimmy Wales, our Linus Torvalds?  Wouldn't it be cool if it ended up being one of those kids tinkering on his Zero?

Wednesday, November 18, 2015

Breaking Barriers

If you are literally starving to death, you can't expect a restaurant to feed you.  If you are homeless and the forecast calls for sub-zero temperatures, you can't expect a hotel to put you up.  But if you think you are having an emergency health problem, by law you can go to a (Medicare-participating) hospital emergency room to get evaluated and at least stabilized, even if you can't pay.  

Similarly, I've heard many calls for a single payor health care system, but I've never, not once, heard anyone advocate that the government should pay for all our food or our housing.

For some reason, we think about health care differently, even from other life-sustaining needs like food and shelter.

I like reading predictions about how healthcare is soon to be radically disrupted as much as anyone -- Oliver Wyman's The Patient to Consumer Revolution and  David Chase's new piece in Forbes are two of the latest -- but I worry that many of these ideas are like putting nicer lipstick on the pig.  

Yes, innovations like digital health, Big Data, and mHealth hold great promise, but even they eventually run into what most people might view as two of the foundations of our health care system but which I fear may actually be hitherto impenetrable barriers to change: "the practice of medicine" and "the business of insurance."  My calling them barriers should come as no surprise to readers of my previous posts.

Let me get to each of these in turn.

Having physician-run state medical boards oversee who can practice medicine is usually positioned as a benefit to patients, supposedly ensuring that we get care only from qualified professionals.  Some (all right, I) have argued that, whatever their intent, the temptation for such self-policing bodies to end up enabling a cartel is hard to resist.  

Here are two examples of the problem:

  • Let's say I have a rare condition, and it turns out that the best physician to help me with it is in California.  Or Germany, or India.  Fortunately, modern technology allows me to consult with them via video, and increasingly would allow them to perform tests and even procedures on me from where they are.  Unfortunately, under our current approach, unless they are licensed in the state I'm in, they can't help me.
  • As health care accumulates Big Data and the AI programs to sift through it, someday soon such a program will figure out someone's obscure diagnosis and perhaps propose a novel treatment, one that isn't (yet) supported by clinical research or even medical theory.  We may not be able to understand how the program concluded it was the right diagnosis and treatment, but that doesn't mean it won't be.  But, of course, such a program can't practice medicine.   


State-based licensing is an artifact of an earlier, more geographically restricted day.  The medical boards trumpet the new telemedicine compact to show that they are making progress on crossing state borders, but it falls woefully short of what technology allows.  We're starting to have more options, and we'll increasingly want even more of them.

As I've written before, the more we learn about the body and how to keep it healthy, the more it may be that physicians may not be the best people to treat us in all cases.  I'm not talking about physician substitutes like nurse practitioners but, say, geneticists, robotics experts, or microbiologists.  If we subject all the coming advances to our existing ideas about who can "practice medicine," we will be missing out.

And no one that I know of is seriously thinking about how to "license" the inevitable AI-based experts.  That human-centric point-of-view is natural, but will not survive the 21st century.  

To be sure, proof of competence for our health care providers is to be desired, but state licensing is not the only, or the best, way to accomplish this.  Board certification exams, for example, don't vary by state.  Whatever proof we demand should be more universally comparable, more empirically/performance based, and more transparent.  There's no reason that resulting "license" has to be geographically limited, or even limited to our traditional types of providers.

As for the "business of insurance," here are quotes from two dissatisfied customers in a recent New York Times article about high deductible plans:
"Our deductible is so high, we practically pay for all of our medical expenses out of pocket.  So our policy is really there for emergencies only, and basic wellness appointments." 
"I will never be able to go over the deductible unless something catastrophic happened to me. I’m better off not purchasing that insurance and saving the money in case something bad happens.”
Just what, exactly, do they think insurance is?

Instead of primarily protecting us from the risk of catastrophic expenses, we now expect health plans to pay for our routine care as well, tell us which providers we can see, negotiate discounts on our behalf, and help us manage our care.  Maybe those are good things, maybe not, but they are not things that only health insurance can do.  In fact, even helping us finance care is not something that only health insurance can do.  

We could be thinking of different approaches to financing our health care.  Why shouldn't anyone be allowed an HSA?  Crowdsourcing and microloans are all the rage in financial circles, especially with the FinTech revolution to enable them.  There's no reason these couldn't be used for health care expenses.  

Or think of approaches analogous to life insurance, with payouts based not on death but on catastrophic health events -- a lengthy hospital or nursing home stay, for example.  I'm pretty sure actuaries could price these, but I'm not as sure that many people would be willing to buy them.

Yet.

But none of that is going to happen if financing innovators have to worry about being considered in the business of health insurance, which would then impose a raft of requirements on them that would force them to look and act much like the health insurance we have now.  And that'd be a shame.

Innovating in health care shouldn't just be about doing what we are doing with better technology, but must also be about rethinking what can help us achieve better health, who is truly best qualified to assist us with that, and what our range of options is to finance our health needs.  We can't be limited by traditional notions about the practice of medicine or the business of insurance.

If we want to think about health care differently, let's really be different.

Wednesday, November 11, 2015

Someone Must Be On Drugs

As is probably true for many of you, I'm busy looking at health plan open enrollment options for 2016.  I have to confess that for the past few years I've been guilty of just sticking with the same plan, so it has been too long since I've had to shop.  Plus, I'm helping my mother pick her Medicare options for next year.  All in all, I'm awash with health plan options.

I'm torn between thinking that the people designing the plans are extremely clever, have a perverse sense of humor, or were under the influence of psychedelic drugs at the time.

It's not that there aren't plenty of options.  I've got different levels of HMO, POS, and PPO options, from multiple carriers.  My mother has many choices of Medicare Supplements, with Part D options, as well as Medicare Advantage options (both HMO and PPO), each from multiple health insurers.

Nor is it that having choices is bad.  Researchers have discussed the "tyranny of choice" (or "paradox of choice") for some time, meaning too many choices can be paralyzing for consumers.  I have to admit that when I go to the cereal aisle in the grocery store I feel overwhelmed, but, whether it is cereal or health plans, I'd still rather have more choices than fewer.  

It's just that, well, the options are so damn confusing.  I was in the health plan business for a long time, and helped develop some of the first plan selection tools for consumers.  But when it comes to evaluating some of the options now available, I find it practically impossible.

Austin Frakt recently wrote in The New York Times about this problem.  He cited a few studies specifically on point about health insurance, such as:

  • One study found that 71% of consumers couldn't identify basic cost-sharing features;
  • Less than a third of  consumers in another study could correctly answer questions about their current coverage;
  • Researchers found that consumers tended to choose plans labeled "gold" -- even when the researchers switched the "gold" and "bronze" designations, keeping all other plan details the same.  
Many consumers tend to stick with their existing choice even when better options are available, simply because switching or even shopping is perceived as too complicated.  But, hey, cable companies and mobile phone carriers have relied on this kind of inertia for a long time, so why should health insurers be any different?

I'm most frustrated with prescription drug coverage.  Not that long ago, the only variables were the copays for generic versus brand drugs.  Now there are often five or six different tiers of coverage -- such as preferred generic, other generic, preferred brand, other brand, and "specialty" -- with different copays or coinsurance at each tier, each of which can also vary by retail versus mail order, and for "preferred pharmacies." 
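To see why this gets confusing fast, here's a toy illustration of how tiered cost-sharing can play out.  The tiers, copays, and coinsurance below are numbers I made up for the example, not any actual plan's design:

```python
# Made-up tier design for illustration only -- not any actual plan.
TIERS = {
    "preferred_generic": {"retail_copay": 5,  "mail_copay": 10},
    "other_generic":     {"retail_copay": 15, "mail_copay": 30},
    "preferred_brand":   {"retail_copay": 45, "mail_copay": 90},
    "other_brand":       {"retail_copay": 90, "mail_copay": 180},
    "specialty":         {"coinsurance": 0.30},  # you pay 30% of the drug's price
}

def member_cost(tier, drug_price, channel="retail"):
    """What the member pays for one fill under this made-up design."""
    rules = TIERS[tier]
    if "coinsurance" in rules:
        return round(drug_price * rules["coinsurance"], 2)
    return rules[f"{channel}_copay"]

# The same $600 drug swings wildly if the formulary moves it between tiers:
print(member_cost("preferred_brand", 600))  # 45
print(member_cost("specialty", 600))        # 180.0
```

Now multiply that by every drug you take, add a formulary that can change mid-year, and you can see why nobody can comparison-shop this stuff.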

Moreover, the health plan's formulary, which determines what tier a drug is in, can change at any time.  Plus, as has been illustrated recently, the prices of any specific drug can change without notice, sometimes dramatically.  If either of those happens to one of your drugs, say goodbye to your budget.

It's all enough to make your head spin.  

The health plans would no doubt argue that their various approaches to prescription drug coverage are necessary in their efforts to control ever-rising prescription drug costs.  Well, they aren't working.  

Prescription drug prices continue to soar, even for generic drugs.  They have become a political issue, with the Senate now launching a bipartisan investigation into prescription drug pricing and the Presidential hopefuls from both parties being forced to take positions on how they would control them.  For once, politicians are in sync with their constituents; the latest Kaiser Health Tracking Poll found that affordability of prescription drugs tops their priority list for Congress and the President.

I've long thought that the pharmaceutical industry was ahead of the rest of the health care industry.  They were doing electronic submission of claims over forty years ago.  They pushed for direct-to-consumer advertising in the late 1980's, and quickly jumped on that bandwagon.  While providers only grudgingly adopted EHRs, they quickly moved to e-prescribing.   Other health providers had to move away from discounted charges twenty years ago, whereas drug companies still mostly use that approach and are only starting to tip-toe into more "value-based" approaches, as with the recent Harvard Pilgrim-Amgen deal.  

And the backroom rebate deals between drug manufacturers and payors put the lie to any claim that at least drug pricing is transparent.  

It's not only prescription drug coverage that is increasingly complicated, what with narrow networks, gatekeepers, different copays for different types of medical services, bundled pricing, or numerous other gimmicks used in health plan designs.   The collateral damage in the ongoing payor-provider arms race is consumer understanding. 

Making things more complicated for consumers is not the answer.  

In typical fashion, the health care industry has tried to address the confusion by creating a new industry that doesn't actually solve the problem but does manage to introduce new costs.  Many enrollment sites -- the Medicare plan finder, public exchanges, private exchanges, broker sites like eHealth, or health insurer sites -- offer tools that purport to estimate your costs under your various health plan options.  Yet consumers still don't understand their options.

We keep treating health care as a multi-party arrangement between providers/health plans/employers/government/consumers, which is why everything ends up so complicated.  Drug company rebates or medical device manufacturers' payments to providers are prime examples of the kind of insider trading that goes on.  It's usually the consumers that come last.  And that's the problem.  

I think back to 1990's cell phone plans.  Consumers never knew what their next bill would bring, between peak/non-peak minutes and the infamous roaming charges.  No one liked it, no one understood it, and for several years no one did anything about it.  Then AT&T came out with a flat rate plan that essentially said, "we'll worry about all those for you," and soon all carriers had to adopt a version of it.  

I keep hoping for that kind of breakthrough with health insurance. 

Wednesday, November 4, 2015

My Phone Says I've Looked Better

I admit it; I'm a sucker for artificial intelligence.  For example, this week Google said it has developed "Smart Reply," which uses AI to suggest potential replies to your emails, based on their contents, your previous replies, and other emails Gmail has spied on -- er, read.  The responses are brief, but you have to assume the feature will only get smarter.

Meanwhile, Facebook says the facial recognition software it uses to help users tag photos -- which they claim is already almost as accurate as a human's -- can now recognize not only people in pictures but objects in the pictures with them.  It can even answer questions about what is in the photos.  

Of course, if Google's auto-correct suggestions or Facebook's success in tagging my photos are any indication, AI still has a ways to go.

Despite that, experts are excited about the potential application of AI to health care, as evidenced by some speakers at last week's Connected Health Symposium, hosted by Partners HealthCare: 

  • Partners' Dr. Joseph Kvedar foresees us having automated health coaches in our smartphones.  "This is quite doable," he said.  "It's as if the puzzle pieces are there and we haven't put the jigsaw puzzle together yet."  
  • MIT's Joi Ito sees the doctor-computer combination as "the winning combination."  He used the now-familiar example of IBM's Watson, saying: "I think there was an announcement recently that Watson is almost finished with med school.  It's sort of a joke, but sort of true...Now imagine if you had a computer that had all of the knowledge that you needed for med school and if it were available all of the time, maybe there's an argument to be made that you don't have to memorize it if it's available all of the time." 
  • VC mogul Vinod Khosla predicts AI will eventually help us make better diagnoses.  As he points out: "The error rates in medicine, if you look at the Institute of Medicine studies, are about the same as if Google's self driving car was allowed to drive if it only had one accident per week."  Ouch.

We will someday have AI-based clinical support systems to guide physicians in real time interactions with patients.  We will someday have our own AI-based health coaches, integrated with our Cortana/Google Now/Siri/M interfaces on our various devices.  And we will someday have our own AI physician avatars, supplementing or, in some cases, replacing our need for actual physicians.  

Right now, though, I'm thinking about that facial recognition AI.

Current AI can sift through millions of photos to pick you out of a crowd, with varying degrees of success.  Camera angles, make-up, hats, quality of image all factor into how successful such software is.  Given the recent rapid rates of improvement, though, these are bumps in the road, not insurmountable barriers.  

Other software can process your facial expressions, allowing them to make some good guesses about your emotions.  If you are a marketer, or a law enforcement officer, this information might be gold, but if your privacy is important, it might be a scary invasion.  Someone is always watching.

What I want to know is when this AI can tell if I look sick.

It can already, I assume, look at a set of pictures featuring me in varying degrees of illness and still determine that the images are all, in fact, me.  It would not seem much of a stretch for such software to look at a picture of me and determine if I don't look "normal."  E.g., I'm pale, I'm in pain, or I'm unusually tired.  Jaundice or measles would seem to be a piece of cake.  Issues like a swollen ankle or a gash would also be obvious, as would a limp or rapid breathing, if a video or sequential still images are available.

Facial recognition software has had to learn to ignore certain variations from the norm -- e.g., a frown versus a smile, standing versus sitting -- in a person's image, so this would amount to learning which variations are inconsequential and which may be attributable to a physical problem, like an illness or a severe injury.  The next step would be to distinguish what that physical problem might be. 

AI can already predict how you'll look as you age; it wouldn't seem hard to use similar AI to detect premature aging that could be linked to an illness -- or to visually illustrate longer term impacts of bad health habits.  Even young "invincibles" might pay attention to the latter.

Physicians stress the importance of face-to-face visits with patients, claiming that they can pick up clues that they might miss over the phone or even via a video visit.  AI should eventually be able to pick up on at least some of those clues.

Research indicates that smartphone users check their phones an astonishing 83 times a day (and the rate for younger users is some 50% higher than that!).  That presents an easy opportunity for your phone to check on you and spot early warning signs.

One can easily imagine an app that, say, takes a photo of you at times throughout the day, perhaps periodically prompting you to note how you are feeling so it can better learn what your variations mean.  The easy first stage would be for it to recognize variations that are unusual, even if the AI doesn't quite know what they might mean.  That might warrant a "hey, maybe you should call your doctor," or a "GO TO THE ER NOW!"

The second, and harder, stage would be to match your variations with how selected conditions visually present.  For example, it could recognize a rash or a growth, and maybe the effects of a stroke.  One can imagine an AI synthesizing large data sets of photos of people with specific issues to derive some commonalities, and using that learning to suggest what you might be experiencing.  The AI might need to ask you what you were feeling before suggesting a potential diagnosis, but at least it could help narrow down what might be the problem -- if you even realized there was a problem.
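As a thought experiment only -- none of this is a real product, and the hard part is the AI that would turn a photo into the kind of features I'm assuming below -- the first stage might amount to little more than comparing today's numbers against your own baseline and escalating when the deviation is large:

```python
import statistics

def deviation_score(today, baseline_history):
    """Average number of standard deviations today's (hypothetical) facial
    features -- pallor, eye openness, facial symmetry, etc. -- sit from the
    user's own baseline history."""
    scores = []
    for feature, value in today.items():
        history = [day[feature] for day in baseline_history]
        mean = statistics.mean(history)
        spread = statistics.pstdev(history) or 1.0   # avoid dividing by zero
        scores.append(abs(value - mean) / spread)
    return statistics.mean(scores)

def advice(score, watch=2.0, alarm=4.0):
    """Invented thresholds: a mild deviation gets a nudge, a big one gets a shout."""
    if score >= alarm:
        return "GO TO THE ER NOW!"
    if score >= watch:
        return "Hey, maybe you should call your doctor."
    return "You look like you."
```

The second stage -- matching those deviations to actual conditions -- is where the real machine learning, and the large photo data sets, would have to come in.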

I don't really mind writing my own emails and I'm happy to tag my own photos, thank you very much, but if an AI can help me recognize when I have a health issue, that'd be something I could use.

Wednesday, October 28, 2015

Health Care After People

There is lots of interesting news about hot health start-ups like Theranos, Amino, and 23andme, any of which deserves a post (and may yet get one), but I find myself thinking about battleships...and robots.  

I've had in mind a post comparing hospitals to battleships.  Battleships were once the crown jewels of navies, massive and full of firepower, but now they are at best museum pieces.  The conventional wisdom is that battleships grew obsolete due to higher tech innovations like aircraft carriers and guided missiles.  I recently read an analysis that brought up another factor that led to their demise: labor costs.  They're just incredibly expensive to staff.

Just like hospitals, or nursing homes.  

Consider the results of a new study on health care spending.  It retrospectively looked at spending for Medicare beneficiaries who died between 2005 and 2010.  The most expensive cohort?  Patients with dementia.  Not heart disease, not cancer, not other conditions, but dementia.  And not by a little; their spending was more than 50% higher than the other cohorts', averaging some $287,000.  

It gets worse.  Medicare spending was actually fairly similar across all cohorts, but out-of-pocket spending was much higher for the dementia patients, leading to the higher overall spending.  They're not racking up huge bills getting invasive surgeries or expensive chemotherapy.  They're not taking advantage of all these slick new machines in these beautiful new hospital additions.  Instead, they're spending time -- lots of time -- getting care in nursing homes or at home.  

If they're lucky, they may qualify for Medicaid, which may help pick up some of these costs for custodial care, as it is not typically covered by Medicare.  Of course, that's a strange kind of lucky, because it means they've spent virtually all their assets in order to qualify.     

Well, at least they get the care they need, right?  Unfortunately, many people with dementia, or with other long term disabilities, spend too much time waiting for care.  Not so much medical care as help with things most of us take for granted -- getting in and out of bed, going to the bathroom, taking a shower, even eating.  Go into a nursing home or assisted living facility and it won't take too long to find residents who are waiting -- knowingly or not -- for assistance with those kinds of tasks.  

It doesn't help that there is already a labor shortage for the kind of workers who provide such care, whether in institutions or at home, and that shortage is predicted to grow.  The field already has an older workforce, it is a very demanding job, and the pay is low.  No wonder turnover in nursing homes averages over 50%.  Last year The Wall Street Journal estimated that the need for health care aides will increase by 48% from 2010 to 2020.  And 2020 is nowhere near the peak of the baby boomers aging, so the need for these workers will keep growing.  It's a recipe for disaster.

Enter the robots.

There are already robots in health care.  Robotic surgery, delivery robots, robotic prescription dispensing systems, even therapeutic robots used in lieu of pet therapy.  But we've just scratched the surface, because we still think of care as being something that is delivered by a person.

People like to talk about the importance of the human touch, but when it comes to something like getting out of bed when I want to, I think I'd rather have immediate service from a robot than an indeterminate wait for help from an aide.  And there are some more unpleasant tasks -- like assistance with going to the bathroom -- where I'd prefer not to have to ask another person to help me at all.  Sometimes impersonal is better (just be gentle, please).

A 2012 Georgia Institute of Technology survey found that even the current generation of seniors was surprisingly open to having robots help them with household tasks, although they tended to still prefer humans for personal care.  The respondents were healthy and independent, and I wonder how much more open they'd be to robotic help for personal care as well if they'd had more experience with receiving such care from health aides.  

I also wonder how their children might have responded, if it came to a decision about using robot aides for their parents versus putting them in a nursing home, paying for home care, or providing it themselves.  

A follow-up survey of healthcare workers found them also receptive to assistance from robotic helpers -- even preferring them to humans for some tasks, like transferring, medication reminders, or taking vitals.  As one of the lead researchers said: "In fact, the professional caregivers we interviewed viewed robots as a way to improve their jobs and the care they’re able to give patients."

The robots are ready, or nearly so.  I've previously written about Toyota's Partner Robot Family.  Toyota announced in July that they were putting the home helper robot R&D in "high gear," specifically citing the goal of assisting independent living for the elderly and disabled.   Japan is also the home of Robear, which is billed as an experimental nursing-care robot.  Robear can already assist with transfers, and the leader of its development team said: "We really hope that this robot will lead to advances in nursing care, relieving the burden on care-givers today."

Japan is a natural locus for these efforts, as it has both expertise in manufacturing -- which has already been revolutionized by robots -- and has one of the oldest and most rapidly aging populations.  When the care shortage hits, it is going to hit first in Japan.

We really don't have a lot of options.  We can come up with cures that prevent people from getting conditions that rob them of their independence.  We can throw more people at the problem, if we can find the money -- or the people.  Or we can use technology to help, and that probably means some kind of robots.  

The first two options might be nice, but I think we better be getting the robots ready.  

I may not live long enough to see artificial intelligence serving directly as a clinician, as I've previously written about, sorry to say.  But a personal care robot to help us to stay independent, or at least less dependent on health aides?  That's something that we should be able to do sooner rather than later.

OK, iRobot -- maybe spend more time on this and less time on building a better Roomba.

Wednesday, October 21, 2015

I'm Shocked, Shocked

Some new research on the effect that physician practice arrangements have on spending offers some disappointing -- but not entirely surprising -- results.

Take physician groups.  The death of the independent physician practice, working solo or in a small practice, has long been predicted (and nostalgically lamented).  Honestly: would you rather be treated by a doctor practicing alone, or by one at the Mayo Clinic?  Physician groups allow for things like development of best practices, administrative efficiencies, and, in this era of Big Data, larger data sets that can be used to improve patient care.  When it comes to physician groups, bigger would seem to be better.

If physician groups are good, the theory goes, then integrating them clinically and financially with hospitals, such as through partnerships or common ownership, should be even better.  That allows for more aligned incentives and better coordination across the continuum of care.  Everyone loves Kaiser-Permanente, right?

The AMA says solo practice physicians now are only 17% of all physicians, down from 40% in 1983, and that physician ownership of their practice has declined from 76% in 1983 to just over 50% now (although other surveys say as few as 35% of physicians described themselves as independent practice owners, down from 62% as recently as 2008).  Our health care system, it would seem, is destined to be made up of large physician groups, many of which will be owned by hospitals.  

Too bad both larger groups and hospital ownership apparently end up costing us more.

A new study in Health Affairs found that as physicians concentrate in larger groups, prices tend to go up, at least for the 15 high-volume, high-cost procedures the authors looked at.  Twelve of the 15 procedures had prices that were 8 to 26 percent higher in areas with the highest physician concentrations; the authors found no significant relationship for the remaining three.

It would seem that whatever savings are gained by becoming part of a group are not being passed on to consumers (or their health plans), and/or that larger size allows groups to bargain for better reimbursement rates from payors.

An earlier survey, by one of the lead authors of the new study, found that more competition among physicians did, in fact, result in lower prices, at least for office visits.  One might conclude that more concentration into larger physician practices may have less to do with greater efficiency or higher quality than it does with reducing competition.

The moral appears to be, if you don't want to compete with them, join them!

Then there is the hospital ownership effect.  A study in JAMA Internal Medicine found that increased hospital/physician financial integration led to greater spending, primarily in outpatient care and almost entirely due to higher prices, not higher utilization.  Again, the price increases are attributed to greater bargaining power.  As one of the authors told The Wall Street Journal: "The market power that is in the hospital’s hands is conferred to the physician practice."  

The AHA protests that the study "is not reflective of the changes happening in today’s health-care market," citing newer value-based payment arrangements and hospital price increases that are at historically low levels.  That's kind of like saying, well, we weren't taking advantage of you before, but -- trust us -- in the future we really won't take advantage of you.  

One visible impact of hospital ownership of physician practices is the infamous facility fee add-on.  You've been going to the same doctor for years, then the practice gets bought by a hospital, and the next time you go your bill suddenly has this "facility fee" added onto it.  Same services, same office, same doctor -- but more expensive.

A good example of this practice is in Pittsburgh, where Highmark unilaterally decided to stop paying such fees for chemotherapy done at UPMC-owned oncologists' offices.   Highmark says the fees are "irrational."  UPMC says they are not only necessary but standard practice, including at Highmark's own hospital system, Allegheny Health.  The matter is in court. 

These kinds of fees, based on "place of service," are expressly permitted by Medicare, although one has to assume they were not the intent of those rules.  Of course the AHA defends them, saying that hospital-owned practices can bill as outpatient facilities because they are subject to the same, more onerous requirements as other hospital outpatient services -- but the fees just don't pass the sniff test.  That's not supposed to be why we're doing integrated delivery systems.

Maybe the AHA is right.  Maybe once we move more fully into the wonderland of value-based payment arrangements everything will work out: better quality for same or lower costs.  The American Medical Group Association (AMGA), the long-time trade association for physician groups, similarly says that their vision is: 
"Dramatically improved population health and care for patients at lower overall costs will be achieved by high-performing and clinically integrated medical groups and health systems."  
They've got all the right buzzwords (except they missed "value-based"), but AMGA has been around for 65 years, so where is that "dramatically improved" health and where are those lower costs?  

I've lived through DRGs, RBRVS, capitation, global capitation, staff model HMOs, IPAs, and an array of cost/quality incentive programs -- each of which was supposed to be the next magic bullet -- so I'm not holding my breath that payors will finally be able to outsmart providers when it comes to controlling revenue.  

Don't get me wrong: I've long been a believer both in large physician groups and in clinical integration between physicians and institutional care.  But I worry that those strategies to improve health care delivery are now being used more as tactics to maintain and even improve revenue.  Heck, we don't seem to be able to get physicians to stop providing services that even they admit are "low-value," as the Choosing Wisely initiative has tried to do.

As I've written before, when you have to create a new model that is supposed to be patient-centered (e.g., PCMH), and providers demand to get paid more just for participating in it, it's a pretty clear indication that our health care system isn't about patients but rather is about the providers.

The problem isn't the structures themselves but rather their focus.

Wednesday, October 14, 2015

The Grass Is Always Greener

Reuters reports that more hospitals are interested in having their own health plan, citing a 2014 survey by The Advisory Board that found one-third of 45 large health systems already had a health plan, with three-quarters of the rest already planning one or seriously considering it.  A new report from Moody's predicts the same trend.

As the saying goes, be careful what you wish for, else you may get it.

Part of this trend is out of concern about the proposed mergers by mega-health plans like Aetna/Humana and Anthem/Cigna.  Both the AMA and AHA have voiced their strong opposition to these mergers, citing anti-trust concerns (they were, of course, silent about the intra-market consolidation going on among health systems).   Evidently many health care systems figure that the best way to fight an 800 pound gorilla is to have your own monkey.

In some ways, the time has never been better for health systems to have a health plan.  Such plans typically operate in a single market, which is often a barrier for employer coverage.  ACA has made individual coverage a hot market, with close to 10 million people receiving coverage through the exchanges, so operating only in single markets is less of a barrier.

Similarly, prior to ACA the trend in health plan networks had been wider networks, but under ACA that trend has been markedly reversed.  According to McKinsey, almost half of exchange plans have narrow networks, and the percentage is higher in larger cities.   Health system-specific health plans should fit in with this trend very well. 

Health systems love to cite examples of integrated delivery/financing systems like Kaiser, Geisinger, Group Health Cooperative, Intermountain Healthcare, or UPMC.  Yes, there are examples of health systems that offer successful health plans.  No, that doesn't mean it is easy to do.  Geisinger spun off a consulting company, xG Health Solutions, to help other health systems become more like it, and UPMC did something similar with Evolent Health.  Still, other successes have been slow in coming.

As I've written before, several of those models have been around for decades.  If they were bending the cost curve by even 1 percentage point a year, by now they should be 15%, 25%, even 50% lower in cost than their competitors.  They are not.  If the models were easily replicated, one would expect to find that they'd already spread widely.  They have not.  For the most part, they've stayed in their home markets; even Kaiser has struggled outside California.    
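
To put rough numbers on that compounding claim -- a minimal sketch, with my own assumption that "bending the cost curve by 1 percentage point a year" means costs grow 1% more slowly than competitors' every year -- the cumulative gap would indeed land in those ranges after a few decades:

def cumulative_cost_gap(years, annual_bend=0.01):
    # Fraction by which costs end up below competitors' after `years`,
    # if they grow `annual_bend` more slowly every single year.
    return 1 - (1 - annual_bend) ** years

for years in (15, 30, 65):
    print(f"After {years} years: about {cumulative_cost_gap(years):.0%} lower")

# Roughly 14%, 26%, and 48% -- the same ballpark as the 15%/25%/50% figures above.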

It's not as easy as it looks, and it doesn't look all that easy.

As Moody's warns: "Not-for-profit hospitals with a health insurance business (often known as an integrated delivery system, or IDS) tend to operate at noticeably lower operating cash flow margins than similar health systems without insurance."  

As if that wasn't cautionary enough, there's the example of the health insurance coops created by ACA to compete with the traditional health insurers.  The largest coop, Health Republic Insurance of New York, is being shut down, despite having gained over 200,000 members.  The Kentucky Health Cooperative also recently announced it was closing.  Coops in Iowa, Louisiana, Nevada, and Tennessee have also closed.

Indeed, this past July the HHS Office of Inspector General found that 21 of the 23 coops lost money in 2014, due in part to the fact that 13 of them had "significantly" lower enrollment than expected.  Ray Herschman, president of xG Health Solutions, told Reuters that new health plans need to aim for enrollment of at least 100,000, so the coops' enrollment struggles should be particularly concerning to health systems.

The Kentucky Coop in particular blamed the failure of the federal risk corridor program, designed to cover higher-than-expected losses in the early years of the exchanges.  HHS recently announced it is only granting about 13% of the almost $3b in payments requested by health plans.  That's not HHS just being stingy; rather, it means that fewer health plans did better than expected and thus were able to "fund" the corridor.  Even health plans with significant experience and market share had trouble making money in the exchanges.  
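
The proration itself is simple arithmetic.  Here is a simplified sketch with made-up figures (the logic, not the numbers, is the point): plans that beat their targets pay into the corridor, plans that miss draw from it, and when the pay-ins fall short, every request gets the same prorated fraction.

# Hypothetical illustration of budget-neutral risk corridor proration;
# the dollar amounts below are invented for the example.
payments_in = 375_000_000      # collected from plans that did better than expected
requests_out = 2_900_000_000   # requested by plans that did worse than expected

proration = payments_in / requests_out
print(f"Each plan receives about {proration:.0%} of what it requested")
# With these figures, that's roughly the 13% cited above.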

This is not an easy business in which to start up.

Even assuming health systems are comfortable with the financial risk of having a health plan, there are some other factors to consider:

  • Brand: Consumer ratings for health care providers are dipping, but health plans would love to have the kind of ratings providers get.  Fairly or not, health plans get blamed for what they don't pay.  That kind of negative perception could harm a health system's overall brand.
  • Marketing: Health systems have tried to broaden their appeal to younger and healthier consumers, but still love to tout their expertise with serious conditions.  That can lead to a risk selection disaster for a health plan.
  • Regulation: Health systems certainly have plenty of regulation, but they may be surprised by the degree and types of regulations that health insurance brings.  And they are not usually from the regulators health systems are used to.  It is a big compliance leap.
  • Customer service: If health systems think they get a lot of calls now, it's nothing like what they can expect with a health plan.  Likely, more irate calls too.  Customer service can be outsourced, of course, but that puts both customer contact and customer satisfaction in the hands of other entities.
  • Focus: Both health plans and health systems are edging closer to population health management and coordination across the continuum of care, but the simple truth is that health insurance is not the same as providing health services.  Health systems need to be comfortable that having a health plan aids their focus rather than distracts from it.  Does a health system CEO really want to add claims backlogs to the list of worries?
As Moody's said: "Different management expertise is needed to operate a commercial health insurance business versus an acute care hospital."  

If a health system asked my opinion, I'd tell them to forget about developing a health plan.  Instead, I'd recommend they focus on taking more risk via value-based payments, bundled payments, even global or semi-global payments as in an ACO.  They'd learn about managing populations and dealing with upside/downside risks, without getting dragged all the way into the morass that having a health plan could bring.

Instead of coveting what they perceive as the greener grass of health plans, perhaps health systems should, as Voltaire suggested, cultivate their own garden.