Sunday, December 27, 2015

Better Think Again

Usually this time of year people like either to look back at significant events of the year just ending or to prognosticate about what might happen in the new year.  Well, neither my rear view mirror nor my crystal ball is quite that good, so I'll use my last post of the year to cite some examples of the kind of innovations that most fascinate me, ones that suggest the future may come sooner and/or be quite different from what we expect.

Or maybe they'll prove to be red herrings.  It's hard to say.

I'll give three examples.  How and even whether any of them relate to health care, we'll get to later.  In no particular order:

Tell me your password:  If you go online with any regularity, chances are good that you've got a password.  Probably, in fact, a whole bunch of them.  With online security becoming ever-more important, more sites require passwords, and tougher/harder-to-remember ones at that.  The trouble is keeping track of them all.

People use different strategies to deal with this ever-growing plethora.  Some people have good enough memories to recall them all, although I don't know any such people.  Others create lists to keep all their passwords, or utilize apps to store and even create passwords for them.   Ironically, those apps themselves require a password, which creates kind of a cat-chasing-its-own-tail scenario.

But passwords are so 1990's.

Google and Yahoo, separately, are testing getting rid of passwords, replacing that step with a message sent to your smartphone, which you can use to authenticate the log-in attempt.  Of course, if you have failed to lock your phone, or have forgotten its password, you're out of luck.
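The flow Google and Yahoo are testing amounts to an out-of-band, one-time challenge: the site generates a short-lived code, delivers it to your phone, and accepts the login only if you echo it back in time.  A toy sketch of that idea -- not either company's actual implementation, and with entirely hypothetical class and method names -- might look like this:

```python
import secrets
import time

class PasswordlessLogin:
    """Toy sketch of an out-of-band login challenge (hypothetical, illustrative only)."""

    CODE_TTL = 120  # seconds a one-time code stays valid

    def __init__(self):
        self.pending = {}  # username -> (code, expiry timestamp)

    def request_login(self, username):
        # Generate a short one-time code and "send" it to the user's phone.
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[username] = (code, time.time() + self.CODE_TTL)
        return code  # in reality, delivered via push notification or SMS, not returned

    def verify(self, username, code):
        # Codes are single-use: pop removes the entry whether or not it matches.
        entry = self.pending.pop(username, None)
        if entry is None:
            return False
        expected, expires = entry
        # Reject expired codes; compare in constant time to resist timing attacks.
        return time.time() <= expires and secrets.compare_digest(code, expected)
```

The key properties are the ones that make the scheme tolerable without a password: the code is random, short-lived, and single-use, so intercepting an old one buys an attacker nothing.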

Other new approaches include using fingerprints, which have the drawback that they, too, are now digital and thus can be stolen, or facial recognition, as Windows 10 now allows.  Hey, as if recognizing your face isn't enough, UK-based start-up AimBrain claims its software can recognize you by how you use your device, making passwords unnecessary.

I don't know what approach will win out, but, given how much people hate them, how poorly they use them, and how easy they are to hack, I'm willing to bet that in five or ten years we won't be using passwords.

Give me the cash:  We love our credit and debit cards.  Not all that long ago, it seems, we mostly used credit cards only for larger purchases, and debit cards not at all, but now you can pretty much use them about everywhere, for any amount.  Still, you probably go out with some cash on you, just in case.

Unless you live in Sweden, that is.

Sweden appears to be closer to a cash-free economy than just about anywhere.  According to The New York Times, cash is only used for twenty percent of consumer payments in Sweden, versus around seventy-five percent in the rest of the world (surely that is a typo, right?  75%?).   Some Swedish banks don't even keep cash on hand.  Seriously.  As one student said, "No one uses cash.  I think our generation can live without it."

In a cashless society, people use credit/debit cards, or smartphone-based approaches like Apple Pay or Google Wallet.  Those approaches still are primarily based on card networks, but don't require you to give your card info to a merchant.

Money is, after all, notational, even actual currency.  It only works because we all agree it has value, and whether it is cash or digital isn't fundamentally important.  As Bitcoin is slowly proving, "money" doesn't even need to be something issued by governments or central banks; just a bunch of people agreeing to accept it allows it to have value.

I don't know if the winner is going to be cards, electronic transfers, Bitcoins, or something else, but in ten or twenty years you may have trouble getting a merchant to accept your cash.

Cutting the cords:  This is a hot topic.  I've written on it myself.  Instead of using a landline for telephone service, we figuratively cut that landline and rely on our mobile phone.  Instead of being forced to take whatever array of channels our cable company forces upon us, we choose our own shows or our own packages of shows, usually streaming via the Internet.

The cable companies prepared for the future by becoming ISPs, so they love broadband, which they can charge more for (despite our abysmal speeds).  The landline telephone companies are either out of business now or have become mobile carriers.  But neither may be really ready for the future.  Here's the fact that makes me think so: home broadband use is actually declining.

According to the Pew Research Center, broadband use declined from 70% in 2013 to 67% in 2015.  That doesn't sound like much, but it is statistically significant, and it is a shocking reversal of prior trends; remember, at the beginning of the century broadband use was essentially zero and was still below 50% as late as 2006.  Fifteen percent have dropped cable or satellite service; a third of younger Americans have dropped or never had pay TV.

The reversal is attributed to more people thinking that kind of connectivity is too expensive, especially when they can get much of the content on, you guessed it, their smartphones.

All this sounds bad for cable but surely good for the mobile telephone companies, yet they shouldn't get too cocky.  We may not need them either.  Google has launched Loon -- "balloon-powered Internet for all" -- while Facebook is using drones to accomplish the same.  Right now both giants are testing these approaches in rural or third-world areas, but to the extent they succeed they will certainly help change the paradigm.  

Cable and mobile phone companies should remember that consumers' using the Internet through them was, in some ways, a fortuitous happenstance, and that if they are too greedy -- how much are those packages and data plans? -- or too shortsighted, the future may no longer include them.

In 20 or 25 years, cables and mobile phone networks may be as outdated as analog broadcasting and television antennas are now.

OK, so these examples may not be about health care, or may only impact health care in the same way they impact other industries.  Their importance, to me, lies less in their direct applicability than in their reminding us that the world -- even for health care -- doesn't always evolve incrementally or even predictably.

And I love that.

Wednesday, December 16, 2015

Oh, And It Is Also An EHR

You wouldn't -- I hope -- still drive your car while trying to read a paper map.  Hopefully you're not holding up your phone to follow directions on its screen either.  Chances are if you need directions while you are driving, you'll be listening to them via Bluetooth, glancing at an embedded screen on your dashboard, or maybe looking at a heads-up display on your windshield that doesn't even make you take your eyes from the road.  Or maybe you're just riding in a self-driving car.

But when it comes to your doctor examining you, he's usually pretty much trying to do so while fumbling with a map, namely, your health record.  And we don't like it.

A study in JAMA Internal Medicine found that patients were much more likely to rate their care as excellent when their physician didn't spend much time looking at their EHR while with them; 83% rated it as excellent, versus only 48% for patients whose doctors spent more time looking at their device's screen.  The study's authors speculate that patients may feel slighted when their doctor looks too much at the screen, or that the doctors may actually be missing important visual cues.

Indeed, a 2014 study found that physicians using EHRs during exams spent about a third of the time during patient exams looking at their screen instead of at the patient.  It is a dilemma; the records hold important information, and inputting new information is generally thought to be more accurate when done at point-of-care rather than at some point after the exam, so doctors are damned if they do and damned if they don't.

As one physician told the WSJ, "I have a love-hate relationship with the computer, with the hate maybe being stronger than the love." 

No wonder that the president of the American Academy of Family Physicians says: "We've taken this technology and we've embraced it, but I think a lot of us don't believe it's ready for prime-time. We've got this interloper in the exam room, but it's not there to help with the medical side as much as it's there to check boxes for insurers."

I might quibble that the familiar physician shibboleth about EHRs being there to serve insurers' purposes rather than to improve care perhaps is one reason why they are not ready for prime time, but I certainly don't dispute the fact that they are not.  After we've spent the past several years and over $30b of federal incentives to persuade physicians to adopt EHRs, physician satisfaction with them appears to be declining.

Know any health care professionals who rave about their EHR like they do their iPhone?

The problem is that we forget that the record is not the point.  It wasn't the point when it was on paper, and putting it in an electronic format doesn't make it the point.  The information in it is a tool -- just a tool.  It is supposed to help the physician diagnose the patient, and record what happens to the patient, so he/she can be better diagnosed in the future.  Figuring out what is wrong with a patient and what to do about it is the point. 

Paper records were siloed and made the physician draw his/her own conclusions without providing any assistance.  EHRs have the potential to draw data from larger patient populations, even if they don't yet do so very effectively, and can also give some assistance to physicians, like warning about drug interactions.   But working with them still involves looking at too many screens and having to populate too many boxes.  No wonder physicians are employing scribes.

Don't get me started on medical scribes.

Let's picture a different approach, one that doesn't start with paper records as its premise.  Let's start with the premise that we're trying to help the physician improve patient care by giving him/her the information they need at point of care, when they need it, but without getting in the way of the physician/patient interaction.

Let's talk virtual reality.

Picture the physician walking into the office not holding a clipboard or a computer or even a tablet.  Instead, the physician might be wearing something that looks like Google Glass or OrCam -- not a conspicuous headset like Oculus but something unobtrusive (a concept that investors are already pouring money into developing).  There might be an earbud.  And there will be the health version of Siri, Cortana or OK Google, AI assistants that can pull up information based on oral requests or self-generated algorithms, transcribe oral inputs, and present information either orally or visually.

When the physician looks at the patient, he/she sees a summary of key information -- such as diabetic, pacemaker, recent knee surgery -- overlaid on the corresponding portion of the patient's body.  Any significant changes in blood pressure, weight, and other vitals are highlighted.  The physician can call up more information by making an oral request to the AI or by using a hand gesture over a particular body part.  List of meds?  Date of that last surgery?  Immunization record?  No problem.

The physician can indicate, via voice command or hand gesture, what should be recorded.  It shouldn't take too long before an AI can recognize on its own what needs to be captured; the advances in AI learning capabilities -- like now recognizing handwriting -- are coming so quickly that this is surely feasible.  Keeping an EHR up-to-date should be child's play compared to, say, beating Ken Jennings at Jeopardy! or Garry Kasparov at chess.

In short, the AI would act as the medical scribe, without the patient even realizing it or the physician having to worry about it.

More importantly, the AI could quickly pull up/synthesize any pertinent literature, or assist the physician in coming up with a diagnosis and/or treatment plan -- as Watson is already doing for cancer.  Maintaining and presenting the EHR are the finger exercises, if you will; helping the physician deliver better care is the main function.  And without intruding on the physician/patient relationship.

Building better EHRs is certainly possible.  Improving how physicians use them, especially when with patients, is also possible.  But it's a little like trying to make a map you can fold better while driving.  It misses the point. 

We need a whole different technology that subsumes what EHRs do while getting to the real goal: helping deliver better care to patients.

Friday, December 11, 2015

It's a Doc's Life

There is an old expression "it's a dog's life" used to describe a life that is hard and unpleasant.  That expression is probably outdated; most dogs seem to live pretty comfortable lives.  Based on recent research, though, maybe we should be saying "it's a doc's life" instead.

It has kind of a certain ring to it, don't you think?

Let's look at some of the new research, starting with a study by the Mayo Clinic.  They updated a survey they did in 2011, and found a number of disturbing issues, including:

  • More than half (54%) of physicians report at least one symptom of burnout.  That compares to 45.5% in 2011.
  • Only 41% of physicians reported being happy with their work-life balance, compared to 48.5% in 2011.
  • Physicians fared worse on both burnout and happiness with work-life balance than the overall population, even adjusting for age, gender, relationship status and hours worked.
  • Pretty much every specialty showed declines on both burnout and work-life balance.
The authors believe that American medicine is at a "tipping point" due to the burnout and lack of work-life balance, and that there is an urgent need to address the underlying causes.  It'd be hard to disagree with them.

The study would be disturbing in its own right, but it is not the only such study that showed up just this month.  A study in JAMA by Mata et al. found that almost 30% of resident physicians reported depression or depressive symptoms.  As one of the authors said:  "What we found is that more physicians in almost every specialty are feeling this way and that's not good for them, their families, the medical profession or patients."

No kidding.

The study was a meta-study, analyzing results of other studies, and the prevalence ranged from 21% to 43% in the various studies, so the 30% may be conservative.  As with the Mayo study, the problem appears to be getting worse, with the results showing a slight but statistically significant increase over the five decades analyzed.

An accompanying editorial called resident depression "the tip of a graduate medical education iceberg."  It notes that training itself has changed little from the 1950's or 1960's, while "the actual delivery of medical care in 2015 would be unrecognizable to those same physicians."  New care options and resulting ethical dilemmas, more pressures on reimbursement and on demonstrating value, malpractice concerns, EHRs, and increased patient demands create a world that the graduate medical education system leaves residents ill-equipped to deal with.

The editorial calls for a fundamental rethinking of our approach to the graduate medical education system.  Again, it'd be hard to argue with that conclusion (and the rethinking shouldn't be limited to graduate medical education).

If any further evidence of a problem was needed, the 2015 Commonwealth Fund International Health Policy Survey of Primary Care Physicians provides some.  I'll leave the international comparisons for another day, but I was struck by a few of the findings for U.S. primary care physicians.  Twenty-four percent report not being well prepared to manage patients with multiple chronic conditions.  Less than half are well prepared to handle patients needing palliative care or long term home care, or patients with dementia.  And less than a third are prepared for patients needing social services, those with mental health issues, or those with substance use issues.

Given that one in four American adults have multiple chronic conditions, one in five have mental health issues, and about one in ten have substance use issues, well, I'd say primary care physicians should be pretty worried.

No wonder that only 16% of those U.S. primary care docs think our health care system works well (which was, by the way, by far the lowest across the 10 countries), or that 43% report their job is very or extremely stressful.  No wonder they're getting burned out.

Despite the above findings, another JAMA study found that at least one kind of primary care physician -- family practice residents -- still had high hopes.  Family practice residents reported that they planned to provide a broader scope of services than practicing family practice physicians, such as prenatal care and inpatient care management.  Whether that is recognition of a changing role or simply naive expectations remains to be seen.  As one of the authors told Reuters, "it may be that the previous generations have had these same intentions and for numerous reasons are not able to practice the way they intended."  

In other words, real world, meet residents.  Residents, real world.  Try to get along.

Look, I get it.  Being a physician is not what it once was.  No more physician-as-God, no more white coat mystique.  Their business model has radically changed from largely independent artisans to more typically being employees with productivity expectations, whose judgment is constantly challenged by patients, payors, administrators, and/or lawyers.  That must be hard to accept.

But, then again, I don't know many people who think their job hasn't changed significantly in the past twenty years, with more pressure, higher expectations, 24/7 demands, and more reliance on technology.  Physicians can rightfully argue that their role is different in that people's lives depend on their decisions, but other professions -- e.g., police officers, air traffic controllers, even civil engineers -- could claim the same.

It's tough all over.

We should be worried that physicians are depressed.  We should be worried that they feel burnt out.  We should be worried that they don't feel ready to manage the kind of complex patients they are seeing more of.  These are problems that need to be recognized and addressed.  But the life of the physician isn't going back to what it was in the 1960's, and that is not a bad thing.  

It should be a great time to be a physician.  We've never known as much about what causes various health issues, never had as many diagnostic tools, never had as many treatment options, and never had as much potential for people to be educated about their health and to be an active participant in their care.  If all that isn't exciting to someone, perhaps being a physician isn't the right profession.

For what it is worth, both medical school applicants and enrollees have reached record levels.  Let's hope they're not in for a big disappointment when they find out what a doc's life is really like.

Wednesday, December 2, 2015

The White Coats Are Coming! The White Coats Are Coming!

Let's say you were in a social setting, or even some business settings, and you introduced yourself to someone using your first name but that person's response was to introduce himself/herself using their last name and an honorific.  You might think they were oddly formal.  If, in those same settings, someone greeted you by your first name while introducing himself/herself using an honorific and his/her last name, well, you might think he/she was stuffy, if not a jerk.

Yet this happens all the time in health care settings.

Now, in the past, I've been critical of the use of the term "patients" to describe us laypeople in the health care system, arguing that it connotes a certain passive, secondary status about us.  Ashley Graham Kennedy, a philosophy professor at Florida Atlantic University, goes me one further: in a BMJ opinion piece, she asserts that "the title 'doctor' is an anachronism that disrespects patients."

How about that?

Professor Kennedy cites situations where doctors introduce themselves as doctors while not taking into account their patients' own professional titles.  How many of us have had a physician casually use our first name while expecting us to use their title?  If we happen to be sitting on an exam table or in a hospital bed wearing a gown that leaves us half exposed, the asymmetry is even more pronounced.

She notes that we don't need titles or even white coats -- more on that in a bit -- to figure out who our caregivers are or what their role in our care is.  More to the point, she argues that the title is an explicit expectation that we are to treat them with respect, due to the training that the title signifies, whereas respect is something that deserves to be earned, such as by how we are treated.

It is the 21st century after all.  We know that not all physicians are equal, that not all medical education and training is the same, and that not even physicians know everything, even within their specialty.  If we're supposed to automatically respect all physicians, it better work both ways.

Personally, I don't mind calling a physician "doctor," although if he/she calls me by my first name (which I'd prefer) I'd expect that to be reciprocal.  What I wonder is what the title really means anyway.  There are a lot of "doctors" out there.  If someone introduces themselves as "Dr. X," you don't know if that means an M.D./D.O., or if it means DDS, DMD, DC, DPM, Pharm.D., DVM, OD, Au.D, Ph.D or ScD.  I'm sure that list isn't even complete, even within health care.  So as a means of automatically signifying respect for our physician, it's a pretty poor marker.

Some of the reactions to Professor Kennedy's argument are even more interesting.  While she believes that the deference the title expects is incompatible with patients being equal partners with their physicians, some respondents -- who usually seem to be physicians -- argued that the supposed partnership is not, in fact, equal, since physicians' training and experience makes them experts in a way patients can never equal, no matter how much Internet research they do.

I think those kind of responses kind of make Professor Kennedy's point.

The doctor/patient relationship is at its most asymmetrical when there is some acute event -- e.g., we have a heart attack, we need our appendix out, we need chemo.  But with more of our health care spending going to chronic conditions that, in many cases, are linked to lifestyle choices, the asymmetry is greatly reduced, and physicians should think twice about assuming they know more about maintaining our health, especially if they can't demonstrate that they "practice what they preach" when it comes to those kinds of healthy choices.

If the title "doctor" is a verbal indicator of expected respect, the white lab coat is a tangible one.  Almost all U.S. medical schools bestow one as part of their graduation ceremony (although this tradition is, surprisingly, relatively new).  The fashion of physicians wearing them had to do with the (belated) acceptance by the medical establishment in the latter part of the 19th century that, yes, germs mattered; the coat was to suggest they kept their environment as sterile as in a lab.

Ironically, of course, the white coat itself may (or may not) be a carrier for germs, which has led the NHS to adopt a "bare-below-the-elbows" policy.

This is, apparently, a hot topic.  There are more issues than one might have imagined, including what physicians think patients want and -- my favorite -- how cartoons would portray physicians without a white coat.  Another opinion piece in BMJ bemoaned how the NHS "bare-below-the-elbows" policy has led to "scruffy doctors," urging them to "put your ties back on."

Not everyone agrees that the more casual attire leads patients to view doctors as scruffy and thus possibly lacking in hygiene.  (Dr.) Phillip Lederer wrote an excellent article on the controversy recently, reminding physicians that they'd still be doctors even without the white coat.  He concluded: "There is no harm in avoiding white coats, but there could be danger in wearing one."

That would seem like the killer argument, but apparently it is not.

I mean, really, I can see wearing a white coat if the physician actually works in a lab, such as a pathologist, but it is hard to see it as much else other than a status symbol if they are actually seeing patients.  Health care is full of status symbols, including not just the white coats and automatically calling physicians "doctor" but also those nice parking spaces reserved for physicians that patients and their families often have to walk past, or, for that matter, major donors getting their names on health care buildings.

We shouldn't take any of them more seriously than if, say, all physicians started wearing monocles to further model those 19th century physicians.  The point is, it's not supposed to be about their status, but about our health.

Paul Revere may have never actually shouted "The British are coming!  The British are coming!" but he did help herald a revolution.  Maybe by rethinking some of the traditional status symbols in health care we can signal a revolution of our own, fighting for a health care system in which we are more responsible for our own health and are expected to be more equal partners with the people who help us with that.

Or we could try the monocles.