Wednesday, January 11, 2017

At Least I'm Virtually Healthy

Virtual reality (VR) is hot.  It was one of the headliners at this year's CES (as it was at last year's...).  One report predicted that VR "will change everyday human experience in the coming decade," just as smartphones have in the past decade.   We're not just talking about much, much more immersive games, although that industry has been an early adopter.  Every industry is going to have to figure out how to best make use of it (and its cousin, augmented reality).

Including health care.

What interests me most is whether VR proves to be a path towards improving our health, or if it will end up making us care even less about it.

VR is already making inroads in health care.  One of the ways VR is being used is to help people manage pain -- whether for people undergoing painful procedures, people with chronic pain such as amputees' "phantom limb" pain, or even women in childbirth.

The theory is that the brain can only absorb so much information at a time, and the VR experience can essentially crowd out the information stream that is carrying the pain signals.  You are still hurt, your body is still sending out pain signals, but, if the VR is done right, those pain signals are just getting lower priority.  Our brain would rather be in VR.

VR has also been proposed as a powerful tool for addressing a variety of mental health issues, including stress, anxiety disorders, and PTSD.  As with pain relief, some of the traditional alternatives include a variety of pharmaceutical remedies, some of which carry risks of addiction, so VR could be a boon.

People are using VR for their health too, not just their health care.  Many people find exercise boring, especially extended sessions on treadmills, exercise bikes, or ellipticals.  For several years, many gyms have allowed users to pretend they were elsewhere while they exercised, tying activity on the exercise machines to images of more scenic locales playing out on flat screen TVs in front of them.

VR takes this to the next level.  Instead of essentially watching images on television, VR makes it feel almost as if you are actually there.  VirZoom, for example, claims to straddle esports and exercise, as its exercise bike connects to a number of leading VR headsets.  Users can compete with other players as they work out.  Fitbit has already partnered with them.

There is no shortage of other entrants trying to make VR part of fitness efforts.  For example, Blue Goji and Holofit have similar approaches to VirZoom, while Black Box VR offers a virtual gym, complete with virtual personal trainer.

The VR fitness program I want to see, though, would help remind people why they should adopt better health habits.  Many people have gotten used to their current health status, even if that status includes excess weight, a poor cardiovascular system, and weakening muscles.  We often slide from good health to fair health to poor health without fully realizing it, and that can be a pit that is hard to climb out of.  Watching TV is easy and junk food tastes good, while exercise is hard and eating better requires some discipline.  So many don't make the effort.

What if VR not only took us to other places, but also helped show us how we could feel?  Want to see how your body would look and feel if you walked a mile a day and lost ten pounds?  If you ran 20 miles a week and lost 30 pounds?  Actually experiencing the fruits of your efforts before you undertook them, in order to better understand the effort/reward trade-offs, might serve as a powerful motivator for those who have had a hard time making those trade-offs.

VR could similarly help people make more informed decisions about proposed treatments that can have both positive and negative trade-offs, such as knee or hip replacements.

The better VR becomes, though, the greater the danger that, well, the VR version of us might be preferable to the "real" us.  People have gotten used to the concept of avatars in games, and invest a lot of emotional energy into what that avatar is and how they can "improve" it.  Our avatar in VR may increasingly be us, only a new-and-improved us.  Once robots have taken over our jobs and the government pays us a universal basic income (as Elon Musk and others have suggested will both happen), there may be less reason to be in reality and all the more reason to spend our time in VR.

We're already worried about the impact of excessive screen time on the health habits of teens, and that is without VR as a common option.  Per capita time spent playing games continues to steadily increase -- again, without VR.  Think of the time we'll soon be spending in VR.

Once VR is ubiquitous, inexpensive, and as nearly lifelike as we can perceive -- all of which are in our near future -- why wouldn't we want to be in VR?  

It sounds a little like The Matrix, except that we might be voluntarily making the choice to live in VR instead of having it imposed upon us by our AI overlords.  We might like to think we're Neo, the hero of our own lives, but many of us might opt to be like Cypher, who found reality bland, difficult, and dangerous, and chose a virtual steak over helping his flesh-and-blood fellow humans.

We're barely scratching the surface of what VR is and what it can do.  VR headsets are clunky and expensive, and still have limited options for what they let us experience.  They're like early PCs or early smartphones.  Not many in 1987 imagined what their PCs would be able to do in 2017, and not many in 2007 saw all that the smartphones of 2017 would offer.  The gap between the VR of today and the VR of 2027 will be wider than the gap between the first iPhone and today's iPhone 7.

Virtual reality is going to do wonderful, amazing things.  It will change how we play games, how we do business, how we socialize, how we get health care -- in short, how we live our lives.  The question is, will it help us live better, more productive lives -- or will it become our lives?  


Wednesday, January 4, 2017

2017 Prediction: Some "Oops" Ahead

Predictions for 2017 are everywhere this time of year, and it is no wonder.  There are so many technological advances, in health care and elsewhere, and a seemingly endless appetite for them.  We all want the latest and greatest gadgets, we all want the most modern treatments, we all have come to increasingly rely on technology, and we all -- mostly -- see an even brighter technological future ahead.

Here's my meta-prediction: some of the predicted advances won't pan out, some will delight us -- and all will end up surprising us, for better or for worse.  Like Father Time and entropy, the law of unintended consequences is ultimately undefeated.

What started me thinking about this was an article in Slate, "Self Driving Cars Will Make Organ Shortages Worse."  Self-driving cars are a hot area these days, with automakers trying to prepare for a future where car ownership lessens in importance, and ride services like Uber and Lyft trying to make that happen.

One of the key appeals of self-driving cars is that, well, humans generally are pretty crummy drivers, being prone to distractions, falling asleep, driving while impaired, and so on.  Without us, the reasoning goes, there should be a lot fewer accidents and deaths.

That sounds like good news (unless you are in the auto repair business), but, as the Slate article points out, 1 in 5 organ donations comes from victims of car accidents.  Stop us from killing ourselves or others on the road, and suddenly a huge problem crops up for the roughly 120,000 people on the organ transplant waiting list.

Talk about unintended consequences.

Well, you might say, that's a good problem to have, and technology will fix it too.  After all, soon we'll have artificial organs, perhaps even 3D printing them.  As big a fan of bionics and 3D printing as I am, though, somehow I suspect there are some shoes yet to drop with them as well.

An even more worrisome example is gene editing.  In a recent post, I likened its potential to magic.  Snip a few undesired genes out, substitute some other ones, and we can potentially cure or prevent some important health problems.

Even more startling, the technology can be used to ensure that such edits persist and spread in subsequent generations, potentially changing entire species.  This process of changing entire future generations is called "gene drive."

Magic indeed.

Michael Specter did a deep dive on the topic in The New Yorker.  He profiled the work that Kevin Esvelt is doing at MIT.  One project that Dr. Esvelt is working on is getting rid of Lyme disease by ensuring that the disease's repository -- mice -- can no longer carry it, so that ticks biting them don't get infected and thus can't infect people.

The same could be done with say, mosquitoes and the malaria they can carry.

The potential dangers are becoming recognized.  The director of national intelligence listed gene drive as a potential weapon of mass destruction, a concern echoed by the National Academy of Sciences.  As Dr. Esvelt said, "My greatest fear is that something terrible will happen before something wonderful happens. It keeps me up at night more than I would like to admit."

Worse yet, the terrible things may not even be deliberate.  As another researcher warned Mr. Specter:
But gene drives affect entire communities, not single individuals. And it can be almost impossible to predict the dynamics of any ecosystem, because it is not simply additive. That is exactly why gene drives are so scary.
The real danger may be that an overwhelming need in a local area -- e.g., Ebola, AIDS, malaria -- may cause local officials to take chances.  As an African public health official told Mr. Specter, "Principles matter to us as much as they do to Americans. But we have been dying for a long time, and you cannot respond to death with principles."

Applying gene edits in such a situation might solve immediate concerns, but with broader implications.  Dr. Esvelt put it bluntly: "A release anywhere could be a release everywhere."

Because CRISPR is making the technology much cheaper and much more accessible, how it is used becomes especially important.  We don't really want someone in their garage lab inadvertently wiping out all cats, for example.  As Dr. Esvelt stressed, "The only way to conduct an experiment that could wipe an entire species from the Earth is with complete transparency."

That may be a big ask; we're not even completely transparent about adverse outcomes of current types of medical treatments.

Mr. Specter himself appears to be cautiously optimistic:
We have engineered the world around us since the beginning of humanity. The real question is not whether we will continue to alter nature for our purposes but how we will do so. Using a mixture of breeding techniques, we have transformed crops, created countless breeds of animals, and converted millions of wooded acres into farmland. Gene drives are different; one insect could affect the future of our species. But it is a difference of power, not of kind.
Pew Research Center did a study on using biomedical technologies to enhance human abilities, and found that we are decidedly skeptical.  About two-thirds were worried about each of the three specific scenarios -- gene editing, brain implants, and synthetic blood -- but about three-fourths thought that the technologies would end up being used before they were fully tested or understood.

Ironically, respondents were most enthusiastic about gene editing, but only in regards to doing so to give babies a reduced disease risk -- not changing our entire population through a gene drive.

Technological change is going to happen.  It will inevitably change our culture and, to some extent, us.  We're not very good about not using technology once it is invented.  The question is how we prepare for it.  Christopher Mims believes that "the art and science of futuring is fast becoming a necessary skill."  It is less about predicting the future than preparing for a variety of futures.

As Donald Rumsfeld infamously once said, there are known unknowns and unknown unknowns.  We think more about the former than the latter, but that needs to change.   

We should be spending as much time thinking about the potential consequences -- good and bad -- of cool new technologies as we do being excited about how cool they seem.

Tuesday, December 27, 2016

Indistinguishable From Magic

One of the late Arthur C. Clarke's great (and often cited) quotes:
Any sufficiently advanced technology is indistinguishable from magic.
Evidently, most of health care's technologies are not yet sufficiently advanced.

For example, just think about chemotherapy.  We've spent lots of money developing ever more powerful, always more expensive, hopefully more precise drugs to combat cancers.  In many cases they've helped improve cancer patients' lifespans -- adding months or even years of life.  But few who take them would say the drugs are without noticeable side effects -- e.g., patients often suffer nausea, vomiting, hair loss, fatigue, appetite loss, sexual issues, or a mental fog that is literally called "chemo brain."

No, when you are having chemo -- or radiation, or surgeries to remove tumors -- you'll know your treatment isn't magic.

Antibiotics seemed like magic when we first started using them, and use them we did.  They allowed millions of people to survive infections that might previously have killed them, and helped millions more get better faster from other infections.  But we've painfully learned that they are not without consequence.  Taking them as prolifically as we have has led to antibiotic resistance, to the point that the UN has declared it a crisis.  Equally as bad, we've belatedly realized the havoc that they wreak on our microbiome, causing a still-unknown range of adverse health outcomes.

If antibiotics are magic, they are the kind of magic that is of the "be careful what you wish for" variety.

We've gotten better about some of our treatments.  Getting cataract surgery is light years ahead of what it used to be, allowing the procedure to be done on an ambulatory basis instead of requiring extended hospital stays.  Similarly, laparoscopic surgery allows for much smaller incisions, smaller resulting scars, and shorter recoveries.

But, in both cases, you'll still know you had surgery -- before, during, and after it.

In some sense, we've built our health care system this way, which is why it is actually a medical care system.  We increasingly don't think we can get well until and unless we see a physician, and he/she does something.  It often seems as though it doesn't really matter what they do -- the infamous placebo or care effect -- as long as a physician does something to us.

They can give us sugar pills, they can pretend to give us injections, they can even fool us into believing we had surgery, and we get better almost as well -- sometimes as well! -- as if we actually had medical care.  That says something about us, and about our expectations as to how we achieve health.

It also helps explain why we put up with a number of at best unpleasant, at worst harmful tests and procedures.  It is as if we believe that the more we suffer from our treatments, the more likely it is that they will be successful. Many primitive cultures might recognize this principle.

We should be aiming higher.  This is the 21st century, after all.  We should be aiming for interventions that are, well, indistinguishable from magic.

We've done a poor job of taking into account cost-effectiveness in medical treatments.  Outrage over prescription drug price hikes and concern about the ballooning costs of new cancer drugs have helped us be more aware of the problem, but cost-effectiveness is not the only issue.  A new study in JAMA Internal Medicine found many new cancer drugs not only didn't extend patients' lives but also that their impact on patients' quality of life wasn't even evaluated.

Pretty much, as long as a new drug, device, or treatment can demonstrate that it provides some clinical value, we seem to end up using it, even if it is vastly more expensive and/or produces no better outcomes than existing options.  Even if we as patients suffer more from the process of getting or using it.

This has to change.

We need to look at how much things cost to make us better, how much better they actually make us, and whether they improve the process of getting better.

A couple examples illustrate the kind of interventions that more closely fit the goal of delivering some magic:

  • Gene editing: Recent advances in a technology called CRISPR-Cas9 have allowed the field of gene editing to advance by several years.  It can snip out defective genes and, potentially, replace them with "correct" versions.  Instead of treating a condition or disease, gene editing could eliminate the precursors that cause or allow them to develop.
  • Nanobots: Nanotechnology has been on health care's radar screen for many years now, but the supporting technology is finally starting to make it more of a reality.  Simply inject nanobots into a patient's bloodstream and they might deliver targeted drugs, destroy cancer cells, repair tissue damage, clear clots, and so on.  Like our immune systems, the nanobots could wage a never-ending war against things that might cause us harm.
Not only are these approaches extremely well targeted, but they would essentially be invisible to patients.  You might have an injection or swallow something, but, after that, all the hard work would happen without your realizing it.  You'd just start getting better.

Now, that's more like magic. 

There are, of course, other examples of magic on the horizon.  Using virtual reality to teach anatomy.  "Organic electronics" that you could wear, or have implanted.  3D bioprinting for organs or tissues.  Wide-ranging blood tests from a single drop of blood -- oops, that one proved to be fake magic.

This may come as a blow to medical professionals, but we don't really care about medical care.  We care about our health, and about being healthy.  We may have grown accustomed to needing medical professionals and medical treatments to try to achieve that goal, but it's like the old adage about sausage-making: we don't really want to see (or be part of!) the process.

A few months ago Martin Legowiecki wrote in TechCrunch that "the ultimate UI is no UI."  It should be "invisible" to users.  It should be the same for health care; we should seek to have care be invisible.

Health care innovators, don't settle for good, or even better.  Shoot for magic.

Tuesday, December 20, 2016

Health Care Should Be Five By Five

People love to talk about "moonshots" in health (e.g., Joe Biden, GE).  I'm not exactly sure why that is a good goal.  The actual moonshot took thousands of people many years and tens of billions, all to send a few people far away for a short period and never again.  It may or may not have produced otherwise useful technological advances (Tang, anyone?).  Sounds a lot like health care now, actually.

I suggest a different goal: let's make health care "Five by Five."

Five by five is a communications term to quantify the signal-to-noise ratio.  It means the best possible readability with the best possible signal strength.  I.e., the signal is loud and clear.  By contrast, "one by one" would essentially mean "I can't figure out what you're telling me but that's OK, because I can't really hear you."

Health care is full of signals but also, unfortunately, full of "noise."  Many people don't get care when they need it, some people get the wrong care, too often people don't get better -- or get worse -- from their care, and everyone has their own horror stories of health care bureaucratic nonsense.  And we pay way more than any other nation for all this, without getting much for it.

Here's my proposed Five by Five:
  • No more than 5% wasted care
  • No more than 5% administrative costs
Let's take each of those in turn.

Wasted Care

Estimates are that as much as a third of U.S. health care is "waste" -- mostly care that isn't necessary or appropriate for the patient.  It may be care that statistics show won't benefit most people receiving it, or it may be care for which there really aren't any efficacy statistics available at all.  Both situations are much more common than most of us, or even physicians, realize.

It is easy to see how that happened.  Physicians learned from their training, which was highly variable, and from practicing, which historically took place in solo or other small practices.  Everything was paper-based, which made collecting statistics hard and applying any learnings from them harder.  John Wennberg and his colleagues have been documenting the resulting geographic variability in care for decades.

As physicians like to say, medicine is more of an art than a science.  

It doesn't have to be this way.  Although our current EHRs are clunky, loath to communicate with each other, and hard to get meaningful advice from, this is a transitional issue.  With more data, better interfaces, and more use of artificial intelligence (AI), we should expect EHRs (or their technological successors) to participate in the evaluation, diagnosis, and recommended treatment of patients.  They should be able to do real-time searches for comparable patients, check the latest applicable research and clinical guidelines, and produce statistically based recommendations for the clinician (unless, in fact, the AI is the clinician).

As patients, we shouldn't passively submit to treatments that are of dubious value, nor pay for ones that do not produce expected outcomes.  And in this connected day and age, there is no reason we shouldn't know patients' outcomes.  

With the right data and the right analysis applied to it, we should know what appropriate care is, and expect it (and only it).  Maybe 5% is too high a bar.

Administrative Costs

If there is one thing about our health system most people seem to agree on, it is that its administrative costs are too high.  Too much of our health care dollar is spent on tasks that are not directly involved in delivering care.  Estimates vary, from lows around 15% to highs of 25% or more.  And virtually all of the job growth in the health care sector in the last fifty years has come from administrative jobs.

Much of the administrative cost is associated with payment: who is going to pay how much for what.  We have thousands of health plans (including self-funded employer plans), each with their own schedules of allowable amounts, and each with their own benefit designs.

Those benefit designs vary not just in cost sharing provisions but also the "fine print" of what is covered.  Since no one knows what care is appropriate, we've fallen back to incomprehensible benefit designs to define it, and those designs do not well serve the patients/members, their providers, or the health plans themselves.

No wonder no one ever seems to know who is covered for what, or for how much. 

Again, it doesn't have to be this way.  Here are a few changes we should make:
  1. Uniform patient identifier: Most industries use cell phone numbers to identify customers, and health care should follow suit.  Put the security around who can access what information about such a number, not around creating it.  It will make tracking and transactions much easier.
  2. No provider networks: Provider networks have outlived their purpose.  Their existence creates confusion and frustration for consumers, and involves significant cost to both providers and health plans.  We should want people to go to the best providers.
  3. Clearinghouse: Rather than providers and health plans maintaining direct connections with each other -- count all those! -- in an era of cloud computing (or blockchain!), providers should simply be able to submit transactions to a neutral database, which each patient's health plan can act on and return to the provider.
  4. Appropriate care: Health plans should pay for appropriate care; period, end of story.  Health plans don't get to unilaterally decide what that is; nor do individual physicians, or patients.  As described above, determining this will be more clear-cut.  Benefit design and premiums should just reflect how much of the care the health plan pays versus the member, not which care.
  5. Real prices:  Providers must cease their nonsense about "charges," and charge actual prices, which should be transparent.  Health plans should only pay market prices -- pegged at some level of what other providers charge for the same service and same outcome (and higher for better outcomes).
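To make the five changes concrete, here is a minimal sketch of how they could fit together -- a uniform identifier keying claims, a neutral clearinghouse, and benefit design deciding only how much of the allowed price the plan pays.  Every name, number, and the in-memory "database" below is purely illustrative, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    patient_id: str   # uniform identifier (item 1) -- e.g., a cell phone number
    provider: str
    service: str
    price: float      # the provider's real, transparent price (item 5)
    status: str = "submitted"

class Clearinghouse:
    """Item 3: one neutral database instead of point-to-point connections."""

    def __init__(self):
        self.claims = []

    def submit(self, claim: Claim) -> None:
        """Provider drops a claim into the shared database."""
        self.claims.append(claim)

    def pending_for(self, plan_members: set) -> list:
        """A health plan pulls the unprocessed claims for its members."""
        return [c for c in self.claims
                if c.patient_id in plan_members and c.status == "submitted"]

    def adjudicate(self, claim: Claim, market_price: float,
                   coinsurance: float) -> dict:
        """Item 4/5: the plan pays its share of the pegged market price;
        benefit design decides how much of the care, not which care."""
        allowed = min(claim.price, market_price)
        claim.status = "processed"
        return {"plan_pays": round(allowed * (1 - coinsurance), 2),
                "member_pays": round(allowed * coinsurance, 2)}
```

In this toy version a plan with 20% member coinsurance and a $100 market price would return $80/$20 on a $120 billed claim -- the plan never adjudicates which care, only its share of the pegged price.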
Why Not?

None of this is easy.  None of it will happen overnight.  But nothing in it is impossible either.  I believe in making big plans.  Moonshots are nice, but Five by Five provides meaningful, measurable goals for changes that would benefit the health care system, and each of us.  Maybe we won't get to those goals, but we certainly can do better than the current 33 by 25.  

It'd be easy to point out why Five by Five won't happen, but it's harder to argue that it shouldn't.  

Tuesday, December 13, 2016

Just Doing Our Jobs

Health care fraud is bad.  Everyone agrees about that (except those who profit by it).  We'd similarly agree it is all too pervasive.  Just in the past few days, racketeering charges were brought against former executives of Insys Therapeutics, numerous charges were brought against leaders of Forest Park Medical Center (Dallas), 18 people in Pittsburgh were charged in a prescription fraud scheme, a New Jersey chiropractor was arrested for health fraud, and the feds settled a $4.5 million fraud case against a Florida orthopedic clinic.

The list goes on and on, week after week, in every state, for every type of medical specialty, and against most health insurers.  Some estimate that fraud could account for up to 10% of health care spending.  But that's chump change: estimates are that other kinds of wasteful spending, such as unnecessary care and excessive administrative costs, are easily double that.

An op-ed in The Boston Globe may have it right:  we need an overdiagnosis awareness month.

The op-ed was a tongue-in-cheek suggestion to highlight the various cancer awareness months, the most famous of which is October's Breast Cancer Awareness.  These campaigns promote the need for the associated screenings, but don't typically also mention how controversial many of them are.  As the op-ed noted, many screenings result in false positives that end up with expensive additional testing and significant patient anxiety, or in detection of early stage cancers that might never actually present any actual threat.

Overdiagnosis goes much further than screenings.  As Atul Gawande wrote last year, we're getting an "avalanche of unnecessary care": too many services that are not just of low value but of, at best, no value to patients -- and, at worst, actually harmful to them.  Not just pointless tests or unneeded prescriptions, but also too many questionable procedures, such as total knee replacements, heart stents, or spinal fusions.

Now, in some of these cases -- such as when physicians have direct investment interests in the drugs or devices being used, or in the facilities in which they are done -- the parties involved may be knowingly letting dollar signs outweigh patient interests, just as there are people committing fraud.  But those are by far the minority of people working in health care.

The real problem is that most people involved in the "epidemic" of overdiagnosis and overtreatment in our health care system, well, they think they're just doing their jobs.

They don't think they're trying to rip anyone off, they certainly don't think that they're harming anyone, and they most definitely don't think their role is superfluous.  From the lowliest claims adjuster to the most overworked front desk attendant to the highest paid surgeon, and everyone in between: they all think they are performing a necessary service.

This is all only possible because it is still too hazy what the right treatment is for whom, and when -- not to mention what a "fair" price might be for anything.  So, when in doubt, do more.

As a result, health care employment is booming.  Some project it will be the largest job sector within three years.  Indeed, as the chart below shows, virtually all of the U.S. job growth this century has been in health care jobs.  That, quite simply, is astounding.

Yet, despite all this growth, there continue to be urgent cries of shortages of key health care professionals.  We just cannot seem to get enough qualified health care workers.  If you're looking for a job, that's good news, but if you're paying the bill for all those jobs, it should be scary.

Unlike manufacturing, we're not seeing productivity increases in health care, despite massive "investments" in health care IT.  Some argue that health care productivity is actually decreasing, a notion that fits the stereotypes of doctors struggling to input into their newfangled EHRs.

In health care, we just add more jobs.

When hospitals expand, drug companies grow, or health care start-ups jump in the fray, local politicians get all excited about all those added jobs.  Cities like Cleveland and Pittsburgh have been touted as reinventing themselves from dying Rust Belt cities to regional health care hubs.  But those jobs mean more spending, all of which has to get paid for by someone.

Even new research which argues that, contrary to popular belief, market forces do work in health care had to admit:
In other words, we found that patients were attracted to hospitals that used more inputs over hospitals that were just as good but used fewer inputs. This is not a good thing because society is paying for those inputs.
Overtreatment works, at least if you're the one doing the treating.

Health care has won the war.  We all think we need medical attention and treatment.  We've given up any hope of reducing health care spending; we're happy if it just doesn't grow too fast.  We complain about our health insurance premiums, but we don't have any idea if our local hospital is charging more than its nearest competitor (nor do we seem to care if, indeed, there is a nearby competitor).  If our medical treatments don't make us better, or even make us worse, we humbly just submit to more of them; it never seriously occurs to us to ask for our money back, at the very least.

And everyone in health care keeps doing their job.

Look, this fantasy isn't going to continue.  Health care isn't going to become 100% of GDP.  It's not going to get to 50%, or 40%.  At some point the revolt will happen, the revolution will occur, and health care spending will finally slow, stop, and eventually plunge.

Then all those health care jobs are not safe.  People will lose their jobs.  A lot of people.  People who, until then, thought they were doing good.

It's nice to pretend that it will mostly happen to paper-pushers (or, nowadays, keystroke enterers), but in truth some of the losses will be for people now providing care.  It's also nice to assume that, if so, it will only be people providing unnecessary care, but there probably won't be such a bright line.  Job losses will cut across the board.

So when the next health care innovator comes along, we should try to get past the hype and ask: OK, specifically, what jobs will this eliminate -- which ones, how many, when?  If they don't have answers, or only offer vague promises, well, smile politely and get out your wallet.

In health care, perhaps one way to do your job might just be to find a way to eliminate it.

Monday, December 5, 2016

No Forms For You!

What do you hate most about health care?  It could be the uncertainty about diagnoses, or the imprecision of treatments.  Or the opaqueness about the actual performance of our providers.  Maybe it is the drabness and/or confusing layout of many health care settings, or the interminable waiting we do in them.  But somewhere on the list has to be having to fill out all those forms, over and over, at practically every stop along the way.

If only someone would do for health care what Amazon is trying to do with grocery stores with Amazon Go.

If you've missed the many stories about Amazon Go, it goes something like this:

  • You scan an app on your mobile phone when you enter the grocery store.
  • Each time you pick up an item from the shelf, it registers in your "virtual cart" (don't worry, if you decide to put it back, it gets deducted).
  • When you are done shopping, you simply walk out with your items, and the total is charged to your account (presumably using one of your Amazon payment options).
No waiting in lines, no putting items on the conveyor belt, no cashiers -- not even a self-serve checkout.  As Amazon says, "grab and go."

At this point, Amazon is just testing the concept with a prototype convenience store, but The Wall Street Journal reports that the pilot could lead Amazon to open up as many as 1,000 locations by the end of 2017.  It is one of at least three grocery store concepts Amazon is testing.

Take that, Walmart.

Grocery stores have already embraced self-service checkouts, which have been widely available for several years.  They do help cut labor costs, but they are also believed to double losses from shoplifting, which arguably wipes out any financial advantage self-service offers.

With Amazon Go, though, the store "knows" what you take from the shelves and charges you for every item, so shoplifting would become much harder (of course, installing the technology to track what you take would not be inexpensive).  No missed items hiding in your cart (or pocket), no items that did not scan in the checkout.  Checkout is finished as soon as you are done shopping.

Contrast this to most health care visits:
  • The front desk insists on verifying your current coverage, even if you were there the day before.  They may take a photocopy of your insurance card(s).
  • If you are a first-time patient, or haven't been in for a few months, you have to fill out various forms: health history (including family histories), prescription list, contact details, notice of privacy policies, source of problem (e.g., work related or auto accident), and current complaint/symptoms/reason for visit.  It doesn't seem to matter much if the information is already in an EHR, even theirs.
  • If you are lucky, the office may have let you fill out some of these forms online, or at least print out the forms so you can fill them out in advance, but odds are you are not seeing the provider until you have completed some piece(s) of paper.  
  • When you leave, of course, they're likely to ask you for some payment, to the extent they know it at the time.  In any event, at some future point they'll likely submit a claim to your health insurer, which will eventually lead to you being billed whatever it ends up that you owe them (assuming that neither they nor the insurer made any mistakes, in which case you start over).
It all makes a trip to the grocery store look pretty pleasant.  And you don't even end up with any cookies.

Let's imagine what the process might look like if someone like Amazon re-imagined it:
  • Your phone (or other device) has access to all the pertinent information: insurance, health records, and any information you want the providers to know about the need for the current visit.  For example, this could be stored in the patient-facing EHR app.
  • Upon entering the office, you could either scan the app through a reader or have it communicate via Bluetooth through a secure connection.  That automatically updates the provider's records.  
  • As services are provided to you, they get uploaded to your care summary -- ideally, using consumer-friendly terminology and actual prices.  You can see it at any point.
  • Any prescriptions that result from the visit (e.g., drugs, PT, imaging) get added to your app, which you can then share with the applicable provider(s).
  • When you are all done, you pass the front desk entirely, and the care summary (or a version of it) gets sent to your insurer as the claim.
I know, it sounds too easy, and it probably is too simplistic.  The hard part, though, isn't the part about the forms.  That all seems entirely feasible, if EHR and billing vendors offered a modest amount of cooperation (all right, that's not a given).

Even itemizing services shouldn't be terribly difficult, since health care is full of standardized lists of services and procedures.  Admittedly, it's not like picking out a UPC code from a can of soup, but it's not impossible to imagine doing in real-time or near real-time.

Getting the prices right, especially with the right negotiated/allowable rates, is probably the hardest part, which speaks to why health care needs to get away from its absurd list charges and toward more retail-oriented price lists.   And, even in the short term, there are plenty of vendors (e.g., Castlight Health, Healthcare Bluebook, or HealthSparq) who would probably say they could assist.

Over its history, Amazon has done a great job of reinventing our retail experience.  Many probably didn't expect that the grocery experience would be one of those, but not many thought buying books online would work 20 years ago, or that moving beyond books would also succeed.  You have to give Amazon credit for pushing the envelope about how to make the purchasing process easier.  

Unfortunately, health care isn't quite as good about reinventing its experience.  

It's not that we want people to buy more health care services (although help in getting us to buy health services more prudently would be greatly appreciated).  But we could certainly make the process of dealing with all those forms easier.  

Maybe Amazon should go into the retail clinic business.  

Monday, November 28, 2016

I'm OK -- You, Maybe Not So Much

It is widely agreed that competition, or lack thereof, in health care is a problem.  The Wall Street Journal recently showed how Viagra and Cialis prices seem to move -- up, of course -- in lockstep.   USA Today found Walgreens charging 1237% more than Costco, for the same drug.  Economists like Martin Gaynor have been discussing problems with competition in health care for years.  The Harvard Business Review just published a lengthy article on the problem.

But, it turns out, we may be ignoring an important competition that has real impacts on our health: with each other.

We've been becoming increasingly aware that there are numerous social determinants that have dramatic impacts on health (e.g., Healthy People 2020 and the RWJF).  Where you live, how much you make, how much education you have, what your family situation is -- all are closely correlated with your health.  But so is where you stand in the social pecking order.

It isn't the kind of thing that can be easily tested in a blind clinical trial, so researchers did the next best thing: they studied its impact in monkeys.  Macaques, to be more exact.  In a paper published in Science, researchers from Duke, Emory, and the University of Montreal found that social status alters immune function.

The researchers studied 45 female macaques, all with the same access to resources (and care).  They broke the group into 9 subgroups, allowing for different dominance patterns, and measured the resulting immune responses in each macaque.  Lower status individuals showed a higher ongoing inflammatory response, indicating higher levels of stress.

An inflammatory response is, of course, how the body deals with infections, but when the immune system works too hard for too long, it can attack the body's own cells, leaving it at higher risk for a variety of illnesses, such as heart disease.  And the increased inflammatory response in the lower status individuals didn't even serve its intended purpose; the higher status macaques still had a stronger anti-viral immune response.

What made the study especially powerful was that the researchers didn't just have to observe the response among a fixed set of status levels, as they might with human subjects.  They mixed and matched the sub-groups, creating new status levels.  Once previously low status macaques achieved high social status, their immune response changed accordingly.

As one of the lead researchers said: "There was nothing intrinsic about these females that made them low status versus high status. But how we manipulated their status had pervasive effects on their immune system.”

In a press release, another of the researchers summed up:
In short, two individuals with access to the same dietary resources and the same health care and exhibiting the same behaviours have different immune responses to infection depending on whether they have a high or low social status.
This isn't about macaques, or monkeys, of course.  The chair of clinical microbiology at the University College London (who was not involved in the research) told BBC News: "All the evidence is showing the findings are terrifically applicable to humans."

One of the researchers further noted: "Some of the diseases that we know about that show the strongest social gradients in health in humans are in fact diseases that are closely associated with inflammation."  We're already seeing the health impacts of our social status; we're just not doing much about them, at least not intentionally.

There's a lot of "blame-the-victim" thinking that sometimes goes into explaining away poor health.  The researchers beg to differ.  As one told BBC:
It suggests there's something else, not just the behaviours of these individuals, that's leading to poor health.  We know smoking, eating unhealthily and not exercising are bad for you - that puts the onus on the individual that it's their fault.  Our message brings a positive counter to that - there are these other aspects of low status that are outside of the control of individuals that have negative effects on health.
It's not always our fault.

The researchers made a point of stressing the plasticity of the immune response.  One told The New York Times: "I think there’s a really positive social message.  If we’re able to improve an individual’s environment and social standing, that should be rapidly reflected in their physiology and immune cell function."  Status is not necessarily fixed, either in time or place.  As an accompanying editorial suggested: "Think of a mailroom clerk acquiring prestige as the captain of the company softball team."

We've known for some time that income and other kinds of social inequality have measurable impacts on health.  Many have probably suspected that social status inequality might have the same kind of impact.  This research helps solidify those suspicions.

Of course, there is a lot we don't know from the new findings.  The researchers haven't yet confirmed that similar impacts happen with male macaques (although one might hypothesize that the effect is even greater).  We don't know how having higher social status in some parts of our lives might mitigate the effects of having low social status in others.  We don't know if there is a pharmacological solution that might mimic the effect of higher social status, or if there might be behavioral training that could do so.

In short, it is like a lot of health care.  Much of the medical care we give to people may not be necessary, and can even be harmful.  We focus too much on medical treatment, not enough on behavioral change, even less on underlying social conditions, and virtually not at all on social status.

The big takeaway is that we're not doomed to poor health based on the social status into which we were born, or have achieved.  In the words of one of the researchers, "But the hopeful message is how responsive [immune] systems are to changes in the social environment. That's really different than the possibility that your social history stays with you your entire life."

There will probably always be at least some social inequality.  Even if we magically took away all income inequality, there would most likely still be some social inequality.  We are, after all, primates, and primates tend to form hierarchies.  But, as one of the researchers hoped, "It's a hard problem that might never be fixed, but it might be possible to make it less worse."

Sometimes "less worse" is all we can hope for.