Monday, July 30, 2018

Say It Ain't So, HAL 9000

The classic film 2001: A Space Odyssey featured HAL 9000, one of the first -- but certainly not the last -- popular examples of artificial intelligence (AI) gone wrong.  Just as the mythical heartbroken kid begged Shoeless Joe Jackson to deny he had helped throw the 1919 World Series, it's starting to seem as if we need to ask whether AI performance in healthcare is more hope, or hype, than reality.
The biggest headlines came from a startling StatNews report about IBM Watson.  STAT obtained internal IBM documents indicating that the collaboration with Memorial Sloan Kettering Cancer Center was something of a bust, with "multiple examples of unsafe and incorrect treatment recommendations."  The "often inaccurate" recommendations posed "serious questions about the process for building content and the underlying technology." 

Fortunately, the reports indicated, no patient deaths resulted from the faulty recommendations. 

IBM had gotten lots of good publicity about the project.  MSK is, of course, one of the premier cancer centers in the world, and the partnership was a coup for IBM.  MSK even touted itself as "Watson's oncology teacher." 

IBM blamed poor training for the flaws: Watson based its recommendations on a few hypothetical cancer cases instead of on actual patient data, despite the fact that IBM had promised that Watson used the latter.  IBM was still bragging about Watson even after it had become clear that Watson's recommendations often conflicted with national guidelines and that physicians were ignoring them. 

"This product is a piece of shit," one physician told IBM executives.  "We bought it for marketing and with hopes that you would achieve the vision. We can't use it for most cases."  His was not the only physician complaint. 

IBM maintains a positive attitude.  A spokesperson told Gizmodo that Watson is used in 230 hospitals worldwide, supports treatment of 13 cancers, and supports care for over 84,000 patients.  IBM also promised STAT:
We have learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives. This includes 11 software releases for even better functionality during the past year, including national guidelines for cancers ranging from colon to liver cancer.
Well, then, it's all right.  That is, unless one remembers the debacle at another leading cancer center, MD Anderson.  The failure there may be attributed at least as much to MD Anderson as to any problems with Watson, but, still...

This is not the only recent example of healthcare AI overreaching.  A few weeks ago British start-up Babylon Health claimed its AI chatbot could diagnose patients at least as well as human doctors. 

Babylon tested its chatbot using questions that the Royal College of General Practitioners (RCGP) uses to certify actual doctors, and said that its score beat the average score of human doctors.  Babylon also gave both the chatbot and seven human doctors 100 symptom sets, and says that the chatbot scored better on those as well. 

A Chinese AI, BioMind, similarly claimed to outperform China's "top doctors" -- by a margin of two to one. 

The RCGP, for one, is not buying it.  A spokesperson said
...at the end of the day, computers are computers, and GPs are highly-trained medical professionals: the two can't be compared and the former may support, but will never replace, the latter...No app or algorithm will be able to do what a GP does.
Babylon's medical director fired back:
We make no claims that an ‘app or algorithm will be able to do what a GP does’...that is precisely why, at Babylon, we have created a service that offers a complete continuum of care, where AI decisions are supported by real-life GPs to provide the care and emotional support that only humans are capable of...I am saddened by the response of a college of which I am a member. The research that Babylon will publish later today should be acknowledged for what it is — a significant advancement in medicine
The question of AI doing what physicians do was further illustrated by a new study from MIT.  The researchers ran "sentiment analysis" on physicians' notes, separate from the data in the medical records themselves, and found a positive correlation between those sentiment scores and the number of diagnostic tests ordered.  The effect was strongest at the beginning of a hospital stay and when there was less medical information to go on.

The researchers concluded that physicians "provide a dimension that, as yet, artificial intelligence does not."  They attribute this to physicians' "gut feelings," saying:
There’s something about a doctor’s experience, and their years of training and practice, that allows them to know in a more comprehensive sense, beyond just the list of symptoms, whether you’re doing well or you’re not.  They’re tapping into something that the machine may not be seeing.
The researchers hope to find ways to teach AI to approximate what physicians are doing by capturing other types of data, such as their speech. 
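The study's core mechanic -- score the free-text notes, then correlate the scores with test counts -- is simple enough to sketch.  Everything below is invented for illustration: the lexicon, the notes, and the test counts are toy stand-ins for whatever validated scorer and real data the researchers used.

```python
import math

# Toy sentiment lexicon -- a real study would use a validated clinical NLP
# scorer; these words and weights are purely illustrative.
LEXICON = {"improving": 1.0, "stable": 0.5, "comfortable": 0.5,
           "worried": -1.0, "deteriorating": -1.5, "unclear": -0.5}

def sentiment(note: str) -> float:
    """Average lexicon score over the words in a note (0 if none match)."""
    hits = [LEXICON[w] for w in note.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical admissions: (physician note, number of tests ordered).
# In this toy data, gloomier notes happen to accompany more tests; the real
# study's direction and magnitude are what matter, not these numbers.
admissions = [
    ("patient improving and comfortable overnight", 2),
    ("stable on current regimen", 3),
    ("worried about unclear presentation", 9),
    ("deteriorating respiratory status worried", 12),
]
scores = [sentiment(note) for note, _ in admissions]
tests = [t for _, t in admissions]
print(round(pearson(scores, tests), 2))
```

The real work, of course, is in building a scorer that reads clinical shorthand reliably -- the correlation itself is the easy part.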

Lastly, Eric Topol, MD, summed up the state of the research in a tweet: we still simply don't know how good AI is in medicine.

All in all, despite the claims that some are making about AI -- whether it is Watson or Babylon or BioMind or any of the hundreds of other AIs being developed -- one would have to admit that they are not the same as human physicians and not yet a threat to replace them. 

But...

The fact that AI doesn't replicate what human physicians do doesn't bother me.  Those "gut feelings" and clinical insights that physicians are so proud of aren't always right, and often aren't based on solid -- or any -- science.  Let's not mistake the value of human touch and interaction for infallibility.

I would hope we can do better.  I would hope that an AI's "gut feelings" will be based on millions of experiences, not the hundreds or even thousands that a human physician might have had for a specific issue.  I would hope that an AI wouldn't order unnecessary or even harmful tests or procedures.  I would hope that an AI wouldn't commit errors due to fatigue or impaired judgement or simple miscommunication. 

AI in healthcare is only going to get better.  It is going to play a major role in our care.  That is something to look forward to, not to fear (HAL 9000 notwithstanding!). 

Monday, July 23, 2018

The Sounds of Silence

Listen closely, healthcare organizations and professionals: those sounds you are not hearing are the voices of people not speaking up, including patients.  And that's a problem.

Let's start with the elephant in the room: a new study found that even when physicians actually asked patients why they were there, on average they only listened to the patient's explanation for eleven -- that's 11 -- seconds before interrupting them. 

Think about that, and then think back to a doctor's visit you had about something that was worrying you: could you have explained it in eleven seconds? 

Believe it or not, that's not the worst of it.  Only 36% of the time did patients even get a chance to explain why they were there.  Even then, two-thirds of them were interrupted before they had finished. 

Primary care doctors did better, allowing 49% of patients to explain their agenda for being there, versus only 20% for specialists.  Hurray for the primary care physicians...

Lead author Naykky Singh Ospina, MD, MS, drolly concluded: “Our results suggest that we are far from achieving patient-centred care."

Now, sometimes the physicians already know, or think they know, why patients are there, and sometimes the interruptions are to ask clarifying questions.  But, as Dr. Singh Ospina added:
If done respectfully and with the patient’s best interest in mind, interruptions to the patient’s discourse may clarify or focus the conversation, and thus benefit patients.  Yet, it seems rather unlikely that an interruption, even to clarify or focus, could be beneficial at the early stage in the encounter.
The researchers say there are many reasons why physicians aren't listening better, including time constraints, burnout, and lack of communications training.  But still...11 seconds?  For the minority that even get the chance to talk?


As Bruce Y. Lee said in Forbes, "A doctor’s visit shouldn’t feel like a Shark Tank pitch."

As bad as this is, it is not the only area where not feeling able to speak up is a problem in healthcare.  For example, a study in BMJ Quality and Safety found that 50% to 70% of family members with a loved one in the ICU were hesitant to speak up about common care situations with safety implications.  They didn't want to be labeled a "troublemaker," they thought the care team was too busy, or they simply didn't know how to speak up.  Only 46% felt comfortable speaking up even when they thought there was a possible mistake in care. 

Co-lead author Sigall K. Bell said:
Speaking up is a key component of safety culture, yet our study—the first to our knowledge to address this issue—revealed substantial challenges for patients and families speaking up during an ICU stay
If patients are hesitant to speak up when their loved one is in the ICU, how reluctant must they be in other healthcare settings? 

It's not just patients who are silenced.  One study found that 90% of nurses don't speak up to a physician even when they know a patient's safety is at risk.  Another survey, of medical students in their final year of school, found that 42% had experienced harassment and 84% had experienced belittlement. 

A couple of years ago, ProPublica looked at why physicians stay silent about colleagues they know to have committed medical errors, including ones who do so repeatedly.  The reasons are varied, of course, but include concern about losing referrals, fear of provoking lawsuits, lack of seniority, and gender or racial discrimination.

One physician, speaking about his hospital, told them: 
There’s not a culture where people care about feedback.  You figure that if you make them mad they’ll come after you in peer review and quality assurance. They’ll figure out a way to get back at you.
Anyone who has worked in an organization -- big or small, for-profit or not-for-profit, academic or commercial -- has probably had the experience of not feeling they can speak up, and healthcare organizations are no exception.  Even when the ones not speaking up are physicians themselves. 

The TV series The Resident (which I gather is one of Dave Chase's favorites) is about the trials and tribulations of a resident and a nurse who dare to speak out about senior physicians who are impaired and/or unethical; one of the duo is [SPOILER ALERT] falsely imprisoned as a result.  It's TV, so you can rest assured that all will end well for our heroes, but, in the real world, things don't always work out so well for outspoken people. 

It's about power: who has it, or at least who we think has it.  We trust our doctors (although not as much as our nurses!).  We assume that more experienced doctors have more knowledge than newer doctors, that doctors know more than nurses, and that healthcare professionals know more than we do.  We're at the bottom of the knowledge tree.    

But that may not be true.  Dave deBronkart -- e-patient Dave -- likes to cite Warner Slack's great quote: "Patients are the most underused resource."  

I also think about what Dr. Jordan Shlain (Tincture's founder & leader) realized about patients: "No news isn't good news.  It's just no news."  This epiphany led him to found Healthloop to help ensure physicians were always hearing from their patients.  

But healthcare professionals must be willing to listen, and they must ensure that they ask.  And we must take the initiative to speak up.   

When it comes to our health, we are the experts.  We are the only ones who can report what we are feeling.  We are the ones who suffer the consequences of missed diagnoses, ineffective treatments, and medical errors.  We cannot be passive and we cannot be silent.

Our values are wrong if we allow reimbursement considerations to squeeze our time with physicians to the point that we're not talking and they're not listening.  Our values are wrong if we're conditioned to think our opinions and concerns do not matter.  Our values are wrong unless everyone is not only empowered but also expected to speak up, especially when we see or experience something we think is a problem.

Anybody listening?     



Tuesday, July 17, 2018

Healthcare's Blockbuster Moment

Here's a newsflash: there is now only one Blockbuster store left. 

It's hard to discern what is more newsworthy about that sentence: that there is only one left, or that, in 2018, there are any Blockbusters left.  After all, Netflix and other streaming services decimated Blockbuster's once powerhouse business, to the point it went bankrupt in 2011 and Dish Network bought its assets. 

The last corporate stores closed in 2013, leaving a dwindling number of privately owned stores that licensed the name.  Two Blockbusters in Alaska announced last week that they were closing, leaving the Blockbuster store in Bend, Ore. as the sole remaining licensee. 
The national news media was fascinated by the story, with profiles in The New York Times, The Washington Post, Time, Yahoo Finance, CBS News, and CNN, among others.  People are flocking to the Bend store, not to rent movies but to take selfies in front of an actual anachronism. 

Some wax nostalgic.  "There are still people in America, and especially in this town, who enjoy the experience of strolling into the video store on a Friday night," Time reported.  General manager Sandi Harding was more pragmatic, citing customer service and their local connection as the reasons the Bend store has managed to survive. 

It's almost hard to remember quite how dominant Blockbuster once was, with 9,000 locations at its peak.  It killed off countless smaller chains and mom-and-pop video stores, only to fall victim to an upstart -- Netflix -- that it failed to take seriously enough until it was too late.  Blockbuster could have even bought Netflix for a measly $50 million back in 2000, eliminating or incorporating the threat before it had a chance to truly threaten its business. 

Healthcare, are you paying attention? 

There have been countless dissections of Blockbuster's various missteps and missed opportunities.  Customers hated those pesky late fees, which Blockbuster had grown heavily reliant on to keep its profits healthy.  While Blockbuster had grown big due to the convenience it offered -- more locations and more titles --  those still could not match the convenience of streaming or even of Redbox. 

Still, its management, particularly then-CEO John Antioco, did identify the threats and came up with a strategy to counter them.  It actually was starting to work -- until he was fired due to investor concerns about the cost of the strategy.  His replacement reversed course and, well, as I've said, there is now only one Blockbuster store left, and it is not even owned by Blockbuster. 
The Blockbuster store in Bend, Ore. (Sandi Harding)
There are a few lessons we might try to apply to healthcare:

1.  Bigger is better! No doubt about it; big is often good.  Just look at the reach and market cap of the big tech companies -- Apple, Amazon, Alphabet, Microsoft, Facebook, and Alibaba.  Blockbuster thought it could dominate by putting a retail location as many places as possible, and it worked. 

Until it didn't.

Healthcare is going through its own version of this.  Hospital chains are consolidating, gobbling up hospitals and physician practices.  Health insurers are doing the same, and not just with other health insurers -- witness United Healthcare's acquisition of DaVita's primary care and urgent care centers, Cigna's buying Express Scripts, or Humana's purchase of Kindred Healthcare, not to mention CVS buying Aetna.  

Having lots of retail locations, consolidating market power, vertical and horizontal integration -- all time-tested strategies, and ones healthcare organizations are following.  They're the same strategies Blockbuster relied on.  It works until something new like Netflix comes along and crashes the party. 

2.  It's the personal touch!  People like the human touch.  People like interacting with other people.  People like advice from knowledgeable experts.  People like the social aspect of going someplace that has other people seeking the same experience. 

Sandi Harding is still touting all those, bless her, but it's no coincidence that she's at the last Blockbuster.  People voted with their keyboards: they liked having DVDs mailed to them -- with no late fees! -- and liked streaming even better.  They didn't even have to leave the house!  They could order, even watch, movies in their bathrobes if they wanted to.  And algorithms could recommend movies well enough that they didn't need recommendations from the retail clerks, who might have had encyclopedic knowledge of movies or might have just known the latest blockbuster. 

It is entirely possible that having the same doctor over a period of years can actually reduce your mortality, as a new study seems to confirm, but people are notoriously short-sighted when it comes to their health.  If online doctors or even bots can give health advice that is good, or good enough, for many people that is going to beat getting in the car and driving someplace.

People say they like the personal touch, but never underestimate the American consumer: given a reasonable choice, they'll opt for convenience almost every time.  Even in healthcare. 

3.  Stop annoying people! I have to confess -- I once was a Blockbuster customer.  I don't think I ever had a late fee, but I was aware of them, and I made sure I returned videos even before I'd seen them just to avoid paying them. 

With Netflix, though, I didn't have to worry about it.  Sure, I could only rent so many at a time, but I could keep them until I was ready to return them.  No late fees.  Netflix didn't seem like it was nickel-and-diming me (although those nickels and dimes added up to billions for Blockbuster).

Healthcare has plenty of "late fees."  Try facility charges, try inflated charges, try prescription drug pricing, try unnecessary tests, not to mention literally going after people with collection agencies when they have a hard time paying some of those exorbitant costs. 

And these are from professionals and organizations who claim to have our best interests at heart.   

Customers will revolt.  They will switch to options that don't penalize them for things they don't think they've done wrong.  Blockbuster couldn't convince itself, or its investors, that killing the golden goose of late fees was necessary, and healthcare is now finding itself in the same situation. 


If healthcare doesn't listen to, and learn from, these and other lessons, as hard as it may be to imagine now, in a few years we may be reading about the last remaining hospital or even the last doctor's office. 

Just ask Blockbuster.

Tuesday, July 10, 2018

When AI Meets DNA

DNA is hotter than ever.  We're doing more DNA sequencing to identify genetic risks.  We're using tools like CRISPR to "fix" DNA.  We've been using DNA to help identify criminals for some time, but now we're using relatives' DNA from ancestry sites to identify even more. 

Less than a couple of years ago, using DNA as a storage medium was still at the laboratory stage; now the first commercial DNA storage company -- a start-up named Catalog -- is set to launch in 2019.  Even U.S. spy agencies are trying to leap on the DNA storage bandwagon. 

If all that is hot, then here's what is really cool: using DNA as the basis for a neural network.  I.e.: AI DNA. 
Credit: The Sociable
Researchers from Caltech announced that they have developed a neural network made from synthetic DNA.  The network "learned" how to correctly identify handwritten numbers, a task that is not always easy even for humans (as anyone who has to read my handwriting can attest).  The results were published in Nature. 

Lead researcher Lulu Qian explained what they did: "In this work, we have designed and created biochemical circuits that function like a small network of neurons to classify molecular information substantially more complex than previously possible."  

Translated, they created a molecular "smart soup" made of bio-engineered strands of DNA, and taught it to recognize handwritten numbers through a "winner-takes-all" process.  The neural network looks for certain concentrations of molecules and produces specified reactions when it finds them. 

Professor Qian elaborated to The Register:  
A single-stranded DNA molecule with just the right sequence of nucleotides can bind to another double-stranded DNA molecule that has a single-stranded tail. Once grabbed onto the tail, it can force the nucleotides in the double strands to open up, one nucleotide at a time, until the previously bound strand is released.

The invading strand can be seen as an input signal while the released strand an output signal, resulting in a simple input-output function. Once released, the output strand can then take on a different role as an input to interact with yet another double-stranded DNA molecule, leading to a network of molecular interactions that compute more complex input-output functions.
It looks something like this:
Source: Nature
OK, it's not pretty, but it's pretty impressive.  
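The "winner-takes-all" step is easier to grasp in silicon than in a test tube.  A minimal sketch, with made-up numbers: each remembered digit is a pattern of weights, the input is a vector of "concentrations," and only the best-matching class produces an output.  (The real network matched 100-bit molecular patterns; these six-element vectors are purely illustrative.)

```python
# Toy winner-take-all classifier, loosely analogous to the DNA network:
# each "memory" is a pattern of molecule weights, the input is a vector of
# concentrations, and the class with the largest weighted sum wins outright.
# Patterns and inputs here are invented for illustration.

def winner_take_all(memories: dict, concentrations: list) -> str:
    scores = {label: sum(w * c for w, c in zip(weights, concentrations))
              for label, weights in memories.items()}
    return max(scores, key=scores.get)  # only the winner produces output

memories = {
    "six":   [1, 0, 1, 1, 0, 1],   # idealized molecular pattern for a '6'
    "seven": [1, 1, 0, 0, 1, 0],   # idealized molecular pattern for a '7'
}

noisy_six = [0.9, 0.1, 0.8, 1.0, 0.0, 0.7]   # a smudged '6'
print(winner_take_all(memories, noisy_six))  # → six
```

In the chemical version, "scoring" is strands binding in proportion to concentration, and "argmax" is an annihilation reaction that leaves only the dominant signal standing -- the arithmetic above is just the digital caricature.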

Qian and first author Kevin Cherry, a graduate student in Professor Qian's lab, plan to expand the work to have the neural network form "memories" from examples added to the test tube, allowing it to be trained for a wider range of tasks.  As Mr. Cherry sees it:
Common medical diagnostics detect the presence of a few biomolecules, for example cholesterol or blood glucose.  Using more sophisticated biomolecular circuits like ours, diagnostic testing could one day include hundreds of biomolecules, with the analysis and response conducted directly in the molecular environment.
Right now, the DNA neural networks can accomplish only a limited set of tasks, and computation using chemical processes is much slower than "traditional" computing.  Still, Professor Qian sees the potential:
Similar to how electronic computers and smart phones have made humans more capable than a hundred years ago, artificial molecular machines could make all things made of molecules, perhaps including even paint and bandages, more capable and more responsive to the environment in the hundred years to come.
People have been talking about nanobots in healthcare for many years now, with several interesting applications already being tested and some typically optimistic predictions about the market potential, but we may have been thinking about them all wrong.  Instead of tiny versions of traditional computers, perhaps built with organic materials, they could be DNA-based neural networks.  The possibilities truly are mind-boggling.

If you think all that is far-fetched, the U.S. spy agencies' effort referenced above -- Molecular Information Storage (MIST) -- calls not just for DNA-based storage and retrieval, but also for an operating system. 

The thing to keep in mind is that, although the processes DNA uses may be slower (now) than traditional computing, the storage capabilities are exponentially greater than the methods we use now.  Hyunjun Park, CEO and co-founder of Catalog, told Digital Trends: "If you're comparing apples to apples, the bits you can store in the same volume comes out at something like 1 million times the informational density of a solid-state drive." 
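Park's claim is easy to sanity-check with back-of-the-envelope arithmetic.  Using rough assumptions -- a DNA double helix about 2 nm across with a base pair every 0.34 nm storing 2 bits, versus a generously dense solid-state drive at 1 TB per cubic centimeter -- the volumetric advantage comes out in the hundreds of millions, so "a million times" is, if anything, conservative.

```python
import math

# Back-of-the-envelope volumetric density comparison (rough assumptions).
# DNA: double helix ~2 nm diameter, ~0.34 nm rise per base pair, 2 bits/bp.
bp_volume_nm3 = math.pi * (1.0 ** 2) * 0.34        # ~1.07 nm^3 per base pair
dna_bits_per_nm3 = 2 / bp_volume_nm3               # ~1.9 bits per nm^3

# SSD: assume (generously) 1 terabyte packed into one cubic centimeter.
ssd_bits = 1e12 * 8                                # 1 TB in bits
cm3_in_nm3 = (1e7) ** 3                            # 1 cm = 1e7 nm
ssd_bits_per_nm3 = ssd_bits / cm3_in_nm3           # ~8e-9 bits per nm^3

print(f"DNA advantage: ~{dna_bits_per_nm3 / ssd_bits_per_nm3:.0e}x")
```

Real DNA storage carries heavy overhead for error correction, addressing, and physical packaging, so practical densities are far lower -- but the raw gap is that enormous.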
Credit: Pasieka/Science Photo Library, Cosmos
As I put it before, you could literally be your own medical record, using DNA storage.  Mr. Park seems to agree, noting:
Imagine a subcutaneous pellet containing all your health data, all your MRI scans, your blood tests, your X-rays from your dentist...If you had that with you in the form of DNA, you could physically control that data and access to it, while making sure that only the authorized doctors could have access to it.
With the new work from Caltech, now I'm wondering if we could be our own EHR as well -- not just the data but also acting upon it.  A DNA-based computational device using DNA as the storage medium, stored within us -- possibly even encoded within our own DNA. 

Mind.  Blown.

Here's an even more out-there idea: maybe we could "teach" our microbiome to speak up for itself and tell us how we can help it help us better.  Think of how that could improve our health.

We are, in many ways, still in the first generation of computing; conceptually, modern computers are not really different from the bulky computers of the 1950s -- just much, much smaller and faster.  The next generation may use approaches like quantum computing or distributed computing -- or perhaps DNA computing.

Similarly, we're barely in the first generation of artificial intelligence, but we've been building it using our traditional concepts of computing.  That is certainly going to continue to evolve, rapidly, but we should also be thinking about how and when DNA-based AI might be more applicable, especially for healthcare. 

We're a long way from a robust DNA neural network, much less a true DNA AI, and who knows where they may lead, but I, for one, am going to be watching closely.

Tuesday, July 3, 2018

Reinventing the Wheel

When people talk about "reinventing the wheel," it is often meant to discourage, even disparage.  As in, "why reinvent the wheel?"  It usually refers to a technology or a process that works well enough, and is widely enough distributed, that trying to replace it would be a fool's errand. 

Fortunately, the folks at DARPA aren't afraid of fool's errands -- and they are literally reinventing the wheel.   Healthcare could use some fool's errands of its own.
We all know what a wheel is.  We know a wheel when we see one; we know what wheels do and how they do it.  We've all traveled on wheels -- skates, bikes, cars, buses, whatever.  It's hard to imagine a world before the wheel, before that beautiful circular shape, and it's hard to imagine improving on it. 

DARPA can.  The DARPA effort is part of its Ground X-Vehicle Technologies (GXV-T) initiative, aimed at coming up with "disruptive technologies for traveling quickly over varied terrain."  It includes some impressive innovations in suspensions and "crew augmentation" as well as the wheel reinvention, which consists of two distinct changes:

  • Reconfigurable Wheel-Track (RWT): Wheels are round, but they don't always have to be.  RWT allows the wheel to change on the fly from the classic round shape to a triangular track, and then back again.  This is valuable when moving from a hard surface like a road to a soft surface like mud or snow.  It's like having an on-demand tank tread.  
  • In-hub motor: Wheels sit on axles that are turned by other power sources.  But maybe they don't have to be.  DARPA has tested putting motors directly inside the wheel, allowing for "heightened acceleration and maneuverability with optimal torque, traction, power, and speed over rough or smooth terrain."  And it still fits within the standard military 20" rim.  

DARPA has posted a video demonstrating both technologies in action.

OK, so these aren't ready for prime-time, especially in civilian settings.  Maybe they never will be.  But that's hardly the point.  The point is, even with something as basic, as time-tested, and as prevalent as the wheel, there is more that can be done, and, sometimes, that should be done. 

Not "why reinvent the wheel," but, rather, "why not reinvent the wheel?" 

Here's a perfect example, and it's in healthcare.  It's not much of a leap from thinking about wheels to thinking about wheelchairs.  We have all kinds of wheelchairs, from your basic hospital model to motorized ones to ones used in racing marathons.  Stephen Hawking controlled his using minute movements of his cheek muscle, for heaven's sake.  You'd have to say there have been plenty of innovations.

So why can't wheelchairs climb stairs?

It turns out, they can.  Or, rather, they could.  As Allison Marsh recounted in IEEE Spectrum, an inventor named Ernesto Blanco designed one -- in 1962.  He built a prototype for a design competition (which he didn't even win), but never built a full-sized model and never commercialized the idea. 

In 1995, researchers at Nagasaki University did finally build an actual, full-sized wheelchair that could climb stairs, but they built only one model, which was donated to the local city council and retired after a few years due to lack of use.  DEKA Research and Development Corporation had slightly more success with its iBOT, but never sold more than a few hundred units per year before it, too, was discontinued; price -- upwards of $29,000 -- was a factor. 

Why would we even need a wheelchair that can climb stairs?  After all, we have elevators, ramps, even in-home lifts.  That's the why-reinvent-the-wheel mentality.  That's not thinking of the people who need them.  Maybe they just want to be able to get up stairs on their own.  Or maybe they'd rather be out of the wheelchair altogether, in something like a robotic exoskeleton. 

That's reinventing the wheel(chair). 

It's not about wheels; it is about re-imagining the given.  Here's an example: in our current healthcare system, when you're sick, you go to the doctor's office.  When you're really sick, you go to the hospital.  Forget the fact that these are the times you least want to have to go anywhere.  Forget the fact that those places are filled with lots of other people with whom you might exchange germs. 

We do it because it is easier for the doctors and other health care professionals...but, again, that's not really supposed to be the point, is it?

We should be using telehealth more (and here's Chrissy Farr's take on why we're not), but, in the meantime, Healthcare Dive profiled "the return of the house call."  Services like Heal and DispatchHealth believe that both the economics and the improved patient outcomes warrant the return, albeit with a 21st-century twist.

Nick Desai, CEO of Heal, told them: "We see (our house calls) as old-fashioned care with state-of-the-art technology."  Johns Hopkins' Hospital at Home program takes it even a step further, trying to shorten or avoid hospital stays.

Similarly, Emma Yasinski profiled a company named Medically Home, which seeks to bring the hospital into patients' homes through a combination of technology and in-home visits and services.  Its CEO told her:
Think of a rocket going into space. You have to be wired to all of the things going on in the rocket, to the astronaut.  Everything happening in the home we have to be able to see, monitor, communicate to and from.
That's not generally what we have now -- even when you are in the hospital.  But it is possible now. 
"Mission Control" at Medically Home.  Source: NeoLife
Ira Wilson, a professor of health services at Brown, further told her:
The fixed costs of hospitals are just utterly gargantuan.  One of the things we’ve got to figure out how to do in health care is to give people care in settings that provide all the resources that are necessary to treat the problem that they have — but not more.
Sounds like reinventing the wheel, doesn't it? 

Pick your favorite windmill to tilt at.  Pick the thing about healthcare you hate most of all.   Don't assume that just because we do something a lot or use something a lot, it is the only way, much less the right way, to do it.  Dare to come up with wheels that transform shape or power themselves. 

Reinvent the wheel.