Tuesday, April 25, 2017

Clicks-and-Mortar: Health Care's Future

The woes of the retail industry are well known, and are usually blamed on the impact of the Internet.  Credit Suisse projects that 8,600 brick-and-mortar stores will close in 2017, which would beat the record set in 2008, at the height of the last recession.  There are "zombie malls," full of empty stores but not yet shuttered.

And then there's health care, where the retail business is booming.

In a recent Wall Street Journal article, Christopher Mims set forth "Three Hard Lessons the Internet Is Teaching Traditional Stores."  The lessons are:
  1. Data is King
  2. Personalization + Automation = Profits
  3. Legacy Tech Won't Cut It
It's easy to see how all those also apply to health care.

But health care is different, right?  Patients want to see their physician.  That physical touch, that personal interaction, is a key part of the process.  It's not something that can be replicated over a computer screen.  

Yeah, well, the retail industry has been through all that.  Retail once primarily meant local mom-and-pop stores.  They knew their customers and made choices on their behalf.  Customers had little say in the choice of products, nor much ability to compare prices.  But it was all very personal.

The 19th century saw innovations like mail order catalogs (e.g., Montgomery Ward, Sears Roebuck) and department stores (e.g., Macy's, Wanamakers, Marshall Field's), while the 20th century added shopping malls.  Each helped distance consumers from their local merchants.  

Still, though, when Amazon came along, booksellers were adamant: no one wants to buy books sight unseen!  When that truism was proven false, other sectors of retail had their turn in the Internet spotlight, and the last twenty years of results haven't been pretty for them.  

It turns out that the personal touch isn't quite as important as retailers liked to think.

As for health care, it seems to be surviving the Internet onslaught pretty well.  WebMD alone gets more monthly visits than all the doctors in the U.S. combined, but any declines in doctor visits are more likely due to economic factors than to internet searches.  Telemedicine has been touted as one of the next great health care innovations, but research suggests that, while it may substitute for some in-person visits, it more than offsets that with new visits.

Health care is following some historical retail trends.  Independent physician practices are quickly being purchased by health systems, which are becoming the department stores of health care -- down to their sprawling suburban campuses and their vertical integration of services.  Even independent practices increasingly rely on impersonal billing and practice management companies.  Everyone has computers, especially on the financial side, but few have really changed their processes to take full advantage of them.

It's very 1960s in the health care retail world.

As for that "personal touch," well, a Harvard study found that the average doctor visit takes 121 minutes of patients' time, only 15-20 minutes of which is actually spent with the doctor.  That's after being able to actually get an appointment, which can take weeks.  


That's not the kind of personal experience that consumers really want.

So why hasn't health care been more disrupted by the Internet?  Well, for one thing, when you buy a book online, your state doesn't require that you buy it from a bookstore licensed by its not-so-friendly licensing board -- as it effectively does when you see a doctor over the internet.  The state medical licensing boards are ostensibly there for our protection, but time and time again they have acted as though protecting physicians' livelihoods is their main concern.

Strike one for disruption.

For another thing, we (usually) trust our doctors.  Then again, we used to trust recommendations from bookstore staff too.  That is, when they had time for us, if they seemed knowledgeable, and if they were making recommendations that fit us rather than just their own preferences.  AI-based recommendations from Amazon may not be as good as those from a really good bookstore employee, but are probably a lot better than those from the mediocre workers you were more likely to encounter.

Think the same thing won't happen when AI gets better at diagnoses?

Let's go back to Mr. Mims's three lessons and see how they apply to health care:

  • Data is King: Health care collects a lot of data, and will get even more with all the new sensors.  Not all of that data is meaningful, much less actionable.  Health care providers sometimes share your data, but not always with your consent and rarely to your direct benefit.  All of that will change.  For example, Google's Verily has started Project Baseline to do in-depth tracking of 10,000 volunteers.  Their motto: "We mapped the world.  Now let's map human health."  The big tech companies know their customers very well and tailor interactions accordingly; health care must as well.
  • Personalization + Automation = Profits: Mr. Mims cited Amazon Go as an example of how these two features could boost margins, and Information Age similarly described retail experiences based on more automation and better knowledge of customers.  Meanwhile, we're stuck in waiting rooms, filling out forms we've already filled out elsewhere.  That is not a personal experience that can survive in the 21st century.  It has to be smoother, faster, and frictionless.
  • Legacy Tech Won't Cut It: EHRs that no one likes.  Claims systems that take weeks to process a claim.  Billing processes that produce bills no one can understand.  Records that are siloed when we want them shared, yet all too open to being hacked.  The list could go on almost indefinitely.  All too often, health care's tech is not ready for prime time.  
The question is, are health care's leaders learning these lessons?

The future of retail appears to be in "clicks-and-mortar" (or "bricks-and-clicks").  Amazon is opening physical stores, while Wal-Mart is beefing up its online credentials.  They and other retailers know that consumers want things fast, sometimes in person and sometimes not; that consumers like options; and that consumers always pay attention to cost.  It takes both an online and an in-person presence.

Health care can act like B. Dalton or Borders, assuming until it is too late that its customers will keep visiting in person, because they always had.  Or it can act now to make the jump to the data-driven "clicks-and-mortar" approach that other retail businesses are moving to.

Health care organizations that get this right will be the ones to survive.  The rest are zombies, dead but not yet aware of it.

Tuesday, April 18, 2017

Think Bigger. Fail Often.

Alan Kay recently outlined some of the principles that he thought made Xerox's PARC so successful (if you don't know who Alan Kay is or why PARC was so special, you should try to find out).  One was: "'It's baseball,' not 'golf'...Not getting a hit is not failure but the overhead for getting hits."

That doesn't quite square with my impression of golf, but I take the point.  It's about the price of success.

As psychologist Dean Simonton pointed out in Origins of Genius: "The more successes there are, the more failures there are as well."  "Quality," he wrote, "is a probabilistic function of quantity."    

We talk a lot about innovation these days, especially "disruptive innovation."  Why not?  It sounds cool, it allows people to think they're on the cutting edge, and it often excites investors.  But perhaps we've lost sight of what it is supposed to actually be.  

Vuki Vujasinovic recently wrote in Forbes that: "Almost every use of the phrase ‘disruptive innovation’ as we see it today is wrong."  He cited several such examples, including Porsche and even Uber, and reminded us that the phrase doesn't just mean "change" or a new entrant in a market.  

Mr. Vujasinovic urged would-be innovators to be more precise with their claims, reserving "disruption" for true disruption.  Instead, he suggested: "Say you are doing something different, say you are changing the way something is done, but don’t say you’re disrupting something just because it’s a nice word you want people to repeat."

To go back to the baseball analogy, in health care these days we don't have a lot of home run hitters.  We have a lot of companies who are singles hitters -- or maybe are just trying to bunt.  There are too few people swinging for the fences.

And even fewer trying to invent a brand-new game, one better suited for the 21st century.

Some examples may help illustrate why.

Wired published an excerpt of Rutger Bregman's new book Utopia for Realists, in which he gives several examples (such as the 2008 financial crisis) where "cognitive dissonance" kept well-educated, intelligent people from seeing what should have been obvious problems.  We're so set in our ways that we keep going down the same track even as it should become increasingly obvious that it is a dead-end.  As a result:
When reality clashes with our deepest convictions, we’d rather recalibrate reality than amend our worldview. Not only that, we become even more rigid in our beliefs than before.
I.e., no one likes our health care system, it's demonstrably not doing a very good job, and at a very high cost -- but, hey, let's just tinker at the edges.

Professor Bregman does believe that new ideas can change the world, but it may take some sudden shocks and persistent objectors to get people to change their mindsets.  Our trouble, he warns, is that "we inhabit a world of managers and technocrats," who focus on the problems and solutions at hand.

In other words, singles hitters.  Never fear, though, he reminds us: "Ideas, however outrageous, have changed the world, and they will again."

John Nosta similarly warns in Psychology Today that innovation is too often throttled by "the mushy middle," usually in the name of collaboration.  Innovation is not about collaboration and certainly not about consensus, because: "Innovation is not an intellectual average."

He asserts that we need those "high performers" and their sometimes outrageous ideas, instead of "having their fragile voice crushed by generic consensus."

Lastly, in Fast Company, a trio of researchers noted the subtle power of default choices.  How choices are presented -- like opt-in versus opt-out -- has a strong impact on decisions.  Organ donation is a classic example, where the percentage of people agreeing to being an organ donor is significantly higher (like 80% higher) when it is presented as the default choice.

Disclosure about the options doesn't help as much as we'd like to think, because, "Research shows that making an option the default leads people to focus on reasons to accept the default and reject the alternative first and foremost."

We may not always realize the default options we're being given, especially when confronting a highly complex, inter-dependent system like health care.  We think we're changing something, but usually we're only doing so within the default options the existing system gives us.

John Nosta also writes about how Apple and Google's recent forays into health care should be a wake-up call for the life science industry, "which oftentimes has relied on the snooze function of line extensions and extended-release drugs as the source of income and innovation."  Those two companies' "expectedly unexpected" innovation in this area is welcome, but even they may be too entrenched into the existing approaches to have truly disruptive impact.

So, all you would-be health care innovators: are you prepared to fail, lots of times, before you succeed?  Are your ideas truly disruptive, or simply twists on what we've been doing?  Is yours a bold vision of what could be, or is it of just slightly further down the road that we're already on?

For example, when I read about an interesting start-up, Better, which seeks to help consumers with their health insurance claims -- fighting with insurers and providers to ensure consumers only pay what they should -- I have two conflicting thoughts:

  1. Consumers certainly need help like this;
  2. I wish they'd focused instead on making the underlying problem(s) go away.
I love telehealth.  I love digital health.  I love direct primary care.  I love having AI help doctors.  They -- and numerous other examples -- are all important developments that, arguably, will help make our health care system better.

But they are not disruptive innovations.  They are not swinging for the fences.

Our health care system is so inefficient and so wasteful that it's almost too easy for innovators to pick a problem and make it at least less bad.  It certainly needs that.  If that's all they are looking for (and to get their piece of the $3 trillion pie), well, you can hardly blame them.

Me, though, I'm rooting for the innovators who are swinging big and are willing to miss a lot.  They're the ones who will eventually get us to the health care system of the future.

Tuesday, April 11, 2017

Losing the Doctor Lottery

Donna Jackson Nakazawa's insightful Health Affairs article "How to Win the Doctor Lottery" is, by turns, sad, frightening, wise, and hopeful.  She recounts some of her personal travails in finding the right doctors, the ones who will truly listen and become "a partner on my path to healing," and offers several suggestions about what has to happen for us to have more chance to "win."

The real question, though, is not how to win the doctor lottery we find ourselves in, but why we're playing it at all.

Getting the right doctor is hard.  Consider the following:
  • It's easy enough to find out where a physician went to medical school and did their residency.  It's not as easy to know what the best medical schools or best teaching hospitals are, other than by reputations (that may or may not be deserved).  Maybe your doctor went to Harvard and did their residency at Johns Hopkins, but, otherwise, you may not be so sure about how good their training was.
  • Even if you did know how good their place of training was, you still wouldn't know how your doctor did there.  They might have been last in their class.  Even if you did find this out, you don't know if it is better to have done worse at a "better" school or well at a lesser school.  
  • In fact, it's not really clear that where one went to medical school or did their residency, or how well one did in those, has any measurable impact on actual competence as a physician.
  • Being board certified has become an accepted measure of basic competence in a specialty, but there is fierce debate between physicians and the specialty boards as to whether the process -- particularly the ongoing maintenance of certification (MOC) -- does anything of the sort.
  • It would be good to know if a physician has had drug or alcohol impairment issues, has been charged with sexual improprieties with patients, or has a large number of malpractice suits, but don't expect to be able to find any of those out.  The medical licensing boards who should know aren't likely to tell you.   
  • There are many measures for "quality" when it comes to physicians, but none that are considered definitive, many of which are not meaningful to consumers, and all-too-few of which focus on what we should care most about: patient outcomes.   
  • Even data that should be readily quantifiable -- e.g., how many of these procedures has Dr. X done?  How many patients die under Dr. X's care?  How many patients with my diagnosis does Dr. X treat? -- are rarely actually discoverable.
  • There are some physician satisfaction scores and patient ratings, but most of those are looked upon dubiously, due to low reporting volume and the likelihood of being skewed by non-clinical factors (like wait times or how quickly prescriptions were given).
  • When your physician recommends a treatment or a drug, you don't know if the physician is doing so because the latest research solidly demonstrates its efficacy.  The physician may be getting paid on the side by a drug/pharma company, may be influenced by the most recent drug rep visit, may not have kept current on the research, or may simply not accept it because it wasn't the way he/she was trained.
  • As skilled as you may be at researching doctors, you may still find yourself in an emergency or other rapidly developing situation, in which you end up being treated by doctors you haven't had time to research and have never heard of. 
It's a wonder any of us ever find the "right" doctor.

Calling the choice of the right physician a "lottery" may be unfair to lotteries.  At least lotteries disclose the odds of winning, low though they might be, and it is usually clear very quickly whether there is a winner and who it is.

In health care, you may never really know if you've won or lost, or may only find out much too late to do anything about it.  You may have gone through unnecessary pain and suffering, you may have lost years of better health, or you may actually die.

And, of course, you'll get billed for everything all along the way.

The even sadder thing is that it's like this throughout health care.  It may be marginally better for hospitals, at least in terms of more available data, but the usefulness of even that is not entirely clear.  For other types of health care professionals or institutions, information is even less available than for physicians.

Similarly, data on efficacy of treatments, procedures, or drugs is highly variable, often not disclosed to or discussed with patients, and usually not easily understood by them.     


In the end, most of us select a doctor based on the recommendations of friends and family, or another doctor, all of which are likely to be subjective as well.  And if it is true that most of us have confidence in our current doctor, that may be because we've switched from doctors in whom we lacked it (a phenomenon that seems little tracked).

It's still a lottery.

Most people who play the lottery know their odds of winning are low, and aren't betting their financial future or their lives on it.  For most of us, most of the time, picking the right physician is not a life-or-death decision either.  But when it is, we'd all like the decision to be more than random luck.

It is like the scene in WarGames, where the computer concludes that the only way to win is not to play.

We don't have a data-driven health care system.  We don't have a performance/outcomes-driven system.  We tolerate it because we usually don't realize it, because most of the time it doesn't impact most of us.  But those are poor excuses.  We can, and should, demand better.

Playing the lottery is not a sound financial strategy, and it shouldn't be our strategy for getting health care either.

Wednesday, April 5, 2017

That Does Not Compute

OK, you use your smartphone all the time: you use the latest and greatest apps, you can text or tweet with the best of them, you have the knack for selfies, and so on.  You probably also have a computer, tablet, and a gaming system, each of which you are also very proficient with.  No question: you are a whiz with electronic devices.

But, if you're like most of us, you don't really know how or why they work.

Maybe that's OK.  Most of us don't know how our cars work either, couldn't explain how heavier-than-air flight is possible, have no idea what the periodic table means to our daily lives, and would be in trouble if our lives depended on us making, say, bricks or glass.

Still, though, as Captain Kirk once said (in a very different context), you have to know why things work.

We're not all going to become computer programmers, and not everyone needs to major in computer science.  But we could all stand to have a better understanding of how and why computers -- or, at least, computer programs -- work.

Welcome to computational thinking.

The concept was introduced by Jeannette Wing in a seminal paper in 2006.  She suggested that it was a fundamental skill that should be considered akin to reading, writing, and arithmetic -- learning how to solve problems by "reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation."

She further clarified that it has the following characteristics:
  • Conceptualizing, not programming
  • Fundamental, not rote skill
  • A way that humans, not computers, think
  • Complements and combines mathematical and engineering thinking
  • Ideas, not artifacts
  • For everyone, everywhere.
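A toy example may make the idea concrete.  The sketch below (my own illustration, not from Dr. Wing's paper) shows what "reformulating a seemingly difficult problem into one we know how to solve" can look like: checking a patient roster for duplicate names seems to require comparing every pair of entries, but reducing it to membership testing -- something a hash set already solves -- handles it in a single pass:

```python
# Naive framing: compare every pair of records -- O(n^2) comparisons.
def has_duplicates_naive(names):
    return any(names[i] == names[j]
               for i in range(len(names))
               for j in range(i + 1, len(names)))

# Computational-thinking reframing: reduce the problem to membership
# testing, which a hash set answers in roughly constant time per lookup.
def has_duplicates(names):
    seen = set()
    for name in names:
        if name in seen:
            return True
        seen.add(name)
    return False

roster = ["Ada", "Grace", "Alan", "Ada"]
print(has_duplicates(roster))  # -> True
```

The point isn't the code itself; it's the habit of asking "what known problem does this reduce to?" before grinding through the obvious brute-force approach.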
Dr. Wing believes we've come a long way since her manifesto, and she may be right.  For example, Microsoft sponsors Carnegie Mellon's Center for Computational Thinking, and Google offers Exploring Computational Thinking, "a curated collection of lesson plans, videos, and other resources on computational thinking (CT)."  It includes an online course for educators.

A new initiative, Ignite My Future, wants to train 20,000 teachers to help make computational thinking a fundamental skill, hoping to engage a million students over the next five years.  One of the last initiatives President Obama announced was the Computer Science for All Initiative, providing $4b to improve K-12 computer science education (how it survives the new Administration remains to be seen).

A recent New York Times article notes that, while the number of computer science majors has doubled since 2011, there is growing appeal to learn more about computer science among non-CS majors: "Between 2005 and 2015, enrollment of non-majors in introductory, mid- and upper-level computer science courses grew by 177 percent, 251 percent and 143 percent, respectively."

There is now an Advanced Placement course Computer Science Principles that "introduces students to the foundational concepts of computer science and challenges them to explore how computing and technology can impact the world."


The Times also profiled a number of ways that "non-techies" can learn elements of computational thinking, because "Code, it seems, is the lingua franca of the modern economy." The options include CS+X initiatives in college, a number of intensive "boot camps," and an increasing number of online courses, such as through Coursera, edX, and Udacity.   

Sebastian Thrun, co-founder and chairman of Udacity, argues that this kind of thinking is important for everyone because: "It’s a people skill, getting your brain inside the computer, to think like it does and understand that it’s just a little device that can only do what it’s told."

Still, computational thinking is not a panacea; as Shriram Krishnamurthi, a computer science professor at Brown, warned The Times, in our current culture, "we are just overly intoxicated with computer science.”

One of my favorite approaches to demystifying programming, Raspberry Pi, has sold some 12.5 million of its ultra-cheap, ultra-simple computers, making it the third-best-selling "general purpose computer" ever.

Do a search for Raspberry Pi and you'll find thousands of examples of things people are doing with them, from simple tasks that children can do to sophisticated hacks.  Heck, someone has made a LEGO Macintosh Classic using a Raspberry Pi.

There's a new industry in toys that help teach children coding, as The New York Times also reported, including Cubetto (which, at $225, is a lot more expensive than a Raspberry Pi).

All of which is to say, there are fewer and fewer reasons why people don't learn computational thinking.

And health care sure could use some more of it.

Health care likes to think of itself as a science, and it has many trappings of science, but even in the 21st century it remains much more of an art.  After all, this is the industry in which it was just reported that 20% of patients with serious conditions have been misdiagnosed -- in fact, most people are likely to experience at least one diagnostic error in their lifetime -- and in which we have an "epidemic" of unnecessary care.

It is an industry in which the technology often frustrates both the providers and the patients (e.g., EHRs and mammograms, respectively), where design is confusing at best and harmful to patients at worst.  It is an industry in which the coding has gone beyond arcane to incomprehensible.

And it is an industry where there is surprisingly little data on efficacy, even less agreement about how to measure quality or value, and little training to help clinicians interpret or explain the data that does exist.  It is an industry that is bracing for its era of Big Data, and may not be at all ready.

So, yes, some computational thinking in health care certainly seems like it would be in order.

Thursday, March 30, 2017

I'll Pay For You, But Not Them

Elisabeth Rosenthal's searing article about medical billing, adapted from her forthcoming book An American Sickness, is well worth a read.  Its topic of sophisticated medical billing/upcoding -- done by organizations ostensibly acting in the best interests of patients and often under the guise of a non-profit status -- is also worthy of a discussion in itself.  This is not that discussion.

What jumped out to me (and to many others, on Twitter and elsewhere) was the following indictment:
In other countries, when patients recover from a terrifying brain bleed — or, for that matter, when they battle cancer, or heal from a serious accident, or face down any other life-threatening health condition — they are allowed to spend their days focusing on getting better. Only in America do medical treatment and recovery coexist with a peculiar national dread: the struggle to figure out from the mounting pile of bills what portion of the fantastical charges you actually must pay. It is the sickness that eventually afflicts most every American.

Which leads me to crowdfunding.

Crowdfunding is hot.  Sites such as GoFundMe, Kickstarter, Indiegogo, and YouCaring provide platforms for people requesting money to pitch their case, and for potential donors to see them and, if so inclined, contribute.  The market was estimated at $34b in 2015 (about half of which was in North America), and is projected to grow at a compounded annual rate of 27% from 2016 to 2020.  It's big business.

Although the Affordable Care Act has sharply reduced the number of uninsured, the facts are that almost 30 million remain uninsured, health insurance deductibles continue to rise rapidly, and almost 70% of Americans have less than $1,000 in the bank to cover "emergency bills" -- such as medical bills.

It's no wonder that crowdfunding has become popular.

The Pew Research Center found that, in 2016, 22% of Americans had contributed to a crowdfunding project, while another 41% had at least heard of crowdfunding.  About two-thirds of the contributions had gone to help someone in need; most commonly, a friend, friend-of-a-friend, or family member.  Only 3% had created such a campaign for themselves.

Assistance with medical expenses is a leading type of campaign.  Some 70% of GoFundMe's campaigns are in the medical category, CEO Rob Solomon told Esquire.

This seems like a good thing, right?  Matching up people who need help paying their medical bills with people who can help them?  It certainly can be, but it also has its share of problems.

For one thing, it often doesn't succeed.  A University of Washington study examined 200 GoFundMe health expense campaigns, and found that 90% did not reach their goals.  On average, they only raised 40% of their target; 10% netted less than $100.

The authors didn't pull their punches, warning that crowdfunding could "deepen social and health inequities in the U.S. by promoting forms of individualized charity that rely on unequally-distributed literacies to demonstrate deservingness and worth."

A recent Viewpoint in JAMA, by Young and Scheinberg, echoed these concerns.  They acknowledged that such campaigns can be effective, and can make the process of matching donors with need more efficient.  They raised several concerns, such as the role of physicians in such campaigns, especially if the information the campaign presents is inaccurate or misleading.

However, one of their most powerful concerns is the following:
it is important to recognize potential for unfairly advantaging those with the means to engage with online tools and tap into large social networks, which may lead to an underrepresentation of cases with the greatest need in which patients lack the tools to coordinate effective crowdfunding campaigns.
In other words, funds may not go to the most needy, but rather to the most media-savvy.

Esquire devoted an article to this problem, aptly headlined Go Viral or Die Trying.  It leads with the story of Kati McFarland, who confronted Senator Tom Cotton at a town hall event about coverage for pre-existing conditions.  She was articulate, attractive, had a real need -- and got national media exposure.

As media coverage boomed, so did her YouCaring campaign, going from $1,500 to close to $50,000.

The article also cited a family who hired a professional photographer to create a video for their daughter's campaign; it helped them eventually raise $4 million.  GoFundMe's Solomon said: "A picture is worth 1,000 words, a video is worth maybe a million.  It's really a storytelling platform; the more interesting and compelling the story, the better these will do."

So, crowdfunding success may be based less on need than on how attractive you (or your children) are, how much you look like the people with money, how professionally done your campaign is, and how large your social media footprint is.

If any of that sounds fair, then our current health care system must also sound like the epitome of fairness.

Maybe that's just life in the 21st century.  Maybe that's just a consequence of social media; it's hard to engage, and some causes and some people are simply more appealing -- whether most deserving or not.  Maybe decrying the potential unfairness is about as pointless as decrying our wealth inequalities.

But maybe not.

Look, even in the United States, we don't do organ transplants by who pitches the most appealing story, or even who has the most money.  We established UNOS to take (most of) those kinds of subjective factors out of who gets a life-saving transplant.  UNOS matches available organs with potential recipients based on a variety of objective factors; social media footprint is not one of them.

We need a UNOS for assistance with medical expenses.

It wouldn't be easy, but UNOS wasn't UNOS overnight.  It took decades to become what we know as UNOS today.

At the least, a crowdfunding UNOS could take into account severity (and expected duration) of the medical condition, size of the medical bills, and ability to pay.

Oh, and back to Ms. Rosenthal's findings: no one should have unpaid bills based on charges.  Provider charges are absurd at best and outrageous at worst.  Health care providers need to be muscled into limiting their charges to their most common insurance payment rates for people without insurance, and for people with low incomes.

Crowdfunding is cool.  For entrepreneurs, artists, or social causes, it is a great way to get access to capital.  There's an intriguing role for crowdfunding loans -- not donations -- to people who need cash flow assistance with their health care expenses.

But it's not how we should be funding health care expenses for people who can't afford them.

Tuesday, March 28, 2017

Disobey, Please

The M.I.T. Media Lab is taking nominations for its Disobedience Award, which was first announced last year.  As the award's site proudly quotes Joi Ito, the Director of the Lab and who came up with the idea: "You don't change the world by doing what you are told."

I love it.

The site, and the award's proponents, make clear that they are not talking about disobedience for the sake of disobedience.  It's not about breaking laws.  They're promoting "responsible disobedience," rule-breaking that is for the sake of the greater good.  The site specifies:
This award will go to a person or group engaged in what we believe is an extraordinary example of disobedience for the benefit of society.
In Mr. Ito's original announcement, he elaborated:
The disobedience that we would like to call out is the kind that seeks to change society in a positive way, and is consistent with a key set of principles. The principles include non-violence, creativity, courage, and taking responsibility for one's actions.
Given all that, $250,000 hardly seems enough, and it's a shame there can be only one winner.

There are two ways to participate: you can either nominate the eventual winner, or you can recruit someone else to nominate the winner.  Either way, you get to be flown to the awards ceremony on July 21, 2017.  Nominations close May 1 (and, no, it doesn't appear that being late to apply is the kind of disobedience they'll reward).


The award is funded by Reid Hoffman, the founder of LinkedIn, who recently wrote about it on that platform.  His article is titled "Recognizing Rebels With A Cause," which is probably a good way to think of it.  Although one often thinks of disobedience in terms of Thoreau-style civil disobedience, Mr. Hoffman ties it closely to innovation:
In the realm of entrepreneurship, almost every great triumph has its roots in disobedience or contrarianism of one kind or another. And ultimately this impulse doesn't just create new products and companies, but also new industries, new institutions, and ultimately new cultural norms and expectations.
The Media Lab, he points out, serves as an example of how this can work: "researchers with widely varying areas of expertise are encouraged to collaborate and improvise in ways that become not just multi-disciplinary but antidisciplinary – disobedient."

Ethan Zuckerman, the Director of M.I.T.'s Center for Civic Media, told The New York Times: "In a lot of large institutions there’s really two ways you make progress. You make progress when people follow the rules and work their way through the processes, and then sometimes you make very radical progress by someone who essentially says, ‘Look, these processes don’t work anymore, and I need to have a radical shift in what I’m doing.’”

It just takes someone to stand up.

The creators of the award are probably not thinking much about health care -- despite their disavowals that the award is about civil disobedience, many of the examples they've given involve people resisting what they see as improper government actions -- but they should be.

If there's a field where lots of stupid, or even bad, things happen to people, through design, indifference, or inaction, health care has to be it.

Every day, in every type of health care setting, things happen that aren't in the best interests of the people getting care.  People realize these things are happening, and, in many cases, they're happening because the rules say that is what is supposed to happen.

The list of disobedient acts in health care that would serve society is longer than my imagination can produce, but here are some examples:
  • The nurse who says, no, I'm not going to wake up our patients in the middle of the night for readings no one is going to look at (or blood samples that can wait until morning).
  • The doctor (or nurse) who knows a colleague they believe is incompetent and decides, I'm going to speak up about it.  I'll make sure patients know.
  • The billing expert who decides, no, I'm not going to keep up the charge master, with this set of charges that aren't based on actual costs and which almost never actually get used (except by those unfortunate people without insurance).  Instead, we'll have a set of real prices, and, if we give anyone any discounts, they will be based on ability to pay, not on type of insurance.
  • The EHR developer who realizes it's silly that this institution's EHR can't communicate with that institution's EHR, even though they use the same platform and/or the same data fields.  Data should go with the patient.
  • The insurance executive who vows, I'm tired of selling products that are full of jargon, loopholes, and legalese, so that no one understands them or knows what is or isn't covered.  We're going to sell a product that can be clearly described on one page using simple language.
  • The practice administrator who understands that patients' time is valuable too, and orders that the practice will limit overbooking and will not charge patients if they have to wait longer than 15 minutes. 
  • The medical specialty that commits to being for patients, not its physician members, by developing measures specific to patient outcomes in order to validate ongoing competence.  The results would be made public, reported to licensing boards for action, and used to de-certify physicians who don't meet required performance.
Going back to the award's principles of non-violence, creativity, courage, and taking responsibility for one's actions -- well, the above would all seem to fit.  They're all achievable.  It only takes someone to stand up and decide to do them.

I just wouldn't hold my breath waiting to see any of them happen.


Break the rules.  Do the right thing.  Change the world, even if it is "just" the health care world.

And, who knows, maybe even win $250,000 for doing so.

Wednesday, March 22, 2017

On the Road to Ubiquity

Are you reading this on your PC?  How very 1980's of you.  Or are you looking at it on your tablet or smartphone?  Better, but still so ten years ago.  Are you an early adopter, viewing it in virtual reality (VR), perhaps set on your favorite beach as the text scrolls through the sky like the opening crawl of Star Wars?  Now we're getting somewhere, but, even so, you still probably have to wear a clunky headset that's attached to a computer.

If you're aware of your device, that's the past.  Welcome to ubiquitous computing.

You may know it as "pervasive" computing, "invisible" technology, or as represented by the wearables craze -- not crude wearables like fitness trackers or Google Glass, but the ones already on the horizon that allow computing anywhere and everywhere.  Tech guru Walt Mossberg describes the goal as "information appliances" -- "dead-simple to use, without training or the need for a manual."  Or even anything that resembles a device.

Some examples:
  • DuoSkin, developed by the MIT Media Lab, uses gold metal leaf devices that attach directly to the skin.  The devices can sense touch input, display output, and support wireless communication.
  • SkinMarks, developed by Saarland University (Germany), are "electronic tattoos" that are as flexible as skin.  They allow for touch input and display.
  • Smart clothing, such as the smart jacket developed by Levi's and Google as part of Project Jacquard.  The jacket -- which will go on sale this fall (for $350) -- allows users to control selected features on their smartphone with gestures at the jacket's cuff.
  • Smart jewelry, such as the LEDA smart gemstone developed by MetaGem.  It can display different colors depending on the kind of notification received, and MetaGem claims it can also do fitness tracking, an SOS mode, remote selfie control, even gesture-controlled games.

IDC estimates that the "wearables and hearables" market will grow from 102 million units in 2016 to 237 million units by 2021, with smart clothing accounting for almost 10% of that market (smart watches/bands still dominate in their estimates).  IDC warns: "Tech companies will be forced to step up their game and offer a wider selection of sizes, materials, and designs in order to appeal to a broader audience."

Similarly, Tractica estimates that smart clothing shipments will grow from 140,000 in 2013 to 10.2 million in 2020, and Gartner projects that smart garment shipments could reach 26 million by 2020.

There's more.  With all these embedded devices, you'll still want something you can easily look at, and you probably won't want to be carrying around something with a screen.  No problem.  Sony, for example, has been working on projected screens that retain touchscreen capabilities, sensing hand motions well enough to let you, say, type or play the piano.  The projections can even incorporate augmented reality.

You probably don't want to be lugging around a projector any more than you do a PC.  Sony's projectors are fairly small, and Serafim's iKeyBo has a keyboard projector that can "fit in your pocket."  It's only a matter of time before projectors get small enough to also become embedded into everyday items, like your new smart clothing.

Of course, input is only part of what we want screens to do; we also want them to display.  The future may be in holograms, which, as SingularityHub recently proclaimed, "aren't the stuff of science fiction anymore."  Various firms, such as Transparency Market Research and IndustryARC, expect huge increases in the holographic display market, with the former specifically citing demand for medical imaging as a major driver of that growth.
  
Why would you want a print-out or a screen if you could look at a hologram, especially when it comes to the workings of our bodies?

This is the world we'll soon be in. Anything can be the input device, anything can do the processing and communication, and anything can be the display.  Devices become "invisible."  As tech columnist Greg Gascon describes:
When using a piece of technology that has become invisible, the user thinks of using it in terms of end goals, rather than getting bogged down in the technology itself. The user doesn’t have to worry how it is going to work, they just make it happen.
Our current devices will look as old-fashioned and clunky as rotary-dial landlines look to today's teenagers (that is, if they even know what those are).

Especially in health care.  

Go to the doctor's office and they're listening to your chest with stethoscopes, taking your blood pressure with a cuff, measuring your temperature with a thermometer.  Sure, some of those may be digital now, but they're still all based on technology that is decades or even centuries old.  Go to the hospital and it's even worse: all the wires make it hard to move, and the beeping of all the associated monitors makes it hard to sleep.

It doesn't have to be this way.  

Instead of all those monitors with all those wires, slap an e-tattoo on.  It could act as the sensor and the display, while updating your records wirelessly.  Instead of the intermittent, crisis-driven contact we now have with our physicians, our invisible monitors could keep track of us 24/7.  They'll alert us and our providers when something is off.

Instead of splitting attention between you and an EHR screen, you and the physician could view a holographic image of you that serves as your electronic record.  It can be updated with hand gestures and voice, help both you and your physician understand the issue(s) and your history, and help you understand what is happening with your health.

Of course, there will also be the nanobots working inside us.  Talk about ubiquitous, talk about invisible!

We're going to have to get past our fascination with the latest and greatest devices -- a new iPhone! a 4K television! -- and let their technology fade into the background.  As it should.

It's going to be very different, very exciting -- and sooner than many of us will be ready for.