Monday, October 24, 2016

Your Toaster May Be Bad For Your Health

Quote of the week/month:

In a relatively short time we've taken a system built to resist destruction by nuclear weapons and made it vulnerable to toasters.
-- Jeff Jarmoc

Mr. Jarmoc was, of course, referring to the cyberattack last week that shut down access to many major websites (including, ironically, Twitter) for much of the day Friday.   The attack was what is called a distributed denial of service (DDoS) attack, meaning the hackers flooded a key piece of Internet infrastructure with junk service requests.  In this case, they targeted a company called Dyn, whose Domain Name System (DNS) service acts as a directory for web addresses.  Swamped by the flood, it could not fulfill legitimate requests.
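The mechanics are simple enough to sketch in a few lines.  The toy model below is purely illustrative -- the capacity, request counts, and names are all invented, and real DNS infrastructure is vastly more complex -- but it shows why a flood of junk requests starves legitimate ones: a resolver can only answer so many queries per tick, and the bots consume that capacity first.

```python
# Toy model of a DDoS against a DNS resolver.  Illustrative only:
# capacity and traffic numbers are invented for the example.

def serve_tick(requests, capacity):
    """Answer at most `capacity` requests this tick, in arrival order.
    Returns (answered, dropped) lists."""
    answered = requests[:capacity]
    dropped = requests[capacity:]
    return answered, dropped

# A normal tick: every legitimate lookup is answered.
normal = ["user"] * 50
answered, dropped = serve_tick(normal, capacity=100)
assert len(dropped) == 0

# The same tick during a botnet flood: junk queries exhaust
# capacity, so legitimate lookups get dropped.
flood = ["bot"] * 100 + ["user"] * 50
answered, dropped = serve_tick(flood, capacity=100)
print(f"{answered.count('user')} legitimate lookups answered, "
      f"{dropped.count('user')} dropped")
# → 0 legitimate lookups answered, 50 dropped
```

In a real attack the bot and user queries interleave, but the outcome is the same once bot volume dwarfs capacity -- which is exactly what hundreds of thousands of compromised devices provide.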

What makes this even more interesting is that the hackers conducted the attack using hundreds of thousands, perhaps millions, of Internet-connected devices -- e.g., webcams, routers, TVs, DVRs, security cameras, perhaps even the odd toaster or two.  This "botnet army" ran malware called Mirai, originally developed by gamers to knock rival gamers offline.

As FastCompany reported, there had been warnings for some time about attacks using these "Internet of Things" devices, but the attack still succeeded, rendering over 1,000 websites unavailable.  The attackers' motives are not clear.  A security blogger told The Wall Street Journal: "I believe somebody’s feelings got hurt and that we’re dealing with the impact. We’re dealing with young teenagers who are holding the internet for ransom."

I don't know if that should make me feel less scared, or more.

The New York Times warns of "a new era" of attacks powered by IoT devices, noting that many of them come with weak or nonexistent security features -- and that there soon could be billions of them in use.  A recent survey (The Internet of Stranger Things) confirms that most of us are worried about the cybersecurity risks of our various devices, but few of us have actually done anything about them.
We may buy cybersecurity programs for our computers, and try to beef up our passwords, but probably most of us aren't doing the same for our refrigerators or our cars.  Yet those are the kinds of devices we now need to worry about.

It's worse than that.  As The Times further noted:
The difference with the internet is that it is not clear in the United States who is supposed to be protecting it. The network does not belong to the government — or really to anyone. Instead, every organization is responsible for defending its own little piece.
Decentralized is good, until it is not.

What does this have to do with health care?  Plenty, as it turns out.  IoT devices are increasingly helping us manage our health and medical care.  IoT in health care is expected to be a huge market -- perhaps 40% of the total IoT, and worth some $117b by 2020, according to McKinsey.  Expected major uses include wearables, monitors, and implanted medical devices.

The problem is that many manufacturers haven't necessarily prepared for cyberattacks.  Kevin Fu, a professor at the University of Michigan's Archimedes Center for Medical Device Security, told CNBC: "the dirty little secret is that most manufacturers did not anticipate the cybersecurity risks when they were designing them [devices] a decade ago, so this is just scratching the surface."

Again, I'm not sure if the fact that there already are such centers as Dr. Fu's should make me feel less scared, or more.

Cybersecurity concerns for health care don't just involve the Internet.  Earlier this month J&J warned that one of its insulin pumps was vulnerable to hackers, who could spoof communication between the device and its wireless remote control.  The company sent letters about the risk to some 114,000 patients and their doctors, while claiming that the risk was low and that they knew of no such attacks -- yet.
One has to wonder how many other vulnerable devices there may be.

When it comes to health care, DDoS would be at best an inconvenience, and at worst life-threatening, but the cybersecurity risk most people still worry the most about is privacy.  We're going to need to be reassured both that the Internet-based services will be there when we need them, and that our privacy won't be compromised by them.  Those are, unfortunately, tough asks.

After all, healthcare is the industry whose data and systems are already being held for ransom by hackers so amateur that they've sometimes settled for as little as $17,000 in bitcoin.  Meanwhile, cyberattacks on electronic health records are growing "exponentially," according to a new GAO report.  The GAO estimated that 113 million records were breached in 2015 -- up from 12.5 million in 2014, and fewer than 135,000 in 2009.  One has to imagine hackers are drooling over the vulnerability of IoT data.

The Street reports that "traditional" IT security firms (such as Symantec) are already focusing on IoT, as are new players like PTC and Synopsys, but it also warns that, when it comes to IoT for health, security is still a major concern.  As Ivan Feinseth of investment bank Tigress Partners told them, "the connected car and house are really, really cool, but none of that is more important than healthcare."

Unfortunately, investment in cybersecurity for IoT remains low, with estimated spending of only around $390 million, according to ABI Research.   That's out of some $5.5b in healthcare cybersecurity spending in 2016.  ABI estimates IoT cybersecurity spending will triple by 2021, but that still may lag far behind the spread of health IoT devices.

We've grown used to being hyperconnected, through email, the web, our mobile devices, and are just starting to explore the possibilities of IoT.  The Pandora's Box of connectivity is not going to close.  However, the basic structures of the internet are some 40 years old now, those of the World Wide Web some 25 years, and it may be time to figure out what comes next, especially because of IoT.

Whether that is the "Internet2," whether that is the "browserless experience" Acquia Labs envisions, whether that is blockchain -- I don't know.  What I do know is that a cyberwar in health is one in which we can't afford to lose many battles, so we had better figure something out quickly.

Before my toaster decides to do something mean to me.

Tuesday, October 18, 2016

Health Care's White Guy Problem(s)

The Wall Street Journal reports that women in India aren't benefiting from the spread of smartphones, which are helping men in that country -- where landlines are scarce, especially in rural areas -- perform the same kind of mobile functions most of us take for granted.

Rather than technology leveling gender gaps in India, though, it is exacerbating them.  Some 114 million more men than women have smartphones there, and that gap isn't going away anytime soon, due to gender biases that still dominate.  "Mobile phones are dangerous for women," explained a village elder.

Well, you might say, that's just India.  That sort of thing doesn't happen here, thank goodness.  Maybe you should talk to Tamika Cross, M.D.

Dr. Cross has gained national attention lately due to an incident on a Delta flight.   There was a medical emergency, and she went into "emergency mode," getting out of her seat to offer her services.  Being young, female, and African-American, though, she evidently didn't fit the flight attendants' mental profile of a physician.  As one of them apparently told her, "Oh, no, sweetie, put [your] hand down.  We're looking for actual physicians or nurses or other type of medical personnel..."

I'm not sure which is more insulting, that she didn't fit their stereotype of any kind of medical professional, much less a doctor, or that they called her "sweetie."

Dr. Cross's experience has struck a chord, prompting the hashtag #whatadoctorlookslike, which has spurred both support and similar accounts, such as Jennifer Adaeze Okwerekwu's account in Stat, Jennifer Conti's story in Slate, and Lilly Workneh's Huffington Post column, plus thousands of sympathetic tweets.

The story is getting attention as an issue for female minority doctors, but the problem is, of course, much bigger than that.  It is an issue for minorities and women in medicine generally, and for physicians who have emigrated to this country, to name a few subgroups.

While it is true that, according to the AAMC, women now make up 47% of medical school students, in those medical schools they only make up 38% of full-time faculty, 21% of full professors, and 15% of department chairs.  And nationally women only make up a third of the physician workforce.

Still, that's better than for minorities, who make up only 20% of the physician workforce yet 37% of the population (and are projected to be a majority within a generation).  African-American and Hispanic/Latino physicians each account for only about 4% of total physicians (and, as it turns out, minority physicians play an "outsized role" in providing care to minority and underserved patients).

Clearly, there is a problem.

It's not just from whom we get our care that shows our cultural biases, but also what care we get.  There are well-documented disparities in care by race/ethnicity and by gender.  For example, men and women get treated differently for coronary heart disease, the nation's leading killer of both men and women.  Those differences are not by design, nor are they helping women: their mortality rates for heart disease have not dropped as dramatically as men's have.

It doesn't help that clinical trials for such care are likely to have twice as many male participants as female, a fact that is true of clinical trials for many diseases.  There are disturbing under-representations of minorities in clinical trials as well.

In perhaps the most obvious example of gender mattering -- or not mattering -- there is the issue of maternal deaths due to childbirth.  The U.S. literally has third world mortality rates in this area, and is one of the few countries that report increasing, not decreasing, rates in the 21st century.  Where is the outrage, where is the urgency to address the problem?  Do most of us even know there is a problem?

Health care shouldn't feel singled out about these kinds of biases.  Congress has 20% female Senators and 19% female Representatives, both of which make the private sector look bad: only 4% of Fortune 500 companies have a female CEO.  A recent report on leading New York law firms found only 19% of partners were female, and only 5% were minorities.

The diversity problem in tech is especially well known.  Women make up less than 20% of tech jobs, and closer to 5% if just counting programmers.  It has been estimated that only 2% of tech workers are African-American and 3% Hispanic.

This matters for numerous reasons, perhaps most importantly due to AI.  AI is one of the biggest tech trends, in healthcare and elsewhere, as many see it soon augmenting or even replacing human roles.  Unfortunately, there are concerns that the AI field already suffers from what Kate Crawford, writing in The New York Times, called its "white guy problem," since most of its developers are, in fact, white guys, full of their implicit and explicit biases.

As Professor Crawford said: "We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future." Your AI doc may not be a white male but may still think like one.

Look, I have nothing against white guys; heck, I am a white guy.  But the fact is that white males are not, and never have been, a majority in this country.  Yet in our health care system you're most likely to get care from a white male, who was most likely trained by white males, and the care you receive is most likely based on what has been found appropriate for white males.

If any of that sounds even remotely right to you, you're probably a white male.

It shouldn't matter the gender, race, ethnicity or, for that matter, sexual orientation, socioeconomic background, or religion of the people giving us care; what should matter is how well they provide that care.  On the other hand, those factors should all factor into the care we receive, to ensure that we receive the most appropriate care for our specific health needs.

We talk a lot about patient-centered care and personalized/precision medicine, but we're a long way from even recognizing how pervasive the biases are that keep us from achieving them.

Tuesday, October 11, 2016

Will Anyone Notice?

There's an interesting verbal battle going on between two prominent tech venture capitalists over the future of AI in health care.  In an interview in Vox,  Marc Andreessen asserted that Vinod Khosla "has written all these stories about how doctors are going to go away...And I think he is completely wrong."  Mr. Khosla was quick to respond via Twitter:  "Maybe @pmarca [Mr. Andreessen] should read what I think before assuming what I said about doctors going away." He included a link to his detailed "speculations and musings" on the topic. 

It turns out that Mr. Khosla believes that AI will take away 80% of physicians' work, but not necessarily 80% of their jobs, leaving them more time to focus on the "human aspects of medical practice such as empathy and ethical choices."  That is not necessarily much different than Mr. Andreessen's prediction that "the job of a doctor shifts and becomes a higher-level, more important job that pays better as the doctor becomes augmented by smarter computers."

When AIs start replacing physicians, will we notice -- or care?

Personally, I think it is naive to expect that only 20% of physicians' jobs are at risk from AI, or that AI will lead to physicians being paid even more.  The future may be closer than we realize, and "virtual visits" -- telehealth -- may illustrate why.

Recently, Fortune reported that over half of Kaiser Permanente's patient visits were done virtually, via smartphones, videoconferencing, kiosks, etc.  That's over 50 million such visits annually.  Just a year ago a research firm predicted 158 million virtual visits nationally -- by 2020.   At this rate, Kaiser may beat that projection by itself.

Or take Sherpaa, a health start-up that is trying to replace fee-for-service, in-person doctor visits with virtual visits.  Available with a $40 monthly membership fee, the visits are delivered via their app, texts, or emails.  Their physicians can order lab work, prescribe, and make referrals if needed.

Sherpaa prides itself on offering more continuity to members through using a small number of full-time physicians (how and whether the Sherpaa model scales remains to be seen).   Sherpaa claims that 70% of members' health issues are resolved via virtual visits.  Many concierge medicine and direct primary care practices also encourage members to at least start with virtual consults.

How many people would notice if virtual visits were with an AI, not an actual physician?

Companies in every industry are racing to create chatbots, using AI to provide human-like interactions without humans.  Google Assistant, Amazon's Echo, and Apple's Siri are leading examples.  And health care bots are on the way.

Digital Trends reported on two U.K.-based companies that are developing AI chatbots designed specifically for health care, Your.MD and Babylon Health.   Your.MD claims to have the "world's first Artificial Intelligence, Personal Health Assistant," able to both ask patients pertinent questions and respond to their questions "personalized according to your unique profile."

Babylon Health claims to have "the world's most accurate medical artificial intelligence," which they say can analyze "hundreds of millions of combinations of symptoms" in real time to determine a personalized diagnosis.  Both companies say they want to democratize health care by making health advice available to anyone with a smartphone.
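Under the hood, systems like these typically score candidate diagnoses against reported symptoms.  The sketch below is a deliberately tiny, hypothetical illustration of one common approach, a naive-Bayes-style ranking -- the conditions, symptoms, and probabilities are all invented, and bear no relation to either company's actual models:

```python
# Hypothetical toy symptom checker.  All conditions, symptoms, and
# probabilities are fabricated for illustration; real systems use far
# richer models and clinically validated data.
import math

# P(condition) and P(symptom | condition) -- made-up numbers.
PRIOR = {"common cold": 0.70, "flu": 0.25, "strep throat": 0.05}
LIKELIHOOD = {
    "common cold":  {"cough": 0.6, "fever": 0.2, "sore throat": 0.4},
    "flu":          {"cough": 0.7, "fever": 0.9, "sore throat": 0.5},
    "strep throat": {"cough": 0.1, "fever": 0.6, "sore throat": 0.95},
}

def rank(symptoms):
    """Rank conditions by log P(condition) + sum of log P(symptom | condition)."""
    scores = {}
    for cond, prior in PRIOR.items():
        score = math.log(prior)
        for s in symptoms:
            # Unknown symptoms get a small floor probability.
            score += math.log(LIKELIHOOD[cond].get(s, 0.01))
        scores[cond] = score
    return sorted(scores, key=scores.get, reverse=True)

print(rank(["fever", "cough"]))
# → ['flu', 'common cold', 'strep throat']
```

Note how the priors matter: with only a sore throat reported, this toy model still ranks "common cold" first, simply because colds are so much more common -- the kind of base-rate reasoning that is easy to encode and easy to get subtly wrong at scale.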

Not everyone is convinced we're there yet.  A new study did a direct comparison of human physicians versus 23 commonly used symptom checkers to test diagnostic accuracy, and found that the latter's performance was "clearly inferior."  The symptom checkers listed the correct diagnosis in their top 3 possibilities 51% of the time, versus 84% for humans.  That would seem to cast some cold water on the prospect of using an AI to help with your health issues.

However, consider the following:

  • The study was done by researchers from the Harvard Medical School.  One wonders if researchers at the MIT Computer Science and Artificial Intelligence Laboratory might have used different methodology and/or found different results.
  • The symptom-checkers may be the most commonly used, but may not have been the most state-of-the-art.  And the real test is how the best of those checkers did against the average human physician.
  • Humans still got the diagnosis wrong in at least 16% of the cases.  They're not likely to get much better (at least, not without AI assistance).  AIs, on the other hand, are only going to get better.  
It is only a matter of time until AIs equal or exceed human performance in many aspects of health care and elsewhere.

It used to be that physicians were sure that their patients would always rather wait in order to see them in their offices, until retail clinics proved them wrong.  It used to be that physicians were sure patients would always rather see them in person rather than use a virtual visit (possibly with another physician), until telehealth proved them wrong.  And it still is true that most physicians are sure that patients prefer them to AI, but they may soon be proved wrong about that too.

Over 50 years ago MIT computer scientist Joseph Weizenbaum created ELIZA, a computer program that mimicked a psychotherapist.   It would be considered rudimentary today, but by all accounts its users took it seriously, to the extent some refused to believe they weren't communicating with a person.
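ELIZA's trick was simple pattern matching and pronoun reflection, with no understanding at all -- which makes its effect on users all the more striking.  A minimal sketch of the idea (the patterns here are simplified stand-ins, not Weizenbaum's original DOCTOR script):

```python
# A minimal ELIZA-style responder: regex patterns plus canned templates.
# The rules are simplified stand-ins, not Weizenbaum's original script.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(text):
    """Return the first matching rule's response, echoing the user's words."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about my health"))
# → How long have you been worried about my health?
```

That a trick this shallow convinced some users they were talking to a person says less about the program than about how readily we project understanding onto anything that talks back.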

More recently, an AI named Ellie is serving a similar purpose.  Ellie comes with an avatar and can analyze over 60 features of the people with whom it is interacting, including body language and tone of voice.  It turns out that people open up to Ellie more when they are told they are dealing with an AI than when told it is controlled by a human -- but the really amazing thing is that the latter group did not seem to realize there was actually no human involved.

Score one for the Turing test.  

AI is going to play a major role in health care.  Rather than using physicians to focus more on empathy and ethical issues, as Mr. Khosla suggested (or paying them more for it, as Mr. Andreessen suggested), we might be better off using nurses and ethicists, respectively, for those purposes.  So what will physicians do?

The hardest part of using AI in health care may not be developing the AI, but in figuring out what the uniquely human role in providing health care is.

Monday, October 3, 2016

The Waiting Game

A few days ago ProPublica had a headline I wished I'd written: If It Needs A Sign, It's Probably Bad Design.  Although the article started with a health care example (EpiPen of course, citing Joyce Lee's brilliant post), it wasn't focused on health care -- but it might as well have been.   Health care is full of bad design, and of signs.

Take, for example, the waiting room.

When most patients enter a provider's office or facility, the first thing they are likely to see is a waiting room.  The waiting room probably has other would-be patients already waiting there, each full of their own health concerns.  In some instances, the initial waiting room is merely a staging area; once processed, patients may be sent to yet another waiting room to wait some more.  And, of course, once they eventually do reach an exam room, they'll probably endure some more waiting, no matter how long their wait has already been.

It is no coincidence that in health care those of us not providing the care are called patients.

We're expected to be patient.  After all, our providers are very busy.  They have other patients.  Their time is apparently more precious than ours; if you don't think so, contrast what happens if you are late with what happens when they are late.  If they're late to our appointment, we're led to believe, it is because they've been spending quality time with other patients, and we can hope we'll get the same consideration.

Of course, they have all those other patients, and not enough time to keep them all on schedule, because that's how the day was scheduled.  It's not like the patient load couldn't have been predicted.  No one is forcing them to schedule us in unreasonably narrow increments.  It's simply a matter of generating the desired revenue.    

Speaking of revenue, the other thing patients are likely to see when entering an office are signs about payment -- have insurance cards ready, payment is expected at time of service, etc.  Between those financial reminders and the waiting room, it is not exactly a welcoming experience.  

Health care providers are certainly paying some attention to the problem.  The Upstate Business Journal reports on how some local physician offices and hospitals are moving to a more "at-home appeal," with more natural light and better furnishings (including plants and artwork).  The waiting areas are "moving in the direction of a more collaborative, inviting space," including "having more technology with televisions and iPad stations that keep patients interested and occupied while they wait."

Similarly, FastCompany profiled the winners of the American Institute of Architects (AIA) National Healthcare Design Awards, seven medical centers with some innovative designs.  The designs aren't just aesthetics.  As an AIA spokesperson said: "There's much higher awareness now of how healthy environments help patients heal.  That is, in turn, related to evidence-based design studies that actually prove that—so it's not just intuitive, it's actually been proven in many instances."

Evidence-based design is, in fact, a real thing.  AIA has guidelines for healthcare buildings that try to take these findings into account, such as moving away from semi-private rooms.  These have been incorporated into law in over 35 states.  We've all seen the boom in healthcare building; consulting firm FMI estimates some $42b in 2016, and hopefully some good portion of that is based on these design principles.    

That's all well and good.  Making health care settings more comfortable and easier on the eye is a good thing, right?  But those may be missing the point.  Designers can try to make a doctor's office feel more like home, or a hospital seem more like a hotel, but we're not stupid.  We'll still know we're not at home or in a hotel.

We're focusing on the wrong design problem.  As Tom Goodwin wrote recently in TechCrunch: "We’ve got the questions wrong. It shouldn’t be how are you innovating or which project is doing new things, but why are you doing it and on what level."  He was talking about innovation generally, not just in design, but the point still applies.

Instead of paying designers to try to make waiting more comfortable, maybe we should spend the money on industrial engineers to identify why we're waiting at all, and address those root issues.  It is the wait that is the problem, not the waiting area.

Instead of pouring money to make hospitals more like hotels, maybe we should be spending the money on programs that allow people to remain at home.  Hospital patients often leave more disabled than when they arrive because they spend too much time in bed, because hospital design and processes revolve around beds.  We can make better beds in nicer rooms, but they're still not good for us.

The design problems are pervasive.  Health care is, after all, an industry that incents physicians to use EHRs they don't like; that has patient portals that patients don't even use; whose bills are so notoriously poorly designed that HHS holds contests to find ways to improve them; and whose terminology is so confusing that the U.S. Department of Education says only 10% of us have a proficient level of health literacy.  Bad design abounds.

We can put up all the signs we want, we can architect nicer buildings and offices, but they won't address the underlying design issues.  Design needs to focus not on how to make health care settings prettier but how to make our encounters more efficient and our care more effective.  It needs to focus on us and our health.  We need to start asking the right questions and solving the right design problems.

If we're waiting long enough that we even notice the waiting area, that's a design problem.