Monday, October 24, 2016

Your Toaster May Be Bad For Your Health

Quote of the week/month:

In a relatively short time we've taken a system built to resist destruction by nuclear weapons and made it vulnerable to toasters. -- Jeff Jarmoc

Mr. Jarmoc was, of course, referring to the cyberattack last week that shut down access to many major websites (including, ironically, Twitter) for much of the day Friday. The attack was a distributed denial of service (DDoS) attack, meaning the hackers flooded a key piece of Internet infrastructure with bogus service requests. In this case, they targeted a company called Dyn, whose Domain Name System (DNS) service acts as a directory for web addresses; with it overwhelmed, legitimate requests could not be fulfilled.
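To make the failure mode concrete, here is a minimal sketch in Python (the hostname is just an example) of the DNS lookup that precedes every web request. When Dyn's name servers were drowning in junk traffic, this step failed, so sites appeared "down" even though their own servers were fine:

    import socket

    def lookup(hostname):
        # Every web request starts by resolving a name to an IP address.
        # If the name servers can't answer -- as during the Dyn attack --
        # the site is unreachable even though it is up and running.
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            return None  # resolution failed

    print(lookup("twitter.com") or "name resolution failed")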

What makes this even more interesting is that the hackers conducted the attack using hundreds of thousands, perhaps millions, of Internet-connected devices -- e.g., webcams, routers, TVs, DVRs, security cameras, perhaps even the odd toaster or two. This "botnet army" ran malware called Mirai, originally developed by gamers to deny online access to rival gamers.

As FastCompany reported, there had been warnings about attacks using these "Internet of Things" devices for some time, but the attack still succeeded, rendering over 1,000 websites unavailable. The attackers' motives are not clear. A security blogger told The Wall Street Journal, "I believe somebody’s feelings got hurt and that we’re dealing with the impact. We’re dealing with young teenagers who are holding the internet for ransom."

I don't know if that should make me feel less scared, or more.

The New York Times warns of "a new era" of attacks powered by IoT devices, noting that many of them come with weak or nonexistent security features -- and that there soon could be billions of them in use.  A recent survey (The Internet of Stranger Things) confirms that most of us are worried about the cybersecurity risks of our various devices, but few of us have actually done anything about them.
We may buy cybersecurity programs for our computers, and try to beef up our passwords, but probably most of us aren't doing the same for our refrigerators or our cars.  Yet those are the kinds of devices we now need to worry about.
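For what it's worth, the first step is embarrassingly simple: stop running factory-default credentials. Here is a minimal sketch in Python of the kind of audit few of us ever perform on our gadgets -- the credential list is purely illustrative, not Mirai's actual dictionary:

    # Illustrative factory-default logins; botnets like Mirai scan the
    # Internet for devices still accepting pairs like these.
    COMMON_DEFAULTS = {
        ("admin", "admin"),
        ("root", "root"),
        ("admin", "1234"),
    }

    def audit_credentials(username, password):
        # Flag a device login that a botnet could trivially guess.
        if (username, password) in COMMON_DEFAULTS:
            return "DANGER: factory-default credentials -- change them"
        if len(password) < 12:
            return "Weak: use a longer, unique password"
        return "OK"

    print(audit_credentials("admin", "admin"))
    print(audit_credentials("me", "correct-horse-battery-staple"))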

It's worse than that.  As The Times further noted:
The difference with the internet is that it is not clear in the United States who is supposed to be protecting it. The network does not belong to the government — or really to anyone. Instead, every organization is responsible for defending its own little piece.
Decentralized is good, until it is not.

What does this have to do with health care? Plenty, as it turns out. IoT devices are increasingly helping us manage our health and medical care. IoT in health care is expected to be a huge market -- perhaps 40% of the total IoT, worth some $117b by 2020, according to McKinsey. Expected major uses include wearables, monitors, and implanted medical devices.

The problem is that many manufacturers haven't necessarily prepared for cyberattacks.  Kevin Fu, a professor at the University of Michigan's Archimedes Center for Medical Device Security, told CNBC: "the dirty little secret is that most manufacturers did not anticipate the cybersecurity risks when they were designing them [devices] a decade ago, so this is just scratching the surface."

Again, I'm not sure if the fact that there already are such centers as Dr. Fu's should make me feel less scared, or more.

Cybersecurity concerns for health care don't just involve the Internet.  Earlier this month J&J warned that one of its insulin pumps was vulnerable to hackers, who could spoof communication between the device and its wireless remote control.  The company sent letters about the risk to some 114,000 patients and their doctors, while claiming that the risk was low and that they knew of no such attacks -- yet.
One has to wonder how many other vulnerable devices there may be.
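The general class of fix is well understood, even if retrofitting it onto devices already in patients' hands is hard: the pump should accept only commands it can verify came from its paired remote. A minimal sketch of the idea in Python -- the shared key, command format, and counter scheme are hypothetical illustrations, not how J&J's device actually works:

    import hashlib
    import hmac

    SHARED_KEY = b"secret provisioned when the remote is paired"  # hypothetical

    def sign_command(counter: int, command: bytes) -> bytes:
        # The remote tags each command; including a counter keeps an
        # eavesdropper from replaying an old, legitimately signed command.
        msg = counter.to_bytes(8, "big") + command
        return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

    def pump_accepts(counter: int, command: bytes, tag: bytes) -> bool:
        # The pump recomputes the tag and rejects anything that doesn't match.
        expected = sign_command(counter, command)
        return hmac.compare_digest(expected, tag)

    cmd = b"BOLUS:2.0"
    print(pump_accepts(1, cmd, sign_command(1, cmd)))  # True: paired remote
    print(pump_accepts(1, cmd, b"\x00" * 32))          # False: spoofed, rejected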

When it comes to health care, a DDoS attack would be at best an inconvenience and at worst life-threatening, but the cybersecurity risk most people still worry about is privacy. We're going to need to be reassured both that Internet-based services will be there when we need them, and that our privacy won't be compromised by them. Those are, unfortunately, tough asks.

After all, healthcare is the industry whose data and systems are already being held for ransom by hackers so amateur that they've sometimes settled for as little as $17,000 in bitcoin. Meanwhile, cyberattacks on electronic health records are growing "exponentially," according to a new GAO report. The GAO estimated that 113 million records were breached in 2015 -- up from 12.5 million in 2014, and fewer than 135,000 in 2009. One has to imagine hackers are drooling over the vulnerability of IoT data.

The Street reports that "traditional" IT security firms (such as Symantec) are already focusing on IoT, as are new players like PTC and Synopsys, but also warns that, when it comes to IoT for health, security is still a major concern. As Ivan Feinseth of investment bank Tigress Partners put it, "the connected car and house are really, really cool, but none of that is more important than healthcare."

Unfortunately, investment in cybersecurity for IoT remains low, with estimated spending only around $390 million, according to ABI Research. That's out of some $5.5b in healthcare cybersecurity spending in 2016. ABI estimates IoT cybersecurity spending will triple by 2021, but even that may lag far behind the spread of health IoT devices.

We've grown used to being hyperconnected -- through email, the web, our mobile devices -- and are just starting to explore the possibilities of IoT. The Pandora's box of connectivity is not going to close. However, the basic structures of the internet are now some 40 years old, those of the World Wide Web some 25, and it may be time to figure out what comes next, especially because of IoT.

Whether that is "Internet2," whether that is the "browserless experience" Acquia Labs envisions, whether that is blockchain -- I don't know. What I do know is that a cyberwar in health is one in which we can't afford to lose many battles, so we had better figure it out quick.

Before my toaster decides to do something mean to me.

Tuesday, October 18, 2016

Health Care's White Guy Problem(s)

The Wall Street Journal reports that women in India aren't benefiting from the spread of smartphones, which are helping men in that country -- where landlines are scarce, especially in rural areas -- perform the same kind of mobile functions most of us take for granted.

Rather than technology leveling gender gaps in India, though, it is exacerbating them.  Some 114 million more men than women have smartphones there, and that gap isn't going away anytime soon, due to gender biases that still dominate.  "Mobile phones are dangerous for women," explained a village elder.

Well, you might say, that's just India.  That sort of thing doesn't happen here, thank goodness.  Maybe you should talk to Tamika Cross, M.D.

Dr. Cross has gained national attention lately due to an incident on a Delta flight. There was a medical emergency, and she went into "emergency mode," getting out of her seat to offer her services. Being young, female, and African-American, though, she evidently didn't fit the flight attendants' mental profile of a physician. As one of them apparently told her, "Oh, no, sweetie, put [your] hand down. We're looking for actual physicians or nurses or other type of medical personnel..."

I'm not sure which is more insulting, that she didn't fit their stereotype of any kind of medical professional, much less a doctor, or that they called her "sweetie."

Dr. Cross's experience has struck a chord, prompting the hashtag #whatadoctorlookslike, which has spurred both support and similar accounts, such as Jennifer Adaeze Okwerekwu's piece in Stat, Jennifer Conti's story in Slate, and Lilly Workneh's Huffington Post column, plus thousands of sympathetic tweets.

The story is getting attention as an issue for female minority doctors, but the problem is, of course, much bigger than that. It is an issue for minorities and women in medicine generally, and for physicians who have immigrated to this country, to name a few subgroups.

While it is true that, according to the AAMC, women now make up 47% of medical school students, they account for only 38% of full-time medical school faculty, 21% of full professors, and 15% of department chairs. And nationally, women make up only a third of the physician workforce.

Still, that's better than for minorities, who make up only 20% of the physician workforce yet 37% of the population (and are projected to be a majority within a generation). African-American and Hispanic/Latino physicians each account for only about 4% of total physicians (and, as it turns out, minority physicians play an "outsized role" in providing care to minority and underserved patients).

Clearly, there is a problem.

It's not just from whom we get our care that shows our cultural biases, but also what care we get. There are well-documented disparities in care by race/ethnicity and by gender. For example, men and women get treated differently for coronary heart disease, the nation's leading killer of both men and women. Those differences are neither by design nor helping women, whose mortality rates for heart disease have not dropped as dramatically as men's have.

It doesn't help that clinical trials for such care are likely to have twice as many male participants as female, a fact that is true of clinical trials for many diseases. There are disturbing under-representations of minorities in clinical trials as well.

In perhaps the most obvious example of gender mattering -- or not mattering -- there is the issue of maternal deaths in childbirth. The U.S. literally has third-world mortality rates in this area, and is one of the few countries reporting increasing, not decreasing, rates in the 21st century. Where is the outrage, where is the urgency to address the problem? Do most of us even know there is a problem?

Health care shouldn't feel singled out about these kinds of biases. Congress has 20% female Senators and 19% female Representatives -- numbers that still make the private sector look bad: only 4% of Fortune 500 companies have a female CEO. A recent report on leading New York law firms found only 19% of partners were female, and only 5% were minorities.

The diversity problem in tech is especially well known.  Women make up less than 20% of tech jobs, and closer to 5% if just counting programmers.  It has been estimated that only 2% of tech workers are African-American and 3% Hispanic.

This matters for numerous reasons, perhaps most importantly because of AI. AI is one of the biggest tech trends, in healthcare and elsewhere, as many see it soon augmenting or even replacing human roles. Unfortunately, there are concerns that the AI field already suffers from what Kate Crawford, writing in The New York Times, called its "white guy problem," since most of its developers are, in fact, white guys, complete with their implicit and explicit biases.

As Professor Crawford said: "We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future." Your AI doc may not be a white male but may still think like one.

Look, I have nothing against white guys; heck, I am a white guy.  But the fact is that white males are not, and never have been, a majority in this country.  Yet in our health care system you're most likely to get care from a white male, who was most likely trained by white males, and the care you receive is most likely based on what has been found appropriate for white males.

If any of that sounds even remotely right to you, you're probably a white male.

The gender, race, or ethnicity -- or, for that matter, the sexual orientation, socioeconomic background, or religion -- of the people giving us care shouldn't matter; what should matter is how well they provide that care. On the other hand, those factors should all factor into the care we receive, to ensure that we get the most appropriate care for our specific health needs.

We talk a lot about patient-centered care and personalized/precision medicine, but we're a long way from even recognizing how pervasive the biases are that keep us from achieving them.

Tuesday, October 11, 2016

Will Anyone Notice?

There's an interesting verbal battle going on between two prominent tech venture capitalists over the future of AI in health care.  In an interview in Vox,  Marc Andreessen asserted that Vinod Khosla "has written all these stories about how doctors are going to go away...And I think he is completely wrong."  Mr. Khosla was quick to respond via Twitter:  "Maybe @pmarca [Mr. Andreessen] should read what I think before assuming what I said about doctors going away." He included a link to his detailed "speculations and musings" on the topic. 

It turns out that Mr. Khosla believes that AI will take away 80% of physicians' work, but not necessarily 80% of their jobs, leaving them more time to focus on the "human aspects of medical practice such as empathy and ethical choices."  That is not necessarily much different than Mr. Andreessen's prediction that "the job of a doctor shifts and becomes a higher-level, more important job that pays better as the doctor becomes augmented by smarter computers."

When AIs start replacing physicians, will we notice -- or care?

Personally, I think it is naive to expect that only 20% of physicians' jobs are at risk from AI, or that AI will lead to physicians being paid even more.  The future may be closer than we realize, and "virtual visits" -- telehealth -- may illustrate why.

Recently, Fortune reported that over half of Kaiser Permanente's patient visits were done virtually, via smartphones, videoconferencing, kiosks, etc.  That's over 50 million such visits annually.  Just a year ago a research firm predicted 158 million virtual visits nationally -- by 2020.   At this rate, Kaiser may beat that projection by itself.

Or take Sherpaa, a health start-up that is trying to replace fee-for-service, in-person doctor visits with virtual ones. Available with a $40 monthly membership fee, the visits are delivered via its app, texts, or emails. Its physicians can order lab work, prescribe, and make referrals if needed.

Sherpaa prides itself on offering more continuity to members through using a small number of full-time physicians (how, and whether, the Sherpaa model scales remains to be seen). Sherpaa claims that 70% of members' health issues are resolved entirely through virtual visits. Many concierge medicine and direct primary care practices also encourage members to at least start with virtual consults.

How many people would notice if virtual visits were with an AI, not an actual physician?

Companies in every industry are racing to create chatbots, using AI to provide human-like interactions without humans.  Google Assistant, Amazon's Echo, and Apple's Siri are leading examples.  And health care bots are on the way.

Digital Trends reported on two U.K.-based companies that are developing AI chatbots designed specifically for health care, Your.MD and Babylon Health. Your.MD claims to have the "world's first Artificial Intelligence, Personal Health Assistant," able both to ask patients pertinent questions and to respond to their questions, "personalized according to your unique profile."

Babylon Health claims to have "the world's most accurate medical artificial intelligence," which they say can analyze "hundreds of millions of combinations of symptoms" in real time to determine a personalized diagnosis.  Both companies say they want to democratise health care by making health advice available to anyone with a smartphone.

Not everyone is convinced we're there yet. A new study did a direct comparison of human physicians versus 23 commonly used symptom checkers to test diagnostic accuracy, and found that the latter's performance was "clearly inferior." The symptom checkers listed the correct diagnosis in their top 3 possibilities 51% of the time, versus 84% for humans. That would seem to throw some cold water on the prospect of using an AI to help with your health issues.
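For clarity on what those figures measure: "listed the correct diagnosis in the top 3" is just top-k accuracy over a set of test cases. A toy sketch in Python -- the cases and diagnoses below are invented for illustration, not drawn from the study:

    def top_k_accuracy(cases, k=3):
        # Fraction of cases where the correct diagnosis appears anywhere
        # in the checker's top-k ranked differential.
        hits = sum(1 for ranked, truth in cases if truth in ranked[:k])
        return hits / len(cases)

    # (ranked differential, correct diagnosis) -- hypothetical examples
    cases = [
        (["migraine", "tension headache", "sinusitis"], "migraine"),
        (["GERD", "costochondritis", "angina"], "angina"),
        (["flu", "common cold", "strep throat"], "pneumonia"),
    ]
    print(top_k_accuracy(cases))  # 2 of these 3 toy cases hit, so 0.666...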

However, consider the following:

  • The study was done by researchers from the Harvard Medical School.  One wonders if researchers at the MIT Computer Science and Artificial Intelligence Laboratory might have used different methodology and/or found different results.
  • The symptom checkers may be the most commonly used, but may not have been the most state-of-the-art. And the real test is how the best of those checkers did against the average human physician.
  • Humans still got the diagnosis wrong in at least 16% of the cases. They're not likely to get much better (at least, not without AI assistance). AIs, on the other hand, are only going to get better.
It is only a matter of time until AIs equal or exceed human performance in many aspects of health care and elsewhere.

It used to be that physicians were sure that their patients would always rather wait in order to see them in their offices, until retail clinics proved them wrong.  It used to be that physicians were sure patients would always rather see them in person rather than use a virtual visit (possibly with another physician), until telehealth proved them wrong.  And it still is true that most physicians are sure that patients prefer them to AI, but they may soon be proved wrong about that too.

Over 50 years ago MIT computer scientist Joseph Weizenbaum created ELIZA, a computer program that mimicked a psychotherapist.   It would be considered rudimentary today, but by all accounts its users took it seriously, to the extent some refused to believe they weren't communicating with a person.
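To appreciate just how rudimentary, consider that ELIZA's core was little more than pattern-matching and canned reflections. Here is a minimal sketch in Python in the same spirit (these rules are invented for illustration; ELIZA's actual DOCTOR script was larger, but not fundamentally different):

    import random
    import re

    # A few reflection rules in the spirit of ELIZA's DOCTOR script
    RULES = [
        (r"I need (.*)", ["Why do you need {0}?", "Would it really help to get {0}?"]),
        (r"I am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    ]

    def respond(text):
        # Return the first matching rule's reflection, or a stock prompt.
        for pattern, answers in RULES:
            m = re.match(pattern, text, re.IGNORECASE)
            if m:
                return random.choice(answers).format(*m.groups())
        return "Please, go on."

    print(respond("I am worried"))        # e.g. "How long have you been worried?"
    print(respond("My mother calls me"))  # "Tell me more about your mother."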

More recently, an AI named Ellie is serving a similar purpose.  Ellie comes with an avatar and can analyze over 60 features of the people with whom it is interacting, including body language and tone of voice.  It turns out that people open up to Ellie more when they are told they are dealing with an AI than when told it is controlled by a human -- but the really amazing thing is that the latter group did not seem to realize there was actually no human involved.

Score one for the Turing test.  

AI is going to play a major role in health care. Rather than using physicians to focus more on empathy and ethical issues, as Mr. Khosla suggested (or paying them more for it, as Mr. Andreessen suggested), we might be better off using nurses and ethicists, respectively, for those purposes. So what will physicians do?

The hardest part of using AI in health care may not be developing the AI, but figuring out what the uniquely human role in providing health care is.

Monday, October 3, 2016

The Waiting Game

A few days ago ProPublica had a headline I wished I'd written: If It Needs A Sign, It's Probably Bad Design.  Although the article started with a health care example (EpiPen of course, citing Joyce Lee's brilliant post), it wasn't focused on health care -- but it might as well have been.   Health care is full of bad design, and of signs.

Take, for example, the waiting room.

When most patients enter a provider's office or facility, the first thing they are likely to see is a waiting room.  The waiting room probably has other would-be patients already waiting there, each full of their own health concerns.  In some instances, the initial waiting room is merely a staging area; once processed, patients may be sent to yet another waiting room to wait some more.  And, of course, once they eventually do reach an exam room, they'll probably endure some more waiting, no matter how long their wait has already been.

It is no coincidence that in health care those of us not providing the care are called patients.

We're expected to be patient.  After all, our providers are very busy.  They have other patients.  Their time is apparently more precious than ours; if you don't think so, contrast what happens if you are late with what happens when they are late.  If they're late to our appointment, we're led to believe, it is because they've been spending quality time with other patients, and we can hope we'll get the same consideration.

Of course, they have all those other patients, and not enough time to keep them all on schedule, because that's how the day was scheduled.  It's not like the patient load couldn't have been predicted.  No one is forcing them to schedule us in unreasonably narrow increments.  It's simply a matter of generating the desired revenue.    

Speaking of revenue, the other thing patients are likely to see when entering an office are signs about payment -- have insurance cards ready, payment is expected at time of service, etc. Between those financial reminders and the waiting room, it is not exactly a welcoming experience.

Health care providers are certainly paying some attention to the problem. The Upstate Business Journal reports on how some local physician offices and hospitals are moving to a more "at-home appeal," with more natural light and better furnishings (including plants and artwork). The waiting areas are "moving in the direction of a more collaborative, inviting space," including "having more technology with televisions and iPad stations that keep patients interested and occupied while they wait."

Similarly, FastCompany profiled the winners of the American Institute of Architects (AIA) National Healthcare Design Awards, seven medical centers with some innovative designs. The designs aren't just about aesthetics. As an AIA spokesperson said: "There's much higher awareness now of how healthy environments help patients heal. That is, in turn, related to evidence-based design studies that actually prove that—so it's not just intuitive, it's actually been proven in many instances."

Evidence-based design is, in fact, a real thing. AIA has guidelines for healthcare buildings that try to take such findings into account, such as moving away from semi-private rooms, and these have been incorporated into law in over 35 states. We've all seen the boom in healthcare building; consulting firm FMI estimates some $42b of it in 2016, and hopefully some good portion of that is based on these design principles.

That's all well and good.  Making health care settings more comfortable and easier on the eye is a good thing, right?  But those may be missing the point.  Designers can try to make a doctor's office feel more like home, or a hospital seem more like a hotel, but we're not stupid.  We'll still know we're not at home or in a hotel.

We're focusing on the wrong design problem.  As Tom Goodwin wrote recently in TechCrunch: "We’ve got the questions wrong. It shouldn’t be how are you innovating or which project is doing new things, but why are you doing it and on what level."  He was talking about innovation generally, not just in design, but the point still applies.

Instead of paying designers to try to make waiting more comfortable, maybe we should spend the money on industrial engineers to identify why we're waiting at all, and address those root causes. It is the wait that is the problem, not the waiting area.

Instead of pouring money into making hospitals more like hotels, maybe we should be spending it on programs that allow people to remain at home. Hospital patients often leave more disabled than when they arrived because they spend too much time in bed, because hospital design and processes revolve around beds. We can make better beds in nicer rooms, but they're still not good for us.

The design problems are pervasive. Health care is, after all, an industry that incents physicians to use EHRs they don't like; that has patient portals that patients don't even use; whose bills are so notoriously poorly designed that HHS holds contests to find ways to improve them; and whose terminology is so confusing that the U.S. Department of Education says only 10% of us have a proficient level of health literacy. Bad design abounds.

We can put up all the signs we want, we can architect nicer buildings and offices, but they won't address the underlying design issues.  Design needs to focus not on how to make health care settings prettier but how to make our encounters more efficient and our care more effective.  It needs to focus on us and our health.  We need to start asking the right questions and solving the right design problems.

If we're waiting long enough that we even notice the waiting area, that's a design problem.

Monday, September 26, 2016

Health Care Needs Some Spectacles

I've never written about Snapchat. I never really got its namesake app, whose whole point was to post content that automatically disappeared. I knew it was wildly popular among teens and celebrities, both of whom undoubtedly have more content they wish wouldn't persist than an old fogey like me, but it just seemed purposely trivial.

With its recent introduction of Spectacles, though, I figured Snap Inc. (as the company has renamed itself) deserves a closer look.

The Wall Street Journal broke the story (as did Business Insider) with an in-depth look at Spectacles. It is not a new app, nor some new service on the existing app (which continues to be called Snapchat), but rather a piece of hardware: a pair of sunglasses that can record short videos. Users can record ten- to thirty-second videos, taken from the wearer's perspective. The videos can then, of course, be uploaded to Snapchat, where they also will self-destruct.

Lights on the inside of the glasses will alert users that they are recording, and -- unlike with Google Glass's similar capability -- an external light will let surrounding people know they are being recorded.

Snap believes Spectacles allow for a more natural experience than using a smartphone camera. The recording is closer to what one actually sees, both because it is shot from the eye's POV and because a 115-degree lens records a circular image. More importantly, it frees your hands, much as GoPro does for adventure junkies. As Snap's CEO Evan Spiegel points out, you're not holding your smartphone in front of you "like a wall in front of your face."

Snap has even gone so far as to label themselves a camera company, a curious move in an era where former camera titans like Kodak and Polaroid are trying to reinvent themselves out of that business.  As Mr. Spiegel described it to WSJ, "First it was make a photo [studio portraits].  Then it was take a photo [portable camera].  And finally it was give a photo [instant Polaroids evolving to smartphone selfies]."  He thinks this is a business with a future.

Spectacles are mounted on hipster sunglasses (available in three colors), are priced at $130, and will be offered in a limited rollout this fall. Mr. Spiegel calls Spectacles a "toy," but plenty of people are taking it seriously, as the flurry of press it has received illustrates (e.g., Christian Science Monitor, Fast Company, Forbes, The New York Times, TechCrunch, and Wired). The consensus seems to be that it bears watching, and won't share Google Glass's premature demise.

Snap isn't finished with Spectacles.  "We’re going to take a slow approach to rolling them out,” Mr. Spiegel told the WSJ. “It’s about us figuring out if it fits into people’s lives and seeing how they like it."

Snapchat is used by over 150 million people daily -- more than Twitter -- and more than 60% of 13-to-34-year-old smartphone users have it. As the NYT reported, more than 35 million U.S. users watched portions of the Rio Olympics via a Snapchat channel -- "there was more Olympics footage and content on Snapchat than there was on NBC" -- and media companies are flocking to produce Snapchat content.

No wonder Facebook first tried to imitate Snapchat (Poke, anyone?), then buy it (a supposed $3b offer).  They're paying attention.

There are several lessons here:

1.  AR awaits:  Yes, right now Spectacles just take videos, but don't expect that to remain the limit of their capabilities. Snapchat already offers various features (e.g., Lenses and Geofilters) to alter conventional smartphone photos, and adding augmented reality options makes sense. Honestly, would you rather experience AR through your smartphone screen or in your field of vision (as Google Glass attempted)? Not much of a contest. I'm a big believer in how AR/VR will inform us and transform many of our experiences, and one of Spectacles' descendants could very well help deliver them.

2.  Goodbye Smartphone:  Yes, we increasingly love our smartphones. They are the Swiss Army knives of personal electronic devices, offering features undreamed of just a couple of decades ago, to the point that the "phone" in the name no longer reflects a main purpose. As multi-purpose and omnipresent as they have become, there is still that awkwardness of having to hold the device.

We are still not in an Internet of Things (IoT) environment, but we will be in less time than it took to go from cell phones to smartphones. So apps and services that are built on smartphones had better start looking for other, less device-specific platforms. I'm not suggesting that Spectacles is, in any way, that platform, but at least Snap Inc. understands the problem.

3. Define Your Industry -- Don't Be Defined By It:  Snapchat was, and still is, sitting pretty in the messaging industry.  Messaging is big.  Facebook is pumping lots of money into Messenger and Instagram, suitors are falling over themselves to try to buy Twitter, Google has high hopes for Allo, and WeChat still hopes to take over the world outside of China.

Meanwhile, Snapchat's parent company wants to be a camera company.  That might sound dangerously backward-looking, but when Mr. Spiegel says Snap Inc. is a camera company, he doesn't mean that in the traditional sense.  As he told the WSJ, "It’s about instant expression and who you are right now. Internet-connected photography is really a reinvention of the camera."  Snap Inc. is reinventing the industry they are in.

Look at it this way: Snap won't be dependent on some other company's camera sitting on some other company's device to generate content.  Maybe not so backward looking at all.

In health care, bold thinking is for hospitals to relabel themselves as "health systems," or chiropractors to call their offices "wellness clinics."  That's nothing like a hugely successful messaging app company declaring they are in the camera business and producing hardware to support that vision.  Snap may succeed or they may fail (just ask Google), but what they are doing takes guts, and a vision of the future that doesn't just look like more of the same.  If any industry needs those, it is health care.

OK, health innovators: what is your parallel to Spectacles, and what industry do you think you're in?

Sunday, September 18, 2016

I Really Wish You Wouldn't Do That

Digital rectal exams (DREs) typify much of what's wrong with our health care system. Men dread getting them, they're unpleasant, they vividly illustrate the physician-patient hierarchy, and -- oh, by the way -- they apparently don't actually provide much value.

By the same token, routine pelvic exams for healthy women have no proven value either.

The recent conclusions about DREs come from a new study.  One of the researchers, Dr. Ryan Terlecki, declared: "The evidence suggests that in most cases, it is time to abandon the digital rectal exam (DRE).  Our findings will likely be welcomed by patients and doctors alike."

No kidding.

The study actually questioned doing DREs when PSA tests were available, but it's not as if PSA tests themselves have unquestioned value.  Even the American Urological Association came out a few years ago against routine PSA tests, citing the number of false positives and resulting unnecessary treatments.

Indeed, the value of even treating the cancer that DREs and PSAs are trying to detect -- prostate cancer -- has come under new scrutiny.  A new study tracked prostate cancer patients for ten years, and found "no significant difference" in mortality between those getting surgery, radiation, or simple active monitoring.

The surgery and radiation, on the other hand, had some unwelcome side effects.  Forty-six percent of men who had their prostate removed were wearing adult diapers six months later, and impotence was reported in 88% of surgical patients and 78% of radiation patients.  The chief medical officer of the American Cancer Society admitted, "Our aggressive approach to screening and treating has resulted in more than 1 million American men getting needless treatment."

"Needless" is perhaps the most benign description of what happened to those men.

As for pelvic exams, about three-fourths of preventive visits to OB-GYNs include one, over 60 million visits annually. They're not very good at either identifying or ruling out ovarian cancer, and for the asymptomatic conditions they can detect, there isn't much data indicating that early treatment offers any advantage over simply waiting for symptoms.

Or take mammograms. Mammograms are uncomfortable, have significant false-positive/over-diagnosis rates, and generate something like $4b annually in unnecessary costs, yet they remain the "gold standard."

Not many women like them.  It has been oft-stated that if men had to get them, there would be a better method.  Yet, according to the CDC, about two-thirds of women over 40 have had a mammogram within the past two years.  Maybe they shouldn't have.

Recommendations for how often, and at which ages, women should get mammograms vary widely, with the default often ending up being annual screenings. However, new research has concluded that many women need only triennial screenings. Lead author Amy Trentham-Dietz said: "Women at low risk and low breast density will experience more harms with little added benefit with annual and biennial screening compared to triennial screening."

Mammograms can find evidence of breast cancers or pre-cancers, which often leads to mastectomies. It has been known for some time that mastectomy rates in the U.S. are much higher than in other countries, but now we're seeing more mastectomies in earlier stages of breast cancer and a "perplexing" increase in bilateral mastectomies, even among women who neither have cancer in the second breast nor carry the BRCA risk mutation for it, according to an AHRQ brief earlier this year.

As AHRQ Director Rick Kronick observed: "This brief highlights changing patterns of care for breast cancer and the need for further evidence about the effects of choices women are making on their health, well-being and safety."

In less diplomatic terms: what the hell?

Then there is everyone's favorite test -- colonoscopies.  Only about two-thirds of us are getting them as often as recommended, and over a quarter of us have never had one.  There are other alternatives, including a "virtual" colonoscopy and now even a pill version of it, but neither has done much to displace the traditional colonoscopy.  And all of those options still require what many regard as the worst part of the procedure, the prep cleansing.

An option that avoids not only the procedure but also the prep hasn't taken root either: collecting a sample of one's stool and testing it for blood. These tests, such as the fecal immunochemical test (FIT) and the fecal occult blood test (FOBT), have strong research support, to the point that the Canadian Task Force on Preventive Health Care says they, not colonoscopies, should be the first line of screening. They are also much cheaper than a colonoscopy. In the U.S., though, colonoscopies remain physicians' preferred option.

The final example is what researchers recently called an "epidemic" of thyroid cancer, which they attributed to overdiagnosis. In the U.S., for example, annual incidence tripled from 1975 to 2009. The researchers found that rates of the cancer were tied to the increased availability of diagnostic tests like ultrasound and CT scans, which led to the discovery of more cancers. They believe that as many as 80% of the tumors discovered were small, benign ones -- which did not stop them from being surgically treated.

In fact, according to the researchers: "The majority of the overdiagnosed thyroid cancer cases undergo total thyroidectomy and frequently other harmful treatments, like neck lymph node dissection and radiotherapy, without proven benefits in terms of improved survival."  Not only that, once they've had the surgery, most patients will have to take thyroid hormones the rest of their lives.  

All of these examples happen to relate to cancer, although there certainly are similar examples with other diseases/conditions (e.g., appendectomy versus antibiotics for uncomplicated appendicitis).

Two conclusions:

1.  If we're going to have unpleasant things done to us, they better be based on facts: As the above examples illustrate, some of our common treatments and tests are based on tradition and/or outdated science.  We deserve better than that.  We should demand the options and the evidence.

2.  We should do everything we can to make unpleasant things, well, less unpleasant:  Physicians can't just focus on reducing patients' medical complaints; they should also seek to reduce complaints about the care experience itself. When patients dread having something done, and often use that dread as an excuse to skip services, that should be a tip-off that something needs to change.

Let's get right on those.

Thursday, September 8, 2016

AI Docs May Need Some Good AI Lawyers

A recent post highlighted how artificial intelligence (AI) is already playing important roles in health care, and concluded that expanded use of AI may be ready for us before we are ready for it.  One example of the kind of problem we'll face is: who would we sue if care that an AI recommended or performed went wrong?

Because, you know, always follow the money.

Last week Stanford's One Hundred Year Study On Artificial Intelligence released its 2016 report, looking at the progress and potential of AI, as well as some recommendations for public policy.  The report urged that we be cautious about both too little regulation and too much, as the former could lead to undesirable consequences and the latter could stifle innovation.

One of the key points is that there is no clear definition of AI, because "it isn't any one thing."  It is already many things and will become many, many more.  We need to prepare for its impacts.

This call is important but not new.  For example, in 2015 a number of thought leaders issued an "open letter" on AI, both stressing its importance and that we must maximize the societal benefit of AI.  As they said, "our AI systems must do what we want them to do," "we" being society at large, not just AI inventors/investors.

The risks are real. Most experts downplay concerns that AI will supplant us, as Stephen Hawking famously warned it might, but that is not the only risk it poses. For example, mathematician Cathy O'Neil argues in Weapons of Math Destruction that algorithms and Big Data are already being used to target the poor, reinforce racism, and make inequality worse. And this is while they are still largely overseen by humans. Think of the potential when AI is in charge.

With health care, deciding what we want AI to be able to do is literally a life-or-death decision.  

Let's get to the heart of it: there will be an AI that knows as much as -- or more than -- any physician ever has. When you communicate with it, you will believe you are talking to a human, perhaps one smarter than any human you know. There will be an AI that can perform even complex procedures faster and more precisely than a human. And there will be AIs who can look you in the eye, shake your hand, feel your skin -- just like a human doctor. Whether they can also develop, or at least mimic, empathy remains to be seen.

What will we do with such AIs?  

The role that many people seem most comfortable with is that they would serve as aids to physicians.  They could serve as the best medical reference guide ever, able to immediately pull up any relevant statistics, studies, guidelines, and treatment options.  No human can keep all that information in their head, no matter how good their education is or how much experience they've had.  

Some go further and envision AIs actually treating patients, but only with limited autonomy and under direct physician supervision, as with physician assistants.  

But these only tap AI's potential.  If they can perform as well as physicians -- and that is an "if" about which physicians will fight fiercely -- why shouldn't their scope of practice be as wide as physicians'?  In short, why shouldn't they be physicians?

Historically, the FDA has regulated health-related products.  It has struggled with how to regulate health apps, which pose much less complicated questions than AI.  With AI, regulators may not be able to ascertain exactly how it will behave in a specific situation, as its program may constantly evolve based on new information and learning.  How is a regulator to say with any certainty that an AI's future behavior will be safe for consumers?

Perhaps AI will grow independent enough to be considered people, not products.  After all, if corporations can be "people," why not AI?  Indeed, specific instances of AI may evolve differently, based on their own learning.  Each AI instance might be, in a sense, an individual, and would have to be treated accordingly.  

If so, can we really see a medical licensing board giving a license to an AI?  Would we want to make one go through the indentured servitude of an internship/residency?   How should we evaluate their ability to give good care to patients?  After all, we don't do such a great job about this with humans.  

Let's say we manage to get to AI physicians.  It's possible that they will become widely available, but not seen as "good" as human physicians, and it ends up that only the wealthy can afford the latter.  Or AIs could be seen as better, and the wealthy ensure that only they benefit  from them, with everyone else "settling" for old-fashioned human physicians.  

These are the kinds of societal issues the Stanford report urged that we think about.

One of the problems we'll face is that AIs may expose the amount of unnecessary care patients now get, as is widely believed.   They may also expose that many of the findings which guide treatment decisions are based on faulty or outdated research, as has been charged.  In short, AIs may reveal that the practice of medicine is, indeed, a very human activity, full of all sorts of human shortcomings.  

Perhaps expecting AIs to be as good as physicians is setting too low a bar.

Back to the original question: who would be at fault if care given by an AI causes harm?  Unlike with humans, an AI's mistakes are unlikely to be because they didn't remember what to do, or because they were tired or distracted.  On the other hand, the self-generated algorithm it used to reach its decision may not be understandable to humans, so we may never know exactly what went "wrong." 

Did it learn poorly, so that the AI's creator is at fault? Did it base its decisions on invalid data or faulty research, in which case should their originators be liable? Did it not have access to the right precedents, in which case can we attach blame to anyone? How would we even "punish" an AI?

Lawyers and judges, legislators and regulators will have plenty to work on.  Some of them may be AIs too.

Still, the scariest thing about AI isn't the implications we can imagine, no matter how disruptive they seem, but the unexpected ones that such technological advances inevitably bring about.  We may find that problems like licensing, malpractice, and job losses are the easy ones.