Monday, September 26, 2016

Health Care Needs Some Spectacles

I've never written about Snapchat.  I never really got the point of its namesake app, which was to post content that automatically disappeared.  I knew it was wildly popular among teens and celebrities, both of whom undoubtedly had more content they wished wouldn't persist than an old fogey like me, but it just seemed purposely trivial.

With their recent introduction of Spectacles, though, I figured Snap Inc. (as the company has renamed itself) deserved a closer look.

The Wall Street Journal broke the story (as did Business Insider) with an in-depth look at Spectacles.  It is not a new app, nor some new service on the existing app (which continues to be called Snapchat), but rather a piece of hardware: a pair of sunglasses that can record short videos.  Users can record ten- to thirty-second videos, taken from the sunglasses' perspective.  The videos can then, of course, be uploaded to Snapchat, where they also will self-destruct.

Lights on the inside of the glasses will alert users that they are recording, and -- unlike with Google Glass's similar capability -- an external light will let surrounding people know they are being recorded.


Snap believes Spectacles allow for a more natural experience than using a smartphone camera.  The recording is closer to what one actually sees, since it is shot from the eye's point of view and uses a 115-degree-angle lens to record a circular image.  More importantly, it frees your hands, much like a GoPro does for adventure junkies.  As Snap's CEO Evan Spiegel points out, you're not holding your smartphone in front of you "like a wall in front of your face."

Snap has even gone so far as to label themselves a camera company, a curious move in an era when former camera titans like Kodak and Polaroid are trying to reinvent themselves out of that business.  As Mr. Spiegel described it to WSJ, "First it was make a photo [studio portraits].  Then it was take a photo [portable camera].  And finally it was give a photo [instant Polaroids evolving to smartphone selfies]."  He thinks this is a business with a future.

Spectacles are mounted on hipster sunglasses (available in three colors), are priced at $130, and will be offered in a limited rollout this fall.  Mr. Spiegel calls Spectacles a "toy," but plenty of people are taking it seriously, as the flurry of press it has received illustrates (e.g., Christian Science Monitor, Fast Company, Forbes, The New York Times, TechCrunch, and Wired).  The consensus seems to be that it bears watching, and won't share Google Glass's premature demise.

Snap isn't rushing Spectacles, either.  "We're going to take a slow approach to rolling them out," Mr. Spiegel told the WSJ. "It's about us figuring out if it fits into people's lives and seeing how they like it."

Snapchat is used by over 150 million people daily -- more than Twitter -- and more than 60% of 13- to 34-year-old smartphone users have it.  As the NYT reported, more than 35 million U.S. users watched portions of the Rio Olympics on a Snapchat channel -- "there was more Olympics footage and content on Snapchat than there was on NBC" -- and media companies are flocking to produce Snapchat content.

No wonder Facebook first tried to imitate Snapchat (Poke, anyone?), then buy it (a supposed $3b offer).  They're paying attention.

There are several lessons here:

1.  AR awaits:  Yes, right now Spectacles are just taking videos, but don't expect that to remain the limit of its capabilities.  Snapchat already offers various features (e.g., Lenses and Geofilters) to alter conventional smartphone photos, and adding augmented reality options makes sense.  Honestly, would you rather experience AR through your smartphone screen or in your field of vision (as Google Glass attempted)?  Not much of a contest.  I'm a big believer in how AR/VR will inform us and transform many of our experiences, and one of Spectacles' descendants could very well help deliver them.

2.  Goodbye Smartphone:  Yes, we increasingly love our smartphones.  They are the Swiss Army knives of personal electronic devices, offering features undreamed of just a couple of decades ago, to the point that keeping "phone" in the name no longer reflects their main purpose.  As multi-purpose and omnipresent as they have become, there is still the awkwardness of having to hold the device.

We are still not in an Internet of Things (IoT) environment, but we will be in less time than it took to go from cell phones to smartphones.  So apps and services that are built on smartphones had better start looking for other, less device-specific platforms.  I'm not suggesting that Spectacles is, in any way, that platform, but at least Snap Inc. understands the problem.

3. Define Your Industry -- Don't Be Defined By It:  Snapchat was, and still is, sitting pretty in the messaging industry.  Messaging is big.  Facebook is pumping lots of money into Messenger and Instagram, suitors are falling over themselves to try to buy Twitter, Google has high hopes for Allo, and WeChat still hopes to take over the world outside of China.

Meanwhile, Snapchat's parent company wants to be a camera company.  That might sound dangerously backward-looking, but when Mr. Spiegel says Snap Inc. is a camera company, he doesn't mean that in the traditional sense.  As he told the WSJ, "It’s about instant expression and who you are right now. Internet-connected photography is really a reinvention of the camera."  Snap Inc. is reinventing the industry they are in.

Look at it this way: Snap won't be dependent on some other company's camera sitting on some other company's device to generate content.  Maybe not so backward-looking after all.


In health care, "bold thinking" means hospitals relabeling themselves as "health systems," or chiropractors calling their offices "wellness clinics."  That's nothing like a hugely successful messaging app company declaring they are in the camera business and producing hardware to support that vision.  Snap may succeed or they may fail (just ask Google), but what they are doing takes guts, and a vision of the future that doesn't just look like more of the same.  If any industry needs those, it is health care.

OK, health innovators: what is your parallel to Spectacles, and what industry do you think you're in?

Sunday, September 18, 2016

I Really Wish You Wouldn't Do That

Digital rectal exams (DREs) typify much of what's wrong with our health care system.  Men dread getting them, they're unpleasant, they vividly illustrate the physician-patient hierarchy, and -- oh, by the way -- they apparently don't actually provide much value.

By the same token, routine pelvic exams for healthy women don't have any proven value either.

The recent conclusions about DREs come from a new study.  One of the researchers, Dr. Ryan Terlecki, declared: "The evidence suggests that in most cases, it is time to abandon the digital rectal exam (DRE).  Our findings will likely be welcomed by patients and doctors alike."

No kidding.

The study actually questioned doing DREs when PSA tests were available, but it's not as if PSA tests themselves have unquestioned value.  Even the American Urological Association came out a few years ago against routine PSA tests, citing the number of false positives and resulting unnecessary treatments.

Indeed, the value of even treating the cancer that DREs and PSAs are trying to detect -- prostate cancer -- has come under new scrutiny.  A new study tracked prostate cancer patients for ten years, and found "no significant difference" in mortality between those getting surgery, radiation, or simple active monitoring.

The surgery and radiation, on the other hand, had some unwelcome side effects.  Forty-six percent of men who had their prostate removed were wearing adult diapers six months later, and impotence was reported in 88% of surgical patients and 78% of radiation patients.  The chief medical officer of the American Cancer Society admitted, "Our aggressive approach to screening and treating has resulted in more than 1 million American men getting needless treatment."

"Needless" is perhaps the most benign description of what happened to those men.

As for the pelvic exam, about three-fourths of preventive visits to OB-GYNs include one -- over 60 million visits annually.  They're not very good at either identifying or ruling out ovarian cancer, and for the asymptomatic conditions they can detect, there isn't much data to indicate that treating them early offers any advantage over simply waiting for symptoms.

Or take mammograms.  Mammograms are uncomfortable, have significant false positive/over-diagnosis rates, and cost us something like $4b annually in unnecessary care, yet remain the "gold standard."

Not many women like them.  It has been oft-stated that if men had to get them, there would be a better method.  Yet, according to the CDC, about two-thirds of women over 40 have had a mammogram within the past two years.  Maybe they shouldn't have.

Recommendations for how often, and at what ages, women should get mammograms vary widely, with the default often ending up being annual screenings.  However, new research has concluded that many women only need triennial screenings.  Lead author Amy Trentham-Dietz said: "Women at low risk and low breast density will experience more harms with little added benefit with annual and biennial screening compared to triennial screening."

Mammograms can find evidence of breast cancers or pre-cancers, which often leads to mastectomies.  It has been known for some time that mastectomy rates in the U.S. are much higher than in other countries, but now we're seeing more mastectomies in earlier stages of breast cancer, as well as a "perplexing" increase in bilateral mastectomies -- even among women who neither have cancer in the second breast nor carry the BRCA risk mutation for it -- according to an AHRQ brief earlier this year.

As AHRQ Director Rick Kronick observed: "This brief highlights changing patterns of care for breast cancer and the need for further evidence about the effects of choices women are making on their health, well-being and safety."

In less diplomatic terms: what the hell?

Then there is everyone's favorite test -- colonoscopies.  Only about two-thirds of us are getting them as often as recommended, and over a quarter of us have never had one.  There are other alternatives, including a "virtual" colonoscopy and now even a pill version of it, but neither has done much to displace the traditional colonoscopy.  And all of those options still require what many regard as the worst part of the procedure, the prep cleansing.

An option that avoids not only the procedure but also the prep hasn't taken root either.  It involves collecting a sample of one's stool to test for blood.  Tests of this kind, such as the fecal immunochemical test (FIT) and the fecal occult blood test (FOBT), have strong research support, to the point that the Canadian Task Force on Preventive Health Care says they, not colonoscopies, should be the first line of screening.  They are also much cheaper than a colonoscopy.  In the U.S., though, colonoscopies remain the preferred option for physicians.

The final example is what researchers recently called an "epidemic" of thyroid cancer, which they attributed to overdiagnosis.  In the U.S., for example, annual incidence tripled from 1975 to 2009.  They found that the rates of the cancer were tied to the increased availability of diagnostic tests like ultrasound and CT scans, which led to the discovery of more cancers.  The researchers believe that as many as 80% of the tumors discovered were small, benign ones -- which did not stop them from being surgically treated.

In fact, according to the researchers: "The majority of the overdiagnosed thyroid cancer cases undergo total thyroidectomy and frequently other harmful treatments, like neck lymph node dissection and radiotherapy, without proven benefits in terms of improved survival."  Not only that, once they've had the surgery, most patients will have to take thyroid hormones the rest of their lives.  

All of these examples happen to relate to cancer, although there certainly are similar examples with other diseases/conditions (e.g., appendectomy versus antibiotics for uncomplicated appendicitis).

Two conclusions:

1.  If we're going to have unpleasant things done to us, they had better be based on facts: As the above examples illustrate, some of our common treatments and tests are based on tradition and/or outdated science.  We deserve better than that.  We should demand the options and the evidence.

2.  We should do everything we can to make unpleasant things, well, less unpleasant:  Physicians can't just focus on reducing patients' medical complaints; they should also seek to reduce patients' complaints about their care.  When patients dread having something done, and often use that dread as an excuse not to get services, that should be a tip-off that something needs to change.

Let's get right on those.

Thursday, September 8, 2016

AI Docs May Need Some Good AI Lawyers

A recent post highlighted how artificial intelligence (AI) is already playing important roles in health care, and concluded that expanded use of AI may be ready for us before we are ready for it.  One example of the kind of problem we'll face is: who would we sue if care that an AI recommended or performed went wrong?

Because, you know, always follow the money.

Last week Stanford's One Hundred Year Study On Artificial Intelligence released its 2016 report, looking at the progress and potential of AI, as well as some recommendations for public policy.  The report urged that we be cautious about both too little regulation and too much, as the former could lead to undesirable consequences and the latter could stifle innovation.

One of the key points is that there is no clear definition of AI, because "it isn't any one thing."  It is already many things and will become many, many more.  We need to prepare for its impacts.

This call is important but not new.  For example, in 2015 a number of thought leaders issued an "open letter" on AI, both stressing its importance and that we must maximize the societal benefit of AI.  As they said, "our AI systems must do what we want them to do," "we" being society at large, not just AI inventors/investors.

The risks are real.  Most experts downplay concerns that AI will supplant us, as Stephen Hawking famously warned it might, but that is not the only risk it poses.  For example, mathematician Cathy O'Neil argues in Weapons of Math Destruction that algorithms and Big Data are already being used to target the poor, reinforce racism, and make inequality worse.  And this is while they are still largely being overseen by humans.  Think of the potential when AI is in charge.

With health care, deciding what we want AI to be able to do is literally a life-or-death decision.  

Let's get to the heart of it: there will be an AI that knows as much as -- or more than -- any physician ever has.  When you communicate with it, you will believe you are talking to a human, perhaps one smarter than any human you know.  There will be an AI that can perform even complex procedures faster and more precisely than a human.  And there will be AIs who can look you in the eye, shake your hand, feel your skin -- just like a human doctor.  Whether they can also develop, or at least mimic, empathy remains to be seen.

What will we do with such AIs?  

The role that many people seem most comfortable with is that they would serve as aids to physicians.  They could serve as the best medical reference guide ever, able to immediately pull up any relevant statistics, studies, guidelines, and treatment options.  No human can keep all that information in their head, no matter how good their education is or how much experience they've had.  

Some go further and envision AIs actually treating patients, but only with limited autonomy and under direct physician supervision, as with physician assistants.  

But these only tap AI's potential.  If they can perform as well as physicians -- and that is an "if" about which physicians will fight fiercely -- why shouldn't their scope of practice be as wide as physicians'?  In short, why shouldn't they be physicians?

Historically, the FDA has regulated health-related products.  It has struggled with how to regulate health apps, which pose much less complicated questions than AI.  With AI, regulators may not be able to ascertain exactly how it will behave in a specific situation, as its program may constantly evolve based on new information and learning.  How is a regulator to say with any certainty that an AI's future behavior will be safe for consumers?

Perhaps AI will grow independent enough to be considered people, not products.  After all, if corporations can be "people," why not AI?  Indeed, specific instances of AI may evolve differently, based on their own learning.  Each AI instance might be, in a sense, an individual, and would have to be treated accordingly.  

If so, can we really see a medical licensing board giving a license to an AI?  Would we want to make one go through the indentured servitude of an internship/residency?  How should we evaluate its ability to give good care to patients?  After all, we don't do such a great job of this with humans.

Let's say we manage to get to AI physicians.  It's possible that they will become widely available but not be considered as "good" as human physicians, so that only the wealthy can afford the latter.  Or AIs could be seen as better, and the wealthy ensure that only they benefit from them, with everyone else "settling" for old-fashioned human physicians.

These are the kinds of societal issues the Stanford report urged that we think about.

One of the problems we'll face is that AIs may expose the amount of unnecessary care patients now get, as is widely believed.   They may also expose that many of the findings which guide treatment decisions are based on faulty or outdated research, as has been charged.  In short, AIs may reveal that the practice of medicine is, indeed, a very human activity, full of all sorts of human shortcomings.  

Perhaps expecting AIs to be as good as physicians is setting too low a bar.

Back to the original question: who would be at fault if care given by an AI causes harm?  Unlike with humans, an AI's mistakes are unlikely to happen because it didn't remember what to do, or because it was tired or distracted.  On the other hand, the self-generated algorithm it used to reach its decision may not be understandable to humans, so we may never know exactly what went "wrong."

Did it learn poorly, so the AI's creator is at fault?  Did it base its decisions on invalid data or faulty research, in which case their originators should be liable?  Did it not have access to the right precedents, in which case can we attach blame to anyone?  How would we even "punish" an AI?

Lawyers and judges, legislators and regulators will have plenty to work on.  Some of them may be AIs too.

Still, the scariest thing about AI isn't the implications we can imagine, no matter how disruptive they seem, but the unexpected ones that such technological advances inevitably bring about.  We may find that problems like licensing, malpractice, and job losses are the easy ones.  

Monday, September 5, 2016

To Err Is Human, To Diagnose AI?

A new study found that physicians have a surprisingly poor knowledge of the benefits and harms of common medical treatments.  Almost 80% overestimated the benefits, and two-thirds overestimated the harms.  And, as Aaron Carroll pointed out, it's not just that they were off, but "it's how off they often were."

Anyone out there who still doesn't think artificial intelligence (AI) is needed in health care?

The authors noted that previous studies have found that patients often overestimate benefits as well, but tend to minimize potential harms.  Not only do physicians overestimate harms, they "underestimate how often most treatments have no effects on patients -- either harmful or beneficial."  Perhaps this is, at least in part, because "physicians are poor at assessing treatment effect size and other aspects of numeracy."

The authors pointed out that "even when clinicians understand numeracy, expressing these terms in a way patients will understand is challenging."  When asked how often they discussed absolute or relative risk reduction, or the number needed to treat (NNT), with patients, 47% said "rarely" -- and a third said "never."
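For readers who don't live in those terms every day, here's a minimal sketch -- using made-up numbers, not figures from the study -- of how absolute risk reduction, relative risk reduction, and NNT relate to each other:

    # Toy example with made-up numbers: suppose a treatment lowers the
    # five-year event rate from 4% (untreated) to 3% (treated).
    control_risk = 0.04
    treated_risk = 0.03

    arr = control_risk - treated_risk   # absolute risk reduction: 0.01 (1 point)
    rrr = arr / control_risk            # relative risk reduction: 0.25 (25%)
    nnt = 1 / arr                       # number needed to treat: ~100 patients

    print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")

Same treatment, same data -- but "cuts your risk by 25%" sounds far more impressive than "100 people have to be treated for one of them to benefit," which is exactly why how these numbers get expressed to patients matters.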

Dr. Carroll's reaction to this: "I’m screaming in my office because I feel like it’s all I talk about."

An accompanying editorial called for more physician training, and also urged more use of visual representations of probabilities.  Better visual representation of data is certainly good, but one wonders why physicians should need more training in understanding what treatments have what kind of value to their patients.  Isn't that, in fact, the whole point of medical education?

So, who/what is good with numeracy, remembering statistics, and evaluating data?  There are math nerds, of course, but, instead of going to medical school, as they once might have, they're probably making fortunes working on Wall Street or trying to make billions with their tech start-ups.  Then, of course, there is AI.

An AI would know exactly what the documented benefits and risks of a treatment are, common or not, and could even produce a nifty graph to help illustrate them.

Probably the best-known AI in health care is IBM's Watson, but IBM definitely doesn't have the field to itself.  CB Insights recently profiled over 90 AI start-ups in healthcare, with over 55 equity funding rounds already this year (compared to 60 in 2015).  These run the gamut; CB Insights categorized them into: drug discovery, emergency room & hospital management, healthcare research, insights and risk management, lifestyle management & monitoring, medical imaging & diagnostics, mental health, nutrition, miscellaneous, wearables, and virtual assistants.

No wonder Frost & Sullivan projects this to be a $6.7b industry by 2025.

Take a look at some of AI's recent successes:

  • Researchers at Houston Methodist developed AI that improves breast cancer risk prediction, translating data "at 30 times human speed and with 99 percent accuracy."
  • Harvard researchers created AI that can differentiate breast cancer cells almost as well as pathologists (92% versus 96%) -- and, when used in tandem with humans, raised accuracy to 99.5%.
  • Stanford researchers developed AI that they believe beats humans in analyzing tissue cells for cancer, partly because it can identify far more traits that lead to a correct diagnosis than a human can.  No wonder some think AI may replace radiologists! 
  • A Belgian study found that AI can provide a "more accurate and automated" interpretation of tests for lung disease.
  • Watson recently diagnosed leukemia in a patient, a diagnosis that physicians had missed.  Watson had the benefit of comparing the patient's data to 20 million cancer records. 

AI is very good at looking at vast amounts of data very quickly, identifying patterns, and making predictions.  AI may lack humans' treasured intuition, but it has us beat when it comes to processing facts.  When it comes to something objective like "is this a cancer cell or not?" or "what are the documented risks/benefits of a treatment?", it's hard to argue against the value that AI can bring to health care.

Current iterations of AI are less truly "intelligent" than just really, really fast at what they do.  That is changing, though, as AI becomes less about what we program it to do and more about using "deep learning" to get smarter.  Deep learning essentially uses trial and error -- at almost incomprehensible speeds -- to figure out how to accomplish tasks.  It's how AI has gone from lousy at image recognition to comparable to -- or better than -- humans.
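For the curious, here's a deliberately toy sketch of that trial-and-error loop -- not how Watson or any of the systems above actually work, just the basic pattern: make a guess, measure the error, nudge the guess in whatever direction reduces it, and repeat a few thousand times.

    import random

    # Made-up data: the "right" answer is y = 2x, plus a little noise.
    data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(1, 11)]

    w = 0.0       # initial guess at the relationship
    lr = 0.001    # how big a nudge to make on each trial

    for step in range(1000):
        # How wrong is the current guess, and in which direction?
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad   # adjust the guess to reduce the error

    print(f"learned w = {w:.2f} (the true value was 2.0)")

Scale that loop up to millions of parameters and millions of examples, run it on specialized hardware, and you have the basic recipe behind the image-recognition gains described above.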

One of the dirty little secrets of health care is that much of our care is based on trial and error -- and that trial and error is often limited to our physician's personal training and experience.  If we're lucky, we have a physician who has seen lots of similar patients and is very well versed in the research literature, but he/she is still working with much less information than an AI could have access to -- even about one of the physician's own patients.
 
Our problem is going to be when we simply don't know what an AI did or why it reached the conclusion it did.  No human could have searched through the 20 million records Watson did to identify the leukemia, but at least it was a fairly objective result.  Someday soon AI will pull together seemingly unrelated facts to produce diagnoses or proposed treatments that we simply won't be able to follow, much less replicate.

We've had trouble getting state licensing boards to accept telemedicine, sometimes even when used by physicians they've licensed, much less when used by physicians from other states.  One can only imagine how they, or the FDA, are going to react to AIs who come up with diagnoses and treatments for reasons they can't explain to us.

Then again, many physicians might sometimes have the same problem.  How much of a higher standard should we hold AI to?  How much better do they have to be?

In the short term, AIs are likely to be tools to help expand physicians' capabilities; as IBM's Kyu Rhee says, "as ubiquitous as the humble stethoscope."  In the mid-term, they may be partners with physicians, adding value on an equal basis.  And in the not-too-distant future, they may be alternatives to, or even replacements for, physicians.  With AI's capabilities growing exponentially and computing growing ever cheaper, human physicians may become a costly luxury many can't -- or won't want to -- afford.

One suspects that AIs will be ready for us before we're ready for them.