Monday, April 25, 2022

Healthcare Suffers from Patient Bias

If you went to business school, or perhaps did graduate work in statistics, you may have heard of survivor bias (AKA, survivorship bias or survival bias).   To grossly simplify, we know about the things that we know about, the things that survived long enough for us to learn from.  Failures tend to be ignored -- if we are even aware of them. 

The famous "missing holes" image

This, of course, makes me think of healthcare.  Not so much about the patients who survive versus those who do not, but about the people who come to the healthcare system to be patients versus those who don’t. It has a “patient bias.”

Survivor bias has a great origin story, even if it may not be entirely true and probably gives too much credit to one person.  It goes back to World War II, to mathematician Abraham Wald, who was working in a high-powered classified program called the Statistical Research Group (SRG).

One of the hard questions SRG was asked was how best to armor airplanes.  It’s a trade-off: the more armor, the better the protection against anti-aircraft weapons, but the more armor, the slower the plane and the fewer bombs it can carry. They had reams of data about bullet holes in returning airplanes, so they (thought they) knew which parts of the airplanes were the most vulnerable.

Dr. Wald’s great insight was, wait -- what about all the planes that aren’t returning?  The ones whose data we’re looking at are the ones that survived long enough to make it back.  The real question was: where are the “missing holes”?  I.e., what did the data from the planes that did not return look like?

Credit: countbayesie.com

I won’t embarrass myself by trying to explain the math behind it, but essentially what they had to do was figure out how to estimate those missing holes in order to get a more complete picture.  The places with the most bullet holes aren’t the areas that need more armor, because, obviously, planes can absorb hits to those parts and still make it back. It turned out that armoring the engines was the best bet.  The military took his advice, saving countless pilots’ lives and helping shorten the war.

That, my friends, is genius – not so much the admittedly complicated math as simply recognizing that there were “missing holes” that needed to be accounted for.
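To make the concept concrete, here is a minimal, purely illustrative simulation of survivorship bias; the areas, hit counts, and probabilities are all invented for the sketch, not Wald’s actual data. Planes whose engines are hit rarely make it home, so the bullet holes counted on returning planes systematically understate how often engines get hit.

```python
import random

# Toy survivorship-bias simulation (illustrative numbers only, not Wald's data).
# Hits land uniformly across three areas, but a hit to the engine is far more
# likely to bring the plane down, so engine hits are scarce among the survivors.
random.seed(42)

AREAS = ["fuselage", "wings", "engine"]
DOWN_PROBABILITY = {"fuselage": 0.05, "wings": 0.05, "engine": 0.60}  # per hit

hits_on_survivors = {a: 0 for a in AREAS}
hits_on_all_planes = {a: 0 for a in AREAS}

for _ in range(10_000):  # 10,000 sorties
    hits = [random.choice(AREAS) for _ in range(random.randint(1, 5))]
    survived = all(random.random() > DOWN_PROBABILITY[a] for a in hits)
    for area in hits:
        hits_on_all_planes[area] += 1
        if survived:
            hits_on_survivors[area] += 1

print("Holes counted on returning planes:", hits_on_survivors)
print("Hits actually taken by all planes: ", hits_on_all_planes)
# The engine takes roughly a third of all hits, yet it barely shows up in the
# survivors' data -- those are the "missing holes" Wald was asking about.
```

Run it and the returning-plane counts look reassuringly low for the engine, even though the engine was hit about as often as everything else; armor decisions based only on the survivors would protect exactly the wrong places.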

Jordan Ellenberg, in his How Not To Be Wrong, offered another example of survivor bias – comparing mutual funds’ performance.  You might compare performance over, say, ten years:

But something’s missing: the funds that aren’t there. Mutual funds don’t live forever. Some flourish, some die. The ones that die are, by and large, the ones that don’t make money. So judging a decade’s worth of mutual funds by the ones that still exist at the end of the ten years is like judging our pilots’ evasive maneuvers by counting the bullet holes in the planes that come back.

Healthcare has lots of data.  Every time you interact with the healthcare system you’re generating data for it.  The system has more data than it knows what to do with, even in a supposed era of Big Data and sophisticated analytics.  It has so much data that its biggest problem is usually said to be the lack of sharing that data, due to interoperability barriers/reluctance.

I think the bigger problem is the missing data. 


Take, for example, the problem with clinical trials, the gold standard in medical research.  We’ve become aware over the last few years that results from clinical trials may be valid if you are a white man, but otherwise, not so much.  A 2018 FDA analysis of drug trials found that white people made up 67% of the population but 83% of research participants; women are 51% of the population but only 38% of trial participants. There’s important data that clinical trials are not generating.

Or think about side effects of drugs or medical devices.  It’s bad enough that the warning labels list so many possible ones, without any real ranking of their likelihood; what’s worse is that those are only the ones reported by clinical trial participants or others who took the initiative to contact the FDA or the manufacturer.  Where are the “missing reports,” from people who didn’t attribute them to the drug/device, who didn’t know how or didn’t take the initiative to make a report, or who were simply unable to?

Physicians often try to explain to prospective patients how they might fare post-treatment (e.g., surgery or chemo), but do they really know?  They know what patients report during scheduled follow-up visits, or if patients were worried enough to warrant a call, but otherwise, they don’t really know. As my former colleague Jordan Shlain, MD, preaches: “no news isn’t good news; it’s just no news.”

The healthcare system is, at best, haphazard about tracking what happens to people after they engage with it.

Most important, though, is data on what happens outside the healthcare system. The healthcare system tracks data on people who are patients, not on people when they aren’t.  We’re not looking at the people when they don’t need health care; we’re not gathering data on what it means to be healthy.  I.e., the “missing patients.”

Not everyone is a patient, or all the time. Credit: digitalhealth.net

Our healthcare system’s baseline should be people while they are healthy – understanding what that is and how they achieve it.  Then it needs to understand how that changes when they’re sick or injured, and especially how their interactions with the healthcare system improve or impede their return to that health.

We’re a long way from that.  We’ve got too many “missing holes,” and, like the WWII military experts, we don’t even realize we’re missing them. We need to fill in those holes. We need to fix the patient bias.

Healthcare has a lot of people who, figuratively, make airplanes and many others who want to sell us more armor.  But we’re the pilots, and those are our lives on the line. We need an Abraham Wald to look at all of the data, to understand all of our health and all of the various things that impact it. 

It’s 2022. We have the ability to track health in people’s everyday lives.  We have the ability to analyze the data that comes from that tracking. It’s just not clear who in the healthcare system has the financial interest to collect and analyze all that data.  Therein lies the problem.

We’re the “missing holes” in healthcare.

Monday, April 18, 2022

We Love Innovation. Don't we?

America loves innovation.  We prize creativity.  We honor inventors.  We are the nation of Thomas Edison, Henry Ford, Jonas Salk, Steve Jobs, and Steven Spielberg, to name a few luminaries.   Silicon Valley is the center of the tech world, Hollywood sets the cultural tone for the world, Wall Street is preeminent in the financial world.   Our intellectual property protection for all that innovation is the envy of the world.


But, as it turns out, maybe not so much. If there’s any doubt, just look at our healthcare system. 

---------

Matt Richtel writes in The New York Times, “We Have a Creativity Problem.”  He reports on research from Katz, et al. that analyzes not just what we say about creative people, but our implicit impressions and biases about them.  Long story short, we may say people are creative, but that doesn’t mean we like them or would want to hire them, and how creative we think they are depends on what they are creative about.

“People actually have strong associations between the concept of creativity and other negative associations like vomit and poison,” Jack Goncalo, a business professor at the University of Illinois at Urbana-Champaign and the lead author on the new study, told Mr. Richtel.

Vomit and poison? 

A previous (2012) study by the same team focused on why we say we value creativity but often reject creative ideas.  “We have an implicit belief the status quo is safe,” Jennifer Mueller, a professor at the University of San Diego and a lead author on the 2012 paper, told Mr. Richtel. “Novel ideas have almost no upside for a middle manager — almost none. The goal of a middle manager is meeting metrics of an existing paradigm.”

You’ve been there.  You’ve seen that.  You’ve probably blocked a few creative ideas yourself.

The 2012 research pointed out: “Our findings imply a deep irony.  Prior research shows that uncertainty spurs the search for and generation of creative ideas, yet our findings reveal that uncertainty also makes us less able to recognize creativity, perhaps when we need it most.”  Moreover, “people may be reluctant to admit that they do not want creativity; hence, the bias against creativity may be particularly slippery to diagnose.”

In the new study, participants were given two identical descriptions of a potential job candidate, except that one candidate had demonstrated creativity in designing running shoes and the other in designing sex toys (the researchers note: “the pornography industry plays a significant role in the refinement, commercialization, and broad dissemination of innovative new technologies”).  The participants explicitly rated the latter candidate as less creative, although their implicit ratings of the two were equal.

The researchers concluded: 

Collectively, the findings strongly support our contention that implicit impressions of creativity can readily form, be differentiated from a traditional explicit measure, and uniquely predict downstream judgment, such as hiring decisions, that might be relevant in an organizational context.

This matters, they say, because: “the findings of study 4 seem to square with real world examples of highly creative people who were ignored until well after their death because their work was too controversial in its time to be recognized as a creative contribution…”

Umm, anyone remember Ignaz Semmelweis?

As the researchers warn: “The results of study 4 merely hint at the possibilities that await in many other embarrassing, stigmatized, or controversial domains within which people might choose to do their most creative work but that their peers (and creativity researchers) might fear to tread.”

E.g., if you’ve done your creative work in the health insurance space, that doesn’t necessarily buy you much credibility in the rest of the health care world – or if you’ve demonstrated your creativity in health tech, the rest of the tech world may still doubt your skills.

--------

Well, at least our patent system, which protects intellectual property and helps foster innovation, works, right?  It fosters and incents innovation, doesn’t it?  Again, not so much.  A New York Times editorial charges: “The United States Patent and Trademark Office is in dire need of reform.”  In the current Patent Office system, the Editorial Board asserts, “not only is legal trickery rewarded and the public’s interest overlooked, but also innovation — the very thing that patents were meant to foster — is undermined.”

Credit: USPTO

If there’s any doubt, just look at the price of insulin, which has been propped up by patent “innovations” that keep its price high after a hundred years.  “When it comes to protecting a drug monopoly,” The Times says, not limiting those monopolies to insulin, “it seems no modification is too small.” 

The Patent Office, The Times suggests, needs to ensure that an invention is “truly novel and nonobvious, it must be described in enough detail for a reasonably qualified person to build and use it, and it must actually work.”  It also needs to challenge “bad patents,” such as those from so-called patent trolls, which exist not to innovate but to extort money from actual innovators.

The U.S. is still, by far, the leader in patents granted, but not in scientific research papers, nor in R&D spending per capita or as a percentage of GDP, which makes one wonder what all those patents are for.

-----------

Healthcare desperately needs innovation.  No one can dispute that; not anyone working in it, not anyone receiving care from it, not anyone who has had any exposure to it.  But healthcare also has a lot of middle managers, and middlemen, and, as Professor Mueller said, “Novel ideas have almost no upside for a middle manager.” 

Even worse, healthcare is always teetering on the edge of uncertainty – where’s the funding coming from, how much, what health crisis is coming, what’s the government going to do next?  The forces causing all that uncertainty should be driving innovation, but, as Professor Mueller’s 2012 research also found, “…uncertainty also makes us less able to recognize creativity.”  We have blind spots about what creativity is, who creative people are, and when and how we should incorporate them into our organizations.

Right now, healthcare thinks that EHRs and digital health – whatever that might actually be -- qualify as innovation.  That’s enough, it believes; those are forcing change in ways and at a pace healthcare is not used to and is not comfortable with.

Too bad.

It has been said that if your company has an innovation department, it’s not innovative. If it has middle managers deciding which novel ideas get pursued, don’t expect real innovation.  If it is ruling out hiring people who worked on unusual projects (think sex toys), it’s rejecting creativity. 

Your biases against creativity may (not) be showing.

Monday, April 11, 2022

DALL-E 2, Draw an AI Doctor

I can’t believe I somehow missed it when OpenAI introduced DALL-E in January 2021 – a neural network that could “generate images from text descriptions” -- so I’m sure not going to miss it now that OpenAI has unveiled DALL-E 2.  As they describe it, “DALL-E 2 is a new AI system that can create realistic images and art from a description in natural language.”  The name, by the way, is a playful combination of the animated robot WALL-E and the idiosyncratic artist Salvador Dalí.

Credit: DALL-E 2/OpenAI

This is not your father’s AI.  If you think it’s just about art, think again.  If you think it doesn’t matter for healthcare, well, you’ve been warned.

Here are further descriptions of what OpenAI is claiming:

  • “DALL·E 2 can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.
  • DALL·E 2 can make realistic edits to existing images from a natural language caption. It can add and remove elements while taking shadows, reflections, and textures into account.
  • DALL·E 2 can take an image and create different variations of it inspired by the original.”

Here’s their video:



I’ll leave it to others to explain exactly how it does all that, aside from saying it uses a process called diffusion, “which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.”  The end result is that, relative to DALL-E, DALL-E 2 “generates more realistic and accurate images with 4x greater resolution.” 
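For intuition only, here is a tiny sketch of that idea under some loud assumptions: it starts from random dots and removes a fraction of an estimated noise at each step. In the real system the noise estimate comes from a large trained neural network conditioned on the text prompt; here it is replaced by a stand-in that already knows the target image, so this is a cartoon of reverse diffusion, not how DALL-E 2 actually works.

```python
import numpy as np

# Cartoon of the reverse-diffusion loop: begin with pure noise and repeatedly
# subtract a portion of the estimated noise until an image emerges.
rng = np.random.default_rng(0)

target = np.zeros((8, 8))        # stand-in "image": a bright square on a dark field
target[2:6, 2:6] = 1.0

x = rng.normal(size=(8, 8))      # step 0: a pattern of random dots

STEPS = 50
for t in range(STEPS):
    # In DALL-E 2 a trained network predicts the noise; this toy computes it
    # directly from the known target, which is why it is only an illustration.
    predicted_noise = x - target
    x = x - (1.0 / (STEPS - t)) * predicted_noise   # remove a little noise each step

print(np.round(x, 2))            # after the final step, x matches the target
```

The point of the sketch is the shape of the loop -- noise in, image out, one small correction at a time; everything interesting in the real model lives in how that noise estimate is learned and conditioned on language.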

Devin Coldewey, writing in TechCrunch, marvels:

It’s hard to overstate the quality of these images compared with other generators I’ve seen. Although there are almost always the kinds of “tells” you expect from AI-generated imagery, they’re less obvious and the rest of the image is way better than the best generated by others.

OK, it’s true that DALL-E isn’t coming up with the ideas for art on its own, but it is creating never-seen-before images, like a koala bear dunking or Mona Lisa with a mohawk.  If that’s not AI being creative, it’s close.

-----------

Sam Altman, OpenAI’s CEO, had a blog post with several interesting thoughts about DALL-E 2.  He starts out by saying: “For me, it’s the most delightful thing to play with we’ve created so far. I find it to be creativity-enhancing, helpful for many different situations, and fun in a way I haven’t felt from technology in a while.”  I’m a big believer in Steven Johnson’s maxim that the future is where people are having the most fun, so that really hit home for me.

Mr. Altman outlines six things he believes are noteworthy about DALL-E 2:

  1. “This is another example of what I think is going to be a new computer interface trend: you say what you want in natural language or with contextual clues, and the computer does it.
  2. It sure does seem to “understand” concepts at many levels and how they relate to each other in sophisticated ways.
  3. Although I firmly believe AI will create lots of new jobs, and make many existing jobs much better by doing the boring bits well, I think it’s important to be honest that it’s increasingly going to make some jobs not very relevant (like technology frequently does)
  4. A decade ago, the conventional wisdom was that AI would first impact physical labor, and then cognitive labor, and then maybe someday it could do creative work. It now looks like it’s going to go in the opposite order.
  5. It’s an example of a world in which good ideas are the limit for what we can do, not specific skills.
  6. Although the upsides are great, the model is powerful enough that it's easy to imagine the downsides.”

On that last point, OpenAI has sharply restricted what images DALL-E has been trained on and who has access to it; it watermarks each image it generates, reviews all images generated, and restricts the use of real individuals’ faces.  They recognize the potential for abuse.  Oren Etzioni, chief executive of the Allen Institute for AI, warned The New York Times: “There is already disinformation online, but the worry is that this scales disinformation to new levels.”

Mr. Altman indicated that there might be a product launch this summer, with broader access, but Mira Murati, OpenAI’s head of research, was firm: “This is not a product. The idea is to understand capabilities and limitations and give us the opportunity to build in mitigation.”

Credit: OpenAI

OpenAI algorithms researcher Prafulla Dhariwal told Fast Company: “Vision and language are both key parts of human intelligence; building models like DALL-E 2 connects these two domains. It’s a very important step for us as we try to teach machines to perceive the world the way humans do, and then eventually develop general intelligence.”

As their video says, “DALL-E helps humans understand how advanced AI systems see and understand our world.”

------------

I don’t have any artistic skill whatsoever, but, as Mr. Altman suggested, we’re building towards “a world in which good ideas are the limit for what we can do, not specific skills.” In that world, as Mr. Altman also suggested, AI may do creative and cognitive work before physical labor.  We’ve already met Ai-Da, an AI-driven “robot artist,” and we’re going to see other examples of creative AI.

OpenAI already has OpenAI Codex, an “AI system that can convert natural language to code.”  There are AI tools that can write, including one powered by OpenAI, and ones that can compose music.   

And, of course, Google has a host of AI initiatives specifically oriented towards health. 

Healthcare in general, and the practice of medicine in particular, has long been seen as a uniquely human endeavor.  Its practitioners claim it is a blend of art and science, not easily reducible to computer code.  Even if healthcare finally acknowledges that AI is good at, say, reading radiology images, it insists that is still a long way from diagnosing patients with their complex situations, much less advising or comforting them.

Perhaps we should ask DALL-E 2 to draw them a picture of what that might look like.


Monday, April 4, 2022

If You've Seen One Robot -- Wait, What?

We think we know robots, from the old-school Robby the Robot to the beloved R2-D2/C-3PO to the acrobatic Boston Dynamics robots or the very human-like Westworld ones.   But you have to love those scientists: they keep coming up with new versions, ones that shatter our preconceptions.  Two in particular caught my attention, in part because both expect to have health care applications, and in part because of how they’re described.

Hint: the marketing people are going to have some work to do on the names.

Yep, that's a robot. Credit: Syun, et al.

-----------

Let’s start with the robot called by its creators – a team at The Chinese University of Hong Kong -- a “magnetic slime robot,” which some in the press have referred to as a “magnetic turd robot” (see what I mean about the names?).  It has what are called “visco-elastic properties,” which co-creator Professor Li Zhang explained means “sometimes it behaves like a solid, sometimes it behaves like a liquid…When you touch it very quickly it behaves like a solid. When you touch it gently and slowly it behaves like a liquid.”

The slime is made from a polymer called polyvinyl alcohol, borax, and particles of neodymium magnet. The magnetic particles allow it to be controlled by other magnets, but also are toxic, so researchers added a protective layer of silica, which would, in theory, allow it to be ingested (although Professor Zhang warned: “The safety [would] also strongly depend on how long you would keep them inside of your body.”). 

The big advantage of the slime is that it can easily deform and travel through very tight spaces.  The researchers believe it is capable of “grasping solid objects, swallowing and transporting harmful things, human motion monitoring, and circuit switching and repair.”  It even has self-healing properties.

Watch it in action:



In the video, among other tasks, the slime surrounds a small battery; the researchers envision using the slime to assist when someone swallows one.  “To avoid toxic electrolytes leak[ing] out, we can maybe use this kind of slime robot to do an encapsulation, to form some kind of inert coating,” Professor Zhang said.

As fate would have it, the news of the discovery broke on April 1st, leading some to think it was an April Fool’s joke, which the researchers insist it is not.  Others have compared the magnetic slime to Flubber or Venom, but we’ll have to hope we make better use of it.

It is not yet autonomous, so some would argue it is not actually a robot, but Professor Zhang insists, “The ultimate goal is to deploy it like a robot.” 

----------

If magnetic slime/turd robots don’t do it for you, how about a “magnetic tentacle robot” (which some have deemed a “snakelike” robot)?  This one comes from researchers at the STORM Lab at the University of Leeds.  The STORM Lab’s mission is:

We strive to enable earlier diagnosis, wider screening and more effective treatment for life-threatening diseases such as cancer…We do so by creating affordable and intelligent robotic solutions that can improve the quality of life for people undergoing flexible endoscopy and laparoscopic surgery in settings with limited access to healthcare infrastructures.

In this particular case, rather than using traditional bronchoscopes, which might have a diameter of 3.5 – 4 millimeters and which are guided by physicians, the magnetic tentacle robot offers a smaller, more flexible, and autonomous option.  Professor Pietro Valdastri, the STORM Lab Director, explained:

A magnetic tentacle robot or catheter that measures 2 millimetres and whose shape can be magnetically controlled to conform to the bronchial tree anatomy can reach most areas of the lung, and would be an important clinical tool in the investigation and treatment of possible lung cancer and other lung diseases.   

Moreover, “Our system uses an autonomous magnetic guidance system which does away with the need for patients to be X-rayed while the procedure is carried out.” A patient-specific route, based on pre-operative scans, would be programmed into the robotic system.  It could then inspect suspicious lesions or even deliver drugs.

Dr. Cecilia Pompili, a thoracic surgeon who was a member of the team, says: “This new technology will allow [us] to diagnose and treat lung cancer more reliably and safely, guiding the instruments at the periphery of the lungs without the use of additional X-rays.”

Watch it in action: