Monday, October 30, 2023

Rube Goldberg Would Be Proud

Larry Levitt and Drew Altman have an op-ed in JAMA Network with the can’t-argue-with-that title Complexity in the US Health Care System Is the Enemy of Access and Affordability. It draws on a June 2023 Kaiser Family Foundation survey about consumer experiences with their health insurance. Long story short: although – surprisingly – over 80% of insured adults rate their health insurance as “good” or “excellent,” most admit they have difficulty both understanding and using it. And people in fair or poor health, who presumably use health care more, report more problems.

Health insurance is the target in this case, and it is a fair target, but I’d argue that you could pick almost any part of the healthcare system with similar results. Our healthcare system is a perfect example of a Rube Goldberg machine, which Merriam-Webster defines as “accomplishing by complex means what seemingly could be done simply.”

Boy howdy.

Bing's idea of a healthcare Rube Goldberg machine. Credit: Bing

Health insurance is many people’s favorite villain, one that many would like to do without (especially doctors), but let’s not stop there. Healthcare is full of third parties/intermediaries/middlemen, which have led to the Rube Goldberg structure.

CMS doesn’t pay any Medicare claims itself; it hires third parties – Medicare Administrative Contractors (formerly known as intermediaries and carriers). So do employers who are self-insured (which is the vast majority of private health insurance), hiring third party administrators (who may sometimes also be health insurers) to do network management, claims payment, eligibility and billing, and other tasks.

Even insurers or third party administrators may subcontract to other third parties for things like provider credentialing, utilization review, or care management (in its many forms). Take, for example, the universally reviled PBMs (pharmacy benefit managers), who have carved out a big niche providing services between payors, pharmacies, and drug companies while raising increasing questions about their actual value.

Physician practices have long outsourced billing services. Hospitals and doctors didn’t develop their own electronic medical records; they contracted with companies like Epic or Cerner. Health care entities had trouble sharing data, so along came HIEs – health information exchanges – to help move some of that data (and HIEs are now transitioning to QHINs – Qualified Health Information Networks – due to TEFCA).

And now we’re seeing a veritable Cambrian explosion of digital health companies, each thinking it can take some part of the health care system, put it online, and perhaps make some part of the healthcare experience a little less bad.  Or, viewed from another perspective, add even more complexity to the Rube Goldberg machine.  

On a recent THCB Gang podcast, we discussed HIEs. I agreed that HIEs had been developed for a good reason, and had done good work, but in this supposed era of interoperability they should be trying to put themselves out of business. 

HIEs identified a pain point and found a way to make it a little less painful. Not to fix it, just to make it less bad. The healthcare system is replete with intermediaries that have workarounds which allow our healthcare system to lumber along.  But once in place, they stay in place. Healthcare doesn’t do sunsetting well.

Unlike a true Rube Goldberg machine, though, there is no real design for our healthcare system. It’s more like evolution, where there are no style points, no efficiency goals, just credit for survival.  Sure, sometimes you get a cat through evolution, but other times you get a naked mole rat or a hagfish. Healthcare has a lot more hagfish than cats.

Yep, that's hagfish. Credit: Bing

I’m impressed with the creativity of many of these workarounds, but I’m awfully tired of needing them. I’m awfully tired of accepting that complexity is inherent in our healthcare system. Complexity is bad for patients, bad for the people directly giving the care, and only good for all the other people/entities who make a living in healthcare because of it. Instead of making pain points less painful, we should be getting rid of them.

If we had a magic wand, we could remake our healthcare system into something much simpler, much more effective, and much less expensive. Unfortunately, we not only don’t have such a magic wand, we don’t even agree on what that system should look like. We’ve gotten so used to the complex that we can no longer see the simple.

I don’t have a Utopian vision of a healthcare system that would solve all the problems of our system, but I do have some suggestions for all the innovators in healthcare:

  • If your solution makes patients fill out one more form, log into one more portal, make one more phone call, please reconsider.
  • If your solution takes time with patients away from clinicians, making them do other tasks instead, please reconsider.
  • If your solution doesn’t create information that is going to be shared to help patients or clinicians, please reconsider.
  • If your solution only focuses on a point-in-time, rather than helping an ongoing process, please reconsider.
  • If your solution is designed to increase revenue rather than to improve health, please reconsider.
  • If your solution doesn’t recognize, acknowledge, report and act on failures/mistakes/errors, please reconsider.
  • If your solution can’t simply be explained to a layman, please reconsider.
  • If your solution adds to the healthcare system without reducing/eliminating the need for something even bigger in the system, please reconsider.
  • If your solution steers care to certain clinicians, in certain places, rather than seeking the best care for the patient in the best place, please reconsider.
  • If your solution adds costs to the healthcare system without uniquely and specifically reducing even more costs, please reconsider.
  • If your solution doesn’t have built-in mechanisms (e.g., use of A.I.) to be and stay current on an ongoing basis, please reconsider.

I’m sure all those innovators think their idea is very clever, and many are, but remember: just because an idea is clever doesn’t mean it’s not Rube Goldbergian. They need to step back and think about whether they’re adding to healthcare’s Rube Goldberg machine or helping simplify it. My bet is that usually they’re adding to it.

So, yeah, I agree with Mr. Levitt and Dr. Altman that health insurance should be less complex.   Just like everything else in the healthcare system. Let’s start taking the healthcare Rube Goldberg machine apart.

Monday, October 23, 2023

Y2Q and You

Chances are, you’re at least somewhat concerned about your privacy, especially your digital privacy. Chances are, you’re right to be. Every day, it seems, there are more reports about data breaches, cyberattacks, and selling or other misuse of confidential/personal data. We talk about privacy, but we’re failing to adequately protect it. But chances are you’re not worried nearly enough.

Y2Q is coming. 

Ready or not, quantum computers are coming. Credit: Bing

That is, I must admit, a phrase I had not heard of until recently. If you are of a certain age, you’ll remember Y2K, the fear that the year 2000 would cause computers everywhere to crash. Businesses and governments spent countless hours and huge amounts of money preparing for it. Y2Q is an event that is potentially just as catastrophic as we feared Y2K would be, or worse. It is when quantum computing reaches the point that will render our current encryption measures irrelevant.

The trouble is, unlike Y2K, we don’t know when Y2Q will be. Some experts fear it could be before the end of this decade; others think the middle or latter part of the 2030s. But it is coming, and when it comes, we’d better be ready.

Without getting deeply into the encryption weeds – which I’m not capable of doing anyway – most modern encryption relies on the difficulty of factoring unreasonably large numbers – so large that even today’s supercomputers would need hundreds of years to factor them. But quantum computers will take a quantum leap in speed, making factoring such numbers trivial. In an instant, all of our personal data, corporations’ intellectual property, even national defense secrets, would be exposed.
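For readers who want a feel for why factoring is the linchpin, here is a toy sketch (not real cryptography – real RSA moduli are about 2048 bits, far beyond trial division) illustrating why classical factoring effort explodes with key size while Shor’s algorithm, on a sufficiently large quantum computer, scales only polynomially:

```python
import math

def trial_division_factor(n: int) -> tuple[int, int]:
    """Classically factor a semiprime n = p * q by trial division.

    The loop does on the order of sqrt(n) divisions, so every extra bit
    in n multiplies the work by ~1.4; at 2048 bits the effort is
    astronomically out of reach for classical machines. Shor's quantum
    algorithm, by contrast, factors n in time polynomial in its bit
    length -- which is exactly the Y2Q threat.
    """
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

# A toy 16-bit "modulus" built from two small primes, 241 and 251:
p, q = trial_division_factor(241 * 251)
print(p, q)  # -> 241 251
```

At toy sizes this runs instantly; the point is the growth rate, not the loop itself.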

“Quantum computing will break a foundational element of current information security architectures in a manner that is categorically different from present cybersecurity vulnerabilities,” warned a report by The RAND Corporation last year.

“This is potentially a completely different kind of problem than one we’ve ever faced,” Glenn S. Gerstell, a former general counsel of the National Security Agency, told The New York Times.  “If that encryption is ever broken,” warned mathematician Michele Mosca in Science News, “it would be a systemic catastrophe. The stakes are just astronomically high.”

The World Economic Forum thinks we should be taking the threat very seriously. In addition to the uncertain deadline, it warns that the solutions are not quite clear, the threats are primarily external instead of internal, the damage might not be immediately visible, and dealing with it will need to be an ongoing effort, not a one-time fix.

Even worse, cybersecurity experts fear that some bad actors – think nation-states or cybercriminals – are already scooping up troves of encrypted data, simply waiting until they possess the necessary quantum computing to decrypt it. The horse may be out of the barn before we reinforce that barn.

It’s not that experts aren’t paying attention. For example, the National Institute of Standards and Technology has been studying the problem since the 1990s, and is currently finalizing three encryption algorithms designed specifically to counter quantum computers. Those are expected to be ready by 2024, with more to follow. “We’re getting close to the light at the end of the tunnel, where people will have standards they can use in practice,” said Dustin Moody, a NIST mathematician and leader of the project.

Credit: J. Wang/NIST and Shutterstock          
Also, last December President Biden signed the Quantum Computing Cybersecurity Preparedness Act, which requires federal agencies to identify where encryption will need to be upgraded. There is a National Quantum Initiative, and the CHIPS Act also boosts federal investment in all things quantum. Unfortunately, migrating to new standards could take a decade or more.

But all this still requires that companies do their part in getting ready, soon enough. Dr. Vadim Lyubashevsky, a cryptography researcher at IBM Research, urged:

…it’s important for CISOs and security leaders to understand quantum-safe cryptography. They need to understand their risk and be able to answer the question: what should they prioritize for migration to quantum-safe cryptography? The answer is often critical systems and data that need to be kept for the long term; for example, healthcare, telco, and government-required records.

Similarly, The Cybersecurity and Infrastructure Security Agency (CISA) emphasized: “Organizations with a long secrecy lifetime for their data include those responsible for national security data, communications that contain personally identifiable information, industrial trade secrets, personal health information, and sensitive justice system information.”

If all that isn’t scary enough, it’s possible that no encryption scheme will defeat quantum computers. Stephen Ornes, writing in MIT Technology Review, points out:

Unfortunately, no one has yet found a single type of problem that is provably hard for computers—classical or quantum—to solve…history suggests that our faith in unbreakability has often been misplaced, and over the years, seemingly impenetrable encryption candidates have fallen to surprisingly simple attacks. Computer scientists find themselves at a curious crossroads, unsure of whether post-quantum algorithms are truly unassailable—or just believed to be so. It’s a distinction at the heart of modern encryption security. 

And, just to rub it in, if you’ve already been worried about artificial intelligence taking our jobs, or at least greatly boosting the cybersecurity arms race, well, think about AI on quantum computers, communicating over a quantum internet – “you have a potentially just existential weapon for which we have no particular deterrent,” Mr. Gerstell also told NYT.   

---------

Healthcare is rarely a first mover when it comes to technology. It usually waits until economic or legal imperatives force it to adopt something. Nor has it been good about protecting our data, despite HIPAA and other privacy laws. It has often made it too hard for those who need the data to access it, while failing to protect it from external entities that want to do bad things with it.

So I don’t expect healthcare to be an early adopter of quantum computing. But I think we all should be demanding that our healthcare organizations be cognizant of the threat to privacy that quantum computing poses. We don’t have twenty years to prepare for it; we may not even have ten. The ROI on such preparation may be hard to justify, but the risk of not investing enough, soon enough, is, as Professor Mosca warned, catastrophic.

Y2Q is coming for healthcare, and for you.


Monday, October 16, 2023

Goodwill's Lessons for Healthcare

The New York Times had an interesting profile this weekend about how Goodwill Industries is trying to revamp its online presence – transitioning from its legacy ShopGoodwill.com to a new platform, GoodwillFinds – amid numerous other online resellers. It zeroed in on the key distinction Goodwill has:

But Goodwill isn’t doing this just because it wants to move into the 21st century. More than 130,000 people work across the organization, while two million people received assistance last year through its programs, which include career navigation and skills training. Those opportunities are funded through the sales of donated items.

Moreover, the article continued: “Last year, Goodwill helped nearly 180,000 people through its job services.”

Credit: Goodwill Industries

In case you weren’t aware, Goodwill has long had a mission of hiring people who otherwise face barriers to employment, such as veterans, those who lack job experience or educational qualifications, or those who have disabilities. As its mission statement says, it “works to enhance the dignity and quality of life of individuals and families by strengthening communities, eliminating barriers to opportunity, and helping people in need reach their full potential through learning and the power of work.”

As PYMNTS wrote earlier this month: “Every purchase made through GoodwillFinds initiates a chain reaction, providing job training, resume assistance, financial education, and essential services to individuals in need within the community where the item was contributed.” 

I want healthcare to have that kind of commitment to patients.

Healthcare claims to be all about patients. You won’t find many that openly talk about profits or return on equity. Reading the mission statements of healthcare organizations yields the kinds of pronouncements one might expect. A not-entirely-random sample:

Cleveland Clinic: “to be the best place for care anywhere and the best place to work in healthcare.”

HCA: “committed to the care and improvement of human life...dedicated to giving people a healthier tomorrow.”

Kaiser Permanente: “to provide high-quality, affordable health care services and to improve the health of our members and the communities we serve.”

United Healthcare: “to help people live healthier lives and make the health system work better for everyone.”

UPMC: “Serve our communities by providing outstanding patient care.” 

There’s a lot about care, some about health more generally, but not so much about helping people reach their full potential.  That’s someone else’s job, some other organizations’ missions. That seems like something important that’s missing.

Credit: Bing

One of the things I’ve valued about Twitter – er, make that “X” – is getting to know more in the health community, or rather, communities. One of those that has been most rewarding is learning more about the people whose experiences in the healthcare system have made them vigorous advocates for patients – themselves and others.

At the risk of overlooking many worthy efforts, they do things like fighting for patient information privacy and access to one’s own health data, helping patients navigate the healthcare system, ensuring patients are represented in clinical trial design and at healthcare conferences, and empowering peer-to-peer health. I’m leaving many others out; the breadth and scope of, and passion for, their efforts are breathtaking.

Too often, in the healthcare system, patients are people to whom things are done. They may – although not always – be in their “best interest,” but they have not generally been true partners.  Making their lives, not just their health, better has not been the mission.  Involving them, asking them, deferring to them – no, that’s not the tradition.

When your healthcare conference has panels of “experts” that don’t include the people getting care, it’s not about patients. When your board is heavy on clinicians and donors but light on patients, your organization is not about patients. When your company develops drugs but doesn’t heavily involve the kinds of people who will be using those drugs, it’s not about patients.

And when your healthcare organization sues former patients or sends them to collection, that’s not about the patients’ best interests.

Here’s where I compare Goodwill to healthcare.  Where are the healthcare organizations that actively seek to hire patients?  Where are the healthcare organizations that recognize that the care some patients received may make resuming their former jobs/lives difficult or impossible, and seek to hire them or retrain them? 

E.g., instead of suing those patients who can’t pay their bills, hire them, so that they can earn a living that allows them to. Or, at least, help guide them into other jobs that will.

Most healthcare organizations are led by executives with impressive business and/or clinical backgrounds, but I’ll posit this: ones led by people who have experienced, or are currently experiencing, significant health issues of their own would be very different from those that are not. Personal familiarity with receiving health care should be as much of a prerequisite for healthcare executives as an M.D. or MBA.

Credit: Bing
Perhaps your healthcare organization has a “patient experience” officer; well, congratulations. But if that person isn’t actually a patient, just having someone in the role merits barely a passing grade. Moreover, there isn’t a singular “patient experience.” A woman with breast cancer has a different experience than, say, a man with a heart condition – or than a man with breast cancer, for that matter. Getting that “patient experience” right is tough stuff.

Still, we can try to do better.

Now, I don’t want to ignore that Goodwill isn’t some idyllic organization. It’s been accused of excessive executive compensation, of underpaying disabled workers, and even of having unsafe working conditions. Some of those charges may be misinformation, but it – and Goodwill isn’t really even an “it”; it’s a collection of independent organizations – isn’t perfect. I just don’t see many healthcare organizations that aren’t living in their own glass houses, and they are in no position to throw any stones. Goodwill has a broader view of making people’s lives better than healthcare organizations do.

I admire Goodwill’s commitment to hiring people whom other organizations don’t, and to helping others to be better prepared to find work elsewhere.  Healthcare organizations too often wash their hands of people once they are no longer “patients.”  They need a more holistic view of the people they serve, and they need more of those people’s perspectives. 

Healthcare – stop thinking of people as simply patients and start treating them as people. 

Monday, October 9, 2023

There Needs to Be an "AI" in "Med Ed"

It took some time for the news to percolate to me, but last month the University of Texas at San Antonio announced that it was creating the “nation’s first dual program in medicine and AI.” That sure sounds innovative and timely, and there’s no question that medical education, like everything else in our society, is going to have to figure out how to incorporate AI. But, I’m sorry to say, I fear UTSA is going about it in the wrong way.

AI teaching medical school students. Credit: Bing

UTSA has created a five-year program that will result in graduates obtaining an M.D. from UT Health San Antonio and a Master of Science in Artificial Intelligence (M.S.A.I.) from UTSA. Students will take a “gap year” between the third and fourth year of medical school to get the M.S.A.I. They will take two semesters of AI coursework, completing a total of 30 credit hours: nine credit hours in core courses including an internship, 15 credit hours in their degree concentration (Data Analytics, Computer Science, or Intelligent & Autonomous Systems), and six credit hours devoted to a capstone project.

“This unique partnership promises to offer groundbreaking innovation that will lead to new therapies and treatments to improve health and quality of life,” said UT System Chancellor James B. Milliken.

“Our goal is to prepare our students for the next generation of health care advances by providing comprehensive training in applied artificial intelligence,” said Ronald Rodriguez, M.D., Ph.D., director of the M.D./M.S. in AI program and professor of medical education at the University of Texas Health Science Center at San Antonio. “Through a combined curriculum of medicine and AI, our graduates will be armed with innovative training as they become future leaders in research, education, academia, industry and health care administration. They will be shaping the future of health care for all.”

Credit: UTSA/UT Health San Antonio

Dhireesha Kudithipudi, a professor in electrical and computer engineering who was tasked with helping develop the university’s AI curriculum, told Preston Fore of Fortune:

In lots of scenarios, you might see AI capabilities are being very exaggerated—that it might replace physicians and so forth. But I think our line of inquiry was guided in a different way, in a sense how we can promote this AI physician interaction-AI patient interaction, bringing humans to the center of the loop, and how AI can enhance care or emphasize more patient centric attention.

OK, fabulous. But, you know, computers have been integral to healthcare for decades, especially the past 15 years (due to EMRs), and we don’t expect doctors to get a Master’s in Computer Science. We’re just happy when they can figure out how to navigate the interfaces.

To be honest, I was expecting more from UT; last January I wrote about how they were doing an online M.S.A.I., creating what they said “will be the first large-scale degree program of its kind and the only master’s degree program in AI from a top-ranked institution to be priced close to $10,000.”  It didn’t even require an undergraduate degree.  That, I said at the time, was the kind of thinking medical schools should be doing. 

But, instead, UTSA has made the medical school experience longer and more expensive, even though the U.S. medical education system is perhaps the longest and most expensive in the world.  No other country leaves its new doctors with such staggering medical school debt.  So, yeah, let’s add a year and another degree’s cost to that process. 

Don’t get me wrong: I’m as big an advocate of AI in healthcare as you’ll find, and medical school is no exception.  I’ll give UTSA credit for doing something about AI; I just don’t think they’ve really seized the moment. I fear they’re trying to be relevant to the present instead of preparing to jump to the future.  

Right now, medical educators need to be thinking: what does the practice of medicine look like in an AI world?  What will those doctors need to know, what will they need to know how to do, and what can they expect their various AI to do for them/assist them with?  Those aren’t questions that any of us really know the answers to, but even current results with AI indicate that it is going to be immensely helpful.  It will know more, what it knows will be more current, and it will be able to sift through masses of data to produce cogent summaries and recommendations.  Doctors in 2040, perhaps even 2030, won’t know how they ever got along without it.

Yes, practicing medicine in 2040 is going to be different. Credit: Bing
So medical education needs to change just as radically.  Medical school should be shorter. It should focus much less on memorization than on where to find and apply answers.  It should teach students how and when to rely on AI, and how to make that collaboration most productive.  Forget the stethoscopes and medical flashlights; doctors are going to be “carrying around” AI first and foremost. Similarly, VR and AR are going to be ubiquitous. 

Practicing medicine in 2030 is going to be much different than practicing even in 2020 was, and practicing in 2040 or 2050 – well, I don’t think our 20th century medical schools are preparing themselves or their students for that.

People like Charles Prober, M.D. have been advocating for over ten years for “lectures without lecture halls” – a.k.a “a flipped classroom model” -- in which memorization is emphasized less, and “in which students absorb an instructor's lecture in a digital format as homework, freeing up class time for a focus on applications.”  Medical schools have been slow to adopt those ideas, so I’m not expecting they’ll be quick to jump on how to revolutionize themselves via AI.  But they need to -- or be superseded by entities that do.

I’ve been calling for a new Flexner Report for years now.  Medical education isn’t working for doctors and it’s not working for patients.  We have way too many types of medical education, not the least of which is the now meaningless distinction between M.D. and D.O., and they all take too long, cost too much, yet don’t adequately prepare graduates for the world or the healthcare system in which they’ll be delivering care. Now add AI to that mix…

The beginning of the 21st century would have been a good time to rethink medical education from first principles, but AI now puts us on the precipice of societal change that makes such a reformation not just overdue but essential. 

Monday, October 2, 2023

Altman, Ive, and AI - Oh, My!

Earlier this year I urged that we Throw Away That Phone, arguing that the era of the smartphone should be over and that we should get on to the next big thing. Now, I don’t have any reason to think that either Sam Altman, CEO of OpenAI, or Jony Ive, formerly and famously of Apple and now head of design firm LoveFrom, read my article, but apparently they have the same idea.

AI-led brainstorming. Credit: Bing

Last week The Information and then Financial Times reported that OpenAI and LoveFrom are “in advanced talks” to form a venture in order to build the “iPhone of artificial intelligence.” SoftBank may fund the venture with as much as $1 billion. There have been brainstorming sessions, and discussions are said to be “serious,” but a final deal may still be months away. The new venture would draw on talent from all three firms.

Details are scarce, as are comments from any of the three firms, but FT cites sources who suggest Mr. Altman sees “an opportunity to create a way of interacting with computers that is less reliant on screens,” which is a sentiment I heartily agree with. The Verge similarly had three sources who agreed that the goal is a “more natural and intuitive user experience.”

OpenAI’s ChatGPT took the world by storm this year, and continues to wow; last week OpenAI announced that it could now “see, speak, and hear,” offering “a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.”  No wonder a future less reliant on screens makes sense.

"Given Ive’s involvement, it’s most likely to be some sort of consumer device, like a reimagined phone," write Jessica Lessin and Stephanie Palazzolo for The Information. "One possibility is OpenAI is building its own operating system... Imagine an AI-native operating system that could generate apps in real-time based on what it believes its user needs, or one that listens to nearby conversations and automatically pulls up relevant information for its user."  

I sure hope we wouldn’t get just a “reimagined smartphone.” Carrying around a tiny computer with a screen seems so 1990s, or at least so 2007. In the soon-to-be world of ambient computing and virtual displays, as I discussed before, the mobile phone will soon be an outdated concept entirely.

The New York Times speculates that the initiative may be as much about control as it is innovation, saying:

One reason Mr. Altman may be determined to develop his own device is to avoid having OpenAI depend on Apple or Google’s Android for distribution. Relying on other platforms has challenged tech giants, such as Facebook and Amazon, because Apple and Google take a cut of sales across their platform. Apple also has introduced privacy limits, which cut into advertising sales.

Several tech outlets reporting on the talks noted that there is a long list of software companies with a rather dismal record when trying to shift to hardware. Ars Technica quotes former Microsoft Windows Division President Steven Sinofsky: “Anyone can build a phone. Watching Google and Microsoft should be good evidence that few can distribute one.” TechCrunch says: “But hardware is a tricky business. OpenAI knows this well,” mentioning the robotics research division it shut down in 2021 due to “major technical difficulties.”

What I’m wondering is if we really need Mr. Ive or the OpenAI team at all. Perhaps you haven’t been paying attention to the work being done at Wharton on AI, but Wharton professors Christian Terwiesch and Karl Ulrich, former Wharton graduate student Lennart Meincke, and Cornell Tech professor Karan Girotra ran an entrepreneurial competition between Wharton MBA students and ChatGPT – and ChatGPT won.

No wonder the students are dismayed. Credit: Bing
“I was really blown away by the quality of the results,” Professor Terwiesch said. “I had naively believed that creative work would be the last area in which we humans would be superior at solving problems … so we set up this horse race of man versus machine.”  ChatGPT not only produced more ideas, but vastly outperformed students in ideas that were rated “exceptional.”  Quantity and quality of ideas. 

Their three takeaways are:

  • First, generative AI has brought a new source of ideas to the world.
  • Second, the bottleneck for the early phases of the innovation process in organizations now shifts from generating ideas to evaluating ideas.
  • Finally, rather than thinking about a competition between humans and machines, we should find a way in which the two work together.

Another new study, in Scientific Reports, found that, yes, chatbots outperformed most humans when “asked to generate uncommon and creative uses for everyday objects,” but “the best human ideas still matched or exceeded those of the chatbots.” I guess we can breathe a (temporary) sigh of relief, but I have to worry about the quality of those Wharton MBA students.

The authors of the latter study cautioned:

However, the AI technology is rapidly developing and the results may be different after half year. On basis of the present study, the clearest weakness in humans' performance lies in the relatively high proportion of poor-quality ideas, which were absent in chatbots' responses.

The Wall Street Journal’s Christopher Mims warns we’re not going to be able to avoid or ignore AI, in either our personal or professional lives: “Soon, most of us will use tools like these, even if indirectly, unless we want to risk falling behind.” Along the lines of what Messrs. Altman and Ive may be hoping, Mr. Mims speculates: “Another way generative AI could make itself impossible to avoid: by becoming the default interface for information retrieved from the internet, and within companies.”

The moral of the story is that, if you’re looking for new ideas, and the best ideas, you better be using AI.  And soon, those ideas may come from the AI alone.

-------------

 I usually try to link the topic of my articles to healthcare, however tenuously, and this one shouldn’t need much elaboration.  Healthcare may or may not need “the iPhone of artificial intelligence,” but it needs AI built into almost everything it does.  It also badly needs new ideas and serious innovation, and failing to use AI to generate those harms all of us.

AI is no longer science fiction.  It is the future, but it is now also the present - in our personal lives, in healthcare, and everywhere else.  I wish Messrs. Altman and Ive the best of luck, but what they’re doing should be the norm, not the exception.