Sunday, March 26, 2023

AI: Not Ready, Not Set - Go!

I feel like I’ve written about AI a lot lately, but there’s so much happening in the field. I can’t keep up with the various leading entrants or their impressive successes, but three essays on the implications of what we’re seeing struck me: Bill Gates’ The Age of AI Has Begun, Thomas Friedman’s Our New Promethean Moment, and You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris, and Aza Raskin.  All three essays speculate that we’re at one of the big technological turning points in human history.

We’re not ready.


The subtitle of Mr. Gates’ piece states: “Artificial intelligence is as revolutionary as mobile phones and the Internet.” Similarly, Mr. Friedman recounts what former Microsoft executive Craig Mundie recently told him: “You need to understand, this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”    

Mr. Gates elaborates:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Mr. Friedman is similarly awed:

This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

Professor Harari and colleagues are more worried than awed, warning: “A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts.”  Transformational isn’t always beneficial.

Each of the articles points out numerous ways AI can help - and in some cases, already is helping – solve important problems.  Even though Professor Harari and his colleagues are the most concerned, they admit: “A.I. indeed has the potential to help us defeat cancer, discover lifesaving drugs and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine.”

All three essays, in fact, reference how AI could help revolutionize health care in particular; Mr. Gates devotes an entire section of his essay to how AI will improve health and medical care, while Mr. Friedman discusses at length AI’s role in understanding protein folding, which has crucial roles in drug discovery.

Exciting times.  Peter Lee, Microsoft’s Corporate Vice President, Research, tweeted:

Of course, not every industry is going to be equally ready.  Take healthcare.  Joyce Lee, M.D. (aka Doctor as Designer) bemoaned:


Healthcare is trying to use 21st century technology in a system with 19th century institutions (e.g., hospitals) and 20th century regulations (e.g., telehealth licensing restrictions).  AI is going to be ready for healthcare long before healthcare is ready for it.

-------------

The problem is, of course, much bigger than healthcare.  As Mr. Friedman laments: “Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.”

Professor Harari and colleagues are even more doubtful: “Social media was the first contact between A.I. and humanity, and humanity lost.”  And that was with what they correctly call “primitive” AI; imagine, they say:

What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings?

Scary, indeed.

The U.S. did a terrible job of recognizing how automation – more than outsourcing – took away hundreds of thousands of factory jobs over the past few decades, and we’re even more ill-prepared for when AI comes for all those white collar and “creative” jobs.  Such as in healthcare.

More than jobs are at stake, according to Professor Harari and colleagues:  

The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable.

No, we’re not ready, especially, as Mr. Gates says: “Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.”  Professor Harari and colleagues go even further: “We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization.”

Wow.

---------------

AI is not just a faster computer. It is not even like the introduction of the PC or the smartphone. This is, as the above authors have said, potentially more like the mastery of fire, the use of the wheel, the development of the steam engine, or the advent of man-made electricity.  AI will change society as we’ve known it, in ways we can’t predict.

All three essays are dubious that market forces alone are going to result in AI that has the best outcomes for society, as opposed to for a select few.   Mr. Gates’ main priority is: “The world needs to make sure that everyone—and not just people who are well-off—benefits from artificial intelligence.”  To do that, Mr. Friedman believes: “We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I.”

But we don’t have the luxury of time. Professor Harari and colleagues urge: “The first step is to buy time to upgrade our 19th-century institutions for an A.I. world and to learn to master A.I. before it masters us.” 

I’m not sure our technologically obtuse legislators or our for-profit orientation are ready for any of that.  So have fun playing with GPT-4 or Bard, but this is not a game. AI’s implications are world-changing.


Monday, March 20, 2023

Throw Away That Phone

If I were a smarter person, I’d write something insightful about the collapse of Silicon Valley Bank.  If I were a better person, I’d write about the dire new UN report on climate change. But, nope, I’m too intrigued by Google announcing it was (again) killing off Glass.

Google Glass, we hardly knew ye. Credit: Google

It’s not that I’ve ever used them, or any AR (augmented reality) device for that matter.  It’s just that I’m really interested in what comes after smartphones, and these seemed like a potential path.  We all love our smartphones, but 16 years after Steve Jobs introduced the iPhone we should realize that we’re closer to the end of the smartphone era than we are to the beginning. 

It’s time to be getting ready for the next big thing.  

---------------

Google Glass was introduced ten years ago, but after some harsh feedback it soon pivoted from a would-be consumer product to an Enterprise product, including for healthcare.  It was followed by Apple, Meta, and Snap, among others, but none have quite made the concept work. Google is still putting on a brave face, vowing: “We’ll continue to look at ways to bring new, innovative AR experiences across our product portfolio.”  Sure, whatever.

Credit: Google
It may be that none of the companies have found the right use case, hit the right price point, adequately addressed privacy concerns, or made something that didn’t still seem…dorky.  Or it may simply be that, with tech layoffs hitting everywhere, resources devoted to smart glasses were early on the chopping block.  They may be a product whose time has not quite come…or never will.

That’s not to say that we aren’t going to use headsets (like Microsoft’s HoloLens) to access the metaverse (whatever that turns out to be) or other deeply immersive experiences, but my question is what’s going to replace the smartphone as our go-to, all-the-time way to access information and interact with others?

We’ve gotten used to lugging around our smartphones – in our hands, our purses, our pants, even in our watches – and the computing power that has been packed into them, and the uses we’ve found for them, are marvels.  But, at the end of the day, we’re still carrying around a device whose presence we have to be mindful of, whose battery level we have to worry about, and whose screen we have to periodically use.

Transistor radios – for any of you old enough to remember them – brought about a similar sense of mobility, but the Walkman (and its descendants) made them obsolete, just as the smartphone, in turn, rendered the Walkman superfluous.  Something will do that to smartphones too.

What we want is all the computing power, all that access to information and transactions, all that mobility, but without, you know, having to carry around the actual device.  Google Glass seemed like a potential road, but right now that looks like a road less taken (unless Apple pulls another proverbial rabbit out of its product hat if and when it comes out with its AR glasses). 

----------------

There are two fields I’m looking to when I think about what comes after the smartphone: virtual displays and ambient computing. 

Virtual displays: when I refer to virtual displays, I don’t mean the mundane splitting of your monitor (or multiple monitors) into more screens.  I don’t even mean what AR/MR (mixed reality) is trying to accomplish, adding images or content into one’s perception of the real world.  I mean an actual, free-standing display equivalent to what one would see on a smartphone screen or computer monitor, fully capable of being interacted with as though it were a physical screen.  Science fiction movies are full of these.

Tony Stark uses holographic screens. Credit: Marvel/Disney
I suspect that these will be based on holograms or related technology.  The displays they render can appear fully life-like.  You’ll use them like you would a physical screen/device, not even thinking about the fact that the displays are virtual.  You may interact with them with your hands or maybe even directly from your brain.

They’ve historically required significant computing power, but that may be changing – and even if it doesn’t, computing power might not be a constraint, thanks to ambient computing.

Ambient computing: We once thought of computers as humans doing calculations.  Then they became big, room-sized machines. Personal computers brought them to a more manageable (and ultimately portable) size, and smartphones made them fit to our hands.  Moore’s Law continues to triumph.

Ambient computing (aka, ubiquitous computing, aka Internet of Things – IoT) will change our conceptions again. Basically, computers, or processors, will be embedded in almost everything.  They’ll communicate with each other, and with us. As we move along, the specific processors, and their configuration, may change, without missing a beat, much as our smartphones switch between cell towers without us (usually) realizing it.  AI will be built in everywhere. 

Ambient computing is everywhere, all the time
The number of processors used, which processors, and how they’re used, will depend on where you are and what task you want done.  The ambient computer may just listen to your direction, or may project a screen for you to use, depending on the task.  You won’t worry about where either is coming from.
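To make that idea a bit more concrete, here is a minimal sketch (purely illustrative; the device names and capabilities are hypothetical, not any real product or API) of how an ambient system might route a request to whichever nearby device can handle it, without the user ever specifying which one:

```python
# Purely illustrative sketch of "ambient computing" as described above: a task
# goes to whichever nearby device can handle it, and the user never needs to
# know (or care) which device that is. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Device:
    name: str
    capabilities: set  # e.g. {"audio", "display", "compute"}


def dispatch(task: str, needs: set, nearby: list) -> str:
    """Route a task to the first nearby device with every needed capability."""
    for device in nearby:
        if needs <= device.capabilities:
            return f"'{task}' handled by {device.name}"
    return f"no nearby device can handle '{task}'"


devices = [
    Device("kitchen speaker", {"audio"}),
    Device("living-room wall projector", {"display", "audio"}),
    Device("edge compute node", {"compute"}),
]

print(dispatch("read me the news", {"audio"}, devices))    # kitchen speaker
print(dispatch("show my calendar", {"display"}, devices))  # wall projector
```

A real ambient environment would, of course, negotiate all of this continuously and invisibly; the point is simply that the routing, not the device, becomes the thing you rely on.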

In that new world of virtual screens and ambient computing, carrying around a smartphone will seem as antiquated as those 1950s mainframes.  Our grandchildren will be as astounded by smartphones as Gen Z is by rotary phones (or landlines in general).

That’s the kind of advance I was hoping Google Glass would help bring about, and that’s why I’m sad Google is calling it quits.    

 

----------------

Healthcare is proud of itself because it finally seems to be embracing telehealth, digital medicine, and EHRs.  Each is long overdue, none are based on any breakthrough technologies, and all are being poorly integrated into our existing, extremely broken healthcare system. 

What healthcare leaders need to be thinking about is what comes next.  Healthcare found uses for Google Glass and is finding uses for AR/MR/VR, but it is still a long way from making those anywhere close to mainstream.  Smartphones are getting closer to mainstream in healthcare, but no one in healthcare should assume they are anything but the near-term future.

What is possible – and what is required – when there are no physical screens and no discrete computers? 

Hey, I’m still waiting for my holographic digital twin as my EHR. 

Monday, March 13, 2023

Letting AI Physicians Into the Guild

Let’s be honest: we’re going to have AI physicians. 


Now, that prediction comes with a few caveats. It’s not going to be this year, and maybe not even in this decade. We may not call them “physicians,” but, rather, may think of them as a new category entirely. AI will almost certainly first follow its current path of becoming assistive technology, for human clinicians and even patients.  We’re going to continue to struggle to fit them into existing regulatory boxes, like clinical decision support software or medical devices, until those boxes prove to be the wrong shape and size for how AI capabilities develop.

But, even given all that, we are going to end up with AI physicians.  They’re going to be capable of listening to patients’ symptoms, of evaluating patient history and clinical indicators, and of both determining likely diagnosis and suggested treatments.  With their robot underlings, or other smart devices, they’ll even be capable of performing many/most of those treatments.

We’re going to wonder how we ever got along without them.

Many people claim to not be ready for this. The Pew Research Center recently found that 60% of Americans would be uncomfortable if their physician even relied on AI for their care, and that respondents were more worried that health care professionals would adopt AI technologies too fast rather than too slow.

Still, two-thirds of the respondents already admit that they’d want AI to be used in their skin cancer screening, and one has to believe that as more people understand the kinds of things AI is already assisting with, much less the things it will soon help with, the more open they’ll be.

People claim to value the patient-physician relationship, but what we really want is to be healthy.  AI will be able to help us with that.

For the sake of argument, let’s assume you buy my prediction, and focus on the harder question of how we’ll regulate them. I mean, they’re already passing licensing exams.  We’re not going to “send” them to medical school, right?  They’re probably not going to need years of post-medical school internships/ residencies/fellowships like human physicians either. And are we really going to make cloud-based, distributed AI get licensed in every state where they might “see” patients? 

There are some things we will definitely want them to demonstrate, such as:

  • Sound knowledge of anatomy and physiology, diseases, and injuries;
  • Ability to link symptoms with likely diagnoses;
  • Wide ranging knowledge of evidence-based treatments for specific diagnoses;
  • Effective patient interaction skills.

We’ll also want to be sure we understand any built-in biases/limitations of the data the AI trained on. E.g., did it include patients of all ages, genders, racial and ethnic backgrounds, and socioeconomic statuses? Are the sources of information on conditions and treatments drawn from just a few medical institutions and/or journals, or a broad range? How able is it to distinguish robust research studies from more questionable ones?
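As a rough illustration of what that kind of audit might look like, here is a minimal sketch (my own toy example; the column names, groups, and data are assumptions, not anything from a real system) that simply reports how well a training dataset covers different demographic groups and data sources:

```python
# Toy sketch of a training-data coverage check: report what share of a clinical
# dataset falls into each demographic group and data source. The columns and
# values below are hypothetical stand-ins for a real training set.

import pandas as pd


def coverage_report(df: pd.DataFrame, columns: list) -> None:
    """Print the share of records in each group for the given columns."""
    for col in columns:
        shares = df[col].value_counts(normalize=True).round(3)
        print(f"\n{col} coverage:\n{shares.to_string()}")


records = pd.DataFrame({
    "age_band":  ["0-17", "18-44", "45-64", "65+", "65+", "18-44"],
    "sex":       ["F", "M", "F", "M", "F", "F"],
    "ethnicity": ["group_a", "group_b", "group_b", "group_c", "group_a", "group_b"],
    "source":    ["hospital_1", "hospital_1", "hospital_2", "journal_x",
                  "hospital_1", "hospital_2"],
})

coverage_report(records, ["age_band", "sex", "ethnicity", "source"])
```

A real audit would go much deeper – comparing coverage against population benchmarks, checking outcome labels, weighing source quality – but even a report this simple makes gaps visible.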

Credit: BMJ
Many will also argue we’ll need to remove any “black boxes,” so that the AI can clearly explain how it went from inputs to recommendations. 

Once we get past all those hurdles and the AI is actually treating patients, we’ll want to maintain oversight.  Is it keeping up with the latest research?  How many, and what kinds of, patients is it treating?  Most importantly, how are its patients faring?

I’m probably missing some criteria that others more knowledgeable about medical education/training/licensure might add, but these seem like a fair start.  I’d want my AI physician to excel at all of those.

I just wish I was sure my human physicians did as well.

London cab drivers have famously had to take what has been termed the “most difficult test in the world” to get their license, but it’s one that anyone with GPS could probably now pass, and that autonomous vehicles will soon be able to.  We’re treating prospective physicians like those would-be cab drivers, except they don’t do as well.

According to the Association of American Medical Colleges (AAMC), the four-year medical school graduation rate is over 80%, and that attrition includes those who leave for reasons other than poor grades (e.g., lifestyle, financial burdens, etc.). So we have to assume that many medical school students graduate with C’s or even D’s in their coursework, which is performance we probably would not tolerate from an AI.

Similarly, the textbooks they use, the patients they see, and the training they get are fairly circumscribed. Training at Harvard Medical School is not the same as even, say, Johns Hopkins, much less the University of Florida College of Medicine.  An intern or resident at Cook County Hospital will not see the same conditions or patients as one at Penn Medicine Princeton Medical Center.  There are built-in limitations and biases in existing medical training that, again, we would not want with our AI training.

As for basing recommendations on medical evidence, it is estimated that currently as little as 10% of medical treatments are based on high-quality evidence, and that it can take as long as 17 years for new clinical research to actually reach clinical practice. Neither would be considered acceptable for AI.  Nor do we usually ask human physicians to explain their “black box” reasoning.

What the discussion about training AI to be physicians reveals is not how hard it will be but, rather, how poorly we’ve done it with humans.

Human physicians do have ongoing oversight – in theory.  Yes, there are medical licensure boards in every state and, yes, there are ongoing continuing education requirements, but it takes a lot for the former to actually discipline poorly performing physicians, and the requirements of the latter are well below what physicians would need to stay remotely current.  Plus, there are few reporting requirements on how many or what types of patients individual physicians see, much less on outcomes. It’s hard to imagine that we’ll expect so little of AI physicians.

----------------

As I explained previously, for many decades taking an elevator without having a human “expert” operate it on your behalf was unthinkable, until technology made such operation as easy as pushing a button. We’ve needed physicians as our elevator operators in the byzantine healthcare system, but we should be looking to use AI to simplify health care for us.

For all intents and purposes, the medical profession is a guild; as a fellow panelist on a recent podcast observed, medical societies seem more concerned about how to keep nurse practitioners (or physician assistants, or pharmacists) from encroaching on their turf than they are about how to prepare for AI physicians.

Open up that guild! 

Monday, March 6, 2023

OI May Be the New AI

In the past few months, artificial intelligence (AI) has suddenly seemed to come of age, with “generative AI” showing that AI is capable of being creative in ways we thought were uniquely human.  Whether it is writing, taking tests, creating art, inventing things, making convincing deepfake videos, or conducting searches on your behalf, AI is proving its potential.  Even healthcare has figured out a surprising number of uses.

It's fun to speculate about which AI – ChatGPT, Bard, DeepMind, Sydney, etc. – will prove “best,” but it turns out that “AI” as we’ve known it may become outdated.  Welcome to “organoid intelligence” (OI).

Organoids at work. Credit: Smirnova et al.

------------

I’d been vaguely aware of researchers working with lab-grown brain cells, but I was caught off-guard when Johns Hopkins University researchers announced organoid intelligence (a term they coined) as “the new frontier in biocomputing and intelligence-in-a-dish.”  Their goal:

…we present a collaborative program to implement the vision of a multidisciplinary field of OI. This aims to establish OI as a form of genuine biological computing that harnesses brain organoids using scientific and bioengineering advances in an ethically responsible manner.

Their video: 


“Computing and artificial intelligence have been driving the technology revolution, but they are reaching a ceiling,” said Thomas Hartung, the leader of the initiative.  “Biocomputing is an enormous effort of compacting computational power and increasing its efficiency to push past our current technological limits.”  Professor Hartung pointed out that only last year a supercomputer exceeded the computational capacity of a single human brain – “but using a million times more energy.”
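That energy gap is easy to sanity-check with a back-of-envelope comparison (my own rough figures, not numbers from the article): the human brain is commonly estimated to run on about 20 watts, while an exascale supercomputer such as Frontier draws on the order of 20 megawatts.

```python
# Back-of-envelope check of the "million times more energy" claim, using rough,
# commonly cited figures (assumptions, not numbers from the article).

brain_power_watts = 20              # human brain: roughly 20 W
supercomputer_power_watts = 20e6    # exascale machine (e.g., Frontier): ~20 MW

ratio = supercomputer_power_watts / brain_power_watts
print(f"The supercomputer draws roughly {ratio:,.0f}x the power of a brain")
# -> roughly 1,000,000x
```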

“We are at a moment in time, where the technologies to achieve actual biocomputing have matured," Professor Hartung told CNET’s Eric Mack. "The hope is that some of the remarkable functionalities of the human brain can be realized as OI, such as its ability to take fast decisions based on incomplete and contradictive information (intuitive thinking)…Computers and the brain are not the same, even though we tried making computers more brain-like from the beginning of the computer age. The promise of OI is to add some new qualities.”

It remains to be seen what those “new qualities” might be.

Last year members of the team reported getting a dish of living brain cells – an earlier form of organoids – to teach itself how to play Pong. “And I would say that replicating this experiment with organoids already fulfills the basic definition of OI. From here on, it’s just a matter of building the community, the tools, and the technologies to realize OI’s full potential,” Professor Hartung believes.

The researchers are now working on how to “communicate” with the organoids – sending information and reading what they’re “thinking.” Professor Hartung explained: “We developed a brain-computer interface device that is a kind of an EEG cap for organoids…It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it.”

Still, we’re a long way from getting existing arrangements of organoids to true OI.  “They are too small, each containing about 50,000 cells. For OI, we would need to increase this number to 10 million,” Professor Hartung explained. “It will take decades before we achieve the goal of something comparable to any type of computer.  But if we don’t start creating funding programs for this, it will be much more difficult.”

The researchers are already excited about medical applications.  They can produce organoids from adult tissues, and use them to study neurological disorders.  According to Professor Hartung: “With OI, we could study the cognitive aspects of neurological conditions as well. For example, we could compare memory formation in organoids derived from healthy people and from Alzheimer’s patients, and try to repair relative deficits. We could also use OI to test whether certain substances, such as pesticides, cause memory or learning problems.”

Study coauthor and co-investigator Lena Smirnova added:

We want to compare brain organoids from typically developed donors versus brain organoids from donors with autism. The tools we are developing towards biological computing are the same tools that will allow us to understand changes in neuronal networks specific for autism, without having to use animals or to access patients, so we can understand the underlying mechanisms of why patients have these cognition issues and impairments.

If you were already worried about the ethical issues involved with computer-based AI approaching something that seems like sentience, imagine how much more troubling it will be when it is a bunch of human brain cells trying to convince you it thinks and feels.  The research team claims to be aware of the issues.  Professor Hartung says:

A key part of our vision is to develop OI in an ethical and socially responsible manner. For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves.

Oh, OK, then. 

--------------

The researchers are definitely ambitious:

Ultimately, we aim toward a revolution in biological computing that could overcome many of the limitations of silicon-based computing and AI and have significant implications worldwide. Specifically, we anticipate OI-based biocomputing systems to allow faster decision-making (including on massive, incomplete, and heterogenous datasets), continuous learning during tasks, and greater energy and data efficiency. Furthermore, the development of “intelligence-in-a-dish” offers unparalleled opportunities to elucidate the biological basis of human cognition, learning, and memory, together with various disorders associated with cognitive deficits – potentially aiding the identification of novel therapeutic approaches to address major global unmet needs.

That’s the kind of revolution it will take to get to a 22nd century healthcare system.

It’s been said before, including by me, that if the 20th century was the century of computers, the 21st century will be the century of biology, including genomics, DNA computers, biocomputing, synthetic biology, and now, it would seem, OI.  By the end of the century we may look back at today’s AI the way someone in 1999 looked back at the radio of 1923: OI may be to AI as the Internet was to early radio.

Or the organoids may never progress much past Pong. 

AI technology is evolving much faster than our culture is ready for, and our laws and regulations are trailing even further behind. That’s the thing about technology: just when you’ve gotten used to a new technology, something newer comes along. So enjoy playing with ChatGPT, pat yourself on the back if you’ve thought of ways to use AI in your business, but don’t stop looking ahead.  Like at OI.