Monday, June 24, 2024

Batteries All Around

Quick question: how many batteries do you have? Chances are, the answer is way bigger than you think. They’re in your devices (e.g., smartphones, tablets, laptops, earbuds), they’re throughout your house (e.g., clocks, smoke detectors), they’re in your car (even if you don’t have an EV), and they may even be in you. We usually only think about them when they need recharging, or when they catch fire. They can be an environmental nightmare if not recycled, and recycling lithium-ion batteries is still problematic.

This should not be our future. Credit: Bing Image Creator

So I was intrigued to read about some efforts to rethink what a battery is.

Let’s start with some work done by Swedish tech company Sinonus, a spinout of Chalmers University of Technology and KTH Royal Institute of Technology. The company is all about carbon fiber; more specifically, about combining structural strength with energy storage.

It seeks to make things multipurpose: “Just think of your smartphone, today it seems farfetched to use a single purpose phone, camera and mp3 player when you can have them all in one. In the same way we can transform single purpose materials, such as structure materials and batteries, through our multipurpose carbon fiber composite solution.” 

Or, as TechRadar put it, “how the laptop could become the battery.”

Sinonus says its carbon fiber based composite “can provide structural strength and store energy, all in one. By doing so we can utilize the mass that is "already there" to store energy, creating an opportunity to reduce weight, volume and improve overall system performance.”

Carbon fiber battery. Credit: Sinonus
New Atlas raves:

Imagine an electric car that isn't weighed down by a huge, kilowatt-hour-stuffed battery. It wouldn't need as much power to drive it forward and could rely on a smaller motor, saving yet more weight. Or imagine an eVTOL that could take off without lifting a lithium-ion anchor that requires it to be back on the ground within an hour for charging. Or a windmill with blades that work as their own batteries, storing energy during low demand periods for distribution at peak hours.

CEO Markus Zetterström explains: “Storing electrical energy in carbon fiber may perhaps not become as efficient as traditional batteries, but since our carbon fiber solution also has a structural load-bearing capability, very large gains can be made at a system level.” That reduced efficiency may be a cause for concern, but, as Jeff Butts wrote in Tom’s Hardware: “After all, if your laptop is smaller and lighter while still giving the same battery life, it hardly matters that the material storing the energy isn’t as efficient as a LiON battery pack.”

The composite has already replaced AAA batteries in low-power lab tests, but it still has a considerable way to go to deliver more power and to make the materials cost-effective. Still, Sinonus cites a Chalmers study suggesting this approach could increase EV range by 70%, while eliminating the volatile chemicals that create landfill issues and the potential for fires.

According to Recharge News, Sinonus is also looking to use the carbon fiber in wind turbine blades, so the blades could act as their own storage devices as well. It is also considering using its composite in the “internal fabric” of buildings.

Speaking of which, if you like the idea of your laptop chassis acting as its own battery, you should love this: how about your house being its own battery?

Researchers at MIT, led by Dr. Damian Stefaniuk, have created a way to store power in a form of concrete made from cement, water, and something called carbon black. Technically it forms a supercapacitor, not a battery, but it can store energy. BBC’s Tom Ough writes that supercapacitors are very efficient at storing energy and charge more rapidly than lithium-ion batteries, but they also release their energy more rapidly, something the team is working on.

The first time the team connected an LED to a piece of the concrete, it lit up. "At first I didn't believe it," said Dr. Stefaniuk. "I thought that I hadn't disconnected the external power source, and that was why the LED was on. It was a wonderful day."

Dr. Stefaniuk and his team describe roads that collect and store solar energy, charging EVs as they drive over them. Or – and the folks at Sinonus should love this – as part of a building’s structure: “to have walls, or foundations, or columns, that are active not only in supporting a structure, but also in that energy is stored inside them."

As Dr. Stefaniuk told BBC: “A simple example would be an off-grid house powered by solar panels: using solar energy directly during the day and the energy stored in, for example, the foundations during the night."

Credit: MIT/Damian Stefaniuk

The team still has a long way to go in terms of how much energy the material can store and deliver, and, whoops, the addition of the carbon black makes the concrete weaker, so there is still work to be done in fine-tuning the ideal mixture. It should also be noted that production of cement is not without its own environmental impact.

But, as Michael Short, head of the Centre for Sustainable Engineering at Teesside University, told BBC: “As the materials are also commonplace and the manufacture relatively straightforward, this gives a great indication that this approach should be investigated further and could potentially be a very useful part of the transition to a cleaner, more sustainable future."

And if those two examples aren’t quite ready, the next wave for batteries may be sodium-ion instead of lithium-ion, with the advantage that sodium is much more common than lithium. China already has a large-scale sodium-ion battery storage system, and, in the U.S., Natron Energy has just launched its commercial-scale operations. Colin Wessells, founder and co-CEO of Natron Energy, said: “The electrification of our economy is dependent on the development and production of new, innovative energy storage solutions. We at Natron are proud to deliver such a battery without the use of conflict minerals or materials with questionable environmental impacts.”

I love reducing our dependence on rare materials like lithium and replacing them with more common materials like carbon or sodium. But I especially like making our energy technology part of our everyday structures, much as the Internet of Things (IoT) has long promised for our computing. It is what Sinonus strives for: making single-purpose solutions multipurpose.

As various people have said in various ways, the best technology should be invisible.   

Monday, June 17, 2024

Innovators: Avoid Healthcare

NVIDIA founder and CEO Jensen Huang has become quite the media darling lately, thanks to NVIDIA’s skyrocketing market value over the past two years ($3.3 trillion now, thank you very much; a year ago it first hit $1 trillion). His company is now the world’s third largest by market capitalization. Last week he gave the commencement speech at Caltech, and offered those graduates some interesting insights.

Jensen Huang at Caltech. Credit: NVIDIA

Which, of course, I’ll try to apply to healthcare.

Mr. Huang founded NVIDIA in 1993, and took the company public in 1999, but for much of its existence it struggled to find its niche. Mr. Huang figured NVIDIA needed to go to a market where there were no customers yet – “because where there are no customers, there are no competitors.” He likes to call these “zero billion dollar markets” (a phrase I gather he did not invent).

About a decade ago the company bet on deep learning and A.I. “No one knew how far deep learning could scale, and if we didn’t build it, we’d never know,” Mr. Huang told the graduates. “Our logic is: If we don’t build it, they can’t come.”

NVIDIA did build it, and, boy, they did come.

Credit: NVIDIA

He believes we all should try to do things that haven’t been done before, things that “are insanely hard to do,” because if you succeed you can make a real contribution to the world.  Going into zero billion dollar markets allows a company to be a “market maker, not a market-taker.” He’s not interested in market share; he’s interested in developing new markets.

Accordingly, he told the Caltech graduates:

I hope you believe in something. Something unconventional, something unexplored. But let it be informed, and let it be reasoned, and dedicate yourself to making that happen. You may find your GPU. You may find your CUDA. You may find your generative AI. You may find your NVIDIA.

And in that group, some may very well.

He didn’t promise it would be easy, citing his company’s own experience, and stressing the need for resilience. “One setback after another, we shook it off and skated to the next opportunity. Each time, we gain skills and strengthen our character,” Mr. Huang said. “No setback that comes our way doesn’t look like an opportunity these days… The world can be unfair and deal you with tough cards. Swiftly shake it off. There’s another opportunity out there — or create one.”

He was quite pleased with the Taylor Swift reference; the crowd seemed somewhat less impressed.

Some of those graduates will probably end up working on artificial intelligence, perhaps at NVIDIA (he announced at the beginning that he was recruiting). Others will get snapped up by other Big Tech companies. More than a few will start their own companies. And a fair number will probably end up working on healthcare, in one way or another.

Healthcare needs bright people. It needs innovation; lots of it. It needs to be more efficient, and, hopefully, more effective. There’s no shortage of new ideas or money for them; according to Silicon Valley Bank, venture capital firms poured $19b into healthcare in 2023, after $50b for 2021-22. Healthcare is already incorporating A.I. faster than I might have predicted, such as in drug development, where it is said to be “revolutionizing” the field. A.I. is also rapidly starting to “copilot” doctors.

But, I fear, these all seem like market-takers, not market makers.

Ten years ago I wrote Getting Our Piece of the Pie, expressing my concern that healthcare innovators were more interested in getting their share of the nation’s then-$3 trillion in health spending (it’s now $5 trillion). “We need innovators who don't want a slice of the existing pie,” I wrote, “but are willing to throw it away and make a new kind of pie.”

I think Mr. Huang would agree.

The internet should have transformed healthcare. Electronic health records should have transformed healthcare. Digital health should have transformed healthcare. But they didn’t. Sure, they changed healthcare, but healthcare first tried to ignore them, then simply absorbed them in its big bear hug. “OK,” it said. “We can use you, but don’t expect anything to be cheaper or smaller, and don’t expect any of the major players to go away.” Now it’s doing the same with A.I.

Everywhere you look in healthcare, there are competitors. To be more accurate, everywhere you look there are consolidators, because many parts of our healthcare system prefer to dominate markets rather than compete in them (e.g., Epic, UHC, and many local health systems). But an innovator would be hard pressed to find a market niche without competition. And the thought of doing something where there are no customers is anathema to most healthcare innovators.

Honestly, I think healthcare innovators who start building things thinking about patients, doctors, hospitals, pharma/PBMs, and health insurance companies, well – I don’t think they should bother. That paradigm is hitting a dead end. We need new paradigms.

When imaginary numbers were developed during the Renaissance, no one expected that they’d be useful for anything, much less that they’d be integral (pun intended) to electrical engineering and quantum mechanics. Neither of those fields even existed yet. Alexander Graham Bell was more interested in helping the deaf than in inventing the telephone. And Bob Taylor of ARPA (now DARPA) didn’t expect to create the internet when he came up with ARPANET.

Big, bold ideas find – create – their own markets.

If you want to make a mark in healthcare, look for the zero billion dollar markets. Look for the things that customers haven’t yet realized they have a need for. Look for the things that no competitor is interested in (or hasn’t thought of). Look to build things with the logic: “If we don’t build it, they can’t come.” Look to change the world, not just to make healthcare a little less bad.

If you do all that, or some of that, perhaps health or healthcare will benefit as well, even if it’s not what we think of as “health” or “healthcare” now. Find your own NVIDIA.

Monday, June 10, 2024

Oh. Never Mind

You may have read the coverage of last week’s tar-and-feathering of Dr. Anthony Fauci in a hearing of the House Select Subcommittee on the Coronavirus Pandemic. You know, the one where Marjorie Taylor Greene refused to call him “Dr.”, told him: “You belong in prison,” and accused him – I kid you not – of killing beagles. Yeah, that one.

Congressional hearings aren't the best way to find the truth. Credit: BBC

Amidst all that drama, there were a few genuinely concerning findings. For example, some of Dr. Fauci’s aides appeared at times to use personal email accounts to avoid potential FOIA requests. It also turns out that Dr. Fauci and others did take the lab leak theory seriously, despite many public denunciations of it as a conspiracy theory. And, most breathtaking of all, Dr. Fauci admitted that the six-foot distancing rule “sort of just appeared,” perhaps from the CDC, and evidently was not backed by any actual evidence.

I’m not intending to pick on Dr. Fauci, who I think has been a dedicated public servant and possibly a hero. But it does appear that we sort of fumbled our way through the pandemic, and that truth was often one of its victims.

In The New York Times,  Zeynep Tufekci minces no words:

I wish I could say these were all just examples of the science evolving in real time, but they actually demonstrate obstinacy, arrogance and cowardice. Instead of circling the wagons, these officials should have been responsibly and transparently informing the public to the best of their knowledge and abilities.

As she goes on to say: “If the government misled people about how Covid is transmitted, why would Americans believe what it says about vaccines or bird flu or H.I.V.? How should people distinguish between wild conspiracy theories and actual conspiracies?”

Credit: Menninger
Indeed, we may now be facing a bird flu outbreak, and our COVID lessons, or lack thereof, could be crucial. There have already been three known cases that have crossed over from cows to humans, but, as in the early days of COVID, we’re not actively testing or tracking cases (although we are doing some wastewater tracking). “No animal or public health expert thinks that we are doing enough surveillance,” Keith Poulsen, DVM, PhD, director of the Wisconsin Veterinary Diagnostic Laboratory at the University of Wisconsin-Madison, said in an email to Jennifer Abbasi of JAMA.

Echoing Professor Tufekci’s concerns about mistrust, Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota, told Katherine Wu of The Atlantic his concerns about a potential bird flu outbreak: “without a doubt, I think we’re less prepared.” He specifically cited vaccine reluctance as an example.

Sara Gorman, Scott C. Ratzan, and Kenneth H. Rabin wondered, in Stat News, whether the government has learned anything from its COVID communications failures. Regarding a potential bird flu outbreak, “…we think that the federal government is once again failing to follow best practices when it comes to communicating transparently about an uncertain, potentially high-risk situation.” They suggest full disclosure: “This means our federal agencies must communicate what they don't know as clearly as what they do know.”

But that runs contrary to what Professor Tufekci says was her big takeaway from our COVID response: “High-level officials were afraid to tell the truth — or just to admit that they didn’t have all the answers — lest they spook the public.”

A new study highlights just how little we really knew. Eran Bendavid (Stanford) and Chirag Patel (Harvard) ran 100,000 models of various government interventions for COVID, such as closing schools or limiting gatherings. The result: “In summary, we find no patterns in the overall set of models that suggests a clear relationship between COVID-19 government responses and outcomes. Strong claims about government responses’ impacts on COVID-19 may lack empirical support.”
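To give a sense of what “running 100,000 models” means in practice, here is a minimal, purely illustrative Python sketch of a specification sweep: many combinations of policy, outcome, and lag are fit on the same panel data, and the sign of each estimated effect is tallied. The data here is random noise and the policy and outcome names are placeholders I made up; the actual Bendavid–Patel analysis is far more elaborate, but the logic of sweeping over analytic choices is this kind of exercise.

```python
# Illustrative specification sweep: fit many simple model variants
# (policy x outcome x lag) and count how many suggest the policy "helped."
# All data is randomly generated; variable names are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_weeks = 50, 40

policies = {name: rng.integers(0, 2, size=(n_regions, n_weeks))
            for name in ["school_closure", "gathering_limit", "mask_mandate"]}
outcomes = {name: rng.normal(size=(n_regions, n_weeks))
            for name in ["case_growth", "death_growth"]}

results = []
for policy, outcome, lag_weeks in itertools.product(policies, outcomes, [1, 2, 3]):
    x = policies[policy][:, :-lag_weeks].ravel()          # policy earlier
    y = outcomes[outcome][:, lag_weeks:].ravel()           # outcome later
    X = np.column_stack([np.ones(len(x)), x])              # intercept + policy
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    results.append((policy, outcome, lag_weeks, beta[1]))

helpful = sum(1 for *_, effect in results if effect < 0)    # lower growth = "helped"
print(f"{helpful} of {len(results)} specifications suggest the policy helped")
```

With no real signal in the data, roughly half of the specifications come out “helpful” and half “harmful,” which is exactly the coin-flip pattern the authors describe finding in the real data.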

In an article in Stat News, they elaborate: “About half the time, government policies were followed by better Covid-19 outcomes, and half of the time they were not. The findings were sometimes contradictory, with some policies appearing helpful when tested one way, and the same policy appearing harmful when tested another way.”

They caution that it’s not “broadly true” that government responses made things worse or were simply ineffective, nor that they demonstrably helped either, but: “What is true is that there is no strong evidence to support claims about the impacts of the policies, one way or the other.”

Fifty-fifty. All those policies, all those recommendations, all the turmoil, and it turns out we might as well have just flipped a coin.

Like Professor Tufekci, Dr. Gorman and colleagues, and Ms. Wu, they urge more honesty: “We believe that having greater willingness to say ‘We’re not sure’ will help regain trust in science.” Professor Tufekci quotes Congresswoman Deborah Ross (D-NC): “When people don’t trust scientists, they don’t trust the science.” Right now, there are a lot of people who trust neither the science nor the scientists, and it’s hard to blame them.

Professor Tufekci laments: “As the expression goes, trust is built in drops and lost in buckets, and this bucket is going to take a very long time to refill.” We may not have that kind of time before the next crisis.

Professors Bendavid and Patel suggest more and better data collection for critical health measures, on which the U.S. has an abysmal record (case in point: bird flu), and more experimentation with public health policies, which they admit “may be ethically thorny and often impractical” (but, they point out, “subjecting millions of people to untested policies without strong scientific support for their benefits is also ethically charged”).

As I wrote about last November, Americans’ trust in science is declining, with the Pew Research Center confirming that the pandemic was a key turning point in that decline. Professors Bendavid and Patel urge: “Matching the strength of claims to the strength of the evidence may increase the sense that the scientific community’s primary allegiance is to the pursuit of truth above all else,” but in a crisis – as we were in 2020 – there may not be much, if any, evidence available, yet we are still desperate for solutions.

We all need to acknowledge that there are experts who know more about their fields than we do, and stop trying to second-guess or undermine them. But, in turn, those experts need to be open about what they know, what they can prove, and what they’re still not certain about. We all failed those tests in 2020-21, but, unfortunately, we’re going to get retested at some point, and that may be sooner rather than later.

Monday, June 3, 2024

Who Needs Humans, Anyway?

Imagine my excitement when I saw the headline: “Robot doctors at world’s first AI hospital can treat 3,000 a day.” Finally, I thought – now we’re getting somewhere. I must admit that my enthusiasm was somewhat tempered to find that the patients were virtual. But, still.

Are they human or AI? Credit: Bing Image Creator

The article was in Interesting Engineering, and it largely covered the source story in Global Times, which interviewed the research team leader Yang Liu, a professor at China’s Tsinghua University, where he is executive dean of the Institute for AI Industry Research (AIR) and associate dean of the Department of Computer Science and Technology. The professor and his team just published a paper detailing their efforts.

The paper describes what they did: “we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs).” They modestly note: “To the best of our knowledge, this is the first simulacrum of hospital, which comprehensively reflects the entire medical process with excellent scalability, making it a valuable platform for the study of medical LLMs/agents.”

In essence, “Resident Agents” randomly contract a disease, seek care at the Agent Hospital, where they are triaged and treated by Medical Professional Agents, who include 14 doctors and 4 nurses (that’s how you can tell this is only a simulacrum; in the real world, you’d be lucky to have 4 doctors and 14 nurses). The goal “is to enable a doctor agent to learn how to treat illness within the simulacrum.”

Overview of the AI hospital. Credit: Li, et al.
The Agent Hospital has been compared to the AI town developed at Stanford last year, which had 25 virtual residents living and socializing with each other. “We’ve demonstrated the ability to create general computational agents that can behave like humans in an open setting,” said Joon Sung Park, one of the creators. The Tsinghua researchers have created a “hospital town.”

Gosh, a healthcare system with no humans involved. It can’t be any worse than the human one. Then again, let me know when the researchers include AI insurance company agents in the simulacrum; I want to see what bickering ensues.

As you might guess, the idea is that the AI doctors – I’m not sure where the “robot” is supposed to come in – learn by treating the virtual patients. As the paper describes: “As the simulacrum can simulate disease onset and progression based on knowledge bases and LLMs, doctor agents can keep accumulating experience from both successful and unsuccessful cases.”

Credit: Li, et al.
The researchers did confirm that the AI doctors’ performance consistently improved over time. “More interestingly,” the researchers declare, “the knowledge the doctor agents have acquired in Agent Hospital is applicable to real-world medicare benchmarks. After treating around ten thousand patients (real-world doctors may take over two years), the evolved doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the MedQA dataset that covers major respiratory diseases.”
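For the technically curious, here is a minimal, hypothetical sketch of the kind of loop the paper describes: patient agents fall ill, a doctor agent proposes a diagnosis, and each outcome is folded back into the doctor’s accumulated case records so it can inform future prompts. Everything below (the call_llm placeholder, the disease list, the prompt format) is my own illustrative guess, not the actual Agent Hospital implementation.

```python
# Toy sketch of an "agent hospital" learning loop. The LLM call is a stub
# that just guesses; in the real system, LLM-powered agents would read the
# accumulated experience in the prompt and improve over time.
import random
from dataclasses import dataclass, field

DISEASES = ["acute bronchitis", "asthma", "pneumonia"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it simply guesses a disease."""
    return random.choice(DISEASES)

@dataclass
class DoctorAgent:
    case_records: list = field(default_factory=list)  # accumulated experience

    def diagnose(self, symptoms: str) -> str:
        # Recent cases are folded into the prompt, so experience accumulates
        # without any manually labeled training data.
        experience = "\n".join(self.case_records[-20:])
        prompt = f"Past cases:\n{experience}\nSymptoms: {symptoms}\nDiagnosis:"
        return call_llm(prompt)

    def record(self, symptoms: str, diagnosis: str, correct: bool) -> None:
        outcome = "correct" if correct else "incorrect"
        self.case_records.append(f"{symptoms} -> {diagnosis} ({outcome})")

doctor = DoctorAgent()
correct = 0
n_visits = 1000
for _ in range(n_visits):
    disease = random.choice(DISEASES)                     # a resident agent falls ill
    symptoms = f"patient reports symptoms typical of {disease}"  # toy symptom text
    diagnosis = doctor.diagnose(symptoms)
    ok = diagnosis == disease
    doctor.record(symptoms, diagnosis, ok)
    correct += ok

print(f"accuracy over {n_visits} simulated visits: {correct / n_visits:.2%}")
```

The point of the sketch is the shape of the loop, not the numbers: with a real LLM in place of the stub, the doctor agent’s growing case history is what lets its performance improve without human intervention.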

The researchers note the “self-evolution” of the agents, which they believe “demonstrates a new way for agent evolution in simulation environments, where agents can improve their skills without human intervention.”  It does not require manually labeled data, unlike some LLMs. As a result, they say that design of Agent Hospital “allows for extensive customization and adjustment, enabling researchers to test a variety of scenarios and interactions within the healthcare domain.”

The researchers’ plans for the future include expanding the range of diseases, adding more departments to the Agent Hospital, and “society simulation aspects of agents” (I just hope they don’t use Grey’s Anatomy for that part of the model). Dr. Liu told Global Times that the Agent Hospital should be ready for practical application in the second half of 2024.

One potential use, Dr. Liu told Global Times, is training human doctors:

…this innovative concept allows for virtual patients to be treated by real doctors, providing medical students with enhanced training opportunities. By simulating a variety of AI patients, medical students can confidently propose treatment plans without the fear of causing harm to real patients due to decision-making error. 

No more interns fumbling with actual patients, risking patients’ lives in order to train those young doctors. So one hopes.

AI hospital research team. Credit: Liu via Global Times
I’m all in favor of using such AI models to help train medical professionals, but I’m a lot more interested in using them to help with real-world health care. I’d like those AI doctors evaluating our AI twins, trying hundreds or thousands of options on them in order to produce the best recommendations for the actual us. I’d like those AI doctors looking at real-life patient information and making recommendations to our real-life doctors, who need to get over their skepticism and treat AI input as not only credible but also valuable, even essential.

There is already evidence that AI-provided diagnoses compare very well to those from human clinicians, and AI is only going to get better. The harder question may not be getting AI ready but – you guessed it! – getting physicians ready for it. Recent studies by both Medscape and the AMA indicate that the majority of physicians see the potential value of AI in patient care, but are not yet ready to use it themselves.

Perhaps we need a simulacrum of human doctors learning to use AI doctors.

In the Global Times interview, the Tsinghua researchers were careful to stress that they don’t see a future without human involvement, but, rather, one with AI-human collaboration.  One of them went so far as to praise medicine as “a science of love and an art of warmth,” unlike “cold” AI healthcare.

Yeah, I’ve been hearing those concerns for years. We say we want our clinicians to be comforting, displaying warmth and empathy. But, in the first place, while AI may not yet actually be empathetic, it may be able to fake it; there are studies suggesting that patients overwhelmingly found AI chatbot responses more empathetic than those from actual doctors.

In the second place, what we want most from our clinicians is to help us stay healthy, or to get better when we’re not. If AI can do that better than humans, well, physicians’ jobs are no more guaranteed than any other jobs in an AI era.

But I’m getting ahead of myself; for now, let’s just appreciate the Agent Hospital simulacrum.