Monday, August 18, 2025

Distribute More of the Future Here

As seminal science fiction writer William Gibson (supposedly) once said, “The future is already here. It’s just not very evenly distributed.” My fear is that more of it is being distributed in China, not in the U.S.

We’re supposed to be the country of big dreams. We’re supposed to be the country that invents the future and goes boldly into it (Captain Kirk, after all, was born in Iowa). The list of innovations America helped pioneer, the Nobel Prizes Americans have won, the number of patents we file – all speak to our faith in the future and our confidence that we’ll be the ones who get there first. But, more and more, we seem to be looking back, not forward.

I’ve written before about the Trump Administration’s war on science. Its attacks on many of our leading universities may be viewed as culture wars, but they are wars in which our country is the casualty. Historian Garrett M. Graff, writing in The New York Times, put it this way:

What America may find is that we have squandered the greatest gift of the Manhattan Project — which, in the end, wasn’t the bomb but a new way of looking at how science and government can work together.

He laments: “Today, just as China’s own research and development efforts take off, the Trump administration has been erasing this legacy,” and concludes by warning: “If China is able to capitalize on our self-inflicted wounds to invent and secure the future of the 21st century instead, we may find that we have squandered the greatest gift of the Manhattan Project.”

President Trump famously hates EVs, solar energy, and wind turbines, promotes more use of oil, gas, and “clean” coal, and considers climate change a hoax. Well, the future begs to disagree - and so does China.

Some examples:

The New York Times had an in-depth analysis, How China Went From Clean Energy Copycat to Global Innovator. Its thesis, supported by several nifty charts and graphs:

Accused for years of copying the technologies of other countries, China now dominates the renewable energy landscape not just in terms of patent filings and research papers, but in what analysts say are major contributions that will help to move the world away from fossil fuels.

“It is the opposite of an accident,” Jenny Wong Leung, an analyst and data scientist at the Australian Strategic Policy Institute, told NYT, adding: “The sheer volume of Chinese investment has been so much larger than in the West. It meant they could build industries from the ground up, all the way through the supply chain.”

Credit: NYT (Source: European Patent Office, Espacenet)

Or there’s Jacob Dreyer’s essay, also in NYT, about China’s push in biotech. He asserts:

In its quest to dethrone American dominance in biotech, China isn’t necessarily trying to beat America at its own game. While the U.S. biotech industry is known for incubating cutting-edge treatments and cures, China’s approach to innovation is mostly focused on speeding up manufacturing and slashing costs. The idea isn’t to advance, say, breakthroughs in the gene-editing technology CRISPR; it’s to make the country’s research, development, testing and production of drugs and medical products hyperefficient and cheaper.

Mr. Dreyer thinks that the Trump cuts to research may mean America’s biotech industry goes from “homegrown dominance” to a future in which “Big American companies will be ever more dependent on the cost advantages and bright young engineers that China offers.”

Then there are China’s efforts in robotics. As I wrote previously, “…when it comes to robots — especially AI-powered, humanoid ones — the battle may be closer to being over…and the U.S. is not winning.” Jeff Burnstein, president of the Association for Advancing Automation (A3), told The Wall Street Journal: “They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge.”

Last week China hosted the inaugural World Humanoid Robot Games, and while it was funny seeing robots sometimes fall down or swing wildly at another robot, the breadth and depth of its advances should not be trivialized. The Guardian noted that when it comes to the competition between China and the U.S. on AI, the Games illustrate that “…while the US still has the lead on frontier research, owing in part to Washington’s restrictions on the export of cutting-edge chips to China, Beijing is going all-in on real life applications, such as robotics.”

Or, finally, there’s the furor over “rare earth minerals” as well as other elements critical to modern electronics, such as lithium and copper. China’s recent threat to restrict exports of them put U.S. industry (and the military) in a state of panic. The U.S. used to lead the world in the mining and refining of these, and we still have huge sources of them. The trouble is, for the most part we no longer do either of those, ceding them to China.

I could go into the whole chip manufacturing debacle – again, we invented the industry, then gave it away – but that ground has been well covered. By now hopefully you get the point. China is eagerly looking ahead; we’re not.

-------

Dan Wang, a research fellow at Stanford’s Hoover Institution, thinks he knows the crux of the problem. His essay in The Atlantic (adapted from his book Breakneck: China’s Quest to Engineer the Future) posits that we have become a nation of lawyers, while China is a nation of engineers. Who, you know: Get. Things. Done.

He writes:

Think about it this way: China is an engineering state, which treats construction projects and technological primacy as the solution to all of its problems, whereas the United States is a lawyerly society, obsessed with protecting wealth by making rules rather than producing material goods. 

The U.S., he charges, “…has a government of the lawyers, by the lawyers, and for the lawyers.” We’re good at writing laws and regulations, taking people to court, and (for the most part) protecting intellectual property, but when it comes to actually building stuff, we’ve gone soft and slow. He says: “The United States has lost the ability to get stuff done as it focuses on procedures rather than results.”

We don’t want China’s reckless approach to environmental damage, its surveillance state, or its censorship of ideas, but, gosh darn it, it’d be nice for the U.S. to get back to making things and making them better.

Look, I’m a Boomer. My future is way shorter than my past. Some of that past I’m nostalgic about. But I’d sure like to see more of the future, and have more of that future invented here.

Monday, August 11, 2025

Après AI, le Déluge

I have to admit, I’ve steered away from writing about AI lately. There’s just so much going on, so fast, that I can’t keep up. Don’t ask me how GPT-5 differs from GPT-4, or what Gemini does versus Genie 3. I know Microsoft really, really wants me to use Copilot, but so far I’m not biting. DeepMind versus DeepSeek?  Is Anthropic the French AI, or is that Mistral?  I’m just glad there are younger, smarter people paying closer attention to all this.

When it’s you versus the AI, who will get the job? Credit: Microsoft Designer

Still, I’m very much concerned about where the AI revolution is taking us, and whether we’re driving it or just along for the ride. In Fast Company, Sebastian Buck, co-founder of the “future design company” Enso, offers a great way to think about the AI revolution:

The scary news is: We have to redesign everything.

The exciting news is: We get to redesign everything.

He goes on to explain:

Technical revolutions create windows of time when new social norms are created, and where institutions and infrastructure is rethought. This window of time will influence daily life in myriad ways, from how people find dates, to whether kids write essays, to which jobs require applications, to how people move through cities and get health diagnoses.
Each of these are design decisions, not natural outcomes. Who gets to make these decisions? Every company, organization, and community that is considering if—and how—to adopt AI. Which almost certainly includes you. Congratulations, you’re now part of designing a revolution.

I want to pick out one area in particular where I hope we redesign everything intentionally, rather than in our normal short-sighted, laissez-faire manner: jobs and wealth.

It has become widely accepted that offshoring led to the demise of U.S. manufacturing and its solidly middle-class, blue-collar jobs over the last 30 years. There’s some truth to that, but automation was arguably more of a factor – and that was before AI and today’s more versatile robots. More to the point, today’s AI and robots aren’t coming just to manufacturing but pretty much to every sector.

Former Transportation Secretary Pete Buttigieg warned:

The economic implications are the ones that I think could be the most disruptive, the most quickly. We’re talking about whole categories of jobs, where — not in 30 or 40 years, but in three or four — half of the entry-level jobs might not be there. It will be a bit like what I lived through as a kid in the industrial Midwest when trade and automation sucked away a lot of the auto jobs in the nineties — but ten times, maybe a hundred times more disruptive.

Mr. Buttigieg is no AI expert, but Erik Brynjolfsson, senior fellow at Stanford's Institute for Human-Centered Artificial Intelligence and director of the Stanford Digital Economy Lab, is. When asked about those comments, he told Morning Edition: “Yeah, he's spot on. We are seeing enormous advances in core technology and very little attention is being paid to how we can adapt our economy and be ready for those changes.”

You could look, for example, at the big layoffs in the tech sector lately. Natasha Singer, writing in The New York Times, reports on how computer science graduates have gone from expecting six-figure starting salaries to working at Chipotle (and wait till Chipotle automates all those jobs). The Federal Reserve Bank of New York says unemployment for computer science & computer engineering majors is lower than for anthropology majors but, astonishingly, higher than for pretty much all other majors.

And don’t just feel sorry for tech workers. Neil Irwin of Axios warns: “In the next job market downturn — whether it's already starting or years away — there just might be a bloodbath for millions of workers whose jobs can be supplanted by artificial intelligence.” He quotes Federal Reserve governor Lisa Cook: “AI is poised to reshape our labor market, which in turn could affect our notion of maximum employment or our estimate of the natural rate of unemployment."

In other words, you ain’t seen nothing yet.

While manufacturing was taking a beating in the U.S. over the last thirty years, tech boomed. Most of the world’s largest and most profitable companies are tech companies, and most of the world’s richest people got their wealth from tech. Those are, by and large, the ones investing most heavily in AI -- and the ones most likely to benefit from it.

Professor Brynjolfsson worries about how we’ll handle the transition to an AI economy:

The ideal thing is that you find ways of compensating people and managing a transition. Sad to say, with trade, we didn't do a very good job of that. A lot of people got left behind. It would be a catastrophe if we made the similar mistake with technology, [which] also is going to create enormous amounts of wealth, but it's not going to affect everyone evenly. And we have to make sure that people manage that transition.

“Catastrophe” indeed. And I fear it is coming.

We know that CEO to worker pay ratios have skyrocketed over the past 40 years. We know that concentration of wealth in the U.S. is also at unprecedented levels. And we know that social mobility – the American Dream of children doing better than their parents, that anyone can make it – has stalled and is actually lower than in many of our peer countries. AI can address those, or make them much, much worse.

It’s exciting to think of all the things AI is going to do for us. We’ll be able to do old things better/faster/cheaper, and do new things that we can barely even dream of now. With it, we should be living in a post-scarcity/abundance society. But that doesn’t mean we’ll all benefit, and certainly not all benefit equally or equitably.

Professor Brynjolfsson hits the nail on the head:

I'm optimistic about the potential to create a lot more wealth and productivity. I think we're going to have much higher productivity growth. At the same time, there's no guarantee all that wealth and productivity is going to be evenly shared. We are investing so much in driving the capabilities for hundreds of billions of dollars and we're investing very little in thinking about how we make sure that leads to widely shared prosperity. That should be the agenda for the next few years.

So if you’re not thinking about social welfare programs, universal basic income (UBI), baby bonds, and the like, as well as what, exactly, we want humans to spend their days doing, start thinking. As Mr. Buck suggests, start designing the AI revolution we should want.

Monday, August 4, 2025

Quantum Is Still Surprising Us

Most of us don’t think about quantum mechanics very much, if at all, even though our everyday life depends on it (such as in semiconductors or GPS). It is said to be the most accurate theory in terms of testable predictions, even though the fundamentals of the theory don’t make sense in our everyday life. Light is both a particle and a wave? Particles can be “entangled” even when they are vast distances apart? Cats that can be alive and dead at the same time?  

Ready for quantum batteries? Credit: Quantum Insider

It’s so counterintuitive that a recent Nature survey found that even leading quantum physicists don’t agree on what it really means. “I find it remarkable that people who are very knowledgeable about quantum theory can be convinced of completely opposite views,” says Gemma De les Coves, a theoretical physicist at the Pompeu Fabra University.

Quantum computing has become a big thing lately, thought to be the future of computing. The Wall Street Journal says: “The emerging technology promises better medicine, faster internet and more sustainable food production,” not to mention upending all existing cryptography. We’re quickly rushing into the AI world, but quantum may be the next gold rush.

I knew all that, but what I did not know was that included in the quantum revolution are quantum batteries.

We all know about batteries, whether they’re for our phones, our cars, our flashlights, our computers, or a host of other applications. Batteries have existed for well over two centuries, and basically all have relied on some form of chemical reaction. Quantum batteries, on the other hand, use what is called quantum superposition, moving electrons into higher energy states to store energy.
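
For readers who want a bit more precision: in the quantum battery literature, the useful stored energy is usually quantified by the “ergotropy” – the maximum work that can be pulled back out of the battery’s quantum state using lossless (unitary) operations. This is a standard textbook definition, offered here as background rather than anything specific to the papers below:

W(\rho) = \mathrm{Tr}(\rho H) - \min_U \mathrm{Tr}(U \rho U^\dagger H)

where \rho is the battery’s state, H is its Hamiltonian, and the minimum runs over all unitaries U. Decoherence and radiative losses steadily erode this quantity, which is why storage lifetime – not just charging speed – is the headline problem.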

The field is still in early days, and one of the major problems has been how long researchers could get the quantum batteries to store energy. They charged rapidly, but also lost their charge rapidly, in a matter of nanoseconds. Now researchers from RMIT University and CSIRO (Australia’s national science agency) have announced a new method that lasts 1,000 times longer – we’re talking microseconds now, folks. The results were published in PRX Energy.

That's what a quantum battery looks like? Credit: RMIT
I won’t try to get into the weeds to explain the approach, other than to regurgitate what was in the title, that they used molecular triplets in Dicke Quantum Batteries. The researchers’ “popular summary” explains:

Quantum batteries may offer scalable charging power density. Those based on the Dicke model enable a cavity-enhanced energy transfer process called superabsorption, but the lifetime is limited by fast radiative emission losses and super radiance. Here, the authors show a promising approach to extend the energy storage lifetime using molecular triplet states, which they test on five devices across a triplet-polariton resonance. One device shows a 1000-fold increase in storage time compared to previous demonstrations.
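
For anyone who does want a quick peek into those weeds: in the textbook Dicke model, N two-level systems couple collectively to a single cavity mode. A generic form of the Hamiltonian (not necessarily the exact one used in this paper) is:

H = \hbar\omega_c\, a^\dagger a + \hbar\omega_0\, J_z + \frac{2\hbar g}{\sqrt{N}}\,(a + a^\dagger)\, J_x

where a is the cavity mode and J_z, J_x are collective spin operators for the N units. That collective coupling is what produces “superabsorption” – the ensemble charges faster than N independent units would, with a commonly cited charging-power advantage that grows roughly as the square root of N. The catch, as the summary notes, is superradiance: the same collective coupling that fills the battery quickly also empties it quickly, which is the loss the molecular triplet states are meant to suppress.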

Study co-author and RMIT PhD candidate Daniel Tibben said: “While we’ve addressed a tiny ingredient of the overall piece, our device is already much better at storing energy than its predecessor.”

“While a working quantum battery could still be some time away, this experimental study has allowed us to design the next iteration of devices,” study co-author and RMIT chemical physicist Professor Daniel Gómez said. “It’s hoped one day quantum batteries could be used to improve the efficiency of solar cells and power small electronic devices.”

Coauthor Francesco Campaioli notes that, while the storage is still only microseconds: “It’s the equivalent of having a phone that charges in 30 minutes and runs out of battery after about 20 days if left idle. Not too shabby.” He adds: “There is still a lot of work to do to develop these ideas into a technology that could impact everyday life. What matters to me is that we have a clear understanding of the challenges that we need to overcome to make it happen.”

And that’s not all the recent news in quantum batteries.

A new paper from researchers at PSL Research University in Paris and the University of Pisa proposes “a deceptively simple quantum battery model that displays a genuine quantum advantage, saturating the quantum speed limit.”  

"Our model consists of two coupled harmonic oscillators: one acts as the 'charger,' and the other serves as the 'battery,'" explained Vittoria Stanzione and Gian Marcello Andolina, co-authors of the paper, to Phys.org. "The key ingredient enabling the quantum advantage is an anharmonic interaction between the two oscillators during the charging process. This anharmonic coupling allows the system to access non-classical, entangled states that effectively create a 'shortcut' in Hilbert space, enabling faster energy transfer than in classical dynamics.”

Got it?
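
If not, a little context may help. “Saturating the quantum speed limit” refers to standard bounds (textbook results, not findings of this paper) on how fast any quantum state can evolve into a distinguishable one. The Mandelstam-Tamm bound ties that minimum time to the spread in energy, and the Margolus-Levitin bound ties it to the mean energy above the ground state:

\tau \ge \frac{\pi\hbar}{2\,\Delta E} \qquad \text{and} \qquad \tau \ge \frac{\pi\hbar}{2\,\langle E \rangle}

Saturating the limit means the charging dynamics runs about as fast as physics allows for the energy invested; the entangled “shortcut” states the authors describe are what let their model reach that bound where classical dynamics cannot.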

They added: "To the best of our knowledge, this work provides the first rigorous certification of a genuine quantum advantage in a solvable model. Furthermore, the proposed setup can be realized with current experimental technologies."

It seems like a big deal that it outperforms “classical” approaches and is achievable with existing technology.

Finally, researchers from Hubei University, the Chinese Academy of Sciences, and Lanzhou University have proposed a “diamond-based” approach to quantum batteries, using the nitrogen-vacancy (NV) center in diamond. Who knew diamonds had a nitrogen vacancy?

Schematic illustration of the QB scheme. Credit: Jun-Hong An

The paper was published in Physical Review Letters, and deals with the issue of quantum batteries “self-discharging” (due to what is called decoherence). "The main advantage of our QB scheme in the NV center is that the unique hyperfine interaction between the electron and the 14N nucleus, which is absent in other platforms, permits us to coherently optimize this ratio," Jun-Hong An, co-senior author of the paper, told Phys.org. "This is the irreplaceable feature of our QB scheme in the NV center. This irreplaceability endows us with the ability to mitigate the self-discharging on one hand, and to maximize the extractable work on the other."

Again, both of those are big deals.

As a result, the researchers conclude: “our results pave the way for the practical realization of the QB.” Professor An believes: "A quantum-technology revolution is underway, which uses quantum resources to overcome various performance limitations of devices set by classical physics."

---------

Quantum computing seems like it is where AI was five years ago, still around the corner but turning that corner faster than we realized. And I feel like quantum batteries are where quantum computing was five or so years ago, starting to overcome the practical issues that had once seemed insurmountable.

They’re going to happen, sooner than we think.

I don’t think they’re going to replace the existing power grid, and maybe not even your cell phone battery, but in a world of the Internet of Things, nanobots, and other things that edge closer to the quantum level, they’re going to be important.

Monday, July 28, 2025

Have Some Water - While You Can

We live on a water world (despite its name being “Earth”). We, like all life on earth, are water creatures, basically just sacks of water. We drink it, in its various forms (plain, sparkling, carbonated, sweetened, flavored, even transformed by a mammal into milk). We use it to grow our crops, to flush our toilets, to water our lawns, to frack our oil, to name a few uses. Yet 97% of Earth’s water is salt water, which we can’t drink without expensive desalination efforts, and most of the 3% that is freshwater is locked up – in icebergs, glaciers, the ground and the atmosphere, etc. Our civilization survives on that sliver of freshwater that remains available to us.

Guess which one is our more likely future. Credit: Microsoft Designer

Unfortunately, we’re rapidly diminishing even that sliver. And that has even worse implications than you probably realize.

A new study, published in Science Advances, uses satellite data from NASA’s GRACE/GRACE-FO missions to map what’s been happening to the freshwater in the “terrestrial water storage,” or TWS, that we blithely use. Their critical finding: “the continents have undergone unprecedented TWS loss since 2002.”

Indeed: “Areas experiencing drying increased by twice the size of California annually, creating “mega-drying” regions across the Northern Hemisphere…75% of the population lives in 101 countries that have been losing freshwater.” The dry parts of the world are getting drier faster than the wet parts are getting wetter.

Credit: Chandanpurkar, et al.

“It is striking how much nonrenewable water we are losing,” said Hrishikesh A. Chandanpurkar, lead author of the study and a research scientist for Arizona State University. “Glaciers and deep groundwater are sort of ancient trust funds. Instead of using them only in times of need, such as a prolonged drought, we are taking them for granted. Also, we are not trying to replenish the groundwater systems during wet years and thus edging towards an imminent freshwater bankruptcy.”

As much as we worry about shrinking glaciers, the study found that 68% of the loss of TWS came from groundwater, and – this is the part you probably didn’t realize – this loss contributes more to rising sea levels than the melting of glaciers and ice caps.

This is not a blip. This is not a fluke. This is a long-term, accelerating trend. The paper concludes: “Combined, they [the findings] send perhaps the direst message on the impact of climate change to date. The continents are drying, freshwater availability is shrinking, and sea level rise is accelerating.”

Yikes.

“These findings send perhaps the most alarming message yet about the impact of climate change on our water resources,” said Jay Famiglietti, the study’s principal investigator and a professor with the ASU School of Sustainability. 

We’ve known for a long time that we were depleting our aquifers, and either ignored the problem or waved it off to future generations. The researchers have grim news: “In many places where groundwater is being depleted, it will not be replenished on human timescales.” Once they’re gone, we won’t see them replenished in our lifetimes, our children’s lifetimes, or our grandchildren’s lifetimes.

Professor Famiglietti is frank: “The consequences of continued groundwater overuse could undermine food and water security for billions of people around the world. This is an ‘all-hands-on-deck’ moment — we need immediate action on global water security.”

If all this still seems abstract to you, I’ll point out that much of Iran is facing severe water shortages, and may be forced to relocate its capital. Kabul is in similar straits. Mexico City almost ran out of water a year ago and remains in crisis. Water scarcity is a problem for as much as a third of the EU, such as in Spain and Greece. And the ongoing drought in America’s Southwest isn’t going away anytime soon.

ProPublica has a great story on the study and its implications, with some killer illustrations. It points out that the study suggests the middle band of Earth is becoming less habitable, and “…these findings all point to the likelihood of widespread famine, the migration of large numbers of people seeking a more stable environment and the carry-on impact of geopolitical disorder.”

As Aaron Salzberg, a former fellow at the Woodrow Wilson Center and the former director of the Water Institute at the University of North Carolina, who was not involved with the study, told ProPublica: “Water is being used as a strategic and political tool. We should expect to see that more often as the water supply crisis is exacerbated.”

That. Is. Going. To. Be. A. Problem!

We can’t see the loss of groundwater, but, increasingly, we can see the impacts of it. A study published in May used satellite data to show that all – that’s all – of the 28 largest U.S. cities are sinking as a result of land subsidence, mostly due to groundwater extraction. They’re sinking by 2 to 10 millimeters per year, and: “In every city studied, at least 20 percent of the urban area is sinking — and in 25 of 28 cities, at least 65 percent is sinking.”

Sinking quantified. Credit: Ohenhen, et al.

Leonard Ohenhen, the study’s lead author, notes: “Even slight downward shifts in land can significantly compromise the structural integrity of buildings, roads, bridges, and railways over time.” Principal investigator Associate Professor Manoochehr Shirzaei adds: "The latent nature of this risk means that infrastructure can be silently compromised over time with damage only becoming evident when it is severe or potentially catastrophic. This risk is often exacerbated in rapidly expanding urban centers."

If “2 to 10 millimeters per year” doesn’t scare you, you only need look at California’s Central Valley, which has been sinking about an inch per year over the last 20 years – and is now some 30 feet lower in places than a hundred years ago. That you’ll notice.

Professor Famiglietti and his coauthors retain some hope:

While efforts to slow climate change may be sputtering (72, 73), there is no reason why efforts to slow rates of continental drying should do the same. Key management decisions and new policies, especially toward regional and national groundwater sustainability, and international efforts, toward global groundwater sustainability, can help preserve this precious resource for generations to come. Simultaneously, such actions will slow rates of sea level rise.

As evidence that smart water management plans can have an impact, Los Angeles uses less water now than in 1990, despite having a half million more residents.

This problem isn’t something we can wave our hands at and call “fake news.” This isn’t a “theory” like critics try to claim climate change is. We can measure the loss of groundwater; we can measure land subsidence. Professor Famiglietti warns: “We can’t negotiate with physics. Water is life. When it’s gone, everything else unravels.”

Monday, July 21, 2025

Gen Z Should Give Healthcare a Stare

Last I knew, Gen Z showed its disdain for older generations with a dismissive “OK Boomer.” But that was a few years ago, and now, it appears, Gen Z doesn’t even bother with that; instead, there is what has become known as the “Gen Z stare.” You’ve probably seen it, and may have even experienced it. TikTok influencer Janaye defines it thusly: “The Gen Z stare is specifically when somebody does not respond or just doesn't have any reaction in a situation where a response is either required or just reasonable.”

You can't do a Gen Z stare better than Wednesday. Credit: Netflix

It’s been blowing up on social media and in the press over the last few days, so it apparently has tapped into the social zeitgeist. It’s most often associated with customer service interactions, either as a worker receiving an inane request or as a customer facing an undue burden.

You can already see why I link it to healthcare.

It’s off-putting because, as Michael Poulin, an associate psychology professor at the University at Buffalo, told Vox: “People interpret it as social rejection. There is nothing that, as social beings, humans hate more. There’s nothing that stings more than rejection.”

Many attribute the Gen Z stare to Gen Z’s lack of social experience caused by isolation during the pandemic, exacerbated by too much screen time generally. Jess Rauchberg, an assistant professor of communication technologies at Seton Hall University, would tend to agree, telling NBC News: “I think we are starting to really see the long-term effects of constant digital media use, right?”  

Similarly, Tara Well, a professor at Barnard College, told Vox: “It’s sort of almost as though they’re looking at me as though they’re watching a TV show… We don’t see them as dynamic people who are interacting with us, who are full of thoughts and emotions and living, breathing people. If you see people as just ideas or images, you look at them like you’re paging through an old magazine or scrolling on your phone.”

Millennial Jarrod Benson told The Washington Post: “It’s like they’re always watching a video, and they don’t feel like the need to respond. Small talk is painful. We know this. But we do it because it’s socially acceptable and almost socially required, right? But they won’t do it.” Zoomer (as those of Gen Z are known) Jordan MacIsaac speculated to The New York Times: “It almost feels like a resurgence of stranger danger. Like, people just don’t know how to make small talk or interact with people they don’t know.”

On the other hand, TikTok creator Dametrius “Jet” Latham claims: “I don’t think it’s a lack of social skills. I just think we don’t care,” which might be more to the point.

Credit: Dametrius "Jet" Latham/TikTok

ABC News cited some customer service examples that deserved a Gen Z stare: "I've been asked to make somebody's iced tea less cold. I've been asked to give them a cheeseburger without the cheese, but keep the pepper jack of it all." As Zoomer Efe Ahworegba put it: “The Gen Z stare is basically us saying the customer is not always right.”

Ms. Ahworegba doesn’t think the Gen Z stare reflects a lack of social skills, but rather: “They just didn’t want to communicate with someone who’s not using their own brain cells.” As some Zoomers say, it is “the look they give people who are being stupid while waiting for them to realize they are being stupid.”

Still, as one commenter on TikTok wrote: “I think it’s hilarious that Gen Z thinks they’re the first generation to ever deal with stupidity or difficult customers, and that’s how they justify the fact that they just disassociate and mindlessly stare into space whenever they are confronted with a difficult or confusing situation, instead of immediately engaging in the situation like every other generation has ever done before them lol.”

Or perhaps this is much ado about nothing. Professor Poulin noted: “To some degree, it’s a comforting myth that all of us who are adults — who’ve gotten beyond the teens and 20s — that we tell ourselves that we were surely better than that.” When it comes to displaying socially acceptable behavior, he says: “This isn’t the first generation to fail.”

---------

Interestingly, Gen Z is already skeptical of our traditional healthcare system, as well they might be. A new study from Edelman found:

  • 45% of adults age 18 to 34 said they've disregarded their health provider's guidance in favor of information from a friend or family member in the past year — a 13-point increase from the previous year.
  • 38% of young adults said they've ignored their provider in favor of advice from social media, a 12-point increase from the year before.

"Younger adults have truly created their own health ecosystem with how they're looking for information, who they trust, what they're doing with health information," said Courtney Gray Haupt, Global Health Co-Chair and US Health Chair at Edelman.

One can imagine the Gen Z stare a patient might give a doctor offering health advice.

It’s also impacting the Gen Z members who are going into medicine. Grace Akatsu, an MD/PhD student, told Medscape: “I think in the past, a job like being a physician has been viewed more of a calling — an all-consuming entity without much room for anything else. Gen Z sees it more as an important part of your life, but not your entire life.” They added: “It is important — in a respectful and conscientious way — to try to push for change where needed, even if it means pushing against the traditional hierarchies that can be baked into medicine.”

And, of course, expectations about technology are baked in. Lena Volpe, MD, a second-year resident in Ob/Gyn at Northwestern Medicine in Chicago, said: “The way that my coresidents and medical students think about applying technology to medicine…there’s an automatic assumption that tech will make it more thorough.”

Refreshingly, though, BuzzFeed reports that patients’ interactions with Gen Z clinicians are “strangely reassuring” – more informal and collaborative. Seems like the opposite of a Gen Z stare!

-----------

Healthcare is full of things that deserve a Gen Z stare, and not just from Zoomers. We all have our own stories of stupid things we’ve had to go through, whether as patients, clinicians, or administrators. We just keep tolerating them all. The least – the very least! – we should do is to give them a Gen Z stare.  

Monday, July 14, 2025

Ancient DNA Isn't Just History

I knew what DNA was. I knew what synthetic DNA was. I knew what mirror DNA was. I even knew what eDNA was. But I didn’t know about aDNA, or that the field of study for it is called genomic paleoepidemiology. A new study by one of the pioneers of the field illustrates its power.

Those cows are going to have some surprises for prehistoric humans. Credit: Microsoft Designer

The study was led by Eske Willerslev, who is both Professor of Ecology & Evolution in the Department of Zoology at Cambridge University and Director/Professor at the Centre of Excellence in GeoGenetics at the University of Copenhagen. He studies ancient DNA, aka “aDNA.” The new study traces 37,000 years of human disease history by examining DNA from 214 known human pathogens, recovered from the remains of some 1,300 prehistoric humans.

Our recent experience with COVID-19 and, currently, with bird flu, should have made everyone aware that living with large populations of animals (like livestock) creates opportunities for diseases to cross over from those animals to us, often with devastating effect. These are called zoonotic diseases, and they still kill millions of people each year.

When humans transitioned from hunter-gatherers to a more pastoral lifestyle, and then to farming, the pathogens had their chance.

Humans are thought to have started domesticating animals around 11,000 years ago. “This is the time when you’re in close proximity to animals, and you get these jumps,” Dr. Willerslev told Carl Zimmer of The New York Times. “That was the expectation.” So the researchers were surprised to find that the earliest evidence of zoonotic diseases didn’t appear until around 6,500 years ago, and didn’t become widespread until about 5,000 years ago.

Not surprisingly, they found evidence of the plague bacterium, Yersinia pestis, in a 5,500-year-old sample. They also found traces of malaria (Plasmodium vivax) from 4,200 years ago; leprosy (Mycobacterium leprae) from 1,400 years ago; hepatitis B virus from 9,800 years ago; and diphtheria (Corynebacterium diphtheriae) from 11,100 years ago.

Credit: Sikora, et al.

In all, the researchers identified 5,486 DNA sequences that came from bacteria, viruses and parasites. Not bad for DNA from tens of thousands of years ago.

The authors note:

Although zoonotic cases probably existed before 6,500 years ago, the risk and extent of zoonotic transmission probably increased with the widespread adoption of husbandry practices and pastoralism. Today, zoonoses account for more than 60% of newly emerging infectious diseases.

“It’s not a new idea, but they’ve actually shown it with the data,” says Edward Holmes, a virologist at the University of Sydney, Australia. “The scale of the work is really pretty breathtaking. It’s a technical tour de force.”

“We’ve long suspected that the transition to farming and animal husbandry opened the door to a new era of disease – now DNA shows us that it happened at least 6,500 years ago,” said Professor Willerslev. “These infections didn’t just cause illness – they may have contributed to population collapse, migration, and genetic adaptation.”

The researchers speculate that populations in the Steppe region were among the first to tame horses and domesticate livestock at scale, and it was their migration west that caused the appearance of the zoonotic diseases in the wider population. Moreover, it seems likely that the Steppe populations had acquired better immunity to them, unlike the existing populations they encountered. That would have led to massive population losses and made the Steppe migration much easier.

Think of what happened to the indigenous populations of the Americas or Australia when European settlers first came to their shores, only this time it was the then-Europeans who were the victims, dying off in huge numbers. Those of “European” background may need to think further east for their actual heritage.

“It has played a really big role in genetically creating the world we know of today,” Dr. Willerslev told Mr. Zimmer.

This isn’t just of academic interest. Zoonotic diseases are still very much with us. “If we understand what happened in the past, it can help us prepare for the future. Many of the newly emerging infectious diseases are predicted to originate from animals,” said Associate Professor Martin Sikora at the University of Copenhagen, and first author of the report.

Professor Willerslev added: “Mutations that were successful in the past are likely to reappear. This knowledge is important for future vaccines, as it allows us to test whether current vaccines provide sufficient coverage or whether new ones need to be developed due to mutations.”

The study has several limitations. The samples are all from Eurasia. “Africa would of course be super exciting, but we don’t have enough data,” Dr. Sikora told Mr. Zimmer. The researchers were only able to identify pathogens present in high doses in the bloodstream. “I’m sure there’s more in there,” says Professor Sikora. Last but not least, it only looked at DNA-based pathogens, not ones that use RNA, such as the viruses that cause influenza or polio.

Nonetheless, as Hendrik Poinar, an expert on ancient DNA at McMaster University in Canada who was not involved in the study, told Mr. Zimmer: “The paper is large and sweeping and overall pretty cool.”

Pretty. Cool. Indeed.

The paper concludes:

Our findings demonstrate how the nascent field of genomic paleoepidemiology can create a map of the spatial and temporal distribution of diverse human pathogens over millennia. This map will develop as more ancient specimens are investigated, as will our abilities to match their distribution with genetic, archaeological and environmental data. Our current map shows clear evidence that lifestyle changes in the Holocene led to an epidemiological transition, resulting in a greater burden of zoonotic infectious diseases. This transition profoundly affected human health and history throughout the millennia and continues to do so today.

As Dr. Poinar told Mr. Zimmer: “It’s a great start, but we all have miles to go before we sleep.”

-----------

 I’ve long been amazed at what archaeologists and paleontologists have been able to tell us about our past, based on a few fossils, bones, or artifacts. I’m even more impressed that we’re recovering ancient DNA and using it to tell us even more of the story about how we got here.

It should be sobering to us all that, as much as we worry about weapons and invasions, the biggest risk to a population remains infectious diseases, especially zoonotic ones. The “winner” is the one who happens to have the best immunity.

Monday, July 7, 2025

We're Gonna Need a Bigger Boat

My friends, we are like explorers of yore standing at the edge of a known continent, looking out at the vast ocean in hopes of finding new, unspoiled, better lands across it. True, we may have despoiled the continent behind us, but certainly things will be better in the new lands.

When it comes to getting to the 22nd century healthcare system, we're going to need a bigger boat

In the metaphor I’m thinking of, the known continent is our shambles of a healthcare system. For all the protestations about the U.S. having the best health care in the world, that’s manifestly untrue. We don’t live as long, we have more chronic diseases, we kill each other and ourselves at alarming rates, we pay way more, we have too many people who can’t afford care and/or can’t obtain care, we have too much care that is ineffective, inappropriate, or even harmful, and we spend much too much on administration.

We don’t trust the healthcare system, we don’t think its quality of care is good, we have an unfavorable opinion of it, we think it fails us. The vast majority of us think it should be fundamentally changed or completely rebuilt. That’s what we want to flee, and it’s no wonder why.

Across that metaphorical ocean, in the distance, over the horizon, lies the 22nd century healthcare system. It will, we hope, be like magic. It will be more equitable, more effective, more efficient, more proactive, less invasive, more affordable. We don’t know exactly what it will look like or how it will work, but we’ve seen what we have, and we know it can be better – much better. We just need to get there.

This leads me to the next part of the metaphor. I recently read a great quote from the late nature writer Barry Lopez, from his posthumous book of essays Embrace Fearlessly the Burning World. Mr. Lopez laments: “We are searching for the boats we never built.”

The boats aren’t coming to save us, to transport us to that idealized 22nd century healthcare system. Because we never built them. Because we still don’t have the courage to build them.

We’ve never built a system to ensure universal coverage. We rely on a hodgepodge of coverage mechanisms, each of which is struggling with its own problems and still leaving some 25 million people without insurance – and that’s before the 10-20 million who are predicted to lose coverage due to the “Big, Beautiful Bill” – plus the tens of millions who are “underinsured.”

We’ve never built a system that was remotely equitable, just as we never did for housing, education, or employment. Money matters, ethnicity matters, geography matters. Discrepancies in availability of care and in outcomes show up clearly for each of those, and more.

We’ve never built a system that prizes patients above all. We deferred to doctors and hospitals, not calling them out when they gave us substandard care or when they charged us too much. Now health care has gone from a “noble calling” to a jobs and wealth creator. A recent New York Times analysis found (among other things):

  • Health care is the nation’s largest employer;
  • In 1990, health care wasn’t the largest employer in any state; now it is in 38 states;
  • We spend more on health care than on groceries or housing.

Pick your favorite target: private equity firms buying up health care entities, for-profit companies extracting profits from our care (or nominal “non-profits” doing the same), the steady corporatization of health care. Throw in favorite boogeymen like health insurers, PBMs, or Big Pharma. One way or another, it’s about the money, not us.

The adage about Big Tech comes to mind: we’re not the customer, we’re the product (or, as I’ve written before, we’re simply the NPCs).

We’ve never built the systems to make administration easier. So many codes, so many rules, so many types of insurance, so many silos, so many administrators. By now you’ve no doubt seen the chart of the growth of administrators versus clinicians in our health care system, and are aware that around a quarter of our healthcare dollar goes to administration. It doesn’t have to be this way, it shouldn’t be this way, but administrative bloat is getting worse, not better.

We’ve never built the systems to properly track our health or risks to it. From wastewater monitoring to tracking of diseases/outbreaks to adverse impacts from prescription drugs and medical devices, we’re relying on haphazard methods that leave us with no effective warning systems. The various public health mechanisms we had in place were woefully underfunded prior to COVID, crashed (and were burned) during COVID, and now are gleefully being defunded.

Worst of all, we’ve never built a system to track what care actually works. Sure, there are gold standard controlled studies that are supposed to do that, but much care that is delivered is not based on such studies, the impacts of such studies take years to permeate actual practice patterns, and practitioners aren’t really monitored to ensure they are delivering the “right” care or in the “right” way. We submit to care, we pay for that care, without really knowing if it is the care we should be getting or from the person/institution who should be giving it to us. 

Shame on us, and the system that allows all this.

------------

Without building all those boats, we’re not getting to the 22nd century healthcare system that we want, and deserve.

Sure, there’s lots of exciting technology that will help make things look more like a 22nd century healthcare system. AI, robots, genetic editing, nanobots, smart cells, synthetic biology, and more – these are all exciting, and will all be useful in that 22nd century healthcare. But they won’t get us to the 22nd century healthcare system we should get. They’ll just take us to a slicker, more expensive version of the one we have.

You may have seen that a couple weeks ago was the 50th anniversary of the initial release of Jaws. One of its most iconic lines was Chief Brody’s reaction when he first glimpsed the size of the shark he and two companions were foolishly hunting: “You’re gonna need a bigger boat.”

When it comes to getting us to the 22nd century healthcare system we should want, we’re gonna need a bigger boat too – and we better start building it now.  

Monday, June 30, 2025

A New Future for DNA

As a DNA-based creature myself, I’m always fascinated by DNA’s remarkable capabilities. Not just all the ways that life has found to use it, but our ability to find new ways to take advantage of them. I’ve written about DNA as a storage medium, as a neural network, as a computer, in a robot, even as mirror DNA. So when I read about the Synthetic Human Genome (SynHG) project last week, I was thrilled.

Welcome to the Synthetic Human Genome Project. Credit: SynHG

The project was announced, and is being funded, by the Wellcome Trust, to the tune of £10 million over five years. Its goal is “to develop the foundational tools, technology and methods to enable researchers to one day synthesise genomes.”

The project’s website elaborates:

Through programmable synthesis of genetic material we will unlock a deeper understanding of life, leading to profound impacts on biotechnology, potentially accelerating the development of safe, targeted, cell-based therapies, and opening entire new fields of research in human health. Achieving reliable genome design and synthesis – i.e. engineering cells to have specific functions – will be a major milestone in modern biology.

The goal of the current project isn’t to build a full synthetic genome, which they believe may take decades, but “to provide proof of concept for large genome synthesis by creating a fully synthetic human chromosome.”

That’s a bigger deal than you might realize.

“Our DNA determines who we are and how our bodies work,” says Michael Dunn, Director of Discovery Research at Wellcome. “With recent technological advances, the SynHG project is at the forefront of one of the most exciting areas of scientific research.” 

The project is led by Professor Jason Chin from the Generative Biology Institute at Ellison Institute of Technology and the University of Oxford, who says: “The ability to synthesize large genomes, including genomes for human cells, may transform our understanding of genome biology and profoundly alter the horizons of biotechnology and medicine.”

He further told The Guardian: “The information gained from synthesising human genomes may be directly useful in generating treatments for almost any disease.”

Project lead Professor Jason Chin. Credit: Magdalen College, Oxford
Professor Patrick Yizhi Cai, Chair of Synthetic Genomics at the University of Manchester, boasted: “We are leveraging cutting-edge generative AI and advanced robotic assembly technologies to revolutionize synthetic mammalian chromosome engineering. Our innovative approach aims to develop transformative solutions for the pressing societal challenges of our time, creating a more sustainable and healthier future for all.”

Project member Dr Julian Sale, of the MRC Laboratory of Molecular Biology in Cambridge, told BBC News the research was the next giant leap in biology: "The sky is the limit. We are looking at therapies that will improve people's lives as they age, that will lead to healthier aging with less disease as they get older. We are looking to use this approach to generate disease-resistant cells we can use to repopulate damaged organs, for example in the liver and the heart, even the immune system.”

Consider me impressed.

Professor Matthew Hurles, director of the Wellcome Sanger Institute, explained to BBC News the advantage of synthesizing DNA: “Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA in DNA that already exists in living systems."

It’s mind-blowing to think about the potential benefits that could come of this work, but the potential risks are equally consequential. Designer babies, enhanced humans, hybrids with other animals – synthetic DNA might accommodate all those and more. The sky is the limit indeed.

The project leaders are aware that there are important ethical considerations in such work, and so are including a companion social science program, called Care-full Synthesis, that is being led by Professor Joy Zhang from the Centre for Global Science and Epistemic Justice at the University of Kent. It plans to undertake a “transdisciplinary and transcultural investigation into the socio-ethical, economic, and policy implications of synthesising human genomes,” placing particular emphasis on “fostering inclusivity within and across nation-states, while engaging emerging public–private partnerships and new interest groups.” 

“With Care-full Synthesis, through empirical studies across Europe, Asia-Pacific, Africa, and the Americas, we aim to establish a new paradigm for accountable scientific and innovative practices in the global age,” says Professor Zhang. “One that explores the full potential of synthesising technical possibilities and diverse socio-ethical perspectives with care.”

That may prove to be a harder task than synthesizing a human chromosome.

Working out the socio-ethical perspectives is going to be harder than this. Credit: Microsoft Designer

SynHG is not the only project looking at synthetic DNA; it is a technology whose time is coming. Does anyone think that researchers in China aren’t working on this? Does anyone think they’re equally looking at the ethical considerations? Or maybe the next breakthrough will come from some U.S. start-up that is gambling big on a use for synthetic DNA and expecting a unicorn-level return.

Professor Bill Earnshaw, a genetic scientist at Edinburgh University, warned BBC News: “The genie is out of the bottle. We could have a set of restrictions now, but if an organisation who has access to appropriate machinery decided to start synthesising anything, I don't think we could stop them."

But Wellcome’s Dr. Tom Collins, who greenlit the funding, told BBC News: “We asked ourselves what was the cost of inaction. This technology is going to be developed one day, so by doing it now we are at least trying to do it in as responsible a way as possible and to confront the ethical and moral questions in as upfront way as possible."

Kudos to Wellcome for building these considerations into the project. They’d be considered too woke in the U.S. And kudos for acknowledging the costs of inaction, which many policymakers in the U.S. and elsewhere fail to recognize.

------

We’ve made remarkable progress on DNA in my lifetime. When I was born, its structure had just been discovered. The Human Genome Project launched in 1990 and delivered the first sequence of the human genome by 2003. The CRISPR revolution – allowing gene editing – started in 2012, and we’re now doing personalized gene editing therapy. “Remarkable” is too mild a word.

But there’s still so much we don’t know. We don’t always know when/why genes turn on/off. We still have a very imperfect understanding of which diseases are genetic and which genes cause them, under what circumstances. And, for heaven’s sake, what is all that “junk DNA” doing? Is it just left over from evolution doing its long kludge towards survival, or does it carry some importance we haven’t learned yet?   

Those are the kinds of things SynHG might help us better understand, and I can’t wait to see what it finds out.