Monday, January 13, 2025

Maybe AI Doesn't Read Blueprints

Gosh, who knew that today would be an AI day, with at least three major announcements about “blueprints” for its development going forward? Of course, these days every day is an AI day; trying to take in all AI-related news can be overwhelming. But before some other AI news drowns them out, I wanted to at least outline today’s announcements.

The Biden Administration thinks it has an AI blueprint. Credit: AI.gov

The three I’m referring to are the Biden Administration’s Interim Final Rule on Artificial Intelligence Diffusion, OpenAI’s Economic Blueprint, and the UK’s AI-driven Plan for Change.  

The Biden Administration’s rules aim to preserve America’s lead in AI, stating: “it is essential that we do not offshore this critical technology and that the world’s AI runs on American rails.” The rule establishes to whom advanced chips can be sold and how they can be used in other countries, with no restrictions on 18 key allies and partners.

It also sets limits on model weights for AI models, seeking to constrain non-preferred entities’ ability to train advanced AI models.

“The U.S. leads the world in AI now, both AI development and AI chip design, and it’s critical that we keep it that way,” Commerce Secretary Gina Raimondo said in a briefing with reporters ahead of Monday’s announcement.

Not everyone is happy. The Information Technology & Innovation Foundation blasted the rule, claiming it would hamper America’s competitiveness. ITIF Vice President Daniel Castro warned: “By pressuring other nations to choose between the United States and China, the administration risks alienating key partners and inadvertently strengthening China’s position in the global AI ecosystem.”

Similarly, Nvidia, which makes most of those advanced AI chips, expressed its opposition in a statement from Ned Finkle, vice president of government affairs, claiming the rule “threatens to derail innovation and economic growth worldwide.”  He explicitly contrasts how the first Trump Administration (and, one assumes, the next Trump Administration) sought to foster “an environment where U.S. industry could compete and win on merit without compromising national security.”  

Not to be outdone, Ken Glueck, Oracle’s Executive Vice President, says the rule “will go down as one of the most destructive to ever hit the U.S. technology industry,” and “we are likely handing most of the global AI and GPU market to our Chinese competitors.”

It will be interesting to see what the Trump Administration does with the Rule.

Meanwhile, OpenAI’s economic blueprint argues that “America needs to act now to maximize the technology’s possibilities while minimizing its harms…to ensure that AI’s benefits are shared responsibly and equitably.” Its goals are to:

  • Continue the country’s global leadership in innovation while protecting national security
  • Make sure we get it right in AI access and benefits from the start
  • Maximize the economic opportunity of AI for communities across the country.

It sees “infrastructure as destiny,” with investment in AI infrastructure “an unmissable opportunity to catalyze a reindustrialization of the US.” It wants to ensure that “an estimated $175 billion sitting in global funds awaiting investment in AI projects” gets invested here rather than in China.

OpenAI does want “common-sense rules” that promote “free and fair competition” while allowing “developers and users to work with and direct our tools as they see fit” under those rules. And, of course, all this while “Preventing government use of AI tools to amass power and control their citizens, or to threaten or coerce other states.” It particularly wants to avoid a “patchwork of state-by-state regulations.”

The company is planning an event in Washington D.C. on January 30 with CEO Sam Altman “to preview the state of AI advancement and how it can drive economic growth.”  I’ll bet Mr. Altman is hoping he gets plenty of Trump Administration officials, although probably not Elon Musk.

Credit: OpenAI
Last but not least, UK Prime Minister Keir Starmer has endorsed an ambitious set of AI recommendations, wanting to turbocharge the economy by turning the UK into an AI superpower. Mr. Starmer vowed:

But the AI industry needs a government that is on their side, one that won’t sit back and let opportunities slip through its fingers. And in a world of fierce competition, we cannot stand by. We must move fast and take action to win the global race.
Our plan will make Britain the world leader. It will give the industry the foundation it needs and will turbocharge the Plan for Change. That means more jobs and investment in the UK, more money in people’s pockets, and transformed public services.

There are three key elements:

First, “laying the foundations for AI to flourish in the UK,” including AI Economic Growth Zones and  a new supercomputer.

Second, “boosting adoption across public and private sectors,” such as through a new digital government center that “will revolutionise how AI is used in the public sector to improve citizens’ lives and make government more efficient.”

Third, “keeping us ahead of the pack,” with a new team that “will use the heft of the state to make the UK the best place for business.”

It will do so while also charting its own course on regulation. "I know there are different approaches (to AI regulation) around the world but we are now in control of our regulatory regime so we will go our own way on this," the PM said. "We will test and understand AI before we regulate it to make sure that when we do it, it's proportionate and grounded."

Credit: Gov.UK
Chris Lehane, Chief Global Affairs Officer at OpenAI, praised the plan: “The government’s AI action plan - led by the Prime Minister and Secretary Peter Kyle - recognises where AI development is headed and sets the UK on the right path to benefit from its growth.”

All nice words, but lots left unsaid. As Gaia Marcus of the Ada Lovelace Institute pointed out: "Just as the government is investing heavily in realising the opportunities presented by AI, it must also invest in responding to AI’s negative impacts now and in the future."

-----------

These things are true: AI is going to play a major role in the world economy, and to be a superpower, a country will have to be an AI superpower. To be an AI superpower, a country has to have the best AI infrastructure, including chips and data centers. AI is capable of both positive and negative impacts, and some regulation is needed to mitigate the latter. Lastly, regulation is going to lag innovation – and AI will drive innovation at rates we haven’t seen before.

I envy the people working on AI innovation, but I don’t envy those trying to figure out how to best regulate it.

Tuesday, January 7, 2025

Program Me Some Cells, Please

Tempted though I might be to write about Nvidia’s new platform for “physical AI” – aka, robots – I figured plenty of others will do that. What’s another trillion in market cap for Nvidia, anyway?  On the other hand, I don’t see enough excitement about some recent research at Rice University on “smart cells.”

If you want to program cells, Xiaoyu Yang is your man. Credit: Jeff Fitlow/Rice University

The research, published in Science with the matter-of-fact title Engineering synthetic phosphorylation signaling networks in human cells (by contrast, Nvidia’s marketers named their foundation models for humanoid robots “GR00T Blueprint” – now, that’s catchy), was about how to program human cells to detect and respond to signals in the body. Between synthetic biology and robots, I’ll pick synthetic biology every day (unless the robots are nanobots, of course).

“Imagine tiny processors inside cells made of proteins that can ‘decide’ how to respond to specific signals like inflammation, tumor growth markers or blood sugar levels,” said Xiaoyu Yang, a graduate student in the Systems, Synthetic and Physical Biology Ph.D. program at Rice who is the lead author on the study. “This work brings us a whole lot closer to being able to build ‘smart cells’ that can detect signs of disease and immediately release customizable treatments in response.”   

Imagine, indeed.

It turns out that there is a natural process called phosphorylation, which cells use to respond to their environment. As the Rice press release explains: “Phosphorylation is involved in a wide range of cellular functions, including the conversion of extracellular signals into intracellular responses — e.g., moving, secreting a substance, reacting to a pathogen or expressing a gene.”

It goes on to elaborate:

Phosphorylation is a sequential process that unfolds as a series of interconnected cycles leading from cellular input (i.e. something the cell encounters or senses in its environment) to output (what the cell does in response). What the research team realized — and set out to prove — was that each cycle in a cascade can be treated as an elementary unit, and these units can be linked together in new ways to construct entirely novel pathways that link cellular inputs and outputs.

“This opens up the signaling circuit design space dramatically,” said Caleb Bashor, an assistant professor of bioengineering and biosciences and corresponding author on the study. “It turns out, phosphorylation cycles are not just interconnected but interconnectable — this is something that we were not sure could be done with this level of sophistication before. Our design strategy enabled us to engineer synthetic phosphorylation circuits that are not only highly tunable but that can also function in parallel with cells’ own processes without impacting their viability or growth rate.”
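The cascade-of-units idea is easy to caricature in code. Below is a purely illustrative toy simulation (hypothetical rates and equations of my own choosing, not anything from the Rice paper) in which each phosphorylation cycle is an elementary unit whose phosphorylated fraction drives the next cycle in the chain:

```python
# Toy phosphorylation cascade: each cycle is a unit with phosphorylated
# fraction p, and one cycle's output is the next cycle's input.
# Rates and structure are made up for illustration only.

def simulate_cascade(signal, n_cycles=3, k_on=5.0, k_off=1.0,
                     dt=0.01, steps=1000):
    """Each cycle obeys dp/dt = k_on * input * (1 - p) - k_off * p,
    integrated with simple Euler steps."""
    p = [0.0] * n_cycles
    for _ in range(steps):
        inp = signal
        for i in range(n_cycles):
            dp = k_on * inp * (1.0 - p[i]) - k_off * p[i]
            p[i] += dp * dt
            inp = p[i]          # this cycle's output feeds the next
    return p

# With the signal on, each cycle settles near its steady state;
# with the signal off, the whole cascade stays quiet.
on = simulate_cascade(signal=1.0)
off = simulate_cascade(signal=0.0)
print([round(x, 3) for x in on], [round(x, 3) for x in off])
```

The point of the toy matches the quote: because every cycle has the same input/output shape, cycles are freely chainable, and you can rewire which inputs reach which outputs without changing the units themselves.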

The “sense-and-respond” cellular circuit design occurs rapidly – seconds or minutes – which allows it to be used for processes that occur on similar timescales, unlike previous efforts. For example, the researchers tested it to detect and respond to inflammatory factors, which they believe could be used to control autoimmune flare-ups and reduce immunotherapy-associated toxicity.

Soon-to-be Dr. Yang added: “We didn’t necessarily expect that our synthetic signaling circuits, which are composed entirely of engineered protein parts, would perform with a similar speed and efficiency as natural signaling pathways found in human cells. Needless to say, we were pleasantly surprised to find that to be the case. It took a lot of effort and collaboration to pull it off.”

Professor Bashor concluded: “Our research proves that it is possible to build programmable circuits in human cells that respond to signals quickly and accurately, and it is the first report of a construction kit for engineering synthetic phosphorylation circuits.”

A “construction kit” for “programmable circuits” in human cells.  Tell me that’s not exciting stuff.

Caroline Ajo-Franklin, director of the Rice Synthetic Biology Institute, added: “If in the last 20 years synthetic biologists have learned how to manipulate the way bacteria gradually respond to environmental cues, the Bashor lab’s work vaults us forward to a new frontier — controlling mammalian cells’ immediate response to change.” 

“This is like embedding tiny processors in cells, made entirely of proteins, that can ‘decide’ how to respond to specific signals such as inflammation, tumor growth, or high blood sugar,” Dr. Yang explained to SynBioBeta. “Our work moves us significantly closer to constructing ‘smart cells’ that can detect disease indicators and instantly produce tailor-made treatments.”

You had me at “smart cells.”



I would be remiss if I didn’t mention a couple of other developments that offer to make this kind of advance even more powerful. Last month researchers at University of California San Diego announced a new software package they call SMART: Spatial Modeling Algorithms for Reactions and Transport. “SMART provides a significant advancement in modeling cellular processes,” said Emmet Francis, PhD, lead author of the study and a postdoctoral fellow at UC San Diego.

They believe it can realistically simulate cell-signaling networks; it “takes in high-level user specifications about cell signaling networks and then assembles and solves the associated mathematical systems.” This “could help accelerate research in fields across the life sciences, such as systems biology, pharmacology and biomedical engineering.”
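SMART itself assembles finite-element models over realistic 3-D cell geometries; as a vastly simplified caricature of the kind of math such tools solve, here is a one-dimensional reaction-diffusion toy (made-up constants, finite differences rather than finite elements) in which a signal enters at the cell membrane, diffuses inward, and decays:

```python
# Minimal 1-D reaction-diffusion sketch, illustrative only:
#   du/dt = D * d2u/dx2 - k * u
# with the signal clamped to 1.0 at the left boundary (the "membrane")
# and zero at the far boundary. All constants are hypothetical.

def diffuse(n=50, D=1.0, k=0.5, dx=0.1, dt=0.001, steps=2000):
    u = [0.0] * n
    for _ in range(steps):
        u[0] = 1.0                      # clamped source at the membrane
        new = u[:]
        for i in range(1, n - 1):
            lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
            new[i] = u[i] + dt * (D * lap - k * u[i])
        u = new
    return u

profile = diffuse()
# Concentration falls off with distance from the membrane.
print(round(profile[0], 2), round(profile[10], 3), round(profile[40], 4))
```

The real value of a package like SMART is that users specify the signaling network at a high level and the software assembles and solves the coupled equations over actual cell shapes; the toy above just shows the flavor of what gets solved.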

If you are not a fan of geometry, much less computational geometry, SMART is not for you, but if you are a biologist it opens up lots of possibilities. Take Blaise Manga Enuh, a Postdoctoral Research Associate in Microbial Genomics and Systems Biology at the University of Wisconsin-Madison. He writes in The Conversation about genome-scale metabolic models, or GEMs, which can be used to virtually carry out experiments that would otherwise require painstaking, time-consuming work in the lab.

“With GEMs,” Dr. Enuh says, “researchers can not only explore the complex network of metabolic pathways that allow living organisms to function, but also tweak, test and predict how microbes would behave in different environments, including on other planets.”

Moreover:

Synthetic biologists can use GEMs to design entirely new organisms or metabolic pathways from scratch. This field could advance biomanufacturing by enabling the creation of organisms that efficiently produce new materials, drugs or even food.
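At the core of a GEM is a stoichiometric matrix plus the steady-state constraint that fluxes must balance for every internal metabolite; real GEMs then optimize over thousands of reactions (flux balance analysis). A toy example, with a made-up three-reaction network rather than any real organism, shows the basic constraint:

```python
# Toy version of the mass-balance constraint behind GEMs: at steady
# state the stoichiometric matrix S times the flux vector v is zero
# for every internal metabolite. (Hypothetical three-reaction network.)

# Reactions (columns): uptake of A, conversion A -> B, B -> biomass
# Metabolites (rows): A, B
S = [
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by biomass rxn
]

def is_steady_state(S, v, tol=1e-9):
    """True if every metabolite's production equals its consumption."""
    return all(abs(sum(s * f for s, f in zip(row, v))) < tol for row in S)

print(is_steady_state(S, [10.0, 10.0, 10.0]))  # balanced fluxes
print(is_steady_state(S, [10.0,  5.0, 10.0]))  # B is over-consumed
```

A genome-scale model applies exactly this constraint to thousands of reactions at once, then asks a solver which balanced flux pattern maximizes growth or product yield; that is what lets researchers “tweak, test and predict” in silico.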

I have a feeling Dr. Yang, Dr. Francis, and Dr. Enuh would have a lot to talk about.

So with GEMs or SMART, you could model out what you want to happen at a cellular level, then use the Rice technique to program cells to accomplish that. That’s 22nd century medicine – and we’re lucky enough to be catching glimpses of it in 2025.


Monday, December 30, 2024

Mayday, Mayday

They’re crucial to the U.S. and the world economy, yet most people rarely think about them. The U.S. used to lead in their manufacturing, but now has fallen far behind, losing tens of thousands of well-paying blue collar jobs as a result. China has become a leader, while the U.S. has become heavily dependent on East Asia, particularly South Korea. Developing a more proactive federal industrial policy for rebuilding the U.S. capacity has bipartisan support, yet it is not clear if this can happen quickly enough – or at all.

Yeah, that's probably not in the U.S. Credit: Bing Image Creator

You’d be forgiven if you assumed I was referring to chips, but that’s one letter off. I am worried about U.S. chip production, but for today I want to talk about ships.  

Daniel Michaels writes in The Wall Street Journal: “No nation has ever successfully ranked as a world naval power without also being a global maritime power.” We used to be such a power, with the biggest navy, protecting the biggest merchant fleet. Those days are long gone. Mr. Michaels laments:

U.S. commercial ships today account for less than 1% of the world fleet. U.S. ports are racked by strikes and battles over the type of automation that has supercharged expansion of container terminals across the globe. The Navy struggles to find commercial vessels to support its far-flung operations.

In Noahpinion, Brian Potter of Construction Physics shares similar concerns:

Commercial shipbuilding in the U.S. is virtually nonexistent: in 2022, the U.S. built just five oceangoing commercial ships, compared to China’s 1,794 and South Korea’s 734. The U.S. Navy estimates that China’s shipbuilding capacity is 232 times our own. It costs roughly twice as much to build a ship in the U.S. as it does elsewhere.

Credit: Voronoi/Visual Capitalist
James Watson, a retired U.S. Coast Guard rear admiral, told Mr. Michaels: “Not thinking of the maritime industry as an important part of your economy, that’s kind of crazy.”

Yes, it is.

We spend massive amounts on our military budget – more than the next nine countries (China included) combined – yet China’s navy already has more ships and is planning to double that number by the end of the decade. U.S. Navy leaders believe our ships are more capable, but, at some point, quantity outweighs quality.

“It’s a major problem for us, especially if we wound up in a conflict or we wind up in a situation where China decides for whatever reason that they want to, you know, stop our economy and put brakes on it in a big way,” Senator Mark Kelly said in an interview. “They have the ability to do that.” 

And, of course, if and when the U.S. needs to boost its number of navy (or other) ships, we’ll be dependent on our South Korean friends to produce and maintain them. President-elect Trump is already talking to South Korean leaders about our reliance on their capabilities.

Last June, The Center for Strategic and International Studies (CSIS) warned:  

China’s massive shipbuilding industry would provide a strategic advantage in a war that stretches beyond a few weeks, allowing it to repair damaged vessels or construct replacements much faster than the United States, which continues to face a significant maintenance backlog and would probably be unable to quickly construct many new ships or to repair damaged fighting ships in a great power conflict.

“Part of it is we don't have the backbone of a healthy commercial shipbuilding base to rest our naval shipbuilding on top of,” National Security Advisor Jake Sullivan said earlier this month at the Aspen Security Forum in Washington. “And that's part of the fragility of what we're contending with and why this is going to be such a generational project to fix.”

Anyone believe we have a generation before China flexes its naval or maritime prowess, such as in the South China Sea (or the Panama Canal)?

Credit: Infomaritime.EU
When it came to chip manufacturing, Congress finally acted, passing the CHIPS and Science Act in 2022. It is starting to have an impact, although slower than originally hoped, and it is not clear that it will ever put us back in world leadership. We’re starting to take ship manufacturing more seriously as well. Earlier this month Senators Kelly (D-AZ) and Todd Young (R-IN) and Representative John Garamendi (D-CA) introduced the SHIPS for America Act.

Senator Kelly explained:

We’ve always been a maritime nation, but the truth is we’ve lost ground to China, who now dominates international shipping and can build merchant and military ships much more quickly than we can.
The SHIPS for America Act is the answer to this challenge. By supporting shipbuilding, shipping, and workforce development, it will strengthen supply chains, reduce our reliance on foreign vessels, put Americans to work in good-paying jobs, and support the Navy and Coast Guard’s shipbuilding needs.

Sal Mercogliano, associate professor of history at Campbell University, told USNI News: “This is the first major piece of maritime reform since the Merchant Marine Act of 1970. So you’re talking about 55 years since we’ve had anything like this.”

Of course, whether this bill gets a high priority in the next Congress, which will be obsessed with tax and immigration legislation, remains to be seen. But, as CSIS noted, “the clock is ticking.”

As with chips, rebuilding shipbuilding capabilities won’t be easy. Robert Kunkel, president of Alternative Marine Technologies, writes in MarineLink: “The problem is not the cost of labor. It is our inability to build infrastructure that supports ship manufacturing. And with that, the path forward needs to be a fresh start with greenfield locations and new technology in commercial shipyards surrounded by a manufacturing base that supports the effort.”

Mr. Kunkel hopes for some uniquely American approaches:

We are seeing interest from American technology and investment capital as we address questions from investors asking if we can move ship manufacturing to a “Space X” model. Is it possible to 3D print a vessel or provide new technology to redefine “ship manufacturing”? Can we move toward a full production line similar to the auto industry? Can this manufacturing process be operated by robotics to ease the reported labor shortages and train a new shipbuilding work force?

Mr. Michaels quotes Navy Secretary Carlos Del Toro, who likes to cite early 20th century naval strategist Alfred Thayer Mahan: “naval power begets maritime commercial power, and control of maritime commerce begets greater naval power.” We’ve forgotten part of that equation, and that is putting both sides at risk.

As Mr. Potter concludes his piece, when it comes to regaining our shipbuilding capabilities: “The picture is not pretty, and it should concern us all.”

Consider me concerned.

Monday, December 23, 2024

It's Quantum Time

In Fast Company, Adam Bluestein writes: “It’s an unscientific fact that nine out of every 10 conversations about tech in the past year have been about AI. But the 10th has been about quantum computing.”

This is one of those conversations.

You're going to be surprised by quantum computers. Credit: Bing Image Creator

Mr. Bluestein continues:

In a period of just over a year, the perennial technology of the future has become suddenly real—with major breakthroughs in computing hardware and software, significant public and private investments, and rising stock prices for companies in the burgeoning quantum ecosystem.

For example, you may have noticed that earlier this month Google announced Willow, its breakthrough quantum chip. Hartmut Neven, Founder and Lead of Google Quantum AI, claimed two major accomplishments for Willow. One was that it “performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.” 

The second was that it cracked a key problem with quantum computing, reducing errors “exponentially” as it scaled up using more qubits (the quantum version of bits). The error reduction was also important because Dr. Neven states: “it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system.” Google believes it has reached an “error correction threshold” that is critical to making quantum computing reliable.
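The “below threshold” idea has a simple classical analogue: with a repetition code and a per-bit error rate under one-half, majority voting suppresses the logical error rate faster and faster as you add redundancy. A toy simulation with classical bits (not qubits, and nothing like Willow’s surface code, but the same qualitative scaling story):

```python
# Classical analogue of the error-correction threshold: encode one
# logical bit as n copies, flip each copy independently with
# probability p, and decode by majority vote. Below p = 0.5, adding
# copies drives the logical error rate down.
import random

def logical_error_rate(p, n_copies, trials=20000, seed=0):
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n_copies))
        if flips > n_copies // 2:   # majority vote decodes wrongly
            errors += 1
    return errors / trials

for n in (1, 3, 5, 7):
    print(n, logical_error_rate(0.1, n))
```

Quantum error correction is far harder (errors must be detected without measuring the encoded state), but Google’s claim is the quantum version of this curve: each step up in code size cut the logical error rate rather than compounding it.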

Credit: Google Quantum AI
“When quantum computing was originally envisioned, many people — including many leaders in the field — felt that it would never be a practical thing,” Mikhail Lukin, a professor of physics at Harvard, told The New York Times. “What has happened over the last year shows that it is no longer science fiction.” Even Elon Musk responded to Google’s announcement with “Wow” on X.

Meanwhile, Steven Rosenbush wrote a Wall Street Journal article “The Age of Quantum Software Has Already Started.” For example, IBM already has some 250 paying customers for its quantum systems and services. “It’s actually not a matter of waiting for that to be built. In fact, it’s already happening,” Jerry M. Chow, an IBM fellow and director of quantum infrastructure, told him.

Mr. Rosenbush mentions several other quantum computing companies, such as Terra Quantum, which touts itself as “The leading global independent full stack quantum technology company,” IonQ, whose mission is “To build the world’s best quantum computers to solve the world’s most complex problems,” and PsiQuantum, which claims it is “building the world’s first useful quantum computer.”

I would be remiss if I didn’t note that Terra Quantum specifically mentions life sciences and healthcare as a target industry, pointing out: “Quantum technologies help us truly understand, simulate and optimize the quantum properties of nature. This, in turn, unlocks enormous potential in the areas of biochemistry, pharmacogenomics, medical imaging, and others.” You can bet other quantum companies are or will be doing the same.

PsiQuantum is interesting for at least three other reasons. As Elizabeth Gibney outlines in Nature, first, it is basing its structure on photons instead of atoms to make qubits – a “photonics approach.” Second, it has already raised $1 billion and values itself at $3 billion. “They have received one of the biggest venture-capital investments in the quantum community,” Doug Finke, of the business-analysis firm Global Quantum Intelligence, told her.

Third, and more to the point, by the end of 2027 it “aims to be operating a photonic quantum computer that can run commercially useful problems and is ‘fault-tolerant’.” It plans to have this operational at its Brisbane site in 2027 and at Chicago’s new Illinois Quantum and Microelectronics Park in 2028.

Illinois, it should be noted, is investing some $500 million in IQMP, and will give PsiQuantum $200 million more in incentives. It wants the park to be the Silicon Valley of quantum computing. “Our vision of Illinois as a global quantum capital comes further into focus at Illinois Quantum and Microelectronics Park, providing limitless opportunities for economic investment and innovation right here on the South Side,” said Governor JB Pritzker.   

Rendering of IQMP. Credit: State of Illinois
PsiQuantum Chief Business Officer Stratton Sclavos told Mr. Rosenbush: “We’re not building a science project.” He further explained: “I’m not trying to speed up your simulation that, you know, works perfectly well. What I’m trying to do is give you a simulation of a thing you can’t simulate today.”

Still, PsiQuantum has not published much research, so validating its hopes is difficult. “My impression is there’s a lot of skepticism about how much progress PsiQuantum has made,” Shimon Kolkowitz, a quantum physicist at the University of California, Berkeley, told Ms. Gibney.  He warned her that a bet on them would be “extremely high risk.”

Still, money is flowing into the field. Fast Company reports:

The number of quantum computing deals soared more than 700% from 2015 through 2023, according to PitchBook data, and total deal value grew tenfold to $1 billion. Governments around the world have made quantum computing a strategic national defense priority. As of February 2024, the U.S. government had invested $3 billion in quantum computing projects, plus an additional $1.2 billion from the National Quantum Computing Initiative. China, meanwhile, has reportedly invested some $15 billion in quantum computing efforts.

One reason interest is so high – and this may be the main thing you knew about quantum computing – is that once it happens, our traditional encryption methods become useless; a sufficiently powerful quantum computer could crack them with ease.

Cryptocurrencies, for example, would be very vulnerable. “What you’ve got here is a time bomb waiting to explode, if and when someone gets that ability to develop quantum-computer hacking and decides to use that to target cryptocurrencies,” said Arthur Herman, senior fellow at the Hudson Institute.

People are already working on how to make encryption safe from quantum computers, such as quantum tokens using quantum key distribution (QKD). IEEE Spectrum reports:

QKD is, in theory at least, an unbreakable method for sharing a cryptographic key between two parties that can then be used to encrypt and decrypt private messages. The technology is currently being tested by financial institutions, government entities, major technology firms and militaries.

“There is definitely a quantum apocalypse on the horizon at some point in the future, but that point is a sufficiently long time away that there is no need for panic,” Emin Gün Sirer, founder of the Avalanche cryptocurrency, told WSJ.

Umm, you might want to start panicking – or at least start planning.

As Dr. Lukin told NYT: “People no longer doubt it will be done. The question now is: When?” As AI has recently proven, I’m betting “when” will be much sooner than we’ll be ready for.

Sunday, December 15, 2024

Mirror, Mirror...Everywhere

One biology fact that, until last week, I never had to worry about is why all life on earth is not only DNA-based but also shares the same chirality. E.g., all the life we know about has DNA with a right-handed double helix, uses right-handed sugar molecules, but builds proteins with left-handed amino acids. That’s just how life is, right?

DNA's mirror image could be scary. Credit: Bing Image Creator

But it turns out that life’s chirality is not a law of nature, and scientists believe that “mirror life” is not only theoretically feasible but plausible within the next ten years. And, many of them believe, that is something we should be very worried about.

Last week, a group of scientists released a lengthy Technical Report on Mirror Bacteria: Feasibility and Risks, along with an accompanying commentary in Science: Confronting risks of mirror life. Long story short, this is a mirror into which we should look very cautiously – if at all.

The report explains the fundamentals:

In a mirror bacterium, all of the chiral molecules of existing bacteria—proteins, nucleic acids, and metabolites—are replaced by their mirror images. Mirror bacteria could not evolve from existing life, but their creation will become increasingly feasible as science advances.

Credit: Adamala, et al.
That’s the kind of progress science makes, for better and, sometimes, for worse. The problem, as the report also points out, is:

Interactions between organisms often depend on chirality, and so interactions between natural organisms and mirror bacteria would be profoundly different from those between natural organisms. Most importantly, immune defenses and predation typically rely on interactions between chiral molecules that could often fail to detect or kill mirror bacteria due to their reversed chirality. It therefore appears plausible, even likely, that sufficiently robust mirror bacteria could spread through the environment unchecked by natural biological controls and act as dangerous opportunistic pathogens in an unprecedentedly wide range of other multicellular organisms, including humans.

“The threat we’re talking about is unprecedented,” co-author Prof Vaughn Cooper, an evolutionary biologist at the University of Pittsburgh, told The Guardian. “The consequences could be globally disastrous,” another co-author, Jack W. Szostak, chemist at the University of Chicago, agreed in The New York Times.

OK, that does sound bad.

Michael Kay, MD, PhD, professor of biochemistry at the University of Utah and one of the contributors to the report & commentary, warns:

If these bacteria are able to grow at all—and there is evidence that they probably would be able to grow, at least to some extent, in our natural world—maybe, over time, they could evolve the ability to eat our food and convert it to mirror food. If that happened, that would release a brake on their growth, and then all these other controlling mechanisms, as far as we can tell, would not be effective against these mirror bacteria.

I was especially chilled by this statement from Professor Kay: “There is a real possibility that mirror bacteria would struggle to find enough food to eat in order to grow, but we are humble in the face of evolution.”

As we should be. Evolution tells us that life finds a way (or did you not see Jurassic Park?).

Credit: Adamala, et al.
The authors believe this is a time for caution. They urge:

However, in the absence of compelling evidence for reassurance, our view is that mirror bacteria and other mirror organisms should not be created…In light of our initial findings, we believe that it is important to begin a conversation on how the risks can be mitigated, and we call for collaboration among scientists, governments, funders, and other stakeholders to consider an appropriate path forward.

The people involved in the report and commentary are not alarmists. They are scientists who have been working in the field. “It’s inherently incredibly cool,” co-author Kate Adamala, a synthetic biologist at the University of Minnesota, told The New York Times. “If we made a mirror cell, we would have made a second tree of life.”

Cool indeed. But when they started talking about risks, they grew more concerned. “We’ve all done our best to shoot it down,” Professor Cooper admitted to The New York Times. “And we failed.”

Not everyone is so worried. Andrew Ellington, a molecular biologist at the University of Texas at Austin, told Scientific American:  “I’d argue a mirror-image bacteria would be at a gross competitive disadvantage and isn’t going to survive well.” He is dismissive of efforts to curb research: “This is like banning the transistor because you're worried about cybercrime 30 years down the road. I’m not particularly worried about a mostly unknown threat 30 years from now versus the good that can be done now.”

For example, one application researchers have been investigating for mirror life is prescription drugs. Professor Kay explains: “they have the potential to last for a much longer period of time and to open up a whole new class of therapeutics that would allow us to treat a variety of diseases that are currently challenging.” A moratorium on research could hamper progress on drugs for diseases such as H.I.V. or Alzheimer’s.

The authors conclude their commentary with a very reasoned plan: “To facilitate greater understanding of the risks associated with mirror life and further progress on governance, we plan to convene discussions on these topics in 2025. We are hopeful that scientists and society at large will take a responsible approach to managing a technology that might pose unprecedented risks.”

Yeah, well, that’s not going to happen. As with AI, nuclear weapons, or any other transformative technologies, we’re more likely to plunge ahead regardless of risks, afraid that other scientists/companies/countries will get a jump on us if we slow up.

I wish we were more thoughtful; I wish we were better at anticipating risks versus benefits. I’m proud of these scientists for their innovative work, and for being brave enough to advocate caution about it, but I’m not optimistic that their plea will be heeded. Someone is going to make mirror cells, and probably sooner than we expect.

Then, Professor Kay predicts: “Once a mirror cell is made, it's going to be incredibly difficult to try to put that genie back in the bottle.”

Monday, December 9, 2024

You, Me, and Our Microbiome

You may have heard about the microbiome, that collection of microorganisms that fill the world around, and in, us. You may have had some digestive tract issues after a round of antibiotics wreaked havoc with your gut microbiome. You may have read about the rafts of research that are making it clearer that our health is directly impacted by what is going on with our microbiome.  You may even take probiotics to try to encourage the health of your microbiome.

Our microbiome is all around us. Credit: Bing Image Creator

But you probably don’t realize how interconnected our microbiomes are.

Research published in Nature by Beghini et al. mapped microbiomes of almost 2,000 individuals in 18 scattered Honduras villages. “We found substantial evidence of microbiome sharing happening among people who are not family and who don’t live together, even after accounting for other factors like diet, water sources, and medications,” said co-lead author Francesco Beghini, a postdoctoral associate at the Yale Human Nature Lab. “In fact, microbiome sharing was the strongest predictor of people’s social relationships in the villages we studied, beyond characteristics like wealth, religion, or education.”

“Think of how different social niches form at a place like Yale,” said co-lead author Jackson Pullman. “You have friend groups centered on things like theater, or crew, or being physics majors. Our study indicates that the people composing these groups may be connected in ways we never previously thought, even through their microbiomes.”

“What’s so fascinating is that we’re so interconnected,” said Mr. Pullman. “Those connections go beyond the social level to the microbial level.”

Credit: Beghini et al.
Study senior author Nicholas Christakis, who directs the Human Nature Lab, explained that the research “reflects the ongoing pursuit of an idea we articulated in 2007, namely, that phenomena like obesity might spread not only by social contagion, but also by biological contagion, perhaps via the ordinary bacteria that inhabit human guts.” Other conditions, such as hypertension or depression, may also be spread by social transmission of the microbiome.

Professor Christakis thinks the findings are of broad importance, telling Science Alert: "We believe our findings are of generic relevance, not bound to the specific location we did this work, shedding light on how human social interactions shape the nature and impact of the microbes in our bodies." But, he added: "The sharing of microbes per se is neither good nor bad, but the sharing of particular microbes in particular circumstances can indeed be good or bad.”

This research reminded me of 2015 research by Meadow et al. that suggested our microbiome doesn’t just exist in our gut, in other parts of our body, and on our skin, but that, in fact, we’re surrounded by a “personal microbial cloud.” Remember the Peanuts character Pigpen, who walked around in his personal dirt cloud? Well, that’s each of us, only instead of dirt we’re surrounded by our microbial cloud – and those clouds are easily discernible from each other.

We're all like that, but with a microbiome cloud. Credit: Charles M. Schulz 


Dr. Meadow told BBC at the time: "We expected that we would be able to detect the human microbiome in the air around a person, but we were surprised to find that we could identify most of the occupants just by sampling their microbial cloud."

Those researchers predicted:

While indoors, we are constantly interacting with microbes other people have left behind on the chairs in which we sit, in dust we perturb, and on every surface we touch. These human-microbial interactions are in addition to the microbes our pets leave in our houses, those that blow off of tree leaves and soils, those in the food we eat and the water we drink. It is becoming increasingly clear that we have evolved with these complex microbial interactions, and that we may depend on them for our well-being (Rook, 2013). It is now apparent, given the results presented here, that the microbes we encounter include those actively emitted by other humans, including our families, coworkers, and perfect strangers.

Dr. Beghini and colleagues would agree, and further suggest that it’s not only indoors where we’re sharing microbes.

I would be remiss if I didn’t point out new research which found that our brains, far from being sterile, are host to a diverse microbiome and that impacts to it may lead to Alzheimer’s and other forms of dementia.

Could we catch Alzheimer’s from someone else’s personal microbiome cloud?  It’s possible. Could we prevent or even cure it by careful curation of the brain (or gut) microbiome? Again, possible.

The truth is that, despite decades of understanding that we have a microbiome, we still have a very limited understanding of what a healthy microbiome is, what causes it to not be healthy, what problems arise for us when it isn’t healthy, or what we can do to bring it (and us) to more optimal health. We’re still struggling to understand where besides our gut it plays a crucial role.

We now know that we can “share” parts of our microbiome with those around us, but not quite what the mechanisms for that are – e.g., touch, sharing objects, or having our personal clouds intersect.

It feels like we are where scientists were two hundred years ago, in the early stages of the germ theory of disease. They knew germs impacted health, they could even connect some specific germs with specific diseases, they even had rudimentary interventions based on it, but much remained to be discovered. That led to vaccines, antibiotics, and other pharmaceuticals, all of which gave us “modern medicine,” but failed to anticipate the importance of the microbiome on our health.

Similarly, we’re justifiably proud of the progress we’ve made in terms of understanding our genetic structure and its impacts on our health, but fall far short of recognizing the vastly larger genetic footprint of the microbiome with which we co-exist.

A few years ago I called for a “quantum theory of health” – not literally, but one incorporating and surpassing “modern medicine” in the way that quantum physics upended classical physics. That kind of revolution would recognize that there is no health for us without our microbiome, and that “our microbiome” includes some portion of the microbiomes of those around us. We talk about “personalized medicine,” but a quantum breakthrough for health would be treating each person as a symbiosis with his or her unique microbiome.

We won’t get to 22nd century medicine until we can assess the microbiome in which we exist and offer interventions to optimize it. I just hope we don’t have to wait until the 22nd century to achieve that.

Monday, December 2, 2024

You Can't Spell Fair Pay Without AI

Everything’s about AI these days. Everything is going to be about AI for a while. Everyone’s talking about it, and most of them know more about it than I do. But there is one thing about AI that I don’t think is getting enough attention. I’m old enough that the mantra “follow the money” resonates, and, when it comes to AI, I don’t like where I think the money is ending up.

Will we use AI to help workers, or to eliminate them? Credit: Bing Image Creator

I’ll talk about this both at a macro level and also specifically for healthcare.

On the macro side, one trend that I have become increasingly radicalized about over the past few years is income/wealth inequality.  I wrote a couple weeks ago about how the economy is not working for many workers: executive-to-worker compensation ratios have skyrocketed over the past few decades, resulting in wage stagnation for many workers; income and wealth inequality are at levels that make the Gilded Age look positively progressive; intergenerational mobility in the United States is moribund.

That’s not the American Dream many of us grew up believing in.

We’ve got a winner-take-all economy, and it’s leaving behind more and more people. If you are a tech CEO, a hedge fund manager, or a highly skilled knowledge worker, things are looking pretty good. If you don’t have a college degree, or even if you have a college degree but with the wrong major or have the wrong skills, not so much.  

All that was happening before AI, and the question for us is whether AI will exacerbate those trends, or ameliorate them. If you are in doubt about the answer to that question, follow the money. Who is funding AI research, and what might they be expecting in return?

It seems like every day I read about how AI is impacting white collar jobs. It can help traders! It can help lawyers! It can help coders! It can help doctors! For many white collar workers, AI may be a valuable tool that will enhance their productivity and make their jobs easier – in the short term. In the long term, of course, AI may simply come for their jobs, as it is starting to do for blue collar workers.

Automation has already cost more blue collar jobs than outsourcing, and that was before anything we’d now consider AI. With AI, that trend is going to happen on steroids; jobs will disappear in droves. That’s great if you are an executive looking to cut costs, but terrible if you are one of those costs.

So, AI is giving the upper 10% tools to make them even more valuable, and will help the upper 1% further boost their wealth. Well, you might say, that’s just capitalism. Technology goes to the winners.

That's how rich people view AI. Credit: Bing Image Creator


We need to step back and ask ourselves: is that really how we want to use AI?

Here’s what I’d hope: I want AI to be first applied to making blue collar workers more valuable (and I’m using “blue collar” broadly). Not to eliminate their jobs, but to enhance their jobs. To make their jobs better, to make their lives less precarious, to take some of the money that would otherwise flow to executives and owners and put it in workers’ pockets. I think the Wall Street guys, the lawyers, the doctors, and so on can wait a while longer for AI to help them.

Exactly how AI could do this, I don’t know, but AI, and AI researchers, are much smarter than I am. Let’s have them put their minds to it. Enough with having AI pass the bar exam or medical licensing tests; let’s see how it can help Amazon or Walmart workers.

Then there’s healthcare. Personally, I have long believed that we’re going to have AI doctors (although “doctor” may be too limiting a concept). Not assistants, not tools, not human-directed, but an entity that you’ll be comfortable getting advice, diagnosis, and even procedures from. If things play out as I think they might, you might even prefer them to human doctors.

But most people – especially most doctors – think that they’ll “just” be great tools. They’ll take some of the many administrative burdens away from physicians (e.g., taking notes or dealing with insurance companies), they’ll help doctors keep current with research findings, they’ll propose more appropriate diagnoses, they’ll offer a more precise hand in procedures. What’s not to like?

I’m wondering how that help will get billed.

Doctors see AI assisting. Credit: Bing Image Creator

I can already see new CPT codes for AI-assisted visits. Hey, doctors will say, we have this AI expense that needs to get paid for, and, after all, isn’t it worth more if the diagnosis is more accurate or the treatment more effective? In healthcare, new technology always raises costs; why should AI be any different?

Well, it should be.

When we pay physicians, we’re essentially paying for all those years of training, all those years of experience, all of which led to their expertise. We’re also paying for the time they spend with us, figuring out what is wrong with us and how to fix it. But the AI will be supplying much of that expertise, and making the figuring out part much faster. I.e., it should be cheaper.

I’d argue that AI-assisted CPT codes should be priced lower than non-AI ones (which, of course, might make physicians less inclined to use them). And when, not if, we get to the point of fully AI visits, those should be much, much cheaper.

Of course, one assignment I would offer AI is to figure out better ways to pay than CPT codes, DRGs, ICD-9 codes, and all the other convoluted ways we have for people to get paid in our existing healthcare system. Humans got us into these complicated, ridiculously expensive payment systems; it’d be fitting if AI could get us out of them and into something better.

If we allow AI to just get added on to our healthcare reimbursement structures, instead of radically rethinking them, we’ll be missing a once-in-a-lifetime opportunity. AI advice (and treatment) should be ubiquitous, easy to use, and cheap.

So to all you AI researchers out there: do you want your work to help make the rich (and maybe you) richer, or do you want it to benefit everyone?