Monday, September 30, 2024

Someone (Else) Should Regulate AI

There’s some good news/bad news about AI regulation. The good news is that this past weekend California Governor Gavin Newsom vetoed the controversial S.B. 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bad news is that he vetoed S.B. 1047.

Or maybe it’s the other way around.

Regulating AI is tricky. Credit: NCSL

Honestly, I’m not sure how I should feel about the veto. Smarter, more knowledgeable people than me had lined up on both sides. No legislation is ever perfect, of course, and it’s never possible to fully anticipate the consequences of most new laws, but a variety of polls indicate that most Americans support some regulation of AI.

“American voters are saying loud and clear that they don’t want to see AI fall into the wrong hands and expect tech companies to be responsible for what their products create,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Voters are concerned about AI advancement—but not about the U.S. falling behind China; they are concerned about how powerful it can become, how quickly it can do so and how many people have access to it.”

Credit: AIPI

S.B. 1047 would have, among other things, required safety testing of large AI models before their public release, given the state the right to sue AI companies for damages caused by their AI, and mandated a “kill switch” in case of catastrophic outcomes. Critics claimed it was too vague, only applied to large models, and, of course, would stifle innovation.

In his statement explaining his veto, Governor Newsom pointed out the unequal treatment of the largest models and “smaller, specialized” models, while stressing that action is needed and that California should lead the way. He pointed out that California has already taken some action on AI, such as for deepfakes, and punted the issue back to the legislature, while promising to work with AI experts on improved legislation/regulation.

The bill’s author, Senator Scott Wiener, expressed his disappointment: “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Moreover, he added: “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”

Indeed, as on most tech issues, Congress has been largely missing in action. “States and local governments are trying to step in and address the obvious harms of A.I. technology, and it’s sad the federal government is stumped in regulating it,” Patrick Hall, an assistant professor of information systems at George Washington University, told The New York Times. “The American public has become a giant experimental population for the largest and richest companies in the world.”

I don’t know why we’d expect any more from Congress; it’s never gotten its hands around the harms caused by Facebook, Twitter, or Instagram, and the only reason it took any action against TikTok was because of its Chinese parent company. It may take Chinese AI threatening Americans for Congress to act.

As was true with privacy, the European Union was quicker to take action, agreeing on regulation – the A.I. Act – last year, after debating it for some three years. That being said, the Act won’t be in effect until August 2025, and the details are still being drafted. Meanwhile, big tech companies – mostly American – are working to weaken it.

So it goes.

Summary of EU AI Act. Credit: Analytics Creator

In the absence of new legislation, not all is lost. For example, Owen J. Daniels and Jack Corrigan, writing in FedScoop, outline three approaches regulators should be taking:

First, agencies must begin to understand the landscape of AI risks and harm in their regulatory jurisdictions. Collecting data on AI incidents — where AI has unintentionally or maliciously harmed individuals, property, critical infrastructure, or other entities — would be a good starting point.

Second, agencies must prepare their workforces to capitalize on AI and recognize its strengths and weaknesses. Developing AI literacy among senior leaders and staff can help improve understanding and more measured assessments of where AI can appropriately serve as a useful tool.

Third and finally, agencies must develop smart, agile approaches to public-private cooperation. Private companies are valuable sources of knowledge and expertise in AI, and can help agencies understand the latest, cutting-edge advancements. Corporate expertise may help regulators overcome knowledge deficiencies in the short term and develop regulations that allow the private sector to innovate quickly within safe bounds.

Similarly, Matt Keating and Malcolm Harkins, writing in CyberScoop, warn: “Most existing tech stacks are not equipped for AI security, nor do current compliance programs sufficiently address AI models or procurement processes. In short, traditional cybersecurity practices will need to be revisited and refreshed.” They urge that AI developers build with security best practices in mind, and that organizations using AI “should adopt and utilize a collection of controls, ranging from AI risk and vulnerability assessments to red-teaming AI models, to help identify, characterize, and measure risk.” 

In the absence of state or federal legislation, we can’t just throw our hands up and do nothing. AI is evolving much too fast.

-----------

There are some things that I’d hope we can agree on. For example, our images, voices, and other personal characteristics shouldn’t be allowed to be used or altered by AI without our consent. We should know what information is original and what is AI-generated or AI-altered. AI shouldn’t be used to ferret out even more of our personal information. We should be careful about to whom we sell or license it, and we should be hardening all of our technology against the AI-driven cyberattacks that will, inevitably, come. We need to determine who is responsible, and how, for which harms.

And we need to have a serious discussion about who benefits from AI. If AI is used to make a handful of rich people even richer, while costing millions of people jobs, that is a societal problem that we cannot just ignore – and must not allow.

Regulating a new technology, especially a world-changing one like AI, is tricky. Do it too soon or too harshly, and it can deter innovation, especially while other jurisdictions impose no such restrictions. Do it too late or too lightly, and, well, you get social media.

There’s something important we all can do. When voting this fall, and in every other election, we should be asking ourselves: is this candidate someone who understands the potentials and perils of AI and is prepared to get us ready, or is it someone who will just try to ignore them?

Monday, September 23, 2024

Red Alert About Red Buttons

In a week where, say, the iconic brand Tupperware declared bankruptcy and University of Michigan researchers unveiled a squid-inspired screen that doesn’t use electronics, the most startling stories have been about, of all things, pagers and walkie-talkies.

Pushing that red button probably isn't going to be good. Credit: Bing Image Creator

Now, most of us don’t think much about either pagers or walkie-talkies these days, and when we do, we definitely don’t think about them exploding. But that’s what happened in Lebanon this week, to devices carried by members of Hezbollah. Scores of people were killed and thousands injured, many of them innocent bystanders. The suspicion, not officially confirmed, is that Israel engineered the explosions.

I don’t want to get into a discussion about the Middle East quagmire, and I condemn the killing of innocent civilians on either side, but what I can’t get my mind around is the tradecraft of the whole thing. This was not a casual weekend cyberattack by some guys sitting in their basements; this was a years-in-the-making, deeply embedded, carefully planned move.

A former Israeli intelligence official told WaPo that, first, intelligence agencies had to determine “what Hezbollah needs, what are its gaps, which shell companies it works with, where they are, who are the contacts,” then “you need to create an infrastructure of companies, in which one sells to another who sells to another.” It’s not clear, for example, whether Israel somehow planted the devices during the manufacturing process or during shipping, or, indeed, whether its shell companies actually were the manufacturer or the shipping company.

Either way, this is some James Bond kind of shit.

Exploded pager. Credit: AFP

The Washington Post reports that this is what Israeli officials call a “red-button” capability, “meaning a potentially devastating penetration of an adversary that can remain dormant for months if not years before being activated.” One has to wonder what other red buttons are out there.

Many have attributed the attacks to Israel’s Unit 8200, which is roughly equivalent to the NSA.  An article in Reuters described the unit as “famous for a work culture that emphasizes out-of-the-box thinking to tackle issues previously not encountered or imagined.”  Making pagers explode upon command certainly falls in that category.

If you’re thinking, well, I don’t carry either a pager or a walkie-talkie, and, in any event, I’m not a member of Hezbollah, don’t be so quick to think you are off the hook. If you use a device that is connected to the internet – be it a phone, a TV, a car, even a toaster – you might want to be wondering if it comes with a red button. And who might be in control of that button.

Just today, for example, the Biden Administration proposed a ban on Chinese software used in cars. “Cars today have cameras, microphones, GPS tracking and other technologies connected to the internet. It doesn’t take much imagination to understand how a foreign adversary with access to this information could pose a serious risk to both our national security and the privacy of U.S. citizens,” said Commerce Secretary Gina Raimondo. “In an extreme situation, foreign adversaries could shut down or take control of all their vehicles operating in the United States all at the same time.”

“The precedent is significant, and I think it just reflects the complexities of a world where a lot of connected devices can be weaponized,” Brad Setser, a senior fellow at the Council on Foreign Relations, told The New York Times. In a Wall Street Journal op-ed, Mike Gallagher, head of defense for Palantir Technologies, wrote: “Anyone with control over a portion of the technology stack such as semiconductors, cellular modules, or hardware devices, can use it to snoop, incapacitate or kill.”

Similarly, Bruce Schneier, a security technologist, warned: “Our international supply chains for computerized equipment leave us vulnerable. And we have no good means to defend ourselves…The targets won’t be just terrorists. Our computers are vulnerable, and increasingly so are our cars, our refrigerators, our home thermostats and many other useful things in our orbits. Targets are everywhere.”

If all this seems far-fetched, last week the FBI, NSA, and the Cyber National Mission Force (CNMF) issued a Joint Cybersecurity Advisory detailing how the FBI had just taken control of a botnet of 260,000 devices. “The Justice Department is zeroing in on the Chinese government backed hacking groups that target the devices of innocent Americans and pose a serious threat to our national security,” said Attorney General Merrick B. Garland. The hacking group is called Flax Typhoon, working for a company called Integrity Technology Group, which is believed to be controlled by the Chinese government.

Ars Technica described the network as a “sophisticated, multi-tier structure that allows the botnet to operate at a massive scale.” It is the second such botnet taken down this year, and one has to wonder how many others remain active. Neither of these botnets was believed to be preparing anything to explode; both were more focused on surveillance. But their malware could certainly cause economic or physical damage.

Unit 8200, meet Flax Typhoon.

Sophisticated? Yeah. Credit: Black Lotus Labs

Earlier this year Microsoft said Flax Typhoon had infiltrated dozens of organizations in Taiwan, targeting “government agencies and education, critical manufacturing, and information technology organizations in Taiwan.” Red buttons abound.

--------------

Ian Bogost, a contributing writer for The Atlantic, tried to be reassuring, saying that your smartphone “almost surely” wasn’t going to just explode one day. “In theory,” Professor Bogost writes, “someone could interfere with such a device, either during manufacture or afterward. But they would have to go to great effort to do so, especially at large scale. Of course, this same risk applies not just to gadgets but to any manufactured good.”

The trouble is, there are such people willing to go to such great effort, at large scale.

We live in a connected world, and it is growing evermore connected. That has been, for the most part, a blessing, but we need to recognize that it can also be a curse, in a very real, very physical way.

If you thought pagers exploding was scary, wait until self-driving cars start crashing on purpose. Wait until your TVs or laptops start exploding. Or wait until the nanobots inside you that you thought were helping you suddenly start wreaking havoc instead.

If you think the current red button capabilities are scary, wait until they are created – and controlled – by AI.

Monday, September 16, 2024

Oh, Give Me a Home...Please!

It’s way too expensive. There’s often not enough of it where/when needed. Too much of it is of substandard quality. It remains rooted in outdated standards and practices. It is hyper-local. Private equity firms have taken a big interest, driving up prices. Most significantly, its presence or absence has a huge impact on people’s quality of life.

I must be talking about health care, right? No -- housing.

3D printed homes in Austin. Credit: Icon/Twitter

America is in the midst of a housing crisis. Home prices have surged 54% since 2019, and 5.8% in the past year. The National Association of Realtors reports that the median price for an existing single family home is $422,000. A Washington Post analysis indicates that rents have gone up by 19% since 2019. Although increases have cooled lately, Harvard’s Joint Center for Housing Studies says that half of renters spend more than 30% of income on rent, and a quarter spend more than 50%.

Credit: Washington Post
Meanwhile, we’re not building nearly enough homes. Zillow says we’re 4.5 million homes short, while other estimates put the number as high as 7 million. And when builders do build new houses, they’re not focusing on so-called starter homes. Between increases in land and materials, and more prescriptive local regulations, the economics don’t work. “You’ve basically regulated me out of anything remotely on the affordable side,” Justin Wood, the owner of Fish Construction NW, told The New York Times.

New research from the University of Kansas takes a contrarian view: most metropolitan areas have plenty of housing; it’s just that not enough of it is affordable to low income households. “Our nation’s affordability problems result more from low incomes confronting high housing prices rather than from housing shortages,” co-author Kirk McClure said. “This condition suggests that we cannot build our way to housing affordability.”

Whichever side is right, keep in mind that 60% of Gen Z worry they might never be able to afford a home, and 52% of Gen Z renters have struggled to pay their rent. Some 6.7 million households live in substandard homes, “with multiple structural deficiencies or lacking basic features such as electricity, plumbing, or heat.” And, of course, the U.S. has an estimated 653,000 homeless people at any given time.

Improving the housing situation is something that both Presidential candidates agree on, although their solutions differ. Former President Trump believes illegal immigrants are driving up housing costs, so stopping the influx and perhaps deporting millions of them will cause prices to go down. He would also “eliminate costly regulations, and free up appropriate portions of federal land for housing,” according to a spokesperson.

Vice President Harris, on the other hand, wants to build 3 million new units, give first time buyers $25,000, give more tax credits, and “expand rental assistance for Americans including for veterans, boost housing supply for those without homes, enforce fair housing laws, and make sure corporate landlords can’t use taxpayer dollars to unfairly rip off renters.”

They’re talking more about housing than health care.

A 2023 Pew survey found that Americans are broadly supportive of policies to increase housing supply, such as allowing apartments to be built in more areas or making permit decisions faster. They are most keen on such changes in what are now largely commercial areas, rather than in the residential areas they might live in.

That’s NIMBY: Not in My Back Yard. People fear that allowing lower cost homes or multi-family units in their neighborhood might decrease their own home’s value.  NIMBY, of course, is much broader than just housing. We want more manufacturing, but not near us. We need power plants, water filtration centers, and solid waste landfills, but somewhere else. We need places to raise all those cows, pigs, and chickens we eat, not to mention the plants that process them, but, good heavens, the smell! The mess! And, please, please, don’t make us live near poor people.

There is now a countervailing movement, at least for housing: YIMBY. “I could not be more thrilled that every top Democrat in America is becoming a Yimby!” Laura Foote, the executive director of the national Yimby Action group, said on a recent Harris fundraising call. “We have officially made zoning and permitting reform cool! I just want everyone to take that in.”

“What we’re seeing is a generational shift,” Sen. Brian Schatz (D., Hawaii) also said on the call. “If we want to actually solve the problem of the housing shortage, the simplest way is to make it permissible to build.” 

The problem is that federal officials can talk all they want about zoning and permitting reform and easing the permitting process, but that zoning and permitting happens at the local level. As Jerusalem Demsas explains in The Atlantic, California started trying to make it easier to build accessory dwelling units (ADUs) – think mother-in-law suites – back in 1982, but only recently, and after additional legislation, has there been much progress. “Cities are openly flouting state law to prohibit home building,” says Matthew Lewis, communications director at California YIMBY.

Edward L. Glaeser, a Harvard economics professor, offers a potential solution in a New York Times op-ed: threaten to cut off federal funding if states don’t move “to reduce the ability of communities to zone out change,” much as the federal government forced states to raise their drinking age in 1984 or face loss of highway funds. Moral persuasion doesn’t seem to be working.

Doing nothing is not an option. As David Dworkin, president and CEO of the nonprofit National Housing Conference, told Adele Peters of Fast Company:

West Coast cities have struggled with housing affordability, but now we’re seeing these kinds of problems in Boise, Idaho; Little Rock, Arkansas and Charlotte, North Carolina. And that’s really a game changer. The bottom line is if you don’t want affordable housing in your backyard, you’re going to end up with homeless people in your front yard. And you don’t have to go far today to see what that looks like.

Look, we’re still building homes like it was 1924, not 2024. Where are our armies of robots building them in a day or two? Why hasn’t 3D printing of houses – which a 100-unit development in Texas has shown to be feasible – taken off faster and gotten cheaper? With the current commercial real estate glut, converting those buildings to residential is a win/win. We can do better.

A recent editorial in The Lancet called housing “an overlooked social determinant of health,” and concluded: “Making housing a priority public health intervention not only presents a pivotal opportunity, but a moral imperative. The health of our communities depends on it.”

So, yeah: YIMBY.

Monday, September 9, 2024

We Should Learn to Have More Fun (or Vice Versa)

For several years now, my North Star for thinking about innovation has been Steven Johnson’s great quote (in his delightful Wonderland: How Play Made the Modern World): “You will find the future where people are having the most fun.” No, no, no, naysayers argue, inventing the future is serious business, and certainly fun is not the point of business.  Maybe they’re right, but I’m happier hoping for a future guided by a sense of fun than by one guided by P&Ls.

Playing games - and having fun - is important business. Credit: Bing Image Creator

Well, I think I may have found an equally insightful point of view about fun, espoused by game designer Raph Koster in his 2004 book A Theory of Fun for Game Design: “Fun is just another word for learning.”

Wow.

That’s not how most of us think about learning. Learning is hard, learning is going to school, learning is taking tests, learning is something you have to do when you’re not having fun. So “fun is just another word for learning” is quite a different perspective – and one I’m very much attracted to.

I regret that it took me twenty years to discover Mr. Koster’s insight. I read it in a more current book: Kelly Clancy’s Playing With Reality: How Games Have Shaped Our World. Dr. Clancy is not a game designer; she is a neuroscientist and physicist, but she is all about play. Her book looks at games and game theory, especially how the latter has been misunderstood/misused.



We usually think of play as a waste of time, as something inherently unserious and unimportant, when, in fact, it is how our brains have evolved to learn. The problem is, we’ve turned learning into education, education into a requirement, teaching into a profession, and fun into something entirely separate. We’ve gotten it backwards.

“Play is a tool the brain uses to generate data on which to train itself, a way of building better models of the world to make better predictions,” she writes. “Games are more than an invention; they are an instinct.”  Indeed, she asserts: “Play is to intelligence as mutation is to evolution.”

Mr. Koster’s fuller quote about fun and learning is on target with this:

That’s what games are, in the end. Teachers. Fun is just another word for learning. Games teach you how aspects of reality work, how to understand yourself, how to understand the actions of others, and how to imagine.

We don’t look at our teachers as a source of fun (and many students barely look at them as a source of learning). We don’t look at schools as a place for games, except on the playground, and then only for the youngest students. We drive students to boredom, and, as Mr. Koster says, “boredom is the opposite of learning” (although, ironically, boredom may be important to creativity).  

Learning is actually fun, especially from a physiological standpoint. “Interestingly, learning itself is rewarding to the brain,” Dr. Clancy points out. “Researchers have found that the ‘Aha!’ moment of insight in solving a puzzle triggers dopamine release in the same way sugar or money can.” We love learning; our brains are hardwired to reward us when we figure something new out. Play is a crucial way we get there; as Dr. Clancy writes: “Play is all about the unknown and learning how to navigate it.”

Dr. Clancy is not the first to articulate this point of view. Almost 90 years ago Dutch historian Johan Huizinga wrote Homo Ludens: A Study of the Play-Element in Culture. Dr. Clancy summarizes his point: “Play, historian Johan Huizinga argues in his classic book Homo Ludens, is how humans innovate, from new tools to new social contracts….Huizinga sees games as foundational cultural technology: Civilization arises and unfolds in and as play.”



I am wowed by the assertion that play is how humans innovate. If that seems extreme to you, contrast the crazy, reckless, boisterous atmosphere of many start-ups with the atmosphere of most corporate innovation departments. Not much playing – not much fun! – going on in the latter, I suspect.

Dr. Clancy goes one very interesting step further: “Play has served as a crucible of culture and innovation; it’s at the heart of design itself…Design is what happens when we uncover rules latent in the world and use these to define the logic of a new, separate system.”

That’s not how most of us typically think about design, but it is how I hope more of us will.

And if you want to bring up the trend towards the gamification of everything, don’t get Dr. Clancy started: “Gamification, in other words, replaces what people actually want with what corporations want,” and “Many jobs that can easily be gamified will more profitably be automated.” You need more than gamifying to make play.

All this focus on the importance of play and having fun reminds me of the classic essay A Mathematician’s Lament, by Paul Lockhart. In it, he argues that when people say they are just bad at math, what they really are saying is that they’ve been taught math badly. “Math is not about following directions,” he wrote. “It’s about making new directions.” I.e., playing. 

Imagine, he suggests, if music was taught by simply teaching students how to transcribe notes, or art by having students identify colors. The students never get to hear music or to see art, much less to create either on their own. They’d hate both and claim to be bad at them. That, he charges, is what has happened with teaching math.  We’ve drained all the fun out of it, taken all the discovery from it.

“What a sad endless cycle of innocent teachers inflicting damage upon innocent students,” Professor Lockhart laments in closing. “We could all be having so much more fun.”

We should.

We’re living in very serious times. If it’s not climate change, it’s microplastics. If it’s not the threat of nuclear war, it’s of biochemical attacks. If it’s not the danger of cyberattacks, it’s of AI. If it’s not the impact of social media, it’s the breakdown of civility. Pick your poison; honestly, it’s hard to keep up with the things we should be worrying about. Fun seems pretty far down our priority list.

Fun is just another word for learning?  Play is at the heart of design? Play is how humans innovate? These are radical concepts in our troubled times, but ones that we should take more seriously -- or, perhaps, more mischievously.

Monday, September 2, 2024

Biohybrid Bots Are Mushrooming

I hadn’t expected to write about a biology-related topic anytime soon after doing so last week, but, gosh darn it, then I saw a press release from Cornell about biohybrid robots – powered by mushrooms (aka fungi)! They had me at “biohybrid.”  

A mushroom powered robot. Credit: Cornell University

The release talks about a new paper -- Sensorimotor Control of Robots Mediated by Electrophysiological Measurements of Fungal Mycelia -- from Cornell’s Organic Robotics Lab, led by Professor Rob Shepherd. As the release describes the work:

By harnessing mycelia’s innate electrical signals, the researchers discovered a new way of controlling “biohybrid” robots that can potentially react to their environment better than their purely synthetic counterparts.

Or, in the researchers’ own words:

The paper highlights two key innovations: first, a vibration- and electromagnetic interference–shielded mycelium electrical interface that allows for stable, long-term electrophysiological bioelectric recordings during untethered, mobile operation; second, a control architecture for robots inspired by neural central pattern generators, incorporating rhythmic patterns of positive and negative spikes from the living mycelia.

Let’s simplify that: “This paper is the first of many that will use the fungal kingdom to provide environmental sensing and command signals to robots to improve their levels of autonomy,” Professor Shepherd said. “By growing mycelium into the electronics of a robot, we were able to allow the biohybrid machine to sense and respond to the environment.”

Lead author Anand Mishra, a research associate in the lab, explained: “If you think about a synthetic system – let’s say, any passive sensor – we just use it for one purpose. But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals. That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”

The team built two robots: a soft one shaped like a spider, and a wheeled one. The researchers first used the mycelia’s natural signal spikes to make the robots walk and roll, respectively. Then they exposed the robots to ultraviolet light, which caused the mycelia to react and changed the robots’ gaits. Finally, the researchers were able to override the mycelia’s signals entirely.

“This kind of project is not just about controlling a robot,” Dr. Mishra said. “It is also about creating a true connection with the living system. Because once you hear the signal, you also understand what’s going on. Maybe that signal is coming from some kind of stresses. So you’re seeing the physical response, because those signals we can’t visualize, but the robot is making a visualization.”

Dr. Shepherd believes that instead of using light as the signal, they will use chemical signals. For example: “The potential for future robots could be to sense soil chemistry in row crops and decide when to add more fertilizer, for example, perhaps mitigating downstream effects of agriculture like harmful algal blooms.”

It turns out that biohybrid robots in general and fungal computing in particular are a thing. In last week’s article I quoted Professor Andrew Adamatzky, of the University of the West of England, about his preference for fungal computing. He not only is the Professor in Unconventional Computing there, and the founder and Editor-in-Chief of the International Journal of Unconventional Computing, but also literally wrote the book on fungal computing. He’s been working on fungal computing since 2018 (and before that on slime mold computing).

Professor Adamatzky notes that fungi have a wide array of sensory inputs: “They sense light, chemicals, gases, gravity, and electric fields,” which opens the door to a wide variety of inputs (and outputs). Accordingly, Ugnius Bajarunas, a member of Professor Adamatzky’s team, told an audience last year: “Our goal is real-time dialog between natural and artificial systems.”

With fungal computing, TechHQ predicts: “The future of computing could turn out to be one where we care for our devices in a way that’s closer to looking after a houseplant than it is to plugging in and switching on a laptop.”

But how would we reboot them?

There are some who feel that we’re making progress on biohybrid robotics faster than we’re thinking about the ethics of it. A paper earlier this summer -- Ethics and responsibility in biohybrid robotics research -- urged that we quickly develop an ethical framework, and potentially regulation.

The authors state: “While the ethical dilemmas associated with biohybrid robotics resonate with challenges seen in fields like biomedicine, conventional robotics, or artificial intelligence, the unique amalgamation of living and nonliving components in biohybrid robots, also called biorobots, breeds its own set of ethical complexities that warrant a tailored investigation.”

Co-lead author Dr. Rafael Mestre, from the University of Southampton, said: “But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.” His co-lead author Aníbal M. Astobiza, an ethicist from the University of the Basque Country, elaborated:

Bio-hybrid robots create unique ethical dilemmas. The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.

Dr. Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, added: “Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant.”

Big Think recently focused on the topic, asking: Revolutionary biohybrid robots are coming. Are we prepared? The article points out: “Now, scientific advances have increasingly shown that biological beings aren’t just born; they can be built.” It notes: “Biohybrid robots take advantage of living systems’ millions of years of evolution to grant robots benefits such as self-healing, greater adaptability, and superior sensor resolution. But are we ready for a brave new world where blending the artificial and the biological blurs the line between life and non-life?”

Probably not. As Dr. Mestre and his colleagues concluded: “If debates around embryonic stem cells, human cloning, or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.”

Biohybrid robotics and fungal computing are emerging fast.

Think you know what robots are? You don’t. Think you understand how computing works? Maybe silicon-based, but probably not “unconventional.” Think you’re ready for artificial intelligence? Fungi-powered AI might still surprise you.  

Exciting times indeed.