Chances are, you’ve read about AI lately. Maybe you’ve even tried DALL-E or ChatGPT, maybe even GPT-4. Perhaps you can use the term Large Language Model
(LLM) with some degree of confidence.
But chances are also good that you haven’t heard of “liquid neural
networks,” and don’t get the worm reference above.
Credit: Jose-Luis Olivares, MIT
That’s the thing about artificial intelligence: it’s
evolving faster than we are. Whatever you think you know is probably already out of date.
Liquid neural networks were first introduced in 2020. The authors wrote: “We introduce a new class of time-continuous recurrent neural
network models.” They based the networks on the brain of a tiny roundworm,
Caenorhabditis elegans. The goal was networks that were more
adaptable, able to change “on the fly” when faced with unfamiliar circumstances.
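To make that a little more concrete, here is a minimal sketch, in Python, of a single “liquid time-constant” neuron of the kind described in that 2020 paper. This is not the authors’ code: the weights, time constant, and Euler step size below are illustrative assumptions, and a real network wires many such neurons together and trains them. The point is simply that the neuron’s state follows a differential equation whose effective time constant depends on the current input, so the dynamics keep shifting as the input changes.

    import numpy as np

    def gate(x, i, w=0.8, u=0.5, b=0.0):
        # Input-dependent nonlinearity: a sigmoid of a linear combination of
        # the neuron's state x and its input i (weights are made-up placeholders).
        return 1.0 / (1.0 + np.exp(-(w * i + u * x + b)))

    def ltc_step(x, i, tau=1.0, A=1.0, dt=0.01):
        # One Euler step of dx/dt = -(1/tau + f(x, i)) * x + f(x, i) * A,
        # the liquid time-constant neuron ODE from the 2020 paper.
        f = gate(x, i)
        return x + dt * (-(1.0 / tau + f) * x + f * A)

    x = 0.0
    for t in np.arange(0.0, 5.0, 0.01):
        i = 1.0 if t < 2.5 else -1.0   # input switches halfway through the run
        x = ltc_step(x, i)
    print(f"neuron state after the run: {x:.3f}")

Because the coefficient in front of x depends on the input through the gate, the neuron’s effective time constant changes with what it is seeing, which is what Dr. Rus means below when she says these networks can “change their underlying equations based on the input they observe.”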
Researchers at MIT’s CSAIL have now demonstrated significant progress. A new paper in Science Robotics describes how they created “robust flight navigation agents” using
liquid neural networks to autonomously pilot drones. They claim that these networks
are “causal and adapt to changing conditions,” and that their “experiments
showed that this level of robustness in decision-making is exclusive to liquid
networks.”
An MIT
press release notes: “deep learning systems struggle with capturing
causality, frequently over-fitting their training data and failing to adapt to
new environments or changing conditions…Unlike traditional neural networks that
only learn during the training phase, the liquid neural net’s parameters can
change over time, making them not only interpretable, but more resilient to
unexpected or noisy data.”
“We wanted to model the
dynamics of neurons, how they perform, how they release information, one neuron
to another,” Ramin
Hasani, a research affiliate at MIT and one of the co-authors, told
Popular Science.
Essentially, they trained the neural network to pilot
the drone to find a red camping chair, then moved the chair to a variety of environments,
in different lighting conditions, at different times of year, and at
different distances to see if the drone could still find the chair. “The
primary conceptual motivation of our work,” the authors wrote, “was not
causality in the abstract; it was instead task understanding, that is, to
evaluate whether a neural model understands the task given from
high-dimensional unlabeled offline data.”
Credit: Chahine, et al.
Daniela Rus, CSAIL director and one of the co-authors, said: “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following.”
Essentially, Dr. Hasani says, “they can generalize to situations that they have never seen.” The liquid neural nets can also “dynamically capture the true cause-and-effect of their given task,” the authors wrote. This is “the key to liquid networks’ robust performance under distribution shifts.”
The key advantage of liquid neural networks is their adaptability: the neurons behave more like the worm’s (or those of other living creatures) would, responding to real-world circumstances in real time. “They’re
able to change their underlying equations based on the input they observe,”
Dr. Rus told
Quanta Magazine.
Dr. Rus further noted: “We are thrilled by the immense
potential of our learning-based control approach for robots, as it lays the
groundwork for solving problems that arise when training in one environment and
deploying in a completely distinct environment without additional training…These
flexible algorithms could one day aid in decision-making based on data streams
that change over time, such as medical diagnosis and autonomous driving
applications.”
Sriram Sankaranarayanan, a computer scientist at the University of
Colorado, was impressed, telling
Quanta Magazine: “The main contribution here is that stability and
other nice properties are baked into these systems by their sheer structure…They
are complex enough to allow interesting things to happen, but not so complex as
to lead to chaotic behavior.”
Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London, was also impressed, telling MIT:
Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society-critical applications. In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm here developed will contribute to making AI and robotic systems more reliable, robust, and efficient.
It’s easy enough to imagine lots of drone applications where these could prove important, with autonomous driving another logical use.
But the MIT team is looking more broadly. “The results in this paper open the
door to the possibility of certifying machine learning solutions for safety
critical systems,” Dr. Rus says. With all the discussion about the importance of ensuring that AI gives valid answers in healthcare, she again pointed to medical diagnosis decision-making as one application for liquid neural networks.
“Everything that we do as a robotics and machine
learning lab is [for] all-around safety and deployment of AI in a safe and
ethical way in our society, and we really want to stick to this mission and
vision that we have,” Dr. Hasani says. We
should hope that other AI labs feel the same.
Healthcare, like most parts of our economy, is going to
increasingly use and even rely on AI. We’re going to need AI that not only gives us accurate answers but can also adapt to quickly changing conditions, rather than being locked into pre-set data models. I don’t
know if it’s going to be based on liquid neural networks or something else, but
we’re going to want not just adaptability but also safety and ethics baked in.
----------------
Last month I wrote about Organoid Intelligence (OI), which intends to get to AI using structures that work more like our brains. Now we have liquid neural networks based on worms’ brains. It’s intriguing to me that after several decades of working on, and perhaps for, our silicon overlords, we’re starting to move to more biological approaches.
As Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign, told Quanta Magazine: “In a way, it’s kind of poetic, showing that this research may be coming full circle. Neural networks are developing to the point that the very ideas we’ve drawn from nature may soon help us understand nature better.”