The Power Of Liquid AI: Compact, Fluid And Task-Minded Neural Networks

Daniela Rus is behind a groundbreaking new idea, liquid neural networks, which appears to solve some of AI’s notorious complexity problems in part by using fewer but more powerful neurons. She also talks about societal challenges of machine learning, concerns that are widely shared by experts and those close to the field.

VIDEO: Daniela Rus, MIT CSAIL Director, showcases MIT’s recent breakthroughs in liquid AI models.

“We began to develop the work as a way of addressing some of the challenges that we have with today’s AI solutions,” Rus said in a presentation.

While acknowledging the opportunities that AI clearly presents, Rus pointed to the need to handle very large amounts of data and “immense models,” the computational and environmental costs of AI, and the importance of data quality.

“Bad data means bad performance,” she said.

She pointed out that “black box” AI/ML systems present their own problems for practical use of AI models. The lack of explainable AI has long frustrated the developer community and others; according to Rus’s research and presentation, changing how networks are built can alleviate some of that opacity.

As an example, she showed the attention map of a network with 100,000 artificial neurons: a “noisy,” jumbled map that is very difficult for a human observer to interpret. Where this complex network produces a hash of signals, many of them falling in the periphery, Rus wants the same maps to come out smoother and more focused on the task.

Liquid neural networks, she said, use an alternative architecture, including command and motor neurons, that forms an understandable decision structure and helps produce these cleaner results.

She showed how a dashboard view of a self-driving system becomes far more explainable with these smaller yet more expressive networks. But fewer neurons are only part of the equation.

Walking through continuous-time RNNs, the modeling of physical dynamics, and the nuts and bolts of liquid time-constant networks, Rus showed how these systems combine linear state-space models with nonlinear synapse connections to change their governing equations.

These innovations, she said, allow the systems to change their underlying equations based on the input, to become dynamic in important ways, and to usher in what she called “robust upstream representations.”
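To make that idea concrete, here is a minimal sketch of one Euler step of a simplified liquid time-constant (LTC) cell, in the spirit of the formulation described in the group’s papers. The nonlinear synapse term modulates each neuron’s effective time constant as a function of the input, which is what lets the underlying equation change with the data. The parameter names and sizes below are illustrative assumptions, not MIT’s reference implementation.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, tau, A, dt=0.01):
    """One explicit-Euler step of a simplified liquid time-constant cell."""
    # Nonlinear synapse term driven by the input and the recurrent state
    f = np.tanh(W_in @ u + W_rec @ x)
    # dx/dt = -(1/tau + f) * x + f * A : both the decay rate (effective
    # time constant) and the drive depend on the input through f
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy usage: 11 liquid neurons, a 4-dimensional input stream (sizes are arbitrary)
rng = np.random.default_rng(0)
n, m = 11, 4
x = np.zeros(n)
W_in = 0.1 * rng.normal(size=(n, m))
W_rec = 0.1 * rng.normal(size=(n, n))
tau = np.ones(n)              # base time constants
A = rng.normal(size=n)        # bias-like "reversal" parameters
for _ in range(100):
    u = rng.normal(size=m)    # stand-in for a sensory input at this step
    x = ltc_step(x, u, W_in, W_rec, tau, A)
```

Because the term `f` sits inside the decay rate, the same network relaxes quickly for some inputs and slowly for others, which is one way to read the claim that the equations themselves adapt to the input.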

“We also do some other changes, like we change the wiring architecture of the network,” Rus said. “You can read about this in our papers.”

The upshot of all of this, Rus explained, is a model that gives AI applications a more versatile foundation to build on.

“All previous solutions are really looking at the context, not the actual task,” she said. “We can actually prove that our (systems) are causal – they connect cause and effect in ways that are consistent with the mathematical definition of causality.”

Touching on aspects such as the input stream and the perception model, Rus explored the potential for these dynamic causal models to change the many industries that now rely on AI/ML.

“These networks recognize when their inputs are being changed by certain interactions, and they learn how to correlate cause and effect,” she said.

Showing examples of training data for a drone, Rus demonstrated how such models (one with only 11 liquid neurons, for example) can identify a target and navigate an autonomous flying vehicle to it through a “canyon of unknown geometry” in a capable and interpretable way.

“The plane has to hit these points at unknown locations,” she said. “And it’s really extraordinary that all you need is 11 artificial neurons, liquid network neurons, in order to solve this problem.”

The bottom line, she suggested, is that these new networks bring a kind of simplicity while using their dynamic structure to do new things that will be genuinely useful in evolving AI applications.

“Liquid networks are a new model for machine learning,” she told the audience in closing. “They’re compact, interpretable and causal. And they have shown great promise in generalization under heavy distribution shifts.”

Daniela Rus is the Director of MIT CSAIL, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and Deputy Dean of Research at the Schwarzman College of Computing.
