A first physical system to learn nonlinear tasks without a traditional computer processor

Sam Dillavou, a postdoctoral fellow in the Durian Research Group in the School of Arts & Sciences, built the components of this contrastive local learning network, an analog system that is fast, energy-efficient, scalable, and can learn nonlinear tasks. Credit: Erica Moser

Scientists face many trade-offs in building and scaling brain-like systems that can perform machine learning. For example, artificial neural networks can learn complex language and visual tasks, but the process of training computers to perform these tasks is slow and requires a lot of power.

Training machines to learn digitally but perform tasks in analog (meaning signals are carried by a continuously varying physical quantity, such as voltage) can save time and energy, but small errors can quickly pile up.

An electrical network previously designed by physicists and engineers at the University of Pennsylvania is more scalable because errors don't compound in the same way as the size of the system increases. However, it is seriously limited because it can only learn linear tasks, those in which the outputs are a linear function of the inputs.

Now, the researchers have created an analog system that is fast, energy-efficient, scalable, and can learn more complex tasks, including exclusive-or (XOR) relationships and nonlinear regression. Called a contrastive local learning network, its components evolve on their own based on local rules, without knowledge of the larger structure.
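
Why does XOR count as a nonlinear task? A quick numerical check (a generic Python illustration, not code from the paper) shows that no weighted sum of the two inputs plus a bias can reproduce the XOR truth table, while adding a single nonlinear term solves it exactly:

```python
# Generic illustration of why XOR is a nonlinear task (not code from the paper).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])             # XOR truth table

# Best linear fit y ~ w1*x1 + w2*x2 + b (least squares with a bias column)
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(A @ w, 2))                      # [0.5 0.5 0.5 0.5]: a linear map fails

# One nonlinear feature (the product x1*x2) makes XOR exactly solvable
A2 = np.hstack([A, X[:, :1] * X[:, 1:]])
w2, *_ = np.linalg.lstsq(A2, y, rcond=None)
print(np.round(A2 @ w2, 2))                    # [0. 1. 1. 0.]: nonlinearity succeeds
```

This is precisely the kind of task the earlier, linear network could not learn.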

Physics professor Douglas J. Durian compares it to the human brain, where individual neurons do not know what other neurons are doing, and yet learning emerges.

“It can learn, in a machine learning sense, to do useful tasks, similar to a computational neural network, but it is a physical object,” says physicist Sam Dillavou, a postdoc in the Durian Research Group and first author of a paper on the system published in Proceedings of the National Academy of Sciences.

“One of the things we’re really excited about is that because it has no knowledge of the structure of the network, it’s very tolerant to failures and very robust to being built in different ways. We think that opens up a lot of possibilities for scaling these kinds of things,” says engineering professor Marc Z. Miskin.

“I think it’s an ideal model system that we can study to understand all sorts of problems, including biological problems,” says Andrea J. Liu, professor of physics. She also says it could be useful for coupling with devices that collect data that need to be processed, such as cameras and microphones.

In the paper, the authors say that their machine-learning system “provides a unique opportunity to study emergent learning. Compared to biological systems, including the brain, our system relies on simpler, well-understood dynamics, is precisely trainable, and uses simple modular components.”

This research is based on the Coupled Learning framework developed by Liu and postdoc Menachem (Nachi) Stern, whose findings were published in 2021. In this paradigm, a physical system that is not designed to perform a particular task adapts to applied inputs to learn the task, while using local learning rules and no centralized processor.

Dillavou says he came to Penn specifically for this project and has been working to translate the framework from simulation to the current physical design, which can be built using off-the-shelf circuit components.

“One of the craziest things about this is that it really learns on its own; we just set it up to do it,” Dillavou says. Researchers simply feed voltages as input, and then the transistors connecting the nodes update their properties based on the Coupled Learning rule.
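
To make the local rule concrete, below is a minimal Python sketch of coupled learning on a toy linear resistor network, in the spirit of the 2021 framework. The 4-node graph, nudge amplitude eta, and learning rate alpha are illustrative assumptions, not values from the paper, and the actual hardware uses transistor-based edges (which supply the nonlinearity) rather than the ideal resistors simulated here. Each edge updates its own conductance by comparing its voltage drop in a "free" state, with only the inputs applied, against a "clamped" state in which the output is nudged toward its target:

```python
# Minimal sketch of the coupled-learning rule on a toy resistor network.
# Illustrative only: the graph, eta, and alpha are assumptions, not paper values.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: node 0 is the input terminal, node 2 is ground, node 3 is the output.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
k = rng.uniform(0.5, 1.5, len(edges))   # edge conductances: the learning degrees of freedom

def solve(v_fixed, k):
    """Node voltages satisfying Kirchhoff's current law, given fixed boundary nodes."""
    L = np.zeros((4, 4))                # weighted graph Laplacian
    for (i, j), kij in zip(edges, k):
        L[i, i] += kij; L[j, j] += kij
        L[i, j] -= kij; L[j, i] -= kij
    fixed = sorted(v_fixed)
    free = [i for i in range(4) if i not in v_fixed]
    vb = np.array([v_fixed[i] for i in fixed])
    v = np.zeros(4)
    v[fixed] = vb
    v[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, fixed)] @ vb)
    return v

eta, alpha = 0.2, 0.02                  # nudge amplitude and learning rate (assumed)
inputs, target = {0: 1.0, 2: 0.0}, 0.3  # train the output node toward 0.3 V

for step in range(500):
    free_v = solve(inputs, k)                              # free state: inputs only
    nudge = free_v[3] + eta * (target - free_v[3])         # pull output toward target
    clamped_v = solve({**inputs, 3: nudge}, k)             # clamped state
    for e, (i, j) in enumerate(edges):
        dF = free_v[i] - free_v[j]                         # each edge sees only its
        dC = clamped_v[i] - clamped_v[j]                   # own two voltage drops
        k[e] = max(k[e] + (alpha / eta) * (dF**2 - dC**2), 1e-3)

print("trained output voltage:", round(solve(inputs, k)[3], 3))  # expect ~0.3
```

No element ever sees the global error or the network structure; the contrast between the free and clamped states is enough, which is what makes the scheme tolerant of failures and amenable to scaling.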

“Because the way it both computes and learns is based on physics, it’s much more interpretable,” Miskin says. “You can actually figure out what it’s trying to do, because you have a good understanding of the underlying mechanism. That’s pretty unique, because a lot of other learning systems are black boxes, where it’s much harder to know why the network did what it did.”

Durian says he hopes this is “the start of a huge field,” and points out that another postdoc in his lab, Lauren Altman, is building mechanical versions of contrastive local learning networks.

The researchers are currently working on scaling up the design, and Liu says many questions remain about the duration of the memory storage, the effects of noise, the best architecture for the network, and whether there are better forms of nonlinearity.

“It’s not really clear what changes as we scale a learning system,” Miskin says.

“If you think about a brain, there’s a huge gap between a worm with 300 neurons and a human, and it’s not clear where those capabilities arise, how things change as you scale up. Having a physical system that you can make bigger and bigger and bigger and bigger is an opportunity to actually study that.”

More information:
Sam Dillavou et al, Machine learning without a processor: Emergent learning in a nonlinear analog network, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2319718121

Provided by the University of Pennsylvania

Quote: A first physical system to learn nonlinear tasks without a traditional computer processor (2024, July 8) Retrieved July 8, 2024, from https://techxplore.com/news/2024-07-physical-nonlinear-tasks-traditional-processor.html
