Machine learning goes green

Los Alamos develops an algorithm to reduce power use in autonomous systems.

By Cristina Olds | July 25, 2022

The on-chip neuromorphic backpropagation algorithm was tested on this neuromorphic research processor, codenamed Loihi, from Intel. Los Alamos National Laboratory

Machine learning, a type of artificial intelligence in which software uses complex algorithms to steadily improve its predictions, is everywhere these days. Technologies such as drones, satellites, and robots all use machine learning to (for the most part) make our lives easier. But traditional machine learning software also uses a lot of power, which means it has a big carbon footprint. For example, the carbon emitted while training an autonomous car to drive equals the lifetime emissions of five conventional cars.

To lessen the environmental impact of artificial intelligence technologies, researchers from Los Alamos National Laboratory have developed a new type of machine learning algorithm that runs on a neuromorphic processor.


Neuromorphic processors are inspired by the human brain, which uses synapses to send signals from neuron to neuron. Computations on a neuromorphic processor are similar; the computing elements mimic neurons and synapses. Thus, as in the human brain, many subprocesses can occur simultaneously, and the processor requires less energy than a standard computer.

The algorithm developed by Los Alamos runs on a neuromorphic processor and uses backpropagation, which determines precisely which "synapses" need to be modified within a "neural network" to improve the network's predictions as new information is processed. Unlike previous machine learning algorithms, the Los Alamos algorithm implements backpropagation on a self-contained neuromorphic processor without offloading any tasks to a traditional graphics processing unit (GPU) or central processing unit (CPU). This enables the algorithm to use 100 times less power than the equivalent algorithm on a conventional GPU.
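The backpropagation idea mentioned above can be illustrated with a small sketch. This is a generic, hypothetical example of the technique on a conventional computer, not the Los Alamos on-chip algorithm; all network sizes and values here are assumptions made for illustration. The weights play the role of the "synapses," and the backward pass computes exactly how each one should change to reduce the network's prediction error.

```python
import numpy as np

# A minimal, hypothetical sketch of backpropagation on a tiny two-layer
# network (NOT the ONBA algorithm; sizes and data are illustrative only).

rng = np.random.default_rng(0)

# Toy data: learn XOR, a classic problem a single "neuron" cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weight matrices act as the "synapses" between layers of "neurons".
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    # Smooth activation function standing in for a neuron's response.
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5          # learning rate (arbitrary choice for this sketch)
losses = []
for _ in range(2000):
    # Forward pass: signals flow from input neurons toward the output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: the prediction error propagates back through the
    # network, yielding (scaled) gradients that say exactly how each
    # synapse should change to shrink the error.
    d_out = (out - y) * out * (1 - out)   # error signal at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(f"loss before: {losses[0]:.3f}, loss after: {losses[-1]:.3f}")
```

Running the loop drives the error down by repeatedly nudging every weight in the direction the backward pass prescribes, which is the core mechanism the article describes the Los Alamos team implementing directly on neuromorphic hardware.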

Because of these important features, the algorithm is called the on-chip neuromorphic backpropagation algorithm, or ONBA for short. The algorithm, which was submitted for a 2022 R&D 100 Award (often called the "Oscars of Innovation"), is now being used as a module in more sophisticated machine learning frameworks and will eventually help curb the environmental impact of the machine learning technology that we've all come to rely on.