How does AI really work? Comparing neural networks gives a peek into the black box

November 16, 2022

By Haydn Jones

Artificial intelligence is everywhere. Once the domain of science fiction, it’s used in everything from virtual assistants to facial recognition systems to self-driving cars. But how, exactly, does it work? The answer has not been well understood, which is a problem. After all, not fully understanding the “how” means we don’t fully understand why and when it fails. That can have big implications, particularly where safety is concerned: think of not knowing why a self-driving car fails when it does.

Most current AI is built on neural networks: series of algorithms that recognize patterns in massive datasets, much as a human brain does. Our brains use billions of cells called neurons that form networks of connections with one another, processing information as they send signals back and forth, hence the name neural network.
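To picture what that means in practice, here is a minimal sketch, not taken from the story, of how signals flow through a tiny artificial network; the layer sizes, random weights, and input numbers are made up purely for illustration.

    import numpy as np

    # A toy network: each artificial "neuron" adds up weighted signals from
    # the layer before it and passes the result on. Sizes and weights here
    # are arbitrary, chosen only to show how information flows.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))   # connections from 3 inputs to 4 hidden neurons
    W2 = rng.normal(size=(1, 4))   # connections from 4 hidden neurons to 1 output

    def relu(x):
        # A simple "activation": a neuron only fires if its input is positive.
        return np.maximum(0.0, x)

    def forward(x):
        hidden = relu(W1 @ x)      # hidden neurons respond to the weighted inputs
        return W2 @ hidden         # the output neuron combines their signals

    print(forward(np.array([0.5, -1.0, 2.0])))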

These networks are “trained” on examples of inputs and desired outputs. By working through many such input-output pairs, a network learns “features” of its input that help it perform its task, such as detecting cat ears or cat tails when deciding whether an image shows a cat.
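As a rough sketch of that training loop, and again not code from the researchers, the example below repeatedly compares a tiny model’s predictions to desired outputs and nudges its weights to reduce the error (gradient descent); the “image” features and cat labels are invented for illustration.

    import numpy as np

    # Made-up data: each row is a two-number summary of an image, and the
    # label says whether that image shows a cat (1) or not (0).
    X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
    y = np.array([1.0, 1.0, 0.0, 0.0])

    w = np.zeros(2)   # how strongly each feature suggests "cat"; learned below
    b = 0.0           # bias term
    lr = 0.5          # learning rate: how big each adjustment is

    for _ in range(1000):
        # Predict a cat probability for every example (forward pass).
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        # Compare predictions with the desired outputs and nudge the weights
        # in the direction that shrinks the error (gradient descent).
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)

    print(np.round(p, 2))   # predictions move toward the desired outputs 1, 1, 0, 0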

While we use AI more and more, the artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t always know how or why.

And that is what we’re looking to solve.

Our team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks within the black box of artificial intelligence to help us better understand neural network behavior.
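The story doesn’t spell out the method’s details here, so the sketch below is only a generic illustration of the idea of comparing networks from the inside, not the Los Alamos approach: it shows two toy networks the same inputs and scores how similarly their hidden neurons respond, using a standard similarity measure (linear centered kernel alignment). The networks, weights, and inputs are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def hidden_activations(W, X):
        # Hidden-layer responses of a toy one-layer network on a batch of inputs.
        return np.maximum(0.0, X @ W)

    # Two hypothetical networks; random weights stand in for trained models.
    W_a = rng.normal(size=(5, 8))
    W_b = rng.normal(size=(5, 8))

    # Show both networks the exact same inputs and record their hidden responses.
    X = rng.normal(size=(100, 5))
    H_a = hidden_activations(W_a, X)
    H_b = hidden_activations(W_b, X)

    def linear_cka(A, B):
        # Linear centered kernel alignment: a widely used score for comparing
        # two networks' internal representations (near 1 means they respond to
        # the inputs in essentially the same way, near 0 means they do not).
        A = A - A.mean(axis=0)
        B = B - B.mean(axis=0)
        num = np.linalg.norm(A.T @ B, "fro") ** 2
        den = np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro")
        return num / den

    print(linear_cka(H_a, H_b))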

Read the rest of the story as it appeared in RealClearScience.

LA-UR-22-31987