
An ‘Introspective’ AI Finds Diversity Improves Performance


An artificial intelligence with the ability to look inward and fine-tune its own neural network performs better when it chooses diversity over homogeneity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.

“We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”

Neural networks are an advanced type of AI loosely based on the way our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks form connections of varying strength by adjusting numerical weights and biases during training. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, guessing whether each photo shows a dog, checking how far off the guess is, and then adjusting its weights and biases until its guesses are closer to reality.
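That guess-and-adjust loop can be sketched in a few lines of Python. This is a generic illustration of training a single artificial neuron by gradient descent on a toy task, not the study's actual setup; the data, learning rate, and network size are all invented for the example.

```python
import math
import random

# A single artificial neuron: output = sigmoid(w1*x1 + w2*x2 + b).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy stand-in for "is this a dog?": label is 1 when x1 + x2 > 1.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0) for x1, x2 in points]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias start untrained
lr = 0.5                    # learning rate: how big each adjustment is

for _ in range(500):        # repeated training passes over the photos
    for (x1, x2), label in data:
        guess = sigmoid(w1 * x1 + w2 * x2 + b)
        error = guess - label          # how far off the guess is
        # Nudge weights and bias to shrink the error (gradient descent).
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b -= lr * error

correct = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (label == 1.0)
              for (x1, x2), label in data)
accuracy = correct / len(data)
```

After training, the neuron's guesses agree with the labels on nearly all of the toy examples, even though its weights started at zero.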

Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as the network learns, but once the network is optimized, those static neurons are the network.

Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.

“Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.

“Our AI could also decide between diverse or homogeneous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
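One way to picture that “control knob” is a neuron whose activation function carries its own learnable parameter. The sketch below is an illustrative assumption, not the paper's actual parameterization: each hypothetical neuron blends two response shapes via a mixing weight, and a simple search plays the role of the AI “looking inward” to pick the mix that best fits a target response.

```python
import math

# Hypothetical per-neuron activation: a learnable blend of two shapes.
# The paper's neurons meta-learn their own activation functions; this
# particular blend is invented for illustration.
def blended(z, a):
    return a * math.tanh(z) + (1.0 - a) * z

# Toy "inward look": sweep the mixing knob a over [0, 1] and keep the
# shape that best matches a target response on sample inputs.
xs = [i / 10.0 for i in range(-20, 21)]
target = [math.tanh(x) for x in xs]   # pretend this is the ideal response

def loss(a):
    return sum((blended(x, a) - t) ** 2 for x, t in zip(xs, target))

best_a = min((a / 100.0 for a in range(101)), key=loss)
```

Because the toy target here is exactly a tanh response, the search settles on a pure tanh mix; in the actual work the neurons tune such knobs by gradient-based meta-learning, and different sub-networks settle on different shapes.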

The team tested the AI’s accuracy by asking it to perform a standard digit-classification exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogeneous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy.

According to Ditto, the diversity-based AI is up to 10 times more accurate than conventional AI in solving more complicated problems, such as predicting a pendulum’s swing or the motion of galaxies.

“We have shown that if you give an AI the ability to look inward and learn how it learns it will change its internal structure – the structure of its artificial neurons – to embrace diversity and improve its ability to learn and solve problems efficiently and more accurately,” Ditto says. “Indeed, we also observed that as the problems become more complex and chaotic the performance improves even more dramatically over an AI that does not embrace diversity.”

The research appears in Scientific Reports, and was supported by the Office of Naval Research (under grant N00014-16-1-3066) and by United Therapeutics. John Lindner, emeritus professor of physics at the College of Wooster and visiting professor at NAIL, is co-corresponding author. Former NC State graduate student Anshul Choudhary is first author. NC State graduate student Anil Radhakrishnan and Sudeshna Sinha, professor of physics at the Indian Institute of Science Education and Research Mohali, also contributed to the work.

-peake-

Note to editors: An abstract follows.

“Neuronal diversity can improve machine learning for physics and beyond”

DOI: 10.1038/s41598-023-40766-6

Authors: Anshul Choudhary, Anil Radhakrishnan, John F. Lindner, William L. Ditto, North Carolina State University Nonlinear Artificial Intelligence Laboratory; Sudeshna Sinha, Indian Institute of Science Education and Research Mohali
Published: Aug. 21, 2023 in Scientific Reports

Abstract:
Diversity conveys advantages in nature, yet homogeneous neurons typically comprise the layers of artificial neural networks. Here we construct neural networks from neurons that learn their own activation functions, quickly diversify, and subsequently outperform their homogeneous counterparts on image classification and nonlinear regression tasks. Sub-networks instantiate the neurons, which meta-learn especially efficient sets of nonlinear responses. Examples include conventional neural networks classifying digits and forecasting a van der Pol oscillator and physics-informed Hamiltonian neural networks learning Hénon–Heiles stellar orbits and the swing of a video recorded pendulum clock. Such learned diversity provides examples of dynamical systems selecting diversity over uniformity and elucidates the role of diversity in natural and artificial systems.

This post was originally published in NC State News.