
How can AI be more energy efficient? Researchers look to human brain for inspiration

Sambandamurthy Ganapathy of the University at Buffalo Department of Physics leads a team developing neuromorphic computer chips, which aim to mimic the complex structure of the human brain in order to gain energy efficiency. Credit: Douglas Levere/University at Buffalo

It’s estimated that an AI model can use over 6,000 joules of energy to generate a single text response. By comparison, your brain needs just 20 joules per second to keep you alive and thinking.
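Putting those two estimates side by side makes the gap concrete. The following is a back-of-the-envelope sketch using only the figures quoted above, not measured data:

```python
# Back-of-the-envelope comparison using the article's estimates.
AI_RESPONSE_J = 6_000   # joules per AI text response (estimate)
BRAIN_POWER_W = 20      # brain power draw in watts (joules per second)

# Seconds of continuous brain operation funded by one AI response
seconds_of_thought = AI_RESPONSE_J / BRAIN_POWER_W
print(seconds_of_thought)  # 300.0 -- five minutes of thinking per response

# Energy the brain uses in a full day, expressed in AI responses
brain_day_j = BRAIN_POWER_W * 24 * 3600
responses_per_brain_day = brain_day_j / AI_RESPONSE_J
print(responses_per_brain_day)  # 288.0
```

By this rough accounting, a whole day of brain activity costs about as much energy as a few hundred AI text responses.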

That’s why University at Buffalo researchers are taking inspiration from the human brain to develop computing architecture that can support the growing energy demands of artificial intelligence.

“There’s nothing in the world that’s as efficient as our brain—it’s evolved to maximize the storage and processing of information and minimize energy usage,” says Sambandamurthy Ganapathy, Ph.D., professor in the UB Department of Physics and associate dean for research in the UB College of Arts and Sciences.

“While the brain is far too complex to actually recreate, we can mimic how it stores and processes information to create more energy-efficient computers, and thus, more energy-efficient AI.”

This brain-inspired approach is known as neuromorphic computing. Its origins go back to the 1980s, but it has taken on new relevance in recent years as computing tasks have become more energy-intensive and complex, especially those that involve AI.

While neuromorphic computing can relate to both brain-inspired hardware and software, Ganapathy’s team is focused on hardware. Their research is a blend of quantum science and engineering that involves probing the unique electrical properties of materials that can be used to build neuromorphic computer chips.

The team’s goal is to ultimately develop chips and devices that are not only more energy efficient, but also just better at completing tasks—perhaps even in a more human-like way.

“The computers of today were built for simple and repetitive tasks, but with the rise of AI, we don’t want to just solve simple problems anymore,” Ganapathy says. “We want computers to solve complex problems, like human beings do every day. Neuromorphic computing may provide the structure to allow computers to do this.”

Computers already share similarities with brains

A computer that mimics the human brain is not as much of a leap as you might expect.

Computers encode all their information in binary (ones and zeros) by using billions of transistors, tiny switches that either conduct electricity (one) or block it (zero). Our brains encode information in a surprisingly similar way. Instead of transistors, we have billions of neurons that either fire off electrical signals or stay silent.

“Neuromorphic computing simply aims to move beyond the binary framework and closer to the far more complex system given to us by nature,” says Nitin Kumar, a graduate student in Ganapathy’s lab.

Memory and processing in the same place

One of the ways the brain is more complex—and energy efficient—than a computer is that information is stored and processed in the same place.

“It’s not as if the left side of the brain holds all the memories and the right is where all learning happens,” Ganapathy says. “It’s intertwined.”

Information storage and processing are separated in traditional computers, and thus, a lot of energy is used simply transporting data along tiny circuits between its memory unit and its processing unit. This can become even more energy-intensive when the computing architecture is supporting an AI model.

“Of course, the question then becomes how close we can place memory and processing together within a computer chip,” Ganapathy says. “This is known as in-memory computing and it’s a major advantage of neuromorphic computing.”
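The cost of that data shuttling can be sketched with a toy energy budget. The figures below are illustrative orders of magnitude (off-chip memory access is commonly cited as roughly a hundred times costlier than a simple arithmetic operation), not measurements from this work:

```python
# Toy "memory wall" arithmetic. The energy figures are rough, commonly
# cited orders of magnitude, not measurements from the UB team's devices.
OP_ENERGY_PJ = 1     # ~1 pJ for a simple arithmetic operation
DRAM_FETCH_PJ = 100  # ~100x more to fetch an operand from separate memory

ops = 1_000_000
# Conventional chip: two operands shuttled in from a separate memory unit
conventional_pj = ops * (OP_ENERGY_PJ + 2 * DRAM_FETCH_PJ)
# In-memory computing: operands already live where the compute happens
in_memory_pj = ops * OP_ENERGY_PJ

print(conventional_pj / in_memory_pj)  # 201.0 -- movement, not math, dominates
```

Under these assumed numbers, almost all of the energy goes into moving data rather than computing on it, which is the bottleneck in-memory computing targets.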

Artificial neurons and synapses

Memory and processing are intertwined in the brain thanks to an intricate system of neurons.

Neurons send electrical signals to each other through the synapses that connect them, effectively carrying information throughout a vast network. In computer terminology, synapses store memory and neurons do the processing.

So Ganapathy’s team is developing artificial neurons and synapses designed to mimic their biological counterparts’ electrical signaling of information.
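The division of labor described above (synapses weighting signals, neurons firing when enough input accumulates) is often captured in a textbook "leaky integrate-and-fire" model. The sketch below is that standard toy model, not the UB team's actual devices; all parameter values are made up for illustration:

```python
# Toy leaky integrate-and-fire neuron: a standard simplified model of the
# biological signaling described above (illustrative parameter values).
def simulate_lif(inputs, weight=0.6, leak=0.9, threshold=1.0):
    """Integrate weighted input spikes; fire and reset at the threshold."""
    potential = 0.0
    spikes = []
    for spike_in in inputs:
        # the "synapse" weights the incoming signal; old charge leaks away
        potential = potential * leak + weight * spike_in
        if potential >= threshold:  # the "neuron" fires an output spike
            spikes.append(1)
            potential = 0.0         # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([1, 1, 1, 0, 1, 1]))  # [0, 1, 0, 0, 1, 0]
```

Notice that memory (the accumulating potential) and processing (the threshold decision) live in the same place, which is the neuromorphic property the researchers are after.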

“We essentially want to recreate those rhythmic and synchronized electrical oscillations you may see in a brain scan,” Kumar says. “To do this, we need to create our neurons and synapses out of advanced materials whose electrical conductivity can be switched on and off with precision.”

Finding the right materials

The advanced materials that fit this bill are known as phase-change materials (PCMs).

PCMs can switch back and forth between their conductive and resistive phases when hit with controlled electrical pulses, allowing scientists to synchronize their electrical oscillations.

PCMs can also retain their conductive or resistive phase even after the applied electrical pulse has ended. In other words, they essentially hold the memory of their previous phases.

“This allows for their level of conductivity to gradually change in response to repeated electrical pulses—similar to how a biological synapse is strengthened through repeated activation,” Ganapathy says.
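That gradual, history-dependent change can be captured in a toy model: each pulse nudges conductance upward by a shrinking amount, and the value persists between pulses. This is an illustrative sketch with made-up parameters, not a model of the published devices:

```python
# Toy phase-change synapse: each SET pulse nudges conductance upward and
# the value persists between pulses (non-volatile "memory of phase").
# All parameter values here are illustrative, not from the published work.
class PCMSynapse:
    def __init__(self, g_min=0.1, g_max=1.0, step=0.25):
        self.g = g_min  # conductance starts in the resistive phase
        self.g_min, self.g_max, self.step = g_min, g_max, step

    def set_pulse(self):
        """A crystallizing pulse gradually raises conductance (potentiation)."""
        self.g = min(self.g_max, self.g + self.step * (self.g_max - self.g))

    def reset_pulse(self):
        """An amorphizing pulse drops the device back to its resistive phase."""
        self.g = self.g_min

syn = PCMSynapse()
history = []
for _ in range(4):
    syn.set_pulse()
    history.append(round(syn.g, 3))
print(history)  # [0.325, 0.494, 0.62, 0.715] -- repeated pulses "strengthen" it
```

Each successive pulse raises conductance by a smaller increment, mirroring how repeated activation strengthens a biological synapse toward saturation.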

The PCMs the team has recently published studies on include copper vanadium oxide bronze, niobium oxide, and a class of compounds known as metal-organic frameworks. That work appears in the Journal of the American Chemical Society, Advanced Electronic Materials and on the arXiv preprint server, respectively.

“Our experiments use voltage as well as temperature to switch the materials’ conductivity. We then examine this effect down to the materials’ electrons,” Kumar says.

“In order to incorporate these materials into neuromorphic chips as artificial neurons and synapses, we need to understand them at the atomic scale. That’s why we’re currently working with our collaborators to achieve atomic-level control over material structures, enabling precise tuning of electrical switching properties.”

“Our next goal,” Ganapathy adds, “is to synchronize the oscillations of multiple devices to construct an oscillatory neural network capable of emulating complex brain functions such as pattern recognition, motor control and other rhythmic behaviors.”
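The idea of coupled devices pulling into a shared rhythm is often illustrated with the Kuramoto model, a standard textbook description of oscillator synchronization. The sketch below uses that generic model purely to show the effect; it is not the team's device network, and all values are invented:

```python
import math

# Minimal Kuramoto-style sketch of oscillator synchronization -- a generic
# textbook model, used only to illustrate coupled devices locking into a
# shared rhythm (not the UB team's actual device network).
def kuramoto_sync(phases, natural_freqs, coupling=1.5, dt=0.01, steps=2000):
    n = len(phases)
    for _ in range(steps):
        new = []
        for i in range(n):
            # each oscillator is nudged toward the phases of the others
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (natural_freqs[i] + coupling * pull))
        phases = new
    # order parameter r: 0 = incoherent, 1 = fully synchronized
    rx = sum(math.cos(p) for p in phases) / n
    ry = sum(math.sin(p) for p in phases) / n
    return math.hypot(rx, ry)

# Three oscillators starting out of phase, with slightly different frequencies
r = kuramoto_sync([0.0, 2.0, 4.0], [1.0, 1.1, 0.9])
print(r)  # close to 1.0: the oscillators have locked together
```

With coupling strong enough to overcome the spread in natural frequencies, initially scattered oscillators converge to a common rhythm, which is the collective behavior such a network would exploit for tasks like pattern recognition.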

More human-like computers?

Ganapathy stresses that neuromorphic computers mimic the brain on a purely phenomenological level. Neuromorphic computing aims to recreate the brain’s functional behaviors and benefits—not consciousness.

However, it’s possible that neuromorphic computers will solve problems less like computers and more like human beings.

The computers of today follow linear logic—the same input will always lead to the same output. The human brain is profoundly nonlinear—present the same situation to a person 10 times and they may respond 10 different ways.

Computers of today also don’t do well with limited or ill-defined data—for example, give AI a vague prompt and it’s unlikely to give you the output you’re looking for. Humans, on the other hand, often respond well to limited or even confusing information.

“So it is possible that giving a computer a more complex architecture like the human brain may allow it to process in a more nonlinear fashion and adapt better to limited data,” Ganapathy says.

Researchers think this could be especially helpful in applications like self-driving cars, where AI does well in most road situations but still underperforms humans when it comes to more complex scenarios with no easy solution; think of deer jumping in front of your car while someone is tailgating directly behind you.

In fact, self-driving cars may be among the best applications for neuromorphic chips, given that real-time decisions are made on the device itself, not thousands of miles away on a remote server.

“Neuromorphic chips may not be in your smartphone anytime soon, but I do think we will see them in highly specific applications, like self-driving cars. Perhaps even one chip to respond to the road and another to find the best possible route,” Ganapathy says. “There likely won’t be one large neuromorphic computer that solves all problems. Instead, you’ll see many different neuromorphic chips that each solve a problem.”

More information:
John Ponis et al, Atomistic Origins of Conductance Switching in an ε-Cu0.9V2O5 Neuromorphic Single Crystal Oscillator, Journal of the American Chemical Society (2024). DOI: 10.1021/jacs.4c11968

Nitin Kumar et al, Noise Spectroscopy and Electrical Transport In NbO2 Memristors with Dual Resistive Switching, Advanced Electronic Materials (2025). DOI: 10.1002/aelm.202400877

Divya Kaushik et al, Reconfigurable Filamentary Conduction in Thermally Stable Zeolitic Imidazolate Framework (ZIF-8) Resistive Switching Devices, arXiv (2025). DOI: 10.48550/arxiv.2501.01822

Provided by
University at Buffalo


Citation:
How can AI be more energy efficient? Researchers look to human brain for inspiration (2025, July 1)
retrieved 1 July 2025
