
Neuromorphic AI Hardware Created With Brain Organoids

The human brain inspires living AI hardware that can solve nonlinear equations.

TheDigitalArtist/Pixabay

As capable as artificial intelligence (AI) deep learning algorithms have become, the human brain still vastly outperforms silicon-based neural networks when it comes to energy efficiency. In an effort to create more energy-efficient AI hardware, researchers have developed Brainoware, a prototype built from living brain organoids that performs basic speech recognition and nonlinear equation prediction.

“Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies,” wrote Indiana University Bloomington researchers Feng Guo, Ken Mackie, Zhuhao Wu, Chunhui Tian, Zheng Ao, and Hongwei Cai, in collaboration with University of Cincinnati School of Medicine researchers Mingxia Gu and Jason Tchieu, and University of Florida researcher Hongcheng Liu.

Why search for computing hardware that functions more like the biological brain? The AI renaissance and the amazing feats of large language models are largely the product of the rise of deep learning, a subset of machine learning, and of the parallel processing capabilities of graphics processing units (GPUs).

Deep learning consists of artificial neural networks whose architecture, built from many layers of artificial neurons, is loosely inspired by the biological brain. Classical computing hardware is not structured in a way that is optimized for artificial neural network processing. The more data-intensive the computing process, the greater the potential bottleneck.
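
To picture the layered structure described above, here is a minimal sketch in Python: a tiny feedforward network that passes an input through successive layers of artificial neurons. The layer sizes, weights, and activation function are arbitrary placeholders for illustration, not taken from any model discussed in this article.

```python
# A minimal feedforward network: each layer applies a weighted sum followed by a
# nonlinearity, loosely echoing layers of artificial neurons. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]  # input layer -> two hidden layers -> output layer
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate an input vector through every layer of the network."""
    for W in weights:
        x = np.tanh(x @ W)  # weighted sum of the previous layer's outputs, then a nonlinearity
    return x

print(forward(rng.standard_normal(4)))  # a 2-dimensional output for a 4-dimensional input
```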

AI deep learning requires learning from massive training datasets, a process that is energy inefficient on computing hardware with Von Neumann architecture. In 1945, the Hungarian-born American polymath, mathematician, and physicist John von Neumann (1903-1957) introduced the computing architecture that almost all digital computers use today. (It is also known as Princeton architecture.) Von Neumann architecture separates computation from memory. It consists of an input unit; a central processing unit (CPU) containing registers, a control unit, and an arithmetic unit; a memory unit that stores data and program instructions; and an output unit.

The registers in the CPU serve as temporary storage areas, for example briefly holding data before it is sent to or retrieved from memory. The memory goes by several names, including Random Access Memory (RAM), primary memory, main storage, internal storage, and main memory. Its main purpose is to hold data temporarily during program execution; it should not be confused with longer-term secondary storage, which today is often offloaded to Cloud-based servers. Other examples of secondary storage include hard disks, DVDs, compact discs (CDs), and flash memory devices such as USB drives and SD cards.

The Von Neumann bottleneck arises from the data bus (also known as a system bus), a shared link that carries communication between the memory and the processor. While a program executes, data shuttles back and forth over this bus between the processor and the RAM that serves as temporary short-term memory. No matter how fast a processor runs, it can only work as fast as the data transfer rate of the system bus; hence the bottleneck.
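
To make the bottleneck concrete, here is a back-of-envelope sketch in Python. Every figure in it (bytes moved, bus bandwidth, operation count, processor speed) is a hypothetical placeholder, chosen only to illustrate that the time spent moving data over the shared bus, not raw arithmetic speed, can set the floor on runtime.

```python
# Illustrative only: hypothetical numbers showing how a shared bus can dominate runtime.
data_moved_bytes = 8e9    # bytes shuttled between RAM and the CPU for one pass (assumed)
bus_bandwidth = 50e9      # shared-bus transfer rate in bytes per second (assumed)
compute_ops = 1e12        # arithmetic operations the pass requires (assumed)
processor_speed = 10e12   # operations per second the processor could sustain (assumed)

transfer_time = data_moved_bytes / bus_bandwidth  # time spent just moving data
compute_time = compute_ops / processor_speed      # time spent actually computing

# Whichever is slower sets the pace; with these numbers the bus, not the processor, wins.
print(f"transfer: {transfer_time:.3f} s, compute: {compute_time:.3f} s")
print(f"wall-clock lower bound: {max(transfer_time, compute_time):.3f} s")
```

With these made-up numbers the processor sits idle for part of every pass, which is the essence of the bottleneck: making the processor faster would not shorten the total time at all.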

Today, most AI models are trained not on CPUs but on computers equipped with GPUs, prized for their parallel processing capabilities. Even with GPUs, AI computing is energy inefficient and costly.

According to a University of Massachusetts Amherst study published in 2019 by Emma Strubell, Ananya Ganesh, and Andrew McCallum, training one AI transformer model with neural architecture search on GPUs emits over 626,000 lbs. of CO2. That is roughly equivalent to the lifetime CO2 emissions of five average cars, including fuel, at about 126,000 lbs. of CO2 per car, per Strubell et al.

AI large language models (LLMs) are notoriously costly to train and operate. Operating OpenAI’s popular large language model ChatGPT costs over $694,000 USD per day in computing hardware, according to cost estimates by Dylan Patel and Afzal Ahmad at SemiAnalysis LLC. This has sparked an ongoing quest for a new form of computing hardware architecture that is better optimized for artificial neural networks.

“Here we report an artificial intelligence hardware approach that uses adaptive reservoir computation of biological neural networks in a brain organoid,” wrote Guo et al. In the current study, the scientists mounted a living brain organoid on a multielectrode array (MEA). The three-dimensional (3D), self-assembled cortical organoid was grown in a laboratory dish (in vitro) from human pluripotent stem cells (hPSCs). According to the researchers, after training with spatiotemporal sequences of electrical stimulation, the living AI was able to solve nonlinear equations and perform basic speech recognition.
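
In reservoir computing, a fixed nonlinear dynamical system (the reservoir) transforms an input stream into rich internal states, and only a simple readout trained on those states does the learning; in Brainoware, the organoid itself serves as the reservoir. As a rough illustration of the paradigm only, and not the authors' method, here is a minimal conventional echo state network in Python; the sizes, constants, and toy prediction task are all arbitrary assumptions.

```python
# Minimal echo state network (ESN): a software analogue of reservoir computing.
# The reservoir's weights stay fixed (in Brainoware, the organoid plays this role);
# only the linear readout is trained. All sizes and constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 300

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))  # input-to-reservoir weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # recurrent reservoir weights (fixed)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                 # keep the spectral radius below 1 for stability

def run_reservoir(inputs):
    """Drive the fixed nonlinear reservoir with an input sequence; collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a nonlinear (noisy sine) sequence.
t = np.linspace(0, 8 * np.pi, 800)
signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Train only the readout with ridge regression (the sole "learning" step).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir), states.T @ targets)

predictions = states @ W_out
print("training MSE:", np.mean((predictions - targets) ** 2))
```

The detail worth noticing is that the reservoir's weights are never updated; only the lightweight linear readout is fit, which is part of why reservoir computing pairs naturally with physical substrates, such as an organoid, that cannot be reprogrammed like software.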

The scientists reported: “Brainoware has the flexibility to change and reorganize in response to electrical stimulation, highlighting its ability to learn and adapt over training, necessary for developing AI systems. As living brain-like AI hardware, this approach may naturally address the time-consuming, energy-consuming, and heat production challenges of current AI hardware.”

Copyright © 2024 Cami Rosso All rights reserved.
