A brain-inspired computer chip can run AI-powered image recognition operations 22 times faster than comparable commercial chips, and with 25 times the energy efficiency.
The IBM NorthPole chip intertwines its computational capability with associated memory blocks that store information. This allows it to bypass the so-called von Neumann bottleneck – named after computing pioneer John von Neumann – which describes how modern computers slow down while waiting on information exchanges between physically separate compute and memory units.
The melding of computation and memory was inspired by the way the human brain works. IBM had previously built a chip based on this idea called TrueNorth. But NorthPole transforms the technology into a digital architecture that is compatible with the silicon chip technology used in contemporary computers.
“It’s a new way to look at computer architecture,” says Dharmendra Modha at IBM Research.
Modha and his colleagues designed NorthPole as a 2D array of intertwined computing cores and memory blocks. The chip’s digital architecture is also innovative in allowing each computing core to access distant memory blocks as easily as neighbouring memory blocks, write Subramanian Iyer and Vwani Roychowdhury at the University of California, Los Angeles, in a commentary article.
The IBM researchers demonstrated how the new chip can run a common image recognition AI faster and more efficiently than any commercial chip on the market, beating even the latest chips from NVIDIA, the leading manufacturer of graphics processing units. They also showed that NorthPole can economically run AIs for speech recognition and natural language processing.
Additionally, NorthPole delivers better performance per transistor – the tiny electronic switches at the heart of computing technology – than other chip architectures. The chip packs 22 billion transistors into a space of just 800 square millimetres.
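Those two figures imply a density of roughly 27.5 million transistors per square millimetre – a rough back-of-envelope check using only the numbers quoted above, not an official IBM specification:

```python
# Back-of-envelope transistor density for NorthPole,
# using only the figures quoted in the article.
transistors = 22_000_000_000  # 22 billion transistors
area_mm2 = 800                # chip area in square millimetres

density = transistors / area_mm2
print(f"{density / 1e6:.1f} million transistors per mm^2")  # prints "27.5 million transistors per mm^2"
```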
However, its design, which is specialised for running AI processes, comes at a cost. NorthPole cannot perform other tasks – AI training, for instance – and it also cannot easily run larger AI models. But Modha’s team plans to demonstrate how multiple NorthPole chips could support large language models.
While recognising the value of the NorthPole chip, Hava Siegelmann at the University of Massachusetts Amherst thinks it would be even more useful if it allowed AIs to keep learning by simultaneously training and performing tasks. “Their contribution to AI may be everlasting if they included such lifelong adaptation on an otherwise frozen architecture,” she says.
The NorthPole chip prototype is unlikely to be commercialised immediately. But such digital architecture redesigns will be important for enabling AIs to run efficiently on the computing hardware of self-driving vehicles and aircraft, write Iyer and Roychowdhury.