Google is using AI to design processors that run AI more efficiently

Server farm: Google has a vast amount of computing hardware (Image: Google)

Engineers at Google have tasked an artificial intelligence with designing faster and more efficient processors – and then used its chip designs to develop the next generation of specialised computers that run the very same type of AI algorithms.

Google operates at such a large scale that it designs its own computer chips rather than buying commercial products. This allows it to optimise the chips to run its own software, but the process is time-consuming and expensive. A custom chip usually takes two to three years to develop.

One stage of chip design is a process called floorplanning, which involves taking the finalised circuit diagram of a new chip and arranging the millions of components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge effect on speed and power consumption. For chips in smartphones, the priority may be to cut power consumption in order to increase battery life, but for a data centre, it may be more important to maximise speed.
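These competing priorities can be pictured as a weighted score over a layout. The sketch below is purely illustrative (the function name, objectives and weights are assumptions, not Google's actual cost function): it shows how shifting the weights changes which layout counts as "best".

```python
# Hypothetical sketch: floorplanning quality treated as a weighted sum of
# competing objectives. Lower is better. The objectives and weights here
# are illustrative stand-ins, not any real tool's cost function.

def placement_cost(wirelength, power, density,
                   w_wire=1.0, w_power=0.5, w_density=0.2):
    """Score a candidate layout by trading off speed (wirelength),
    power consumption and chip density."""
    return w_wire * wirelength + w_power * power + w_density * density

# A smartphone chip might weight power heavily to stretch battery life;
# a data-centre chip might weight wirelength (a proxy for speed) instead.
phone_cost = placement_cost(10.0, 4.0, 2.0, w_power=2.0)
server_cost = placement_cost(10.0, 4.0, 2.0, w_wire=2.0)
```

The same candidate layout scores differently under the two weightings, which is the sense in which the "best" floorplan depends on what the engineers decide to optimise.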

Floorplanning has previously been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.

But Goldie and her colleagues have now created software that turns the floorplanning problem into a task for a neural network. It treats a blank chip and its millions of components as a complex jigsaw with a vast number of possible solutions. The aim is to optimise whatever parameters the engineers decide are most important, while also placing all the components, and the connections between them, accurately.
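Why the jigsaw is so hard comes down to combinatorics: even placing a handful of distinct components into a grid of slots admits an astronomical number of orderings, and real chips have millions of components. A quick illustration with toy numbers (these figures are for intuition only, not drawn from the paper):

```python
from math import perm  # perm(n, k): ordered placements of k items in n slots

# Placing k distinct components into 100 grid slots already explodes:
for k in (5, 10, 20):
    print(f"{k} components in 100 slots: {perm(100, k):,} layouts")
```

With just 5 components in 100 slots there are already over 9 billion layouts, which is why exhaustive search is hopeless and a learned strategy is attractive.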

The software began by developing solutions at random that were tested for performance and efficiency by a separate algorithm and then fed back to the first one. In this way, it gradually learned what strategies were effective and built upon past successes. “It started off kind of random and gets really bad placements, but after thousands of iterations it becomes extremely good and fast,” says Goldie.
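The generate-evaluate-feedback idea in that paragraph can be sketched in miniature. To be clear, the team's actual system is a neural network trained with this kind of feedback; the toy loop below (all names hypothetical) only illustrates the principle of proposing changes, scoring them with a separate evaluation function and keeping what works:

```python
import random

def score(placement):
    # Toy stand-in for the separate evaluation algorithm: penalise the
    # distance between consecutive connected components (a crude proxy
    # for total wirelength). Lower is better.
    return sum(abs(placement[i] - placement[i + 1])
               for i in range(len(placement) - 1))

def improve(placement, iterations=2000, seed=0):
    """Start from a (random) layout and refine it by trial and feedback."""
    rng = random.Random(seed)
    best, best_score = placement[:], score(placement)
    for _ in range(iterations):
        cand = best[:]
        i, j = rng.randrange(len(cand)), rng.randrange(len(cand))
        cand[i], cand[j] = cand[j], cand[i]   # propose a random swap
        s = score(cand)
        if s < best_score:                    # feedback: keep what works
            best, best_score = cand, s
    return best

initial = random.Random(1).sample(range(20), 20)  # a random starting layout
layout = improve(initial)                          # never worse, usually far better
```

The early candidates are essentially random and score badly; over thousands of iterations the kept improvements accumulate, mirroring Goldie's description of the system starting off "kind of random" and becoming good through iteration.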

The team’s software produced layouts for a chip in less than 6 hours that were comparable or superior to those produced by humans over several months in terms of power consumption, performance and chip density. An existing software tool called RePlAce that completes designs at a similar speed fell short of both humans and the AI on all counts in tests.

The chip design used in the experiments was the latest version of Google’s Tensor Processing Unit (TPU), which is built to run exactly the sort of neural network algorithms used in the company’s search engine and automatic translation tool. It is conceivable that this new AI-designed chip will in future be used to design its successor, and that successor would in turn be used to design its own replacement.

The team believes that the same neural network approach can be applied to the various other time-consuming stages of chip design, slashing the overall design time from years to days. The company aims to iterate quickly because even small improvements in speed or power consumption can make an enormous difference at the vast scale at which it operates.

“There’s a high opportunity cost in not releasing the next generation. Let’s say that the new one is much more power efficient. The level of the impact that can have on the carbon footprint of machine learning, given it’s deployed in all sorts of different data centres, is really valuable. Even one day earlier, it makes a big difference,” says Goldie.

Journal reference: Nature, DOI: 10.1038/s41586-021-03544-w
