
Saturday, June 19, 2021

Google's AI-Designed Chip

Designing the next generation of Tensor Processing Units. Sounds like a kind of layout game.

What Google’s AI-designed chip tells us about the nature of intelligence

By Ben Dickson (@BenDee983) in VentureBeat

In a paper published in the peer-reviewed scientific journal Nature last week, scientists at Google Brain introduced a deep reinforcement learning technique for floorplanning, the process of arranging the placement of different components of computer chips.

The researchers managed to use the reinforcement learning technique to design the next generation of Tensor Processing Units, Google’s specialized artificial intelligence processors.

The use of software in chip design is not new. But according to the Google researchers, the new reinforcement learning model “automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.” And it does it in a fraction of the time it would take a human to do so.

The AI’s superiority to human performance has drawn a lot of attention. One media outlet described it as “artificial intelligence software that can design computer chips faster than humans can” and wrote that “a chip that would take humans months to design can be dreamed up by [Google’s] new AI in less than six hours.”

Another outlet wrote, “The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.”

But while reading the paper, what amazed me was not the intricacy of the AI system used to design computer chips but the synergies between human and artificial intelligence.

The paper describes the problem as such: “Chip floorplanning involves placing netlists onto chip canvases (two-dimensional grids) so that performance metrics (for example, power consumption, timing, area and wirelength) are optimized, while adhering to hard constraints on density and routing congestion.”

Basically, what you want to do is place the components as optimally as possible. But as with many other problems, as the number of components on a chip grows, finding optimal designs becomes much more difficult.
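To make the optimization concrete, here is a minimal, purely illustrative Python sketch of the kind of proxy cost a placement tool might try to minimize: half-perimeter wirelength plus a penalty for overcrowded grid cells. The function names and the simplified netlist format are my own assumptions for illustration, not the cost function used in the Nature paper.

# Illustrative sketch only: a toy proxy cost for a floorplan, where each
# net is just a list of (x, y) component positions on a grid.
# Real EDA cost functions (timing, congestion, power) are far more involved.

def half_perimeter_wirelength(nets):
    """Sum of bounding-box half-perimeters, a common wirelength proxy."""
    total = 0.0
    for pins in nets:  # pins: list of (x, y) positions connected by one net
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def density_penalty(placements, max_per_cell=1):
    """Penalize grid cells holding more components than allowed."""
    counts = {}
    for cell in placements:
        counts[cell] = counts.get(cell, 0) + 1
    return sum(max(0, c - max_per_cell) for c in counts.values())

def proxy_cost(nets, placements, density_weight=10.0):
    """Lower is better: wirelength plus a weighted density penalty."""
    return half_perimeter_wirelength(nets) + density_weight * density_penalty(placements)

# Example: two components connected by one net.
placements = [(0, 0), (3, 2)]
nets = [placements]                  # a single two-pin net
print(proxy_cost(nets, placements))  # HPWL = 3 + 2 = 5, no density overflow

Even this toy version hints at why the problem gets hard: every additional component enlarges the space of candidate placements that has to be searched and scored.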

Existing software helps speed up the process of discovering chip arrangements, but it falls short as the target chip grows in complexity. The researchers decided to draw on the way reinforcement learning has solved other complex spatial problems, such as the game Go.

“Chip floorplanning is analogous [emphasis mine] to a game with varying pieces (for example, netlist topologies, macro counts, macro sizes and aspect ratios), boards (varying canvas sizes and aspect ratios) and win conditions (relative importance of different evaluation metrics or different density and routing congestion constraints),” the researchers wrote.
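Here is a minimal sketch of that analogy as code, assuming a gym-style environment interface: the "board" is a grid canvas, each "move" places one macro, and the "win condition" is a low proxy cost scored only when the floorplan is complete. The class and the reward design below are illustrative assumptions, not the environment Google Brain actually trained on.

# A toy sequential-placement "game" with a gym-like step/reset interface.
import numpy as np

class ToyPlacementEnv:
    def __init__(self, num_macros, grid_size, cost_fn):
        self.num_macros = num_macros
        self.grid_size = grid_size
        self.cost_fn = cost_fn        # maps a list of (x, y) placements to a cost
        self.reset()

    def reset(self):
        self.canvas = np.zeros((self.grid_size, self.grid_size), dtype=np.int8)
        self.placements = []
        return self.canvas.copy()

    def step(self, action):
        # action: index of a grid cell for the next macro
        x, y = divmod(action, self.grid_size)
        if self.canvas[x, y]:          # illegal move: cell already occupied
            return self.canvas.copy(), -1.0, False, {}
        self.canvas[x, y] = 1
        self.placements.append((x, y))
        done = len(self.placements) == self.num_macros
        # Sparse reward: only the finished floorplan is scored.
        reward = -self.cost_fn(self.placements) if done else 0.0
        return self.canvas.copy(), reward, done, {}

A learned policy would then choose actions to maximize that final reward; in this toy setup, cost_fn could simply be the proxy cost sketched earlier with the netlist bound in, for example cost_fn=lambda p: proxy_cost(nets, p).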
