Although major EDA companies have only in recent years been actively introducing AI into their chip design tools, Google's work dates back to 2020, when it released a preprint titled "Chip Placement with Deep Reinforcement Learning" introducing a new reinforcement learning method for chip layout design. In 2021, Google published the work in Nature and open-sourced it.
Recently, Google detailed its reinforcement learning method for chip layout design and named the model "AlphaChip". AlphaChip is expected to greatly speed up chip floorplanning and produce layouts that are more optimized in terms of performance, power consumption, and area. AlphaChip has been released on GitHub, and Google has also released a checkpoint pre-trained on 20 TPU blocks. AlphaChip played an important role in the design of Google's Tensor Processing Units (TPUs) and has been adopted by other companies, including MediaTek.
Google chief scientist Jeff Dean said that after opening the pre-trained AlphaChip model checkpoint, external users can more easily use AlphaChip to start their own chip design.
Typically, floorplanning is the longest and most labor-intensive stage of chip development. In recent years, Synopsys has developed AI-assisted chip design tools that can speed up development and optimize chip floorplans, but these tools are very expensive. Google hopes to democratize this AI-assisted design approach to some extent.
Today, it takes humans about 24 months to design the floorplan for a complex chip like a GPU. Floorplanning for less complex chips still takes at least several months and can cost millions of dollars, given the expense of maintaining a design team.
Google says AlphaChip speeds up that timeline, creating chip layouts in just a few hours. In addition, its designs are said to be superior because they optimize power efficiency and performance. Google also showed a chart showing that the average wirelength of various versions of TPU and Trillium has been reduced compared to human developers.
△The figure shows AlphaChip's average wirelength reduction across three generations of Google Tensor Processing Units (TPUs), compared with placements generated by the TPU physical design team.
How does AlphaChip work?
Chip design is not easy, in part because computer chips are made up of many interconnected blocks with multiple layers of circuit elements, all connected by extremely thin wires. Chips also have many complex and intertwined design constraints that must all be met simultaneously. Because of these complexities, chip designers have been working to automate the chip floorplanning process for more than 60 years.
Similar to AlphaGo and AlphaZero, when Google built AlphaChip, it also considered the layout planning of the chip as a game.
AlphaChip starts with a blank grid and places circuit elements one at a time until all elements are placed. It is then rewarded based on the quality of the final layout. Google proposed a novel "edge-based" graph neural network that enables AlphaChip to learn the relationships between interconnected chip elements and generalize across chips, so that AlphaChip improves with every layout it designs.
△Left: AlphaChip places circuit elements of the open source processor Ariane RISC-V CPU without any experience; Right: AlphaChip places the same circuit elements after practicing on 20 TPU-related designs.
Concretely, AlphaChip's reinforcement learning agent takes actions in a preset environment, observes the results, and learns from those experiences to make better choices in the future. The system treats floorplanning as a game, placing one circuit component at a time on the blank grid, and uses its graph neural network to understand the relationships between components, improving as it solves more layouts.
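To make the game framing concrete, here is a minimal toy sketch of such a placement environment, assuming a small grid, two-pin nets, and a reward equal to the negative half-perimeter wirelength of the finished layout. The class name, net list, and the greedy baseline "policy" are all illustrative inventions; Google's actual system uses a learned edge-based graph neural network policy and richer cost terms such as congestion and density.

```python
class PlacementEnv:
    """Toy floorplanning-as-a-game environment (illustrative only)."""

    def __init__(self, grid=8, nets=((0, 1), (1, 2), (0, 2))):
        self.grid = grid          # grid x grid placement cells
        self.nets = nets          # each net connects two macro indices
        self.n_macros = 1 + max(max(n) for n in nets)
        self.positions = []       # (row, col) of each placed macro

    def reset(self):
        self.positions = []
        return tuple(self.positions)

    def legal_actions(self):
        # Any free cell may receive the next macro.
        used = set(self.positions)
        return [(r, c) for r in range(self.grid) for c in range(self.grid)
                if (r, c) not in used]

    def step(self, action):
        # Place one macro; reward arrives only when the layout is done.
        self.positions.append(action)
        done = len(self.positions) == self.n_macros
        reward = -self._wirelength() if done else 0.0
        return tuple(self.positions), reward, done

    def _wirelength(self):
        # Half-perimeter (Manhattan) wirelength over two-pin nets.
        total = 0
        for a, b in self.nets:
            (r1, c1), (r2, c2) = self.positions[a], self.positions[b]
            total += abs(r1 - r2) + abs(c1 - c2)
        return total


# Greedy baseline: put each macro where it minimizes distance to the
# macros already placed (a crude stand-in for a learned policy).
env = PlacementEnv(grid=4)
state = env.reset()
done = False
while not done:
    best = min(env.legal_actions(),
               key=lambda a: sum(abs(a[0] - r) + abs(a[1] - c)
                                 for r, c in env.positions))
    state, reward, done = env.step(best)
print("final layout:", state, "reward:", reward)
```

A real agent would replace the greedy rule with a neural policy trained on the end-of-episode reward, which is exactly the delayed-reward structure that makes reinforcement learning a natural fit here.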
Adopted by both Google TPU and MediaTek
Since 2020, AlphaChip has been used to design Google's own TPU AI accelerators, which drive many of Google's large-scale AI models and cloud services. These processors run Transformer-based models and power Google's Gemini and Imagen.
To design TPU layouts, AlphaChip first practices on various chip blocks from previous generations, such as on-chip and inter-chip network blocks, memory controllers, and data transfer buffers. This process is called pre-training. Google then runs AlphaChip on current TPU blocks to generate high-quality layouts. Unlike previous methods, AlphaChip solves more instances of chip layout tasks and therefore gets better and faster, just like human experts do.
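The pretrain-then-apply workflow can be illustrated with a deliberately tiny sketch. Here a "policy" is just a vector of preference weights over a handful of hypothetical discrete placement strategies, and past-generation blocks happen to favor the same strategy as new ones, which is what makes the transfer pay off. None of this reflects Google's actual training code, which trains a graph neural network with reinforcement learning.

```python
import random

random.seed(0)

STRATEGIES = 4  # hypothetical discrete placement strategies


def block_reward(block_id, strategy):
    # Toy reward: related blocks prefer the same strategy, which is
    # what makes pre-training on past blocks pay off on new ones.
    best = block_id % STRATEGIES
    return 1.0 - 0.5 * abs(strategy - best)


def train(policy, block_id, episodes=200, lr=0.1):
    """Reinforce strategies in proportion to the reward they earn."""
    for _ in range(episodes):
        # Sample a strategy with probability proportional to its weight.
        r = random.uniform(0, sum(policy))
        for strategy, weight in enumerate(policy):
            r -= weight
            if r <= 0:
                break
        policy[strategy] += lr * block_reward(block_id, strategy)
    return policy


# "Pre-training": practice on blocks from previous chip generations.
pretrained = [1.0] * STRATEGIES
for past_block in (1, 5, 9):
    pretrained = train(pretrained, past_block)

# A fresh, untrained policy has no preference at all; the pretrained
# one already leans toward the strategies that worked before.
fresh = [1.0] * STRATEGIES
print("pretrained preferences:", [round(w, 1) for w in pretrained])
print("fresh preferences:     ", fresh)
```

The point of the sketch is the workflow, not the numbers: experience accumulated on earlier blocks biases the policy before it ever sees the new block, mirroring how AlphaChip gets better and faster as it solves more placement instances.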
AlphaChip has improved the design of each generation of TPU, including the latest sixth-generation Trillium chip, enabling higher performance and faster development. That said, Google and MediaTek currently rely on AlphaChip for only a limited number of blocks per chip, and human developers still do most of the design work. As AlphaChip continues to iterate, though, the number of blocks it handles keeps growing, from 10 blocks in TPU v5e to 25 blocks in Trillium.
△The number of chip blocks designed by AlphaChip in Google’s three recent generations of tensor processing units (TPUs), including v5e, v5p, and Trillium
So far, AlphaChip has been used to develop a variety of processors, including Google's TPUs and MediaTek's flagship Dimensity 5G SoCs, which are widely used in smartphones, as well as Google's first Arm-based general-purpose data-center CPU, Axion. This shows that AlphaChip can generalize across different types of processors.
Google says it has pre-trained AlphaChip on a variety of chip blocks, which allows it to generate increasingly efficient layouts as it practices on more designs. While human experts can learn, and many learn quickly, machines learn orders of magnitude faster.
Expanding the use of AI in chip development
Google said the success of AlphaChip has inspired a new wave of research applying artificial intelligence to different stages of chip design, extending AI techniques to areas such as logic synthesis, macro selection, and timing optimization, areas that Synopsys and Cadence already address, albeit at considerable cost. According to Google, researchers are also exploring how to apply AlphaChip's methods to further stages of chip development.
“AlphaChip inspires a new line of research in reinforcement learning for chip design, spanning the design flow from logic synthesis to floorplanning, timing optimization, and more,” reads a statement from Google.
Looking ahead, Google sees AlphaChip's potential to revolutionize the entire chip design lifecycle: from architecture design to layout to manufacturing, AI-driven optimization may lead to chips that are faster, smaller (and therefore cheaper), and more energy-efficient. While Google's servers and MediaTek's Dimensity 5G-based smartphones benefit from AlphaChip today, future applications may extend to almost every area.
Future versions of AlphaChip are already under development, and with AI's help, chip design may yet become simpler.
Source: compiled from Xinzhixun and other online reports