Meeting traditional Moore's Law scaling challenges requires new approaches to chip routing and integration
Applied Vision | An Applied Materials blog post. As Moore's Law scaling slows, new approaches to chip routing and integration are needed to accelerate PPACt. Read Kevin Moraes' latest article on Applied Materials' advances in chip routing and integration.
Since the early days of the computer industry, chip designers have had an insatiable appetite for transistors. Intel launched the microprocessor revolution in 1971 with the 2,300-transistor 4004 microprocessor; today's mainstream CPUs contain tens of billions of transistors.
Through the decades, the central question has been how to convert growing transistor budgets into better chips and systems. During the Dennard scaling era, which lasted into the early 2000s, shrinking transistors simultaneously improved chip power, performance, and area cost, or PPAC.
Today, we are in an era defined by new architectures, with computing performance delivered by cores and accelerators enabled by larger transistor budgets and bigger chips. However, as I will explain later in this blog, new limits are approaching.
EUV is here, what should we do now?
EUV lithography has arrived, making it possible to print smaller transistor features and wiring on chips, but its practitioners face new challenges. At the International Electron Devices Meeting (IEDM 2019) roundtable "The Future of Logic: EUV is here, what now?", industry experts noted that the technology simplifies patterning but is not a panacea. Below are several challenges the panelists discussed; the solutions they proposed are now gradually appearing on the semiconductor industry's roadmaps.
Applied Materials explored these three topics further in its May 26 master class, "New Ways to Wire and Integrate Chips," demonstrating innovations in materials engineering and heterogeneous integration that address the resistance problem of EUV scaling, enable further logic scaling without changing lithography technology, and give designers a nearly unlimited transistor budget. The following is an overview of the master class.
Wiring Innovations Needed to Improve Power and Performance
The advent of EUV has simplified patterning by enabling manufacturers to print features at pitches of 25nm and below with a single exposure. Unfortunately, making chip wiring smaller does not make it better: the resistance challenge of EUV scaling shows up in the smallest transistor contacts, vias, and interconnects, and this is where materials engineering innovation is needed.
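Why smaller wiring gets worse can be seen from the classic resistance relation R = ρL/A: shrinking a wire's cross-section raises its resistance even if nothing else changes. The sketch below uses the bulk resistivity of copper and hypothetical dimensions (not Applied Materials data); in real scaled copper lines the effective resistivity rises further still, due to electron scattering at surfaces and grain boundaries.

```python
# Illustrative only: resistance of a rectangular wire, R = rho * L / A.
# Dimensions are hypothetical, chosen to show the scaling trend.

def wire_resistance(rho_ohm_m, length_m, width_m, height_m):
    """Resistance of a rectangular wire: R = rho * L / (w * h)."""
    return rho_ohm_m * length_m / (width_m * height_m)

RHO_CU = 1.7e-8  # bulk copper resistivity, ohm*m

# A 1 um long interconnect at two linewidths (square cross-section).
r_40nm = wire_resistance(RHO_CU, 1e-6, 40e-9, 40e-9)
r_20nm = wire_resistance(RHO_CU, 1e-6, 20e-9, 20e-9)

print(f"40 nm wire: {r_40nm:.1f} ohm")
print(f"20 nm wire: {r_20nm:.1f} ohm")
print(f"ratio: {r_20nm / r_40nm:.0f}x")  # halving both dimensions -> 4x resistance
```

Halving both cross-sectional dimensions quadruples the resistance for the same length, before accounting for the resistivity penalty at small dimensions or for the barrier and liner layers that eat into the conductor's cross-section.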
To create the wiring, trenches are etched into the dielectric material and then filled with a metal stack that typically includes: a barrier layer to prevent the metal from mixing with the dielectric; a liner layer to promote adhesion; a seed layer to facilitate metal fill; and the conductor itself, such as tungsten or cobalt for transistor contacts and copper for interconnects. Applied Materials has been developing new technologies that are reshaping how chip wiring is designed and manufactured.
Using backside power distribution networks to facilitate logic circuit scaling
The transistors are powered by a network of wires that carry voltage from an off-chip regulator through all the metal layers of the chip to each logic cell. At each of the chip's 12 or more metal layers, wiring resistance drops the supply voltage.
A power delivery network's design margin can typically tolerate a voltage drop of about 10% between the regulator and the transistors. Scaling the lines and vias further with EUV raises resistance and worsens routing congestion; as a result, it may be impossible to scale below 3nm with existing power delivery technology without tolerating voltage drops of up to 50%, which would cause serious transistor stability problems.
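The 10% budget can be pictured as a simple series IR-drop calculation through the metal stack. The sketch below is hypothetical: the per-layer resistances, supply voltage, and current are made-up values chosen only to illustrate how the drops accumulate across 12 layers.

```python
# Hypothetical sketch of cumulative IR drop through a front-side power
# delivery stack. Layer resistances and current are invented values,
# not Applied Materials data.

SUPPLY_V = 0.7      # nominal supply voltage (V), assumed
CURRENT_A = 0.001   # current drawn along this path (A), assumed

# Per-layer series resistance (ohm) for 12 metal layers: the fine lower
# layers are assumed more resistive than the thick upper layers.
layer_resistance = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1.5, 1, 0.5]

drop_v = sum(r * CURRENT_A for r in layer_resistance)  # V = I * R per layer
drop_pct = 100 * drop_v / SUPPLY_V

print(f"total IR drop: {drop_v * 1000:.1f} mV ({drop_pct:.1f}% of supply)")
print("within 10% budget" if drop_pct <= 10 else "exceeds 10% budget")
```

In this made-up example the drop stays just inside the 10% budget; raising the resistance of the fine lower layers, as further EUV scaling tends to do, quickly exhausts it. A backside network avoids the problem by bypassing the stack entirely.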
Within each logic cell, the power lines (called "rails") must be a certain size to deliver enough voltage for the transistors to switch, so they cannot be scaled as aggressively as other logic cell components such as transistor structures and signal lines. As a result, power rails are now about three times wider than other features, posing a major obstacle to logic density scaling.
The solution is a simple but elegant idea: why not move all the power wiring to the back of the wafer, solving both the voltage-drop problem and the logic cell scaling challenge at once?
This is where Applied Materials' innovation, built on its leadership in front-side wiring, comes in. A "backside power distribution network" bypasses the chip's 12 or more wiring layers, reducing voltage drop by up to 7X. Removing the power rails from the logic cells can shrink logic cell area by up to 30% at the same lithography pitch, equivalent to roughly two generations of EUV scaling.
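A back-of-the-envelope sketch shows where a figure in that range can come from. The track counts and rail widths below are hypothetical, not Applied Materials data: cell height is measured in routing "tracks," and the power rails at the top and bottom of a cell row are shared with the neighboring row.

```python
# Rough, hypothetical illustration of the logic-density claim above.
# All numbers are invented for illustration.

signal_tracks = 6.0       # assumed signal tracks per cell
rail_width_tracks = 3.0   # rail ~3x wider than a signal track (per text)

# Two rails per cell, each shared with a neighboring row -> half each.
rail_tracks_per_cell = 2 * (rail_width_tracks / 2)

height_with_rails = signal_tracks + rail_tracks_per_cell   # 9 tracks
height_without_rails = signal_tracks                       # 6 tracks

# Moving the rails to the backside shrinks cell height (and thus area,
# at fixed cell width) without touching the lithography pitch.
area_shrink = 1 - height_without_rails / height_with_rails

print(f"cell height: {height_with_rails:.0f} -> {height_without_rails:.0f} tracks")
print(f"cell area shrink: {area_shrink:.0%}")
```

With these assumed numbers the shrink works out to about 33%, in the same range as the up-to-30% figure cited above.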
According to public information, chipmakers are evaluating three different backside power distribution architectures, each with design tradeoffs. Some approaches will be easier to manufacture, while other, more complex approaches can maximize the area savings.
Heterogeneous integration drives PPACt at the chip and system level
As transistor counts continue to grow exponentially while 2D scaling slows, chip sizes are increasing and pushing up against the "reticle limit," the largest die a lithography tool can expose in a single field. In the past, that area could hold a large number of high-performance PC and server chips, or a small number of very high-performance server chips. Today, designers of servers, GPUs, and even PC chips want more transistors than the reticle area can accommodate. This has forced and accelerated the industry's transition to heterogeneous design and integration using advanced packaging techniques.
Conceptually, if two dies can be connected through their back-end interconnect layers, the resulting heterogeneous chip can perform as one, overcoming the reticle limitation.
In fact, the concept exists: it's called hybrid bonding, and it's showing up on the roadmaps of leading chipmakers. One promising example is combining large SRAM cache chips with CPU chips to simultaneously overcome reticle limitations, speed development time, boost performance, shrink chip size, improve yield, and reduce cost. SRAM caches can be built on older, depreciated manufacturing nodes to cut costs further. In addition, using advanced substrate and packaging techniques such as through-silicon vias, designers can bring technologies that do not scale well (DRAM and flash, analog, power, and optical chips) closer to the logic and memory caches, improving design flexibility and time to market while optimizing system performance, power, size, and cost.
To accelerate the industry's transition from the system-on-chip era to the system-in-package era, Applied Materials is working to develop hybrid bonding solutions.
About the Author
Kevin Moraes
Kevin Moraes is Vice President of Products and Marketing for the Semiconductor Division at Applied Materials. He leads the team in product strategy, investment priorities, and product line management. Dr. Moraes holds a Ph.D. in Materials Science and Engineering from Rensselaer Polytechnic Institute and an MBA from the Haas School of Business at the University of California, Berkeley.
About Applied Materials
Applied Materials, Inc. (NASDAQ: AMAT) is a leader in materials engineering solutions that are behind nearly every new chip and advanced display produced worldwide. With technologies that can transform materials at the atomic level at scale, we enable our customers to realize what’s possible. At Applied Materials, we believe that our innovations enable a better future. For more information, visit www.appliedmaterials.com