Transistors to be one-sixth their current size by 2030

Last updated: 2020-03-01

Source: translated from Semiwiki; author: Stephen Crosher.


As the pace of silicon technology development slows, the speed of product development faces very real challenges. Today we are extracting the last advantages from transistor physics that is still essentially derived from 60-year-old CMOS technology. To keep pace with Moore's Law, it is estimated that by 2030 transistors will need to be one-sixth their current size.
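The one-sixth figure can be sanity-checked with simple scaling arithmetic. A minimal sketch, assuming transistor density doubles roughly every two years from a 2020 baseline (the two-year cadence is an assumption for illustration, not a figure from the article):

```python
# Back-of-the-envelope check of the "one-sixth the size by 2030" claim.
years = 2030 - 2020
density_gain = 2 ** (years / 2)       # ~32x more transistors per unit area
linear_shrink = density_gain ** -0.5  # density scales with the square of linear size

print(f"density gain: {density_gain:.0f}x")          # 32x
print(f"linear dimension: {linear_shrink:.3f}x")     # 0.177x, i.e. about one-sixth
```

A 32x density gain over a decade implies linear dimensions of 1/√32 ≈ 0.18 of today's, which matches the "one-sixth" estimate.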

Shrinking transistors increases density, which itself becomes problematic: with supply voltage no longer scaling, the power dissipated per given area of silicon rises, the very situation that Dennard scaling once held in check. Add the limits of parallelism in multi-core architectures, and our ability to develop increasingly energy-efficient silicon is simply heading in the wrong direction!
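The arithmetic behind that claim can be sketched from the dynamic-power relation P = C·V²·f. The node-shrink factor below is illustrative, not a figure from the article:

```python
# Illustrative Dennard-scaling arithmetic: dynamic power P = C * V^2 * f.
# Under classic Dennard scaling, linear dimensions, capacitance, and voltage
# all scale by s (< 1) while frequency scales by 1/s, keeping power per unit
# area constant. Once voltage stops scaling, the same shrink raises power
# density. The factor s = 0.7 is a made-up, typical-looking node shrink.
def power_density(C, V, f, area):
    return C * V * V * f / area

s = 0.7  # hypothetical per-node linear shrink

baseline = power_density(C=1.0, V=1.0, f=1.0, area=1.0)

# Classic Dennard: C -> s*C, V -> s*V, f -> f/s, area -> s^2 * area
dennard = power_density(C=s, V=s, f=1 / s, area=s * s)

# Post-Dennard: voltage stuck at 1.0 V, frequency held flat
post = power_density(C=s, V=1.0, f=1.0, area=s * s)

print(dennard / baseline)  # 1.0  -> power density constant
print(post / baseline)     # ~1.43 -> power density (and heat) climb per node
```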

As we push into finer silicon geometries, we find that manufacturing-process variability at advanced nodes is growing. Our loosening grip on the thermal environment means we can no longer simply reap an energy-consumption dividend by moving to the next node. Dynamic fluctuations in supply-voltage levels across the chip threaten the operation of the digital logic that underpins its functionality. Combine these factors with the growing urgency to reduce power consumption in extremely large-scale data systems, and to seek efficiencies that lower the world's carbon emissions in both the manufacture and the use of electronics, and it is clear that we must think smartly and seek new approaches.

I am not the first to report on the impending technology slowdown we face, nor will I be the last; pessimism of this kind has accompanied the silicon industry since its beginning.

As a species, we know how to get smart. If we can see and understand something, we have a better chance of controlling it: the more data we have, the more effective we become.

Monitoring systems have two phases by nature, reflecting our inherent curiosity as humans. First comes "awakening": discovery brings enlightenment, but also opportunity. Second comes "evolution": once data is collected from a system that until now has been invisible, we seek to improve its quality, accuracy, and granularity. By adding "data intelligence" to the information we collect, we correlate dynamic circuit conditions, aiming to identify trends and extract signatures or patterns from the vast amounts of data.
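As a toy illustration of extracting a signature from monitor telemetry, the sketch below flags supply-voltage droops against a rolling baseline. The function name, window size, and threshold are all hypothetical choices for illustration, not part of any real monitoring IP:

```python
# Hypothetical example: scan a stream of on-chip voltage-monitor samples and
# flag droop events where the supply falls more than `threshold` volts below
# a rolling mean of the previous `window` samples.
from collections import deque

def detect_droops(samples, window=8, threshold=0.05):
    """Return indices where a sample drops `threshold` V below the
    rolling mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    events = []
    for i, v in enumerate(samples):
        if len(history) == window and (sum(history) / window) - v > threshold:
            events.append(i)
        history.append(v)
    return events

supply = [0.80] * 20
supply[12] = 0.72  # a transient droop injected into otherwise flat telemetry
print(detect_droops(supply))  # [12]
```

Real deployments would correlate such events across many sensors and against workload activity, which is the "data intelligence" step described above.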

What's next?


For information to have value, it must be of good quality. I have had many conversations outlining the perfect embedded monitoring system: infinitely accurate, infinitely small, with zero latency and zero power! As a provider of embedded monitoring subsystems at advanced nodes, we are not there yet, but we are working on it! Until a panacea is found, SoC developers need to be aware of the area overhead of sensor systems. While sensors are relatively small, their cores are often analog designs, and unlike the adjacent logic circuits they do not necessarily scale as geometries shrink.

For this reason, we must understand and seek circuit topologies and schemes that reduce the silicon area occupied by the sensors themselves. To minimize the area impact and make the most of on-chip sensors in the layout, these issues are best discussed during the architectural phase of SoC development rather than as an afterthought during floorplanning. Increasingly, sensor subsystems are becoming a critical foundation for chip power management and performance optimization: getting them wrong can subject devices to existential stress, and can inflict serious reputational damage on companies along the technology food chain that build the larger products and systems used in today's automotive, consumer, and high-performance computing markets.
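The area-overhead concern can be made concrete with a back-of-the-envelope calculation. All die and sensor sizes below are made-up illustrations of the trend, not real figures:

```python
# Illustrative arithmetic: because analog sensor cores do not shrink with the
# logic, the same sensor subsystem consumes a growing fraction of each die
# as the process scales. All numbers here are hypothetical.
def sensor_overhead_pct(die_mm2, sensors, sensor_mm2):
    """Percentage of the die consumed by `sensors` fixed-size sensor cores."""
    return 100.0 * sensors * sensor_mm2 / die_mm2

# Same 50-sensor subsystem: first on a 100 mm^2 die, then after the logic
# shrinks the die to 50 mm^2 while the analog sensors stay the same size.
print(sensor_overhead_pct(100.0, 50, 0.01))  # 0.5 (% of die)
print(sensor_overhead_pct(50.0, 50, 0.01))   # 1.0 (% of die) -> overhead doubles
```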

So, as we try to keep following Moore's Law while containing the consequences of the end of Dennard scaling, we need to innovate, and we certainly will. But that innovation will come from a clearer understanding of the dynamic conditions deep inside the chip, rather than from how the chip itself implements its core functions.


