Nvidia explains why the RTX 4060 Ti uses a 128-bit memory bus: the L2 cache is 16 times larger, greatly improving the hit rate

Publisher: 星光小狐狸 | Last updated: 2023-05-22 | Source: IT之家

According to news on May 20, NVIDIA has released the RTX 4060 Ti 8G graphics card, whose memory bus is only 128 bits wide. In comparison, the older RTX 3060 Ti has a 256-bit bus, and even the RTX 3060 has a 192-bit bus.


NVIDIA responded to this in a blog post, saying that compared with previous-generation GPUs with a 128-bit memory bus, the memory subsystem of the new NVIDIA Ada Lovelace architecture increases the size of the L2 cache by 16 times, which significantly improves the cache hit rate.


NVIDIA also notes that L2 cache bandwidth has increased significantly in Ada GPUs, making it possible to move more data between the processing cores and the L2 cache.
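To make the link between hit rate and bus traffic concrete, here is a minimal back-of-the-envelope sketch in Python. It is not NVIDIA's methodology, and the hit rates in it are hypothetical values chosen only to illustrate the mechanism: every request that misses in L2 has to be served from GDDR6, so cutting the miss traffic in half halves what crosses the memory bus.

```python
# Toy model: data that misses in L2 must be fetched over the memory bus,
# so DRAM traffic scales with the L2 miss rate.
# The hit rates below are illustrative assumptions, not NVIDIA figures.

def dram_traffic_gb(requested_gb: float, l2_hit_rate: float) -> float:
    """Data that actually crosses the memory bus after L2 filters out hits."""
    return requested_gb * (1.0 - l2_hit_rate)

requested = 100.0       # GB requested by the shader cores (assumed workload)
small_l2_hit = 0.30     # hypothetical hit rate with a 2 MB L2
large_l2_hit = 0.65     # hypothetical hit rate with a 32 MB L2

small = dram_traffic_gb(requested, small_l2_hit)   # 70 GB goes to DRAM
large = dram_traffic_gb(requested, large_l2_hit)   # 35 GB goes to DRAM
print(f"Bus traffic cut by {1 - large / small:.0%}")   # -> 50%
```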

In addition, Nvidia engineers tested a special version of the RTX 4060 Ti with its full 32 MB of L2 cache against one limited to just 2 MB of L2.


Across a variety of games and synthetic benchmarks, the 32 MB L2 cache reduced memory bus traffic by more than 50% on average compared with the 2 MB configuration. This reduction in traffic lets the GPU use its memory bandwidth far more efficiently, by up to nearly 2x. Isolating memory performance, an Ada GPU with 288 GB/s of peak memory bandwidth therefore performs similarly to an Ampere GPU with 554 GB/s of peak memory bandwidth. Across a range of games and synthetic tests, the greatly improved cache hit rate boosted frame rates by up to 34%.
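Reading NVIDIA's numbers back the other way: if the larger cache roughly halves bus traffic, the same workload needs roughly half the raw bandwidth. Below is a quick sanity check of the 288 GB/s vs. 554 GB/s comparison; the scaling factor is inferred from those two figures, not quoted by NVIDIA.

```python
# Effective bandwidth ~= peak bandwidth / fraction of traffic still hitting DRAM.
# The 0.52 factor is inferred so the result matches NVIDIA's 554 GB/s comparison;
# it corresponds to bus traffic being roughly halved.
ada_peak_gbps = 288.0        # GB/s, RTX 4060 Ti peak memory bandwidth
traffic_remaining = 0.52     # assumed fraction of requests still served by DRAM

effective_gbps = ada_peak_gbps / traffic_remaining
print(f"Effective bandwidth: {effective_gbps:.0f} GB/s")   # ~554 GB/s
```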


Nvidia said that, historically, memory bandwidth has been used as an important metric for judging the speed and performance tier of a new GPU. However, memory bus width alone is not a sufficient indicator of memory subsystem performance; it is more helpful to look at the memory subsystem design as a whole and its overall impact on gaming performance.


Thanks to the advantages of the Ada architecture, including new RT and Tensor cores, higher clock speeds, the new OFA engine, and DLSS 3, the GeForce RTX 4060 Ti is faster than the previous-generation, 256-bit GeForce RTX 3060 Ti and RTX 2060 SUPER graphics cards while using less power.


The RTX 4060 Ti and RTX 4060 specifications are as follows:

The RTX 4060 Ti has 4352 CUDA cores and 8 GB or 16 GB of 128-bit GDDR6 video memory, with a TGP of 160 W / 165 W and a PCIe 4.0 x8 interface. It is priced from 3,199 yuan and goes on sale on May 24.


The RTX 4060 has 3072 CUDA cores and 8 GB of 128-bit GDDR6 video memory, with a TGP of 115 W and a PCIe 4.0 x8 interface. It is priced from 2,399 yuan and will be available in July.
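As a rough cross-check of the 288 GB/s peak bandwidth cited earlier, peak GDDR6 bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. The 18 Gbps figure below is an assumption based on the GDDR6 typically fitted to the RTX 4060 Ti; it is not stated in this article.

```python
# Peak bandwidth = (bus width in bytes) * (per-pin data rate).
# The 18 Gbps GDDR6 speed is an assumption (typical for the RTX 4060 Ti),
# not a figure stated in this article.
bus_width_bits = 128
data_rate_gbps = 18.0        # Gbit/s per pin, assumed

peak_gbps = (bus_width_bits / 8) * data_rate_gbps
print(f"Peak memory bandwidth: {peak_gbps:.0f} GB/s")   # 16 B * 18 = 288 GB/s
```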

