112Gb/s: What kind of connector is suitable for such a high data transmission rate?
Data usage is increasing every year, and the communications industry is working hard to support the growing demand. This article explores why we need more data, the physical layer architecture of data centers, what changes are needed to support higher data rates, and how Amphenol is actively responding to such high-speed demands and supporting higher data rate systems.
Due to the COVID-19 pandemic, many people around the world are working and studying from home, and this shift has led to a surge in Internet usage, particularly data used for video conferencing, remote server access, large file transfers, online gaming, and social media.
Today, there are about 5 billion users of social media platforms worldwide, and in 2021 the cumulative time humans spend on these platforms is expected to reach 420 million years. New mobile phones released this year come with features such as 8K and 360° video, and this big-data content will be shared and streamed on social media platforms.
In 2020, the average household used 350 GB of data per month, and many households used 1 TB or more, which is already the data cap imposed by most internet providers; data usage will only continue to increase.
The rise of 5G has inspired many new technologies. Precision agriculture uses 5G to connect sensors, drones, and automation hardware to reduce waste and increase yield. Self-driving cars traveling at typical highway speeds send updates to data centers via 5G every two feet. Delivery drones, too, are only practical because of the 5G network. Finally, 5G-powered augmented reality can make shopping from home a whole new experience.
To fully realize the capabilities of 5G and the applications it enables, we need to upgrade the infrastructure, including links that carry 112 Gb/s per differential signal pair.
For data centers, this means upgrading servers and switches to meet the IEEE 400GBASE-KR4 and 400GBASE-CR4 protocols.
To support the above services, data centers and edge data centers need to transition to higher-speed architectures. Currently, most data center servers follow the 100GBASE-CR4 and 100GBASE-KR4 specifications described in IEEE 802.3 Clauses 92 and 93. These protocols were released in 2014 and use a 25.78125 GBd symbol rate with NRZ modulation.
Now we are moving to 200GBASE-KR4. This protocol runs at 26.5625 GBd and uses PAM4 modulation. The symbol (baud) rate has barely changed, but each symbol now carries two bits instead of one. This change in modulation means each bit occupies a smaller share of the signal amplitude, and with less signal per bit, the system's signal-to-noise ratio drops.
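A quick sketch makes the NRZ-to-PAM4 trade-off concrete. The helper names below are mine, not from any standard; the ~9.5 dB figure is the ideal-case amplitude penalty from splitting one voltage swing into three stacked PAM4 eyes.

```python
import math

def lane_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Raw per-lane data rate in Gb/s: symbol rate times bits per symbol."""
    return baud_gbd * bits_per_symbol

# Per-lane rates from the IEEE symbol rates quoted above
nrz_lane = lane_rate_gbps(25.78125, 1)    # 100GBASE-KR4: NRZ, 1 bit/symbol
pam4_lane = lane_rate_gbps(26.5625, 2)    # 200GBASE-KR4: PAM4, 2 bits/symbol

# PAM4 stacks 3 eyes in the same voltage swing, so each eye gets
# one third of the NRZ amplitude: a ~9.5 dB penalty in the ideal case.
pam4_penalty_db = 20 * math.log10(3)

print(f"NRZ lane:  {nrz_lane} Gb/s")       # 25.78125
print(f"PAM4 lane: {pam4_lane} Gb/s")      # 53.125
print(f"PAM4 amplitude penalty: {pam4_penalty_db:.2f} dB")  # 9.54
```

The per-lane bit rate roughly doubles while the frequency content of the signal stays almost the same, which is exactly why the same interconnect can be reused at 200G.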
To illustrate the difference, let's take a 100GBASE-KR4 backplane as an example.
At the Nyquist frequency of 25.78125 GBd signaling (12.89 GHz), the backplane has an insertion loss of about 25 dB (blue line) and a signal-to-noise ratio of about 25-35 dB, depending on the routing. If we plot the equalized eye diagram of the channel alone at 25.78125 GBd with NRZ modulation and no crosstalk, we see a wide-open eye with a height of about 40 mV and a width of almost a full unit interval. Doing the same at 26.5625 GBd with PAM4 modulation, the situation is worse: the eye height is about 13 mV and the eye width is only about 50% of a unit interval.
Although signal quality under the 200GBASE-KR4 protocol is significantly worse than under 100GBASE-KR4, the data rate can clearly still be doubled over the same interconnect system. This is good news for integrators and data center owners who want a simple upgrade path.
Let's see what happens when we consider the next generation of high-speed data center protocol, 400GBASE-KR4. This protocol runs at 53.125 GBd (26.56 GHz Nyquist frequency) and corresponds to the OIF CEI-112G standard.
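The Nyquist frequencies quoted throughout follow directly from the symbol rates (the function name here is just for illustration):

```python
def nyquist_ghz(baud_gbd: float) -> float:
    """Nyquist frequency is half the symbol (baud) rate."""
    return baud_gbd / 2

print(nyquist_ghz(25.78125))  # 12.890625 -> the ~12.89 GHz quoted for 100GBASE-KR4
print(nyquist_ghz(53.125))    # 26.5625   -> the ~26.56 GHz quoted for 400GBASE-KR4
```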
The eye has collapsed completely, which means current hardware cannot meet the 400GBASE-KR4 standard. Another way to evaluate a working channel is the industry-standard figure of merit, the Channel Operating Margin, or COM. COM accounts for the behavior of the chips at each end of the link and expresses the signal-to-noise ratio in decibels of voltage. In most cases, a COM greater than 3 dB passes the interoperability requirements.
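As a back-of-the-envelope illustration only (the real COM procedure in IEEE 802.3 involves full channel and equalizer models, and the function below is a simplification of mine), the dB-to-voltage-ratio relationship works out as:

```python
import math

def com_like_db(signal_v: float, noise_v: float) -> float:
    """Signal-to-noise expressed as a voltage ratio in dB, as COM does."""
    return 20 * math.log10(signal_v / noise_v)

# The ~3 dB pass threshold corresponds to a signal/noise voltage ratio
# of about 1.41, since 10 ** (3 / 20) ~= 1.4125.
threshold_ratio = 10 ** (3 / 20)
print(f"{threshold_ratio:.2f}")              # 1.41
print(f"{com_like_db(1.4125, 1.0):.2f} dB")  # ~3.00 dB
```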
As the eye diagram shows, the backplane easily passes the 100GBASE-KR4 electrical requirements, passes the 200GBASE-KR4 requirements by a small margin, but falls far short of the 400GBASE-KR4 requirements.
It's time to upgrade, but what do we need to do?
The first obvious problem is that 400GBASE-KR4 involves high frequencies. The protocol budgets for 28 dB of channel loss at 26.56 GHz, while the current channel loss at that frequency is about 52 dB. Clearly, the backplane architecture must change. This can be achieved by shortening the channel or by replacing the traditional backplane with a cabled solution.
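To get a feel for how much the channel must shrink, here is a first-order estimate of my own. It assumes trace loss in dB grows roughly linearly with length at a fixed frequency and ignores connector, via, and material contributions, so it is only a rough bound:

```python
def allowed_length_fraction(current_loss_db: float, budget_db: float) -> float:
    """First-order estimate: if trace loss in dB grows linearly with length
    at a fixed frequency, this is the fraction of the current channel
    length that fits within the loss budget."""
    return budget_db / current_loss_db

# 52 dB measured today vs. the 28 dB budget at 26.56 GHz
frac = allowed_length_fraction(52.0, 28.0)
print(f"{frac:.2f}")  # ~0.54: the channel must shrink to about half its length
```

In practice, better PCB materials and cabled backplanes relax this constraint, which is why those are the upgrade paths discussed next.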
Amphenol is ready to ship cable backplane solutions including ExaMAX® 2, ExtremePort™ Swift, Paladin®, and micro-LinkOVER™.
We start by reducing loss in the backplane, achieved by slightly shortening trace lengths and using the best PCB materials. The loss now stays within the 400GBASE-KR4 limit: 21 dB at 26.56 GHz.
If we analyze this backplane with COM, it still fails. Why? Digging deeper, we find there is too much noise in the system: to pass COM, the signal-to-noise voltage ratio must exceed 1.41 (about 3 dB), and this channel falls short.
Looking one level deeper, we see that reflections and crosstalk contribute about equally to the noise. The crosstalk, however, comes mostly from near-end crosstalk (NEXT), with very little from far-end crosstalk (FEXT).
If we instead use a connector system designed for 400GBASE-KR4 transmission, the noise drops and the signal improves, producing a valid 400GBASE-KR4 channel! In short, systems operating at these frequencies must keep insertion loss below 28 dB at 26.56 GHz and use an interconnect solution with low reflections and low near-end crosstalk.
To achieve higher transmission rates, Amphenol recognizes that it must offer higher-speed connectors and complete high-speed interconnect solutions. In both electrical and mechanical terms, Amphenol can help customers meet future needs and has prepared a full range of connectors for 112G integration.