A brief analysis of the working principle and test results of the bit error meter

Publisher: ArtisticSoul | Last update: 2022-10-11 | Source: elecfans

The bit error meter is a commonly used instrument for testing high-speed digital (including optical communication) devices and systems.

A brief analysis of the working principle of the bit error meter

Figure 1 Structural block diagram of traditional bit error detector

The traditional bit error meter consists of two parts:

1) Pattern generator.

It includes: a clock source (internal or external), a pattern-generation component (produces the required pattern: PRBS or a custom format), a signal-conditioning front end (output level control, etc.), and a clock-signal front end (output clock level control, etc.).

2) Error receiver:

It includes: a clock-recovery circuit (some BERTs have no CDR), a pattern-decision circuit (recovers the pattern data from the signal), an error-detection circuit (checks whether the pattern data are correct), a receive-side pattern generator (produces the expected pattern locally as a reference), an error counter, etc.


In order to measure the bit error rate of a digital system, a test pattern is applied to the input. Usually the test pattern is a pseudo-random binary sequence (PRBS). Of course, other protocol-bearing stimulus patterns (user-defined patterns) can also be used to probe performance limits.


For telecommunications or data-communication transmission systems, the aim is to simulate the random traffic experienced under normal operating conditions. The problem with using a truly random signal is that the error detector would have no way of knowing what was actually transmitted, and therefore no way of detecting errors. A pseudo-random signal is used instead: it has the statistical characteristics of a truly random signal, and thus appears random to the item being measured, but it is in fact completely deterministic and can therefore be predicted by the error detector. To this end, a range of maximum-length PRBS patterns has been standardized. At the error detector, the output of the system under test is compared bit by bit with a locally generated, error-free reference pattern, and the bit error rate is calculated.


The probability of error for any transmitted bit is statistical in nature and must be treated as such: the error probability over a given averaging period can be expressed in different ways. The most commonly used representation is:

Bit error rate (BER) = number of errored bits counted during the averaging interval / total number of bits transmitted during the averaging interval
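The formula above amounts to a bit-by-bit comparison against an error-free reference. A minimal sketch in Python (the pattern and the injected errors are synthetic, for illustration only):

```python
import random

def bit_error_rate(reference, received):
    """BER = errored bits counted / total bits compared."""
    errors = sum(r != x for r, x in zip(reference, received))
    return errors / len(reference)

# Synthetic demo: a random reference pattern with 10 bits flipped.
random.seed(0)
ref = [random.getrandbits(1) for _ in range(100_000)]
rx = ref.copy()
for i in random.sample(range(len(rx)), 10):
    rx[i] ^= 1                      # inject one bit error

print(bit_error_rate(ref, rx))     # prints 0.0001
```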

Figure 2 Connect the pattern generator to the input of the system under test, and connect the error detector to the output.

Obviously, the result is a long-term average bit error rate, and its statistical reliability depends on the sample size drawn from the population. There are three commonly used methods for setting the averaging (gating) period.


The first method (common in early test equipment) counts a fixed number of clock cycles to provide the time base, or averaging interval. This is easily accomplished with discrete-logic decade dividers. Now that microprocessors are available, more convenient gating periods are used.

The second method uses a timed gating period of, say, 1 second, 1 minute, or 1 hour, and calculates the bit error rate from the accumulated totals. The advantage of this approach is that it yields results consistent with the error-performance criteria discussed later.

The third method determines the gating period by counting enough errors (usually 100 or more) to obtain statistically reliable results; the processor again calculates the bit error rate from the accumulated totals. However, when the bit error rate is very small, this method can require a very long gating period. For example, for a system running at 100 Mbit/s with a bit error rate of 10^-12, it takes nearly 12 days to accumulate 100 bit errors.
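The 12-day figure follows directly from the arithmetic; a quick check of the error-counting method:

```python
def gating_time_seconds(target_errors, ber, bit_rate):
    """Time needed to accumulate target_errors at a given BER and line rate."""
    bits_needed = target_errors / ber   # expected bits carrying target_errors errors
    return bits_needed / bit_rate

t = gating_time_seconds(100, 1e-12, 100e6)  # 1e14 bits at 1e8 bit/s
print(t / 86400)                            # about 11.57 days
```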


The most commonly used approach is the second method: select a fixed gating period and then calculate the bit error rate. In this case the statistical variance of the result depends on how many errors fall within the period, so it is common practice to warn the user when the variance exceeds a generally accepted level. The most widely accepted level is 10%, which corresponds to an error count of at least 100. In real digital transmission systems, especially those using wireless propagation, the bit error rate may vary significantly over time. Long-term averages then give only part of the picture: communications engineers are also concerned with the percentage of time during which the performance of the system under test degrades unacceptably. This is called "error analysis" or "error performance".
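The 10% level quoted above comes from treating the error count as Poisson-distributed: the standard deviation of a count N is √N, so the relative uncertainty of the BER estimate is 1/√N. A one-line check:

```python
import math

def relative_uncertainty(error_count):
    # Poisson statistics: sigma = sqrt(N), so sigma / N = 1 / sqrt(N)
    return 1.0 / math.sqrt(error_count)

print(relative_uncertainty(100))   # prints 0.1 -- the 10% level
```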


Choosing a test pattern usually means choosing between a PRBS for traffic simulation and special patterns for probing pattern-dependent effects or critical operating points. When using a PRBS, it is important to consider the properties of the chosen binary sequence, including its spectral and run characteristics. These characteristics can be summarized as:

1) Sequence length (bits);

2) The feedback configuration of the shift register, which determines the binary run properties;

3) Spectral line spacing depending on bit rate.


PRBS patterns have been standardized by the CCITT for testing digital transmission systems (Recommendations O.151, O.152 and O.153). In general, the higher the operating rate, the longer the sequence required to simulate actual data traffic. For testing in the Gbit/s range, current test equipment provides sequence lengths of 2^n-1 bits.


Modern jitter bit error meters add jitter-generation capability to the traditional bit error meter, making it easy to test receiver sensitivity. Figure 3 shows a jitter error meter evolved from the traditional instrument: it integrates several instruments and is calibrated so as to generate accurately known jitter.

Figure 3 The evolution of modern jitter error detectors

As can be seen from the figure: SJ and SSC are generated by an IQ modulator; PJ and BUJ by a 500 ps or 200 ps controllable delay line; and RJ by a 200 ps controllable delay line. Figure 5 shows the generation circuit for ISI jitter and sinusoidal interference, which consists of a single transmission line of switchable length.


Figure 4 Structural diagram of jitter bit error meter


Figure 5 Principle block diagram of ISI jitter and sine wave interference generation

Jitter error meter for jitter tolerance testing:

To test jitter tolerance, the instrument must be able to generate jitter. The general requirements for calibrated, integrated jitter generation are:

1. Periodic jitter, single-tone and dual-tone

2. Sinusoidal jitter

3. Random jitter and random jitter distributed over the frequency spectrum

4. Bounded uncorrelated jitter

5. Inter-symbol interference (ISI)

6. Sinusoidal interference

7. SSC and residual SSC

8. External jitter injection: Connect an external signal source to the delay control input.


The user can easily configure a combination of jitter types and amplitudes on the instrument screen, so a calibrated "stressed eye" with more than 50% eye closure can be set up for receiver testing. Additional impairments can be injected through an interference channel, which adds ISI and differential/single-ended sinusoidal interference.


The automatic jitter-tolerance characterization test scans SJ across frequency once the user has selected the start/stop frequency, step size, accuracy, BER level, and confidence. Typically a green dot marks a point where the injected jitter is tolerated by the receiver, and a red dot a point where the set BER threshold is exceeded. By selecting a test point, the jitter setup conditions can be restored for further analysis. The resulting tolerance curve is displayed on the results screen so the user can interpret the results immediately. This automatic characterization feature can save significant programming time.
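The scan described above can be sketched as a search loop. Everything below is hypothetical: toy_receiver_ber is an invented stand-in for real hardware (a real BERT would be driven through its remote-control interface), chosen only so the sweep is runnable:

```python
def toy_receiver_ber(sj_freq, sj_amp):
    """Invented DUT model: tolerates large SJ at low frequency and about
    0.3 UI above an assumed 1 MHz loop bandwidth (a typical mask shape)."""
    tolerance = max(0.3, 0.3 * 1e6 / sj_freq)   # UI of SJ tolerated
    return 1e-15 if sj_amp <= tolerance else 1e-3

def jitter_tolerance_scan(measure_ber, f_start, f_stop, points, ber_limit, step=0.05):
    """At each frequency on a log grid, find the largest SJ amplitude
    (in UI) the receiver tolerates at the target BER."""
    results = []
    for i in range(points):
        freq = f_start * (f_stop / f_start) ** (i / (points - 1))
        n = 0
        while measure_ber(freq, (n + 1) * step) <= ber_limit:
            n += 1                          # "green dot": still tolerated
        results.append((freq, n * step))    # "red dot" lies one step above
    return results

curve = jitter_tolerance_scan(toy_receiver_ber, 1e4, 1e8, 5, ber_limit=1e-12)
```

Each (frequency, amplitude) pair is the last tolerated point at that frequency, which is exactly what the instrument plots as the measured tolerance curve.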


The automatic jitter-tolerance conformance test checks the DUT against standard or user-specified receiver jitter-tolerance curve limits. Jitter tolerance curves are defined for many common serial-bus standards, such as SATA, Fibre Channel, FB-DIMM, 10GbE/XAUI, CEI 6/11G, and XFP/XFI. Pass/fail results are displayed on the graphical results screen and can be saved and printed. The instrument should also generate a comprehensive compliance report (including the jitter settings and total-jitter result for each test point) and save a simple jitter-tolerance test document in HTML format.


Figure 6 Automatic jitter tolerance characterization. Green circles indicate where the DUT meets the required BER

