The Secret of Semiconductor Testing (Reprinted)
This series introduces concepts and background knowledge related to semiconductor testing. Please credit the author when reprinting. Thank you.

The original text comes from the Internet; the copyright belongs to the original author.
Compiled in 2006 by Gong Yi, all rights reserved. Email: code631@gmail.com


Contents:
1. Measurement Repeatability and Reproducibility (GR&R)
2. Electrical Test Confidence
3. Guardband of Electrical Test
4. Electrical Test Parameter CPK
5. Electrical Test Yield Model
6. Wafer Level Test and Burn-in
7. Boundary-Scan Test/JTAG Standard
8. Built-in Self Test
9. Automatic Test Pattern Generation (ATPG)

This post is from Test/Measurement

Measurement Repeatability and Reproducibility (GR&R)

GR&R is a parameter used to evaluate the ability of a piece of test equipment to obtain repeatable readings on the same test object. In other words, GR&R describes the stability and consistency of the test equipment, an indicator that is particularly important for semiconductor test equipment. Mathematically, GR&R refers to the variation of the actual measurement. Test engineers must minimize the GR&R value of the equipment; an excessive GR&R value indicates instability in the test equipment or method.

As the name suggests, GR&R covers two aspects: repeatability and reproducibility. Repeatability is the ability of the same test equipment to obtain consistent results when operated repeatedly by the same operator. Reproducibility is the ability of the same test system to obtain consistent results when operated by different operators. Of course, in the real world no test equipment can repeatedly produce perfectly identical results. Five factors usually affect each test result:
1. Test standard
2. Test method
3. Test instrument
4. Test personnel
5. Environmental factors
The accuracy of the test results can only be guaranteed when the influence of these five factors is minimized.

There are many ways to calculate GR&R. The following method is recommended by the Automotive Industry Action Group (AIAG): first calculate the variation contributed by the equipment and by the operators, then combine them into the final GR&R value.

Equipment Variation (EV) represents the repeatability of the test process (method and equipment). It is calculated from the results obtained by the same operator repeatedly testing the same target.

Appraiser Variation (AV) represents the reproducibility of the test process. It is calculated from the data obtained by different operators repeatedly running the same test equipment and process. GR&R combines these two components. It must be pointed out that test variation is not caused by these two alone; it is also affected by Part Variation (PV), the variation caused by differences between the test targets themselves, usually calculated from data measured on different targets.

Now the total variation can be calculated. Total Variation (TV) combines the influence of R&R and PV:
TV = sqrt((R&R)^2 + PV^2)
In a GR&R report the final result is usually expressed as %EV, %AV, %R&R and %PV: the percentages of EV, AV, R&R and PV relative to TV. Therefore:
%EV = (EV/TV) x 100%
%AV = (AV/TV) x 100%
%R&R = (R&R/TV) x 100%
%PV = (PV/TV) x 100%
If %R&R is less than 10%, the test equipment and process are good; between 10% and 30% is acceptable; above 30%, the engineering staff needs to improve the equipment and process.
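The AIAG-style combination of EV, AV and PV described above can be sketched as follows (the function names and the thresholds in the verdict helper are taken from the text; the EV/AV/PV inputs are assumed to have already been estimated from the repeated measurements):

```python
import math

def grr_report(ev, av, pv):
    """Combine Equipment Variation (EV), Appraiser Variation (AV) and
    Part Variation (PV) into the percentages reported in a GR&R study."""
    rr = math.sqrt(ev**2 + av**2)   # R&R combines repeatability and reproducibility
    tv = math.sqrt(rr**2 + pv**2)   # TV = sqrt(R&R^2 + PV^2)
    pct = lambda x: 100.0 * x / tv
    return {"%EV": pct(ev), "%AV": pct(av), "%R&R": pct(rr), "%PV": pct(pv)}

def grr_verdict(pct_rr):
    """Apply the acceptance thresholds quoted in the text."""
    if pct_rr < 10.0:
        return "good"
    if pct_rr <= 30.0:
        return "acceptable"
    return "needs improvement"
```

For example, EV = 1.0, AV = 2.0, PV = 10.0 gives %R&R of about 21.8%, which falls in the "acceptable" band.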
Electrical Test Confidence

Many test engineers find that test results are often unpredictable; even the most advanced ATE cannot guarantee the correctness of every result, and in many cases products must be retested, wasting a lot of time. In short, electrical test confidence is an indicator of how trustworthy the results delivered by a test setup are. A setup with high test confidence does not need repeated retesting, saving a lot of valuable test time.

If the devices that failed the first pass (rejects) are retested, some of them may pass, because the original failure may have been caused by the test equipment rather than the device itself. Such failures are called "invalids", and test confidence can be calculated by counting them. There are many causes of these abnormal failures:
1. Poor contact between the DUT and the test head
2. Hardware problems in the test equipment
3. Unreasonable hardware construction
4. Oxidation or contamination of metal contact surfaces leading to contact failure
5. Excessive humidity in the test environment
6. Excessive GR&R
The first is a common problem faced by many test engineers. Its causes include:
1. Misalignment of the DUT pins and the contact surface
2. Aging of contact components
3. Oxidation and contamination of contact components
4. Excessive humidity on the contact surface
Many companies try to solve this problem; after all, the other problems can be addressed before the product test is officially released, by:
1. Debugging and designing the test program
2. Correctly setting test limits
3. Using test equipment with excellent performance
4. Using reliable contactors
5. Optimizing the test environment, etc.
It can be seen that the confidence of electrical testing depends largely on the reliability of the electrical contact, specifically, the probability that each device makes correct, good contact during the test.

A contact reliability of 90% means that on average 90 out of 100 devices under test make good contact and the other 10 have contact problems. These abnormal failures can be turned into passing devices through retesting, so the number of devices recovered by retest is also determined by the contact reliability. Assume the initial test yield is Y1; then the actual yield of the batch is Y = Y1/C, where C is the test confidence of the system. If the number of retested devices is R2, then R2 = Q(1 - Y1), where Q is the total number of devices. The retest yield is Y2 = Rinvalid/R2, where Rinvalid is the number of invalids recovered by retest. Solving for C gives:
C = 1 - [Y2(1 - Y1) / Y1]
C: test confidence of the system
Y1: initial test yield
Y2: retest yield
From observations of RFIC testing, someone has drawn the following conclusions:
1. Tests with C less than 85% are unreasonable and should be rewritten
2. Tests with C greater than 95% do not need to be retested
3. Tests with C between 85% and 95% need to be retested
Of course, these numbers may not suit every company and every product. Test managers should work out the retest policy appropriate for their own products; this is a challenge for test professionals.
Guardband of Electrical Testing

The guardband of an electrical test parameter is the margin applied to the allowed range when performing a parametric test. Test limits tighter than the product's electrical specification are used during production measurement, thereby reducing the probability that customers encounter problems when using the product. In most semiconductor test flows, two versions of the test program are used:
1. The production measurement program
2. The quality assurance (QA) program
The former is used on the production measurement line; the latter is used for sample testing. QA testing ensures that the products that passed production measurement are truly problem-free. Since the devices under test have already passed the measurement program, in theory they should pass QA testing 100%; devices that fail QA are therefore investigated in detail. The QA program is designed to the product's specification limits, while the measurement program uses tighter test limits. Many tests have both upper and lower limits; in that case it must be ensured that both limits are tightened.

So why leave a guardband between measurement and QA? The answer is that no two test systems are perfectly consistent: two systems will always give somewhat different readings. As a result a device can produce different results on different systems, and in fact even repeated tests on the same system may differ. There are many causes of this inconsistency and it is difficult to eliminate them all, which is why a margin is left between measurement and QA testing.
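The relationship between spec limits and the tighter production limits can be expressed as a small sketch (hypothetical helper names; a symmetric guardband on both limits is assumed):

```python
def production_limits(spec_lo, spec_hi, guardband):
    """Tighten the spec limits by `guardband` on each side for the
    production (measurement) program; QA keeps the original spec limits."""
    lo, hi = spec_lo + guardband, spec_hi - guardband
    if lo >= hi:
        raise ValueError("guardband consumes the whole spec window")
    return lo, hi

def passes(reading, lo, hi):
    """A reading passes if it lies within the given limits."""
    return lo <= reading <= hi
```

With a 1.0-2.0 V spec and a 0.05 V guardband, a device reading 1.97 V would pass the QA program but fail the production program, which is exactly the conservatism the guardband is meant to provide.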
Electrical Test Parameter CpK

CpK is the process capability index. The performance of a process can be measured by how concentrated its results are and how far they deviate from the specification. For a process whose results follow a normal distribution, performance can be expressed by CpK, which describes the concentration and deviation of the process output relative to the upper and lower limits. In effect, CpK is the ratio of the distance between the mean of the output and the nearer specification limit to three standard deviations (3 sigma). If the mean is closer to the lower limit (LSL) and the standard deviation is Stdev, then CpK = (Mean - LSL) / (3 x Stdev); if the mean is closer to the upper limit (USL), then CpK = (USL - Mean) / (3 x Stdev). In the ideal situation every output value sits exactly at the center of the limits, Stdev = 0, and CpK is infinite. As the output drifts away from the center, CpK decreases. A falling CpK means the probability of the process producing results outside the specification limits has increased; each CpK value therefore corresponds to a failure rate (PPM). The table below lists CpK values and the corresponding PPM. In the semiconductor industry, the target CpK is around 1.67, and it should not fall below 1.33.

[Attachment: 110.jpg - table of CpK values and their corresponding PPM failure rates]
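The CpK formulas above, and the conversion from CpK to a one-sided PPM failure rate, can be sketched with the standard library (hypothetical function names; the PPM figure assumes a normal distribution and counts only the tail beyond the nearer limit):

```python
import math
from statistics import mean, stdev

def cpk(values, lsl, usl):
    """Cpk = min(USL - Mean, Mean - LSL) / (3 * Stdev)."""
    m, s = mean(values), stdev(values)
    return min(usl - m, m - lsl) / (3.0 * s)

def cpk_to_ppm(cpk_value):
    """Parts per million beyond the nearer limit for a normal process:
    PPM = 1e6 * P(Z > 3*Cpk), using the complementary error function."""
    return 1e6 * 0.5 * math.erfc(3.0 * cpk_value / math.sqrt(2.0))
```

For example, CpK = 1.0 corresponds to roughly 1350 PPM in the nearer tail, while CpK = 1.33 brings that down to a few tens of PPM, consistent with the industry floor quoted above.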
In semiconductor testing, CpK describes the stability of the test process. It applies only when the test results are normally distributed. CpK captures two things: 1. the distance of the test results from the center of the limits, and 2. the spread of the test results. The higher the CpK, the better the test process. In electrical testing, CpK can only be used for tests with quantitative readings that can form a normal distribution. A low CpK implies one of three things: 1. the mean of the results is far from the center value; 2. Stdev is too large; or 3. both. Test engineers should be able to find ways to improve CpK by observing how it changes. Recommended measures include eliminating invalid data, repairing faulty test equipment, debugging the test program, and redefining the upper and lower limits.
Electrical Test Yield Model

Yield is the ratio of the number of devices that pass electrical test to the total number of devices, usually expressed as a percentage. All semiconductor manufacturers try their best to improve yield; low yield means increased cost. There are many causes of low yield, including process problems and product design problems. The following are examples of process problems that lead to low yield:
1. Uneven oxide layer thickness
2. Uneven doping concentration, resulting in increased resistance in some areas
3. Mask misalignment
4. Ionic contamination
5. Uneven polysilicon layer thickness
Design errors can also lead to low yield: a design that is overly sensitive to the process cannot tolerate normal parameter variation in production. Even with no design or process problems, some product lots will still see low yield. This can be caused by defective regions of the silicon wafer: during wafer production, dust contamination can easily render a certain area of the wafer inoperable. To reduce production cost we must understand the causes of low yield. This is done with a mathematical "yield model", which converts a defect density into an expected yield. The Poisson model, Murphy model, exponential model and Seeds model are commonly used. Semiconductor manufacturers choose the appropriate model based on actual data, for example by comparing models against a fab's yield data for a given die size. The simplest yield model assumes a uniform average defect density and randomly distributed defect locations.

If a wafer contains many dies (N) and carries many randomly distributed defects (n), then the probability that a die has k defects can be estimated from the Poisson distribution:
Pk = e^(-m) (m^k / k!), where m = n/N.
Let Y be the yield; Y is the probability that a die has no defect, i.e. k = 0, so Y = e^(-m). Let D be the defect density of the wafer; then D = n/(N x A), where A is the area of each die, so the average number of defects per die is m = n/N = AD. Therefore
Y = e^(-AD)
which is the Poisson yield model. Many experts have pointed out that the yield estimated by the Poisson model is too low, because defects are usually not randomly distributed across the wafer but tend to cluster in certain areas; this makes the estimated yield much lower than the actual yield. Another class of models assumes the defect density is unevenly distributed, in which case the yield is
Y = ∫[0,∞] e^(-AD) f(D) dD
where f(D) is the defect density distribution function. Assuming the triangular defect density distribution shown in Figure 1 below, then
Y = [(1 - e^(-AD)) / (AD)]^2
and the model is called the Murphy model. If the defect density distribution is rectangular (Figure 2), then
Y = (1 - e^(-2AD)) / (2AD)
and much experimental data agrees with this model. Another model, the exponential yield model, assumes that extremely high defect densities are concentrated only in small areas; it is well suited to clustered defects, and gives Y = 1/(1 + AD). Finally, the Seeds model gives Y = e^(-sqrt(AD)).

[Attachment: 111.jpg - Figure 1: triangular defect density distribution; Figure 2: rectangular defect density distribution]
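The yield models above are simple enough to compare directly in code (a sketch; `a` is the die area and `d` the defect density, in consistent units):

```python
import math

def poisson_yield(a, d):
    """Poisson model: Y = e^(-AD), random defect placement."""
    return math.exp(-a * d)

def murphy_yield(a, d):
    """Murphy model, triangular f(D): Y = ((1 - e^(-AD)) / AD)^2."""
    ad = a * d
    return ((1.0 - math.exp(-ad)) / ad) ** 2

def rectangular_yield(a, d):
    """Rectangular f(D): Y = (1 - e^(-2AD)) / (2AD)."""
    ad = a * d
    return (1.0 - math.exp(-2.0 * ad)) / (2.0 * ad)

def exponential_yield(a, d):
    """Exponential model for clustered defects: Y = 1 / (1 + AD)."""
    return 1.0 / (1.0 + a * d)

def seeds_yield(a, d):
    """Seeds model: Y = e^(-sqrt(AD))."""
    return math.exp(-math.sqrt(a * d))
```

Evaluating all five at, say, AD = 0.5 shows the ordering the text describes: the Poisson model is the most pessimistic of the clustered-defect alternatives (about 0.607, versus 0.619 for Murphy, 0.632 for rectangular and 0.667 for exponential).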
Wafer Level Test and Burn-in

Wafer level test and burn-in (WLTBI) refers to electrical testing and burn-in of semiconductor devices before packaging. Burn-in ages semiconductor devices by applying elevated voltage and temperature in order to screen out devices with poor reliability. WLTBI usually uses a wafer probe station to contact the tiny pads on the wafer, and the probe station also provides the temperature required for test and burn-in. WLTBI not only enables early testing but is also suited to: 1. bare die products (KGD, known good die); 2. wafer level packaged devices. Ideally all testing could be completed at the wafer level so that final test would no longer be needed, saving considerable cost; at present, however, WLTBI is just a back-end extension of traditional wafer manufacturing. The basic principle of WLTBI is no different from the final test of ordinary semiconductor devices: both judge device quality by applying stimulus to the DUT and observing its output. The difference lies in how the device is stimulated. In final test, current and voltage enter the device through ATE connected to the device pins; in burn-in, the device is placed in an oven and the required voltage and current are supplied by the burn-in board; in WLTBI, current and voltage are applied directly to the circuit through the probe contacts on the die pads. One of the challenges of WLTBI is obtaining reliable probe-to-pad contact. Poor contact during test and burn-in causes many problems: low yield, incomplete burn-in, electrical overstress (EOS), and so on.
Boundary-Scan Test / JTAG Standard

Boundary-scan testing, i.e. the JTAG standard, refers to the IEEE 1149.1 specification. It defines a set of design rules for the test, programming and debugging of semiconductor devices at the device, board and system level. JTAG stands for "Joint Test Action Group", a consortium supported by most of the world's electronics manufacturers.

The dazzling development of surface-mount technology (SMT) over the past 20 years has produced complex, high-density circuit boards on which debugging individual components has become very difficult, mainly because there is no way to access each component individually. The pin counts and package styles of modern semiconductor devices make traditional individual testing almost impossible. To solve this problem, JTAG was founded in 1985 and defined the boundary-scan test standard.

Boundary-scan testing adds special test circuitry to the device so that both the device and the board can be tested during board-level test. This circuitry allows input signals to enter through the device's input pins and be shifted out serially through the output pins, so that the device can be tested through as few as four pins. The technique has become one of the most popular DFT technologies today. The benefits are obvious:
1. Significantly fewer physical test points on the board
2. Higher component density
3. Lower test equipment cost
4. Shorter test time
5. Higher test efficiency

A standard JTAG device has:
1. A boundary cell at every input and output pin
2. A scan path (scan chain) connecting the boundary cells
3. Four or five pins for the JTAG signals
4. A Test Access Port (TAP) for the control signals during test
5. A 16-state TAP controller (state machine) that controls the test states

In normal operation the boundary cells do nothing. In test mode the cells are activated and capture the signals at each input and output pin, bypassing the normal I/O path. Boundary cells are basically built from multiplexers and shift registers.

The TAP is simply a contact port, defined by IEEE 1149.1 as consisting of at least four or five pins which implement the JTAG serial protocol:
1. TCK: clock, synchronizing the internal TAP controller and state machine
2. TMS: mode select, sampled on the rising clock edge to decide the next state of the state machine
3. TDI: data in
4. TDO: data out
5. TRST: (optional) asynchronous reset

The properties and capacity of a JTAG device's boundary-scan logic are defined in an external file written in the Boundary-Scan Description Language (BSDL). The BSDL file is supplied by the device manufacturer and provides the information needed to run boundary scan on the device.

When testing a device with boundary scan, the following steps are followed:
1. The external test equipment applies the stimulus to the DUT's input pins
2. The boundary cells at those input pins capture the input signals
3. The input data is shifted serially into the core through the TDI pin
4. The output data is shifted out serially through the TDO pin
5. The external test equipment receives the output data and compares the results
Board faults such as open circuits, missing components, and reversed components can all be detected this way.
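The serial shift through a boundary-scan chain can be illustrated with a toy model (a sketch only: it models just the TDI-to-TDO shift register, not the TAP state machine or capture/update stages):

```python
def shift_chain(chain, tdi_bits):
    """Shift bits through a serial boundary-scan chain.
    `chain` holds the current cell contents, index 0 nearest TDI.
    Each clock pushes one TDI bit in and shifts the cell nearest
    TDO out; returns (new_chain, tdo_bits)."""
    chain = list(chain)
    tdo = []
    for bit in tdi_bits:
        tdo.append(chain[-1])        # cell nearest TDO shifts out first
        chain = [bit] + chain[:-1]   # everything moves one cell toward TDO
    return chain, tdo
```

Shifting three zeros into a 3-cell chain that captured the pattern 1,0,1 reads that captured pattern back out on TDO while loading the new contents, which is exactly how captured pin states are unloaded and the next stimulus is loaded in one pass.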
Built-in Self Test (BIST)

Built-in self test (BIST) is a technique in which test circuitry is built into the design itself to provide a self-test capability, thereby reducing the dependence of device testing on automatic test equipment (ATE). BIST is a DFT (Design for Testability) technology that can be applied to almost any circuit and is therefore widely used in the semiconductor industry. For example, BIST circuitry commonly embedded in DRAM includes test pattern generators, timing circuits, mode selection circuits and response evaluation circuits. The rapid development of BIST is largely driven by high ATE cost and high circuit complexity. Highly integrated circuits are now widespread, and testing them requires high-speed mixed-signal test equipment; BIST reduces the demand for ATE by performing the test on-chip. BIST also solves the problem that many circuits, such as embedded flash, cannot be tested directly because they have no external pins. It is foreseeable that in the near future even the most advanced ATE will be unable to fully test the fastest circuits, which is another reason to adopt BIST.

The advantages of BIST are:
1. Lower test cost
2. Better fault coverage
3. Shorter test time
4. Easier customer support
5. The ability to test independently
The disadvantages:
1. The extra circuitry occupies valuable die area
2. Extra pins
3. Possible test blind spots
Questions to settle when adopting BIST:
1. Which tests should BIST perform?
2. How much extra area is allowed?
3. What external stimulus is required?
4. What test time and efficiency are needed?
5. Should the BIST be fixed or programmable?
6. How will BIST affect the existing flow?

BIST can be roughly divided into two categories: Logic BIST (LBIST) and Memory BIST (MBIST). LBIST is usually used to test random logic. It typically uses a pseudo-random pattern generator to produce the input patterns applied to the internal circuitry of the device, and a multiple input signature register (MISR) to compact the output responses. MBIST is used only for memory testing; a typical MBIST contains circuitry for writing, reading and comparing test patterns. Several MBIST algorithms are common in industry, such as the "March" algorithm, the Checkerboard algorithm, and so on. A less common variant called Array BIST is a type of MBIST used specifically for self-test of embedded memories. Analog BIST is used for self-test of analog circuits. BIST is becoming an alternative to expensive ATE, but it cannot completely replace ATE at present; the two will coexist for a long time to come.
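The LBIST pair described above, a pseudo-random pattern generator feeding the logic and a MISR compacting its responses, can be sketched in software (a toy 4-bit example with assumed tap positions; real LBIST hardware uses characteristic polynomials chosen for maximal length and much wider registers):

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR pseudo-random pattern generator: XOR the tap
    bits for feedback and shift left each clock."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def misr_signature(responses, taps, width):
    """Multiple Input Signature Register: compact a response stream
    into one signature word by XORing each response into an LFSR."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig
```

With taps at bits 3 and 2, the 4-bit LFSR cycles through all 15 nonzero states before repeating, and any change in the response stream changes the MISR signature, which is how a compacted pass/fail decision is made on-chip.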
Automatic Test Pattern Generation (ATPG)

Automatic test pattern generation (ATPG) is the process of automatically generating the test pattern vectors used in semiconductor electrical testing. Test vectors are loaded sequentially onto the input pins of the device, and the output signals are collected and compared with the expected output vectors to determine the test result. The effectiveness of ATPG is an important measure of the test's fault coverage. An ATPG cycle can be divided into two stages: 1. test generation, and 2. test application. During test generation, a test model of the circuit design is built at the gate or transistor level so that faulty circuits can be detected by the model. This is essentially a mathematical process and can be carried out by:
1. Manual methods
2. Algorithmic generation
3. Pseudo-random generation, in which software produces the test pattern vectors through complex ATPG programs
When creating tests, the goal is to run effective test pattern vectors within a limited storage space; ATPG must therefore generate as few test vectors as possible while meeting a given fault coverage. The main factors to consider are:
1. The time required to build the minimal test set
2. The size of the test pattern vectors and the software and hardware requirements
3. The length of the test process
4. The time required to load the test pattern vectors
5. Whether external equipment is needed
Widely used ATPG algorithms include the D algorithm, the PODEM algorithm and the FAN algorithm. Every algorithm relies on a technique called "path sensitization", which means finding a path through the circuit along which a fault can be made visible at the path's output. The most widely used is the D algorithm, where D represents a signal that is 1 in the fault-free circuit and 0 in the faulty circuit, and D' is its complement; the details are not repeated here. The ATPG process includes the following steps:
1. Fault selection: choose the fault to be tested
2. Initialization: find a suitable set of input vectors
3. Apply the vector set
4. Compare the results
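The core idea behind the steps above, that a vector detects a fault exactly when the fault-free and faulty circuits produce different outputs, can be shown on a toy circuit. This sketch searches exhaustively rather than using the D algorithm, and the circuit (y = (a AND b) OR c, with a stuck-at fault on the internal AND net) is a made-up example:

```python
from itertools import product

def simulate(inputs, stuck=None):
    """Tiny 3-input circuit: y = (a AND b) OR c.
    `stuck` optionally forces the internal net n1 = a AND b to a
    fixed value, modelling a stuck-at fault on that net."""
    a, b, c = inputs
    n1 = a & b
    if stuck is not None:
        n1 = stuck
    return n1 | c

def find_tests(stuck_value):
    """Exhaustive 'ATPG': return every input vector whose fault-free
    and faulty outputs differ, i.e. every vector detecting the fault."""
    return [v for v in product((0, 1), repeat=3)
            if simulate(v) != simulate(v, stuck=stuck_value)]
```

For stuck-at-0 on the AND net, only (1, 1, 0) detects the fault: it sets the net to 1 in the good circuit and holds c = 0 so the difference propagates to the output, which is exactly what path sensitization achieves without the exhaustive search.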
[:D]Haha, OVER!
Very detailed, thank you for your hard work!
Thank you for your information
Thank you.
I've learned a lot, thank you!!!
Good stuff, thank you.
Is there a complete version available for download? Thanks in advance
Thank you for sharing a good article. Thank you very much.
I can tell at a glance that this is a great post; how could I not reply? Purely to make it easier to find again later...