Yield Analysis in Design for Manufacturability
Design for manufacturability (DFM) has become increasingly important in the semiconductor industry's nanometer design methodologies. In the past, designers could only determine manufacturing yield after their designs were taped out. As process nodes shrink and design complexity grows, additional defect mechanisms push yield downward, so yield must now be considered during the design phase. After years of definition and analysis, the main yield loss mechanisms at modern process nodes are understood to be random, systematic, and parametric. The yield loss model based on random defects, however, has existed since the beginning of semiconductor manufacturing.

Random defects

At larger process nodes, random particle defects ("dust") are the main source of yield loss. Because it is impossible to foresee where a particle will land on the wafer, random particles can cause catastrophic failures, such as shorts (extra metal bridging two metal lines) or opens (missing metal), or parametric degradation (such as altered resistance or additional coupling). At smaller process nodes, yield in the early life of a new process is dominated by new systematic failure modes, but as these processes mature, yield is again limited by random defects. As advanced processes continue to improve, random defect densities gradually decrease, allowing chips to achieve comparable yields after a process shrink. At the same time, the greater functional integration possible at smaller nodes makes these designs more susceptible to particle defects, which ultimately limits yield improvement: the gains from process maturation are partly offset by larger, denser designs, reducing the mature yield level that modern designs can reach.

Because of these challenges, EDA vendors, foundries, and design companies are developing a variety of DFM tools and methods. Two general DFM methodologies currently fit well into standard design flows.

DFM Recommended Rules Analysis (RRA)

Traditional physical verification, including design rule checking (DRC) and layout versus schematic (LVS) comparison, must be completed before a design can be taped out. The DRC rules supplied by manufacturing tell designers the process constraints on the design. Most of these constraints represent hard process limitations: if they are violated, the silicon produced will either not work or will yield poorly. At smaller process nodes the yield problem grows more complex and becomes statistical; process-induced limitations depend on a range of variables and on area (the more opportunities a given defect mechanism has to occur, the more likely the chip is to fail). DFM recommended rules are therefore now provided in addition to the DRC rules. Designers must carefully evaluate these DFM rules and show manufacturing what improvement the design achieves relative to one that meets only the standard DRC rules. In other words, designers can predict the yield of a design before it is manufactured. In practice, DFM rules are as simple to implement as standard DRC rules, except that they carry yield prediction information or different constraints.
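To make the idea of yield-aware rules concrete, here is a minimal sketch, in Python, of how recommended-rule violations might be weighted and ranked by estimated yield impact. It is illustrative only: the rule names, per-violation loss weights, and violation counts are invented, and real DFM tools use foundry-calibrated models rather than this simple multiplicative one.

```python
from dataclasses import dataclass

@dataclass
class RecommendedRule:
    name: str                   # hypothetical rule name
    loss_per_violation: float   # assumed fractional yield loss per violation

def rank_yield_impact(rules, violation_counts):
    """Rank recommended rules by their estimated contribution to yield loss.

    Simple multiplicative model: each violation scales yield by
    (1 - loss_per_violation); the loss attributed to a rule is 1 minus
    the yield limit it would impose on its own.
    """
    impact = {}
    for rule in rules:
        n = violation_counts.get(rule.name, 0)
        yield_limit = (1.0 - rule.loss_per_violation) ** n
        impact[rule.name] = 1.0 - yield_limit
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

# Invented numbers: single (non-redundant) vias dominate the estimated loss.
rules = [
    RecommendedRule("single_via", 1e-6),
    RecommendedRule("narrow_metal_spacing", 5e-7),
    RecommendedRule("small_via_enclosure", 2e-7),
]
counts = {"single_via": 60000, "narrow_metal_spacing": 40000, "small_via_enclosure": 30000}
for name, loss in rank_yield_impact(rules, counts):
    print(f"{name}: estimated yield loss {loss:.1%}")
```

Ranking violations this way, rather than treating every flag as equally important, is the core of the recommended rules analysis described next.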
Following these recommendations helps compensate for variations introduced in the manufacturing process. The problem is that when designers run DFM rules on a DRC-clean design and see millions of violations, how do they determine which violations actually matter for yield prediction (Figure 1)?

Figure 1: Error flags produced by DRC rules versus DFM rules

Rather than treating these as just "DRC-like errors," designers must use the layout and yield statistics behind the DFM rules to determine the impact on yield. The analysis can be performed for a single rule or a combination of rules (by area, by cell utilization, or at chip level, using histograms, hotspot maps, or a mix of both) to identify the factors with the greatest impact on yield and the most effective fixes (Figure 2).

Figure 2: Example DFM RRA results using histograms and hotspot distributions

For example, if the total yield calculated from the layout statistics is 90% and DFM RRA indicates that single vias contribute about 40% of the total yield loss (roughly 4 of the 10 percentage points lost), the designer can make targeted modifications, such as inserting redundant (double) vias on nets that are not on critical timing paths. DFM recommendations can also be used to analyze systematic and parametric yield loss mechanisms arising from lithography, chemical mechanical polishing (CMP), and stress, but the focus here is on yield loss from random defects only.

DFM Critical Area Analysis

DFM recommended rules provide a familiar way to identify regions that are prone to random defects. To estimate particle sensitivity more accurately, however, a more rigorous mathematical model is needed. Critical area analysis (CAA) mathematically defines the regions of a design where circuit failure is most likely to occur for particles of various sizes. No matter how hard we try to improve the process environment, particles will still land on the wafer and the mask, and they cause several classes of defects:
1. Shorts (a conductive particle lands between two metal lines and electrically connects different signal paths);
2. Opens (an electrical break in a conductor disconnects a signal path);
3. Parametric problems (altered resistance, additional coupling).

Random particles cause circuit failures in two basic ways, depending on the type of particle and/or the point in the process flow where the failure occurs. If a conductive particle lands in just the right place to connect two or more isolated electrical nets, it forms a short between them; if an insulating particle lands across the full width of a conductor, it causes an open on that net. The extent of these "critical areas" depends on the design pattern and on the particle size. For a given layout, the larger the particle, the larger the critical area; and in general, the denser the layout, the more critical area the design contains.

The yield limit (the maximum yield achievable for a specific failure mechanism) is a function of the critical area (taken over the full range of particle sizes) and the defect density distribution (the density of defects of each size produced by the manufacturing process).
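To see how critical area depends on geometry and particle size, the sketch below computes the short-circuit critical area for the simplest possible pattern, two parallel wires, assuming circular defects and ignoring line-end effects. The wire length, spacing, and defect radii are made-up values; production CAA tools extract critical area curves from the full layout.

```python
def short_critical_area(wire_length, spacing, defect_radius):
    """Critical area for shorts between two parallel wires (end effects ignored).

    A circular defect of radius r bridges the wires only if its diameter
    exceeds the spacing; its center must then fall inside a band of width
    (2r - spacing) running along the gap, so C(r) = length * (2r - spacing).
    All dimensions are in the same units (microns here).
    """
    overlap = 2.0 * defect_radius - spacing
    return wire_length * overlap if overlap > 0 else 0.0

# Made-up geometry: two 100 um wires with 0.1 um spacing.
for r in (0.04, 0.06, 0.10, 0.20):
    print(f"r = {r:.2f} um -> critical area = {short_critical_area(100.0, 0.1, r):.1f} um^2")
```

The output shows the behavior described above: defects smaller than the spacing contribute no critical area, and beyond that the critical area grows with defect radius.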
For a specific layer and failure mechanism (short or open), the critical area yield model can be calculated using the following formula:

Y = exp(-λ), with λ = ∫ D(r) C(r) dr integrated over all defect radii r

where D(r) is the defect density for defects of radius r and C(r) is the critical area for defects of radius r. The overall yield is then the product of the yields of each layer/defect-mechanism model. In other words, for each mask layer (active, poly, contact, metal, via, and so on), λ must be calculated for both the short and the open mechanism, and the resulting yield limits are multiplied together to obtain the final predicted yield.

In the case where a single particle is enough to cause a short or open, the designer can use critical area analysis to see clearly how the critical area changes as the particle size changes (Figure 3). Combined with a statistical yield model (similar to DFM RRA), the designer can predict the impact of particle defects on the design before manufacturing. This allows the design to be modified before tapeout, for example by widening wires, a change that is impossible once manufacturing has begun.

Figure 3: Critical area results for shorts and opens
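As a numerical sketch of this yield model, the code below assumes the commonly used 1/r^3 defect size distribution above a minimum radius r0 and uses made-up critical area curves for a few layer/mechanism pairs; the defect density d0, the radii, and the curves are illustrative values, not foundry data.

```python
import numpy as np

# Defect radii in microns; the range and resolution are illustrative.
radii = np.linspace(0.05, 1.0, 500)
dr = radii[1] - radii[0]

def defect_density(r, d0=1.0, r0=0.05):
    """Assumed defect size distribution D(r) ~ 1/r^3 for r >= r0.

    d0 is the total defect density in defects/cm^2 (made-up value); the
    factor 2*r0^2/r^3 normalizes the distribution to integrate to 1 over
    r >= r0, so D(r) is in defects/cm^2 per micron of radius.
    """
    return np.where(r >= r0, d0 * 2.0 * r0**2 / r**3, 0.0)

def yield_limit(critical_area_cm2):
    """Poisson yield limit: Y = exp(-lambda), lambda = integral of D(r)*C(r) dr."""
    lam = np.sum(defect_density(radii) * critical_area_cm2) * dr
    return np.exp(-lam)

# Made-up critical area curves C(r), in cm^2, for a few layer/mechanism pairs;
# a real flow extracts one curve per layer and failure mechanism from the layout.
critical_areas = {
    "metal1_short": 0.3 * np.clip(radii - 0.07, 0.0, None),
    "metal1_open":  0.2 * np.clip(radii - 0.09, 0.0, None),
    "via1_open":    0.4 * np.clip(radii - 0.06, 0.0, None),
}

# The predicted yield is the product of the per-layer, per-mechanism limits.
overall = 1.0
for name, c_r in critical_areas.items():
    y = yield_limit(c_r)
    overall *= y
    print(f"{name}: yield limit = {y:.4f}")
print(f"predicted yield (these mechanisms only) = {overall:.4f}")
```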