6.4.2 INTERCONNECT LAYOUT ADJUSTMENTS

With interconnects being so critical for DSM VLSI chip performance, it is clear that their characteristics must be carefully analyzed. However, as we have seen in Chapter 3, analyzing interconnects is also very complicated because of their distributed electrical characteristics. It took years to develop the understanding that led to the relatively simple and yet powerful methodologies for analyzing timing characteristics of DSM VLSI interconnects as shown in Chapter 3.

Of course, time delays are only part of what we would like to know about interconnects. The continued, rapid push toward smaller layout geometries keeps designers guessing about electromigration, larger-than-expected IR (voltage) drops (see [10] for some enlightening information), and signal integrity problems due to cross-coupling, with the resulting increase in power dissipation and reduction in chip speed. The worst thing about these issues is that they tend to show up unexpectedly after a chip has been fabricated, because manufacturing technology progresses faster than EDA tool development.

How about a tool that works more or less without any analysis? Could capacitive coupling between interconnects be reduced without much analysis, with all the resulting benefits for speed, power and signal integrity? Could manufacturing yield be improved without much analysis? There is such a tool, called XTREME from Saganlec.

Fig. 6.3 Routing Before and After Spreading With XTREME

XTREME pushes interconnects apart, spreading them evenly within the available space, resulting in smaller capacitive coupling and better manufacturing yield. We have seen the results after using XTREME in Figure 3.8, Chapter 3. Another example is shown above in Figure 6.3, a small section of routing before and after using XTREME.

Clearly, interconnects that were spaced closely along parallel lines before using XTREME are spaced farther apart after the process. They are no longer tightly coupled as they were before. Also, on average, they are farther apart without any increase in the overall routing area. Actual application of XTREME in the field supports what is visually evident: a substantial improvement in performance and yield.

Why is a tool such as XTREME needed? Couldn't routers just do what we see on the right side in Figure 6.3 as opposed to what we see on the left?

The answer has to do with the complexity of the task. Routing is already very compute-intensive, and one of the primary, difficult tasks of a router is connecting all the points of connection in the layout. Finishing a route is often no easy feat. If the router were given all these additional constraints, the task would simply become overwhelming. It is easier to do the spreading as a postroute step with XTREME. We all know that hindsight is easier than foresight.

It has been a general and desirable trend to constantly increase the layout density on chips, and the space occupied by interconnects has been watched particularly carefully because it is a large portion of a chip layout. To minimize this space, design automation tools often place interconnects as closely together as possible (and often closer than necessary, as we will see). This increases the chances of cross-coupling between neighboring interconnects. Additionally, interconnects are made as narrow as possible in order to further maximize the packing density. All without violating any process layout rules, of course.

Placing interconnects closely together also increases the probability of shorts between metals through “bridging”. One mechanism causing this is defects, the probability of which depends on the defect density of the particular process. We will examine this further below. Although bridging is only one of many possible failures, it is an important one. Others can be caused by keeping layout dimensions as small as possible, even in places where no performance benefits result. We discussed possible alternatives in Chapter 5 when we talked about DfM. XTREME at least allows the routing to be done in a manner that optimizes performance and yield.

6.4.3 YIELD ENHANCEMENTS WITH XTREME

It is well known that one of the key culprits in lowering the yield in chip manufacturing is the much-talked-about defect density. In public, however, it is discussed only in generalities. When it comes to actual, real numbers, it is one of the more carefully guarded secrets about a particular process, because disclosure would reveal pricing and profit information for chips that companies wish to keep to themselves. In the present discussion, it is perfectly fine to talk in generalities about numbers and trends. Qualitatively, it is “intuitively obvious” (don't you love this phrase?) that spreading interconnects apart (as with XTREME) will increase manufacturing yield. The following figures, however, will lend the intuition a more quantitative character. The goal is to show a relationship between interconnect spacing, defect density, defect sizes and yield.

The general relationship between yield, chip area and defect density is well known and can be found in many textbooks about VLSI chip design, as well as in some very recent publications containing much more than just the yield equation. Semiconductor manufacturing equipment is now capable of monitoring defects, their density and their size during wafer processing. The following information will serve to show the value of interconnect spreading. Readers interested in this recent and fascinating subject of yield monitoring should consult [19]. The equation describing yield is as follows:

Yield = e^(-Acrit * D)

Acrit: total critical area of a layer in which a defect can do harm.
D: defect density.

The critical area is not just the total chip area. It is an area in which a defect would do harm. This was explained in Chapter 5 and in more detail in [18]. Clearly, in empty space in a layout, nothing can be harmed by a defect.
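
As a quick, purely illustrative example, the following Python sketch evaluates the yield equation above. The critical area and defect density values are hypothetical; they are chosen only to show how strongly yield responds when spreading interconnects shrinks the critical area.

    import math

    def layer_yield(a_crit_cm2, defect_density_per_cm2):
        """Poisson yield model from the text: Yield = exp(-Acrit * D).

        a_crit_cm2: critical area of the layer in cm^2, i.e. the area in
            which a defect actually does harm.
        defect_density_per_cm2: defects per cm^2 for that layer.
        """
        return math.exp(-a_crit_cm2 * defect_density_per_cm2)

    # Hypothetical numbers, for illustration only:
    print(layer_yield(0.4, 0.5))  # ~0.82, roughly 82% layer yield
    print(layer_yield(0.2, 0.5))  # ~0.90, halving Acrit raises yield to ~90%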

In Figure 6.4, we see a qualitative curve for the probability of a defect of a certain size occurring. It shows that, as defect size increases from zero, the probability of a defect of that size occurring on a chip increases until it reaches a maximum value at a certain defect size. Then, as we look for larger and larger defects, the chances of them occurring decrease.

Fig. 6.4 Defect Density as a Function of Defect Size

Figure 6.4 shows that if we make the minimum design rule for the separation between interconnects larger than the defect size indicated by the arrow, we substantially lower the probability of a defect being big enough to cause a short.
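
To make this slightly more concrete, one common textbook approximation models the tail of the defect size distribution in Figure 6.4 as a power law, falling off roughly as 1/x^q with q around 3 for defect sizes beyond the peak. The sketch below uses that assumed model (the exponent and the numbers are illustrative, not data from any particular process) to show how quickly the fraction of dangerous defects drops as the separation grows past the peak defect size.

    def bridging_fraction(separation, x_peak, q=3.0):
        """Fraction of defects at least as large as `separation`, assuming a
        power-law tail s(x) ~ 1/x**q for defect sizes beyond the peak of the
        curve in Figure 6.4. Illustrative model only; real distributions are
        measured by the fab.
        """
        if separation <= x_peak:
            return 1.0  # crude: treat every defect at or below the peak as a risk
        # Integral of x**-q from `separation` to infinity, normalized to the
        # same integral taken from x_peak upward.
        return (x_peak / separation) ** (q - 1.0)

    # With q = 3, doubling the separation relative to the peak defect size
    # cuts the fraction of dangerous defects to one quarter.
    print(bridging_fraction(0.4, 0.2))  # 0.25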

In Figure 6.5, we illustrate how a defect of a certain size would cause problems while a smaller defect would not.

Fig. 6.5 Defect Size Compared With a Typical Interconnect Separation

The critical area of a given defect size is shown in Figure 6.5. The concept is very simple: if the defect is smaller than the separation between interconnects, no short can occur. This demonstrates the value of spreading interconnects with XTREME.
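
For readers who want to attach numbers to Figure 6.5, a simplified two-wire model is often used: a defect of diameter x landing between two parallel wires of spacing s can bridge them only if x > s, and its center must then fall within a strip of width (x - s) running along the wires. The sketch below uses that textbook approximation with hypothetical dimensions; it ignores line ends and neighboring wires.

    def short_critical_area(defect_diameter, spacing, wire_length):
        """Critical area for a bridging short between two parallel wires.

        A defect of diameter x can short the pair only when x > spacing; its
        center must then fall in a strip of width (x - spacing) running the
        length of the wires. Simplified model that ignores line ends and
        neighboring wires.
        """
        overlap = defect_diameter - spacing
        return wire_length * overlap if overlap > 0 else 0.0

    # Hypothetical dimensions for illustration: spreading the wires from
    # 0.2 um to 0.4 um removes 0.3 um defects from the critical area entirely.
    print(short_critical_area(0.3, 0.2, 1000.0))  # 100.0 (um^2)
    print(short_critical_area(0.3, 0.4, 1000.0))  # 0.0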

Thus, to summarize, XTREME only does the spreading to take advantage of the available space. Of course, XTREME could benefit from an intelligent interconnect spacing analysis tool. It seems that some of these tools are on the horizon. But again, it was impossible to obtain sufficient and credible information to describe their virtues here.

Thus, XTREME does postlayout reengineering and has the following features:

  1. It can handle any number of interconnect layers (one at a time) and supports a variety of routing styles.
  2. It preserves hierarchy and, as is always the case with compaction, connectivity.
  3. It accepts shape data, symbolic data and LEF/DEF as inputs. DEF contains routing data; LEF contains technology and library data.
  4. It provides full control of spacing and wiring trade-offs.
  5. It automatically determines which wires are the most critical for cross-talk.

XTREME has been used extensively in the industry and has shown reductions in cross-talk of more than 50 percent.