2009 Issues

Designers’ fight for proper recognition has lasted 20 years.

The question was a familiar one. “Pete, what can we do to raise the visibility of designers and acknowledge the role they play in product development?”

A marketing guru at one of the major PCB EDA companies asked me this. It sounded familiar because it is the same question I asked 20 years ago when I first went to work for Printed Circuit Design magazine. Fresh off a design job, I had been hired as the green editor with a new cause and a head full of ideas about what the magazine could do to focus on PCB design and designers.

Not long after, I met some folks who were trying to start a global organization for designers. Within a short time, the IPC got involved, and thanks to Gary Ferrari and Dieter Bergman, among others, the IPC Designers Council was born. The excitement was palpable. Soon, local chapters were springing up all over the world. Printed Circuit Design ran ads in the magazine and did mailings to our subscriber list to encourage participation. When time allowed, I attended meetings all across North America. Even my home chapter in Atlanta had around 50 members, and 30 to 40 designers and engineers regularly attended monthly meetings.

Looking for the glue that would hold the organization together, and a feather designers could put in their caps, the organization developed the Certified Interconnect Designer program. The CID program was intended to certify a designer’s core competencies in design based on IPC standards.

Many designers signed up and took the basic certification test, and the Council directors moved to add more modules to the program based on advanced technologies and techniques. Gary and Dieter were very involved in the effort, and poured their time and energy into the Council with gusto. For a time, all was right with the world. The council grew, and before long, there were close to 50 local chapters in North America and Europe holding regular meetings.

Then slowly, but perhaps inevitably, the air went out of the balloon. Attendance fell and chapters began to go inactive. Today the IPC lists 23 local chapters. I recently sent emails to the email addresses of all 23 chapter heads, asking whether the chapter was still functioning and what activities they had planned. I received answers from nine, although I know that at least two or three more remain active. Five said they had regular meetings, and the other four described their chapters as active, but in their words, “on life support.” (Funny how all four used the same words.)

What happened to the excitement? What about all the hard work the local folks put into building the organization and the dreams of an active, global organization for PCB designers? Apathy, workload and lack of leadership are the comments I hear most. Many local chapter officers served several consecutive terms and grew tired of handling the load. Some said that they had trouble finding speakers for the meetings, or felt there was no compelling message or reason for attending.

On the positive side, several chapters are very active. North Carolina holds a conference and trade show every year and has been very successful. The Austin (Texas) chapter holds a Vendor Night every August. But, even if all 23 chapters were active, it would not be a ground swell.

Getting back to the original question, I’m not sure what we can do. Let’s estimate an average 50 members for each chapter. With 23 chapters, that’s 1,150 members. No one has hard numbers on the size of the PCB designer community. At one time we estimated there were 100,000 designers and engineers involved in PCB design in North America alone. Even if the number were half that today (it’s probably more), then Designer Council membership would be 2% of the audience. Maybe that is a good number, but it does not sound that good to me.

Editor-in-chief Mike Buetow has come up with a couple ideas that we plan to use in the coming months to highlight significant contributions designers have made to a project. We’re also looking for ideas from you. Whether you are involved in design or fab, we want to hear what you think PCD&F can do to increase the visibility of designers and acknowledge the role they play in product development. Email me, or go to our blog at pcdandf.blogspot.com and make yourself heard.

Pete Waddell is publisher and design technical editor of Printed Circuit Design & Fab/Circuits Assembly.

Understanding the new standard for predicting temperature rise.

Determining appropriate trace sizes for current requirements is an important aspect of board development. Since copper is not a perfect conductor, it presents a certain amount of impedance to current flowing through it, and some of the energy is lost in the form of heat. For many applications, it is necessary to predict the temperature rise caused by this loss, which traditionally has been accomplished using a chart created over 50 years ago by the National Bureau of Standards (IPC-2221, Figure 6-4), or by using one of several calculators based on it. The chart shows the relationship between current, conductor temperature rise and conductor cross-sectional area. If any two of these are known, the third can be approximated.

About 10 years ago, Mike Jouppi, then of Lockheed Martin, began performing experiments to examine the accuracy of this chart, because the predicted temperature obtained from it did not correspond to the data he was measuring on actual product. From these new data, he developed several new charts, which have been verified by a parallel study performed by the Naval Surface Warfare Center, Crane Division.

The results of both studies were compiled into IPC-2152, “Standard for Determining Current Carrying Capacity in Printed Board Design.”1 From the data in this new document, a method can be established to predict the temperature rise of a board trace more accurately, taking into account the effect of several variables such as board thickness and material, internal vs. external traces, still air environments vs. vacuum, and proximity to heat-sinking planes.

Estimations using these new data can be far more reliable than previously possible, without the use of more sophisticated thermal analysis tools.

IPC-2152 contains over 90 new charts in addition to the now infamous historical chart, but before exploring the new data, a few aspects of the historical chart should be understood. When the chart was created in the 1950s, there were no multilayer boards, and all the data were taken from surface traces. No one is sure where the internal chart came from, but it is thought that when multilayer board constructions became practical, the external values were merely doubled to get values for internal traces. This has since proven inaccurate. The thermal conductivity of FR-4 is better than that of air, so in a still-air environment, internal traces run cooler than external ones. The internal values are so conservative, however, that designers haven’t experienced problems using them, except for the large amount of board real estate needed to implement the recommendations.

An interesting result of the new study is that, although the values for internal traces were not scientific, by coincidence they very closely approximate the behavior of traces in free air. The new data also show the external trace chart was only safe for boards greater than 3 x 3˝ and with planes, so it was decided to remove the historical chart for external traces from IPC-2152. Recommendations for external traces can be easily obtained with the new charts and adjusted for the proximity to heat-sinking planes.

Because the internal chart values approximate the temperature rise of a trace in free air (which can be considered a worst-case scenario), it was decided to use it for the most conservative chart (Figure 1). Values obtained from this chart are very safe and will work in any circumstance except vacuum, regardless of other variables.



For example, let’s say a trace in a very thin flexible dielectric must carry 4 A continuously, and you want to limit the temperature rise to 10°C over ambient. To use the conservative chart, follow the 4.0 A current line across until it intercepts the 10°C temperature curve. Follow that intersection down to find the recommended cross-sectional area of 300 square mils. This cross-sectional area can be refined further by modifying it with known design constraints. These adjustments are described in Modifying the Chart Value.

After the recommended cross-sectional area is determined, it can then be converted to appropriate trace widths (based on the copper thickness used in the design) by using a chart presented later in this article.
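
As an illustration of that conversion step, the short Python sketch below turns a cross-sectional area in square mils into a trace width for a given copper weight, assuming the common approximation of roughly 1.37 mils of thickness per ounce of copper. The charts remain the authoritative source; this is only a cross-check of the arithmetic.

# Rough cross-check of the area-to-width conversion (assumes ~1.37 mils of thickness per oz of copper).
MILS_PER_OZ = 1.37  # nominal copper foil thickness per ounce

def trace_width_mils(area_sq_mils, copper_oz):
    """Trace width (mils) for a given cross-sectional area (square mils)."""
    thickness = copper_oz * MILS_PER_OZ
    return area_sq_mils / thickness

# The conservative-chart example: 300 square mils at 1 oz copper
print(round(trace_width_mils(300, 1.0)))  # ~219 mils, i.e. about 0.220 inch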

Figure 1 provides very safe estimates for most applications. For a more precise estimate based on specific design constraints, new charts have been developed. Since board development often depends on common laminate materials with common copper weights, separate sets of charts are provided based on 0.070˝ polyimide material with 3, 2, 1 and 0.5 oz copper in still air. For each of these, there is a primary logarithmic chart (Figure 2). Figure 2 is the baseline chart for 3 oz. copper, which is the universal chart used in IPC-2152, Figure 5-2, and recommended for both internal and external traces.


Since logarithmic charts are difficult to read, three additional charts that show temperature curves at successively finer resolution follow each of these primary charts. This has been duplicated by another complete set of charts for vacuum environments, and all these are duplicated again to provide versions for inch and metric units. (There are also charts in the appendix for internal vs. external traces.)

(For the purposes of this tutorial, we will look up a starting value from the universal chart and then modify it for our particular design constraints. If you have the extra charts in IPC-2152, you can select a chart more specific to your application, and skip the corresponding modification outlined below.)

Obtaining the Appropriate Chart Value

The first step in estimating appropriate trace widths is determining the acceptable temperature rise. This is an important point, because most board designers are familiar with “ambient temperature,” a simple term that describes the environment in which the electronics assembly operates. For our application, “ambient temperature” can be misleading, because the trace’s temperature rise is added on top of all the contributing factors combined: not just the ambient temperature, but the ambient temperature plus the heat from nearby components and traces.

For this reason, the new standard prefers the term “local board temperature” to “ambient temperature.” The local board temperature can be significantly higher than the surrounding environment, and the temperature rise of a single trace is added on top of that. For example, the product may be required to operate in an ambient temperature of 125°C, and the area of the trace evaluated may have power devices and other high current traces in close proximity. The local temperature already may be approaching the maximum continuous operating temperature of the board material itself, and a lower temperature rise may need to be defined for the single trace. (Parallel traces are a critical factor that should not be ignored. The added temperature from surrounding traces can have a significant effect on the local board temperature, and should be considered in every evaluation. IPC-2152, Appendix A.3.3, discusses this in detail.)

In general, traces operating at high temperature waste power and add thermal stress that may lead to early failure, so a low temperature rise should be a design goal whenever possible.

Once you have settled on an acceptable temperature rise for the trace being evaluated, it is a simple matter to find the cross-sectional area required for the current requirement, using either the conservative chart or the new universal one. (Use the new one only if your board is greater than 3x3˝!)

Modifying the Chart Value

If the conservative chart (Figure 1) was used to obtain the starting chart value (CV), it can be used as is, without additional analysis. If the universal chart (Figure 2) was used to obtain the CV, keep in mind that it is based on a board that was constructed with 3 oz. copper on polyimide 0.070˝ thick. To get a more accurate estimate, several modifications may be performed to the CV. The easiest way is to start with a modifier of 1.00, then add or subtract based on the following paragraphs to get the final modifier, then multiply the modifier and the CV to get the modified cross-sectional requirement.

Start with a modifier of 1.00, and go through the following steps:

1. Copper thickness modifier. If the universal chart was used to obtain the CV, and something other than 3 oz. copper is used, take advantage of the fact that for the same cross-sectional area, thinner copper has more surface area and is therefore better at dissipating heat. Thicker copper will have thinner trace width for the same cross-sectional area, less surface area, and will operate at a higher temperature (Figure 3).

2. Plane modifier. Because the proximity to heat-sinking planes has such a drastic impact on the temperature, the presence of planes will cause the most significant adjustment to the CV. For boards with a 1 oz. plane that is at least a 3 x 3˝ area (directly under the trace being evaluated!), use Figure 4 to determine the modifier:

For 2 oz. planes, subtract another 0.04 from the modifier.

For planes greater than 5 x 5˝, subtract another 0.04 from the modifier.



3. Board material modifier. The polyimide material used in the study has a thermal conductivity of 0.0138, and FR-4 is slightly worse at 0.0124, which makes a difference of about 2% in the CV. Materials with different thermal properties may influence the recommendation. For FR-4 boards, add 0.02 to the modifier (Figure 5).



4. Board thickness modifier. The new charts were constructed using data from 0.070˝-thick boards. Thinner boards will be hotter, and thicker boards will be cooler (IPC-2152 A.4.2) (Figure 6).



5. Altitude modifier. This one may be refined in the future, but knowing that air is thinner at higher altitude, and traces run 35 to 55% hotter in a vacuum, either the conservative chart should be used or the CV should be increased for high-altitude designs.

6. Derating modifier. Keep in mind the charts have no derating applied, but many variables may affect the CV prediction and should be considered for marginal designs. For example, the planes modifier is based on the assumption that the trace sits over a solid plane, but in actual practice may be located near a board edge or over a plane that has clearances in it for through-holes or plane splits, which will be less effective in dissipating heat. Process variations that affect the trace geometry may also influence the results (in the form of voids, nicks, overetching, final plated conductor thickness, etc.). These variations all have acceptable tolerances in the finished product, but may affect the estimated temperature rise. It is advised that some amount of standard derating be applied to the CV – 5%? 6%? – but a full examination of this modifier is beyond the scope of this article.

7. Environmental modifier. The new data describe traces in still air (which assumes some amount of natural convection), so these recommendations should be valid even for applications inside a sealed enclosure. But for many other applications, airflow will be present, and this additional heat transfer may allow a reduction in cross-sectional area. This is a complicated subject, and recommendations related to airflow are beyond the scope of this article.
Some thermal analysis may be needed if the designer needs to use thinner traces than what the available data suggest.

Obtaining the Final Trace Width

At this point you have selected a value from one of the charts and modified it for your specific design parameters. Multiply the starting CV with the final modifier to get the final recommended cross-sectional area for the application.

The final step is converting the resulting cross-sectional area to the final trace width, based on the thickness of copper used in the board construction. This step should not be confused with the modification based on copper thickness to account for varying surface area. Figure 7 is a direct correlation between cross-sectional area and (copper thickness x trace width):

For example, assume the design of a four-layer FR-4 board that will be 3 x 5˝ by 0.063˝ thick, and the two internal layers are power and ground planes that are 0.020˝ away from the surface layers. We need a trace that can accommodate 4 A continuously, and we want to limit the temperature rise to 10°C above the local board temperature.



We could use the conservative chart to get a cross-sectional area of 300 square mils, and use the final conversion chart for 1 oz. copper to get:

Trace width = 0.220˝.
That’s the easy answer, and if the board design allows, it can be used as is. Because we have good information about our design, and the board is larger than 3 x 3˝, the universal chart can be used instead. Following the 4 A line across to the 10°C temperature curve, we see that it intersects with the line for a starting value of 200 square mils cross-sectional area.
CV = 200, starting modifier = 1.00

Next, we can modify based on 1 oz. copper:
Modifier = 1.00 – 0.14 = 0.86

Because we have a 1 oz. plane 0.020˝ away, using the Planes Chart, we get:
Modifier = 0.86 – 0.53 = 0.33

Our board thickness of 0.063˝ is less than 0.070˝, thus using the Thickness Chart:
Modifier = 0.33 + 0.10 = 0.43

We are using FR-4 instead of polyimide, so using Material Chart:
Modifier = 0.43 + 0.02 = 0.45

And as a judgment call, we add 3% for derating:
Modifier = 0.45 + 0.03 = 0.48.

Now we modify our starting 200 square mils chart value:
200 x 0.48 = 96

Using the final chart to convert 96 square mils to a 1 oz. trace width, we get:
0.070˝ trace width.
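
The arithmetic of this worked example can be strung together in a few lines of Python. This is only a restatement of the numbers above, not a general-purpose calculator; the individual modifiers still have to be read from the charts in IPC-2152.

# Worked example from the text: 4 A, 10°C rise, 1 oz copper, 1 oz plane 0.020" away,
# FR-4 board thinner than 0.070" and larger than 3 x 3".
cv = 200.0            # starting chart value from the universal chart, square mils

modifier = 1.00
modifier -= 0.14      # copper thickness modifier (1 oz instead of 3 oz)
modifier -= 0.53      # plane modifier (1 oz plane 0.020" away)
modifier += 0.10      # board thickness modifier (thinner than 0.070")
modifier += 0.02      # material modifier (FR-4 instead of polyimide)
modifier += 0.03      # derating, a judgment call

area = cv * modifier              # 200 x 0.48 = 96 square mils
width = area / 1.37               # 1 oz copper is about 1.37 mils thick
print(round(modifier, 2), round(area), round(width))  # 0.48, 96, ~70 mils (0.070")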

The historical chart would have recommended a 0.095˝ trace width for a 4 A, 1 oz. external trace, which illustrates how the new data can be used to push the envelope. In many cases, the design can use less board space when appropriate parameters are used to derive the estimate.

References
1. IPC-2152, “Standard for Determining Current Carrying Capacity in Printed Board Design,” August 2009.

Acknowledgments

Thanks to Borko Bozickovic for help in analyzing data and developing charts, and to Mike Jouppi for making it all happen.

Jack Olson, CID+, is a circuit board designer at Caterpillar Inc. (caterpillar.com).

High frequency current wants to concentrate near conductor edges. How to compensate for it.

Skin effect is the tendency of high-frequency current to concentrate near the outer edge, or surface, of a conductor, instead of flowing uniformly over the entire cross-sectional area of the conductor. The higher the frequency, the greater the tendency for this effect to occur.

There are three possible reasons we might care about skin effect:

  1. The resistance of a conductor is inversely proportional to the cross-sectional area of the conductor. If the cross-sectional area decreases, the resistance goes up. The skin effect causes the effective cross-sectional area to decrease. Therefore, the skin effect causes the effective resistance of the conductor to increase.
  2. The skin effect is a function of frequency. Therefore, the skin effect causes the resistance of a conductor to become a function of frequency (instead of being constant for all frequencies.) This, in turn, impacts the impedance of the conductor. If we are concerned about controlled impedance traces and transmission line considerations, the skin effect causes trace termination techniques to become much more complicated.
  3. If the skin effect causes the effective cross-sectional area of a trace to decrease and its resistance to increase, then the trace will heat faster and to a higher temperature at higher frequencies for the same level of current.


Faraday’s Law (of magnetic induction) is the fundamental principle behind EMI and crosstalk. It is also the fundamental principle behind a motor or a generator. Simply stated, Faraday’s Law says that a changing current in one wire causes a changing magnetic field, which induces a current in the opposite direction in an adjacent wire.

But here is the step that is not particularly intuitive. If a changing current in wire A can cause a changing magnetic field, and that changing magnetic field can induce a current in the opposite direction in an adjacent wire B, then that changing magnetic field can also induce a current in the opposite direction in wire A, itself. This is the fundamental nature of inductance.

Consider a wire, A. Assume the current in wire A suddenly increases. At the very first instant of time, there is a changing magnetic field around A that induces a current in the reverse direction in A that cancels out the original current. The net change in current in A is zero. At the very next instant of time, the changing magnetic field around A begins to decay slightly, and a small amount of net current begins to flow. At the very next instant of time, the changing magnetic field decays a little more, and a little more net current begins to flow. This process continues during successive instances of time until the full increment of current is flowing along the wire. This is the nature of inductance and describes the effect of inductance on current flow.

Suppose the process described above is interrupted. Let’s say halfway through, the original current suddenly changes direction. Now the process starts over, but this time in the opposite direction. Every time the original current changes direction, the process starts over in the reverse direction. The number of times the original current changes direction each second is the frequency of the current. If the frequency is high enough, the full current never gets to flow across the entire cross-section of the wire.

Now, during this process, here is the question: During the period of time in which the magnetic field is decaying, where does the current flow? It flows where the magnetic field is weakest. And the magnetic field weakens the further it is away from its source. Its source is along the center of the conductor. So the current that does flow tends to flow strongest the further the distance from the center of the conductor – that is, along the outer surface. This is the skin effect.

Steady-state currents flow uniformly across the entire cross-sectional area of the conductor. When we think about the skin effect, we tend to think in terms of the current flowing only at the outer surface. This is not really true. The issue really is current density. Normal currents have a current density that is uniform and equal everywhere over the cross-sectional area. But currents impacted by the skin effect have a current density that is highest at the surface of the conductor, decaying exponentially between the surface and the center of the conductor.

If current density is represented by J, then:

J = Jo × e^(–d/sd)

where

Jo = current density at the surface of the conductor
e = base of the natural logarithm (2.718).
d = distance measured from the surface toward the center of the conductor
sd = skin depth

Figure 1 illustrates the two cases of uniform current density, and current density impacted by the skin effect.



Skin depth. Skin depth is defined as the point where the current density falls to the current density at the surface (Jo) divided by e, or

J(sd) = Jo / e ≈ 0.368 × Jo

The skin depth defines a cylindrical shell at the circumference of a wire or a rectangular shell around a trace. We tend to think in terms that the current flows uniformly through that shell, and not anywhere else along the conductor. Therefore, the effective cross-sectional area of the conductor is that shell, and the effective resistance the current sees is the resistance defined by that shell. But if current density follows an exponential function from the surface to the center of the conductor, then this can’t be the case. The true effective cross-sectional area only can be determined by using calculus; i.e. integrating the area under the current density curve.

Here is the interesting thing. It can be shown mathematically that multiplying the current density at the surface (Jo) by the cross-sectional area defined by the skin depth will result (at least approximately) in the same answer as if calculus techniques were used. Therefore, using the effective cross-sectional area defined by the skin depth works, even though it does not represent the actual truth.

We say the skin depth defines the effective cross-sectional area of the conductor not because this is true, but because it works!
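
As a numerical sanity check on that claim (my own illustration, not from the article), the Python sketch below integrates the exponential current-density profile across a round wire and compares it with Jo flowing uniformly in a shell one skin depth deep. For a skin depth much smaller than the radius, the two agree within a few percent.

import numpy as np

def total_current_exact(radius, skin_depth, j0=1.0):
    """Integrate J(r) = j0 * exp(-(radius - r)/skin_depth) over the circular cross-section."""
    r = np.linspace(0.0, radius, 200001)
    dr = r[1] - r[0]
    j = j0 * np.exp(-(radius - r) / skin_depth)
    return np.sum(j * 2.0 * np.pi * r) * dr

def total_current_shell(radius, skin_depth, j0=1.0):
    """Approximate the current as j0 flowing uniformly in a shell one skin depth deep."""
    inner = max(radius - skin_depth, 0.0)
    return j0 * np.pi * (radius**2 - inner**2)

# Example: radius 10, skin depth 1 (any consistent units)
print(total_current_exact(10.0, 1.0))   # ~56.5
print(total_current_shell(10.0, 1.0))   # ~59.7, within a few percent
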
Skin depth is inversely proportional to the square root of the frequency (in hertz):

sd = √( ρ / (π × f × µ) )

where ρ is the resistivity of the conductor and µ is its permeability. For copper this works out to roughly 2.6/√f inches (about 2.6 mils at 1 MHz).
Two very important things should be noted here: First, skin depth does not depend on the shape of the conductor. Skin depth is a distance measured in from the surface of the conductor toward the center of the conductor. Second, if skin depth is deeper than the center of the conductor, the current is not limited by the skin effect, and the current is flowing uniformly throughout the entire cross-sectional area of the conductor. Therefore, a thicker conductor is limited by the skin effect at a lower frequency than is a thinner conductor.

Crossover frequency. Consider a rectangular trace. For simplicity we will consider it to be much wider than it is thick. At low frequencies, the skin depth is deep enough that it extends further than half the trace thickness, and therefore, the skin effect does not come into play. At higher frequencies, the skin depth is smaller than half the thickness, so the effective cross-sectional area is limited by the skin effect. There is a unique frequency where the skin depth is just equal to half the thickness of the trace. This is the frequency where the skin effect just starts to come into play. This frequency is called the crossover frequency.

Calculating the crossover frequency can be difficult. For a rectangular trace, I estimate it as:



where

f = crossover frequency
w = trace width
t = trace thickness

A graph of crossover frequency as a function of trace thickness is shown in Figure 2. Skin effect calculations can be difficult. Calculators are available that simplify the calculations by determining several skin effect parameters, including the crossover frequency of a trace, and then frequency adjusted resistance values at user defined frequencies.
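
Brooks’ own estimating formula for a rectangular trace is not reproduced here (it depends on both width and thickness), but the basic idea in the preceding paragraphs – the crossover is where the skin depth equals half the trace thickness – can be illustrated with the standard copper skin-depth approximation of roughly 2.6/√f inches. Treat this Python sketch as an assumption-laden illustration, not the article’s calculator.

# Crossover frequency where the copper skin depth equals half the trace thickness.
# Uses the standard approximation: skin depth (inches) ~ 2.6 / sqrt(f in Hz).

def skin_depth_inches(f_hz):
    return 2.6 / f_hz**0.5

def crossover_frequency_hz(thickness_inches):
    # Solve 2.6 / sqrt(f) = thickness / 2  ->  f = (5.2 / thickness)^2
    return (5.2 / thickness_inches) ** 2

t_1oz = 0.00137                          # 1 oz copper, about 1.37 mils
f_co = crossover_frequency_hz(t_1oz)
print(f_co / 1e6)                        # roughly 14 MHz for 1 oz copper
print(skin_depth_inches(f_co) * 1000)    # ~0.69 mils, i.e. half the 1 oz thickness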



Proximity and ground effects. If frequency is high enough so that the effective cross-sectional area of a conductor is limited by the skin effect, there are two other effects that may also need to be considered. When a signal is close to its return, a mutual inductance exists that may further distort the current flow. Consider a rectangular trace routed directly over and close to a reference plane. If the frequency harmonics are high enough, the skin effect results in the current flowing through an effective cross-sectional area that is a rectangular shell around the edges of the conductor. At the same time, the mutual inductance between the signal current (on the trace) and its return (on the plane) causes the return current to locate itself as close as possible to the signal – i.e., on the plane directly under the trace. This same effect causes the signal current in the rectangular shell to locate more on the planar side of the shell than on the outer side of the shell. This effect is called the “ground effect” (Figure 3).



A similar effect occurs when a signal and its return are on closely spaced wires or traces – as is the case with differential signals. The mutual inductance between the two traces causes the signal currents to distort to the side of the effective cross-sectional area between the two traces. This is termed the “proximity effect.”

Since the proximity and ground effects distort the current path of the signals through the conductors, they further distort the effective cross-sectional area of the conductor, further increasing the effective resistance of the conductor. These effects are very difficult to quantify. Howard Johnson suggests they might be on the order of 30% or so, and he has a good discussion of this on his website.2

Lossy transmission lines. The skin effect, by changing the effective cross-sectional area of a conductor, causes the effective resistance of the conductor to change with frequency. This is of little consequence for most designers most of the time. But there is one circumstance where it becomes very important to PCB designers. The skin effect is one of the two primary causes of losses in lossy transmission lines (the other is dielectric losses.)3

When signals flow down a wire or trace, they reflect back again. The issue is whether we care about this reflection. And we do care if the line is “long” relative to the rise time of the signal.4 The solution to the reflection problem is to design our trace to look like a transmission line and then terminate the line in its characteristic impedance. Figure 4 shows a model of an ideal transmission line and the formula for the characteristic impedance. Of importance is the fact that the characteristic impedance has no phase shift at any frequency, and therefore, it is purely resistive at every frequency.

Figure 5, however, shows what happens to a transmission line model at high frequencies when the skin effect (and dielectric losses) come into play. Of particular significance is the fact that there is no single-valued resistor that can properly terminate such a line at every frequency.
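
The frequency dependence can be seen directly from the telegrapher’s expression for characteristic impedance, Z0 = sqrt((R + jωL)/(G + jωC)). The Python sketch below uses representative per-unit-length values that I have assumed for illustration (they are not taken from the figures) and lets R grow with the square root of frequency to mimic the skin effect; the point is simply that the resulting Z0 is complex and drifts with frequency, so no single resistor value terminates the line everywhere.

import cmath, math

# Assumed per-unit-length values for a roughly 50-ohm line (per meter)
L = 350e-9      # H/m
C = 140e-12     # F/m
G = 0.0         # S/m (dielectric loss ignored here)
R_REF = 5.0     # ohm/m at the 1 MHz reference frequency
F_REF = 1e6

def z0(f):
    """Characteristic impedance of a lossy line, with R rising as sqrt(f) to mimic skin effect."""
    w = 2.0 * math.pi * f
    R = R_REF * math.sqrt(f / F_REF)
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

for f in (1e6, 10e6, 100e6, 1e9):
    print(f, round(abs(z0(f)), 1), round(math.degrees(cmath.phase(z0(f))), 1))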



The classic symptom of a lossy transmission line is an eye diagram that starts closing. Properly terminating lossy transmission lines is much more complicated than terminating “lossless” lines.5 When frequencies become high enough that the skin effect becomes a factor in traces, the resulting transmission line losses become one of the more significant signal integrity challenges for board and system design engineers.

References

1. Ultracad.com/ucadpcb.htm.
2. Howard Johnson, “Skin Effect Calculations,” sigcon.com/Pubs/news/skineffect.htm.
3. For a good discussion of these two effects, see Howard Johnson and Martin Graham, High-Speed Signal Propagation, Prentice Hall, 2003, pp. 185-216.
4. Douglas Brooks, Signal Integrity Issues and Printed Circuit Board Design, Prentice Hall, 2003, chapter 10, pp. 175-203.
5. Ibid., chapter 17, pp. 311-320.

Douglas Brooks, Ph.D., is president of UltraCAD Design Inc. (ultracad.com).

  

Pinning down FPGA/PCB integration.

Present day field-programmable gate arrays (FPGAs) are fast and provide excellent ways to develop advanced products. They have interface speeds reaching many Gbps and can replace traditional application specific integrated circuit (ASIC) technology by incorporating embedded processors, digital signal processor (DSP) cores and memory systems.

As FPGA complexity increases, implementing one on a printed circuit board (PCB) has become a daunting task. Some of the larger devices have nearly 2000 pins in a fully populated ball grid array (BGA) package. The smaller devices are often used in clusters of multiple FPGAs, which also pose a challenge. Frequent use of advanced (complex) memory standards, such as DDR3, further complicates the FPGA/PCB integration.

Traditional pin swapping, as well as swapping defined pin groups or banks, is not practical, as the swappability of an FPGA is both device- and application-dependent. At the same time, it is frequently in flux during PCB design. Manual- or script-based methods of FPGA/PCB integration are seriously outdated. Companies can struggle for months with the pin assignment for a single FPGA/PCB integration, resulting in a critical bottleneck. What’s needed is an integrated FPGA/PCB co-design flow.

The over-the-wall flow needs to be revisited. By bringing FPGA and PCB designers together in a common environment, the impact of a change in one domain becomes visible in the other, and vice versa.

Nearly everyone uses the FPGA vendors’ design tools in one form or another, but these tools are not necessarily PCB-aware. Inevitably, every FPGA will end up on a PCB, resulting in an overcomplicated design with excessive layers – as well as a struggle to reach acceptable signal integrity.

FPGA pins must be assigned in a way that is optimal to the PCB in terms of the ability to break out the FPGA signals on a few layers and the number of vias, interconnect length and resulting signal quality. Unfortunately, we cannot randomly assign signals. The swappability of pins is device- and application-dependent.

We could manually set up swap rules, but this would be a monumental exercise, especially when considering the multitude of pin and bank assignment rules that come with the latest circuits. For example, moving a signal from one pin to the next may look harmless but can actually cause FPGA timing failures. We need to follow a large set of FPGA vendor and device rules and create corresponding place and route and synthesis constraint files for the FPGA. In addition, the pin assignment process includes selecting a proper I/O standard, sufficient drive strengths and termination requirements. Because the FPGA/PCB integration is in the critical path for the design, we need to look for methods to change the serial process into a concurrent process that saves as much time as possible.

We can start the pin assignment process as soon as the FPGA designer can supply an HDL entity (VHDL) or module (Verilog) declaration file. We can use this data to create a preliminary signals list and automatically create a schematic symbol and start connecting the FPGA. As this takes place, the FPGA can be simulated and synthesized, allowing it and the system design to progress in parallel. Both designers can cooperate in creating pin assignments.

There will likely be a few iterations where the pin assignment will be altered, so this process step can also be used to update the symbols and the schematic to save you from manual clean up. A key element in the process is to monitor all the data files involved and manage synchronizing both FPGA and PCB as data changes. A form of data management that monitors changes in combination with a data synchronization wizard is used to manage consistency between the FPGA and the PCB flow.

We cannot just look at the design’s component pins during assignment and optimization. If you read BGA Breakouts and Routing: Effective Design Methods for Very Large BGAs by Charles Pfeil, you understand how the complex breakout patterns result in a completely different pin ordering. Therefore, we must also take into account breakouts and pre-routes. Also, further pin swapping might be needed as the design progresses. In an integrated FPGA/PCB flow, such operations will be made true to the device and the application-specific swapping rules.

As many modern FPGAs operate with extremely high data rates, it is vital to have fast access to signal and power integrity (SI/PI) tools that are easy to set up and can be used by the everyday designer to validate the optimization result in the SI/PI domain.

Finally, the FPGA designer must run a final simulation and place and route operation to complete the design. While manual methods are still common, it’s clear that today, a completely integrated design flow is required to be able to reach design closure within a realistic timeframe and to hit performance and cost targets. 

Per Viklund is director of IC Packaging & RF at Mentor Graphics.

  

The risks of via failure must be balanced against those of laminate failure based on material choice, via size and grid, and other factors.

Plated through-hole (PTH) reliability is influenced by high-density interconnect (HDI) designs and by lead-free and mixed-technology (leaded and lead-free solder) assembly. HDI adds several new structures to the PTH family, including microvias and buried vias. Microvias can connect to one, two or three layers, can be stacked or unstacked, and filled or unfilled. The subcomposite structures can come with one, two or three subs per composite, often with mixed laminate materials. There are buried vias of various sizes, and these also come in many different forms: blind, core, or sub; filled or unfilled; capped or non-capped.

HDI vias bring with them at least two new failure mechanisms that are reflow-induced and very difficult to screen: base or pad separation of the microvias,1 and eyebrow cracking of the laminate, which occurs near buried vias. Not only are there more failure mechanisms to monitor than ever, but they also compete. As an example, determining whether PTH or laminate mechanisms dominate a failure is a strong function of the specific HDI design. Surviving lead-free and mixed-assembly reflow significantly magnifies the aforementioned failure mechanisms, giving new life to former mechanisms and introducing a unique PTH-driven internal delamination mechanism2,3 referred to as “invisible” delamination.

The PTH test used at Endicott Interconnect Technologies (EI) for the past 18 years is the current induced thermal cycling (CITC) test, a variation of other known current-induced thermal cycling tests.4-9 The tester uses proportional control algorithms to continuously adjust the current for each coupon in each cycle, to achieve a precise and repeatable temperature cycle with a prescribed linear ramp and dwell time. The typical ramp rate, as used for all the data in this article, is 3°C/sec. The high-temperature dwell time is typically between 30 sec. and 40 sec., which has been shown by modeling and measurements to be sufficient to achieve thermal equilibrium.8 Fans are then turned on to start the cooling phase. FIGURE 1 illustrates the cycle and also outlines the three main uses for the test, including PTH life curves, rapid product monitoring or evaluations, and real-time video recording of PTH failure mechanisms during a coupon heat cycle.

 

The place to start a discussion of PTH and laminate problems is with a review of the standard through via, including barrel life as a function of temperature. FIGURE 2 depicts select recent and vintage life curves to review the problem that contributed to all other problems, that of CTE mismatch between copper barrel and the surrounding laminate at reflow temperatures. The CTE mismatch can produce deformation of such magnitude that even well-plated vias can survive only a few assembly passes without cracking.

 

Note the other material curves in Figure 2. The purple curve shows why polyimide, though expensive and a challenge to process at high aspect ratios, is a popular choice with the military for assembly robustness. The green curve is a high performance, low loss, polyphenylene oxide-filled resin that consistently passes all lead-free testing and has the best overall PTH performance of any laminate tested. (It is not a universal solution because its price/performance likely exceeds most applications.) Note that the blue curve is a cost-effective, high-Tg phenolic epoxy with excellent PTH life in all regions including 220ºC assembly (40 cycles), but at a lead-free assembly temperature of 260ºC, it lasts only 10 cycles.  

While Figure 2 shows the reason for assembly-driven via failures, FIGURE 3 illustrates why they can be a serious reliability concern as latent opens in the field. At reflow temperatures above the Tg of the laminate material, the z-axis expansion of the laminate is an order of magnitude greater than that of the copper, which forces the copper in tension where it plastically deforms (for example, the copper barrel becomes permanently longer than it was). Similarly, the laminate sees significant compressive forces in the zone around the PTH barrel because the barrel acts as a rivet to constrain it from expanding, as it would away from the PTH. These considerable compressive forces create a pressure gradient that causes the laminate, now well above Tg, to “flow” away from the barrels. Ironically, cross-linked thermoset polymers are not supposed to flow as other polymers, but some form of movement or reshaping is indicated by the permanent deformation seen in laminate x-sections after reflow or solder shock – the laminate is now longer between vias than in the zone directly next to them (Figure 3c).  
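
A rough back-of-the-envelope calculation shows why the mismatch is so punishing. The CTE and Tg values in this Python sketch are generic, assumed numbers for illustration only (they are not the measured values behind Figure 2), but they reproduce the order-of-magnitude gap the text describes.

# Rough z-axis expansion mismatch between a PTH copper barrel and the laminate during reflow.
# All material values below are assumed, generic numbers (CTE in ppm/°C).
CTE_COPPER = 17
CTE_LAMINATE_BELOW_TG = 60
CTE_LAMINATE_ABOVE_TG = 280
TG = 170            # assumed glass transition temperature, °C
T_START = 25
T_PEAK = 260        # lead-free reflow peak, °C

copper_ppm = CTE_COPPER * (T_PEAK - T_START)
laminate_ppm = (CTE_LAMINATE_BELOW_TG * (TG - T_START)
                + CTE_LAMINATE_ABOVE_TG * (T_PEAK - TG))

print(copper_ppm / 1e4, "% copper expansion")     # ~0.4%
print(laminate_ppm / 1e4, "% laminate z-axis")    # ~3.4%, roughly an order of magnitude more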

 

The combination of these permanent deformations (longer barrel, shorter laminate) means that any crack formed at the peak temperature will be forced into slight compression on return to room temperature. In addition to the illustration, Figure 3 includes ESEM photos of a barrel crack open at 230ºC and the same crack closed tightly again after cooling to ambient. It also illustrates electrical measurements of eight coupons (left axis) with intentionally very weak barrels through a single simulated reflow cycle in a convection oven. They are shown completely open during heating (temperature, red line and right axis) between T = 140º and 185ºC, and solidly closed back to their original resistance values during cooling, starting between T = 175º and 110ºC.

FIGURE 4a plots the hot resistance by coupon and reflow cycle. A 3% failure criterion was used in this case, instead of 5% or 10%, because of the difficulty of reading all the coupons simultaneously while still hot, and it appears to best fit the sharp slope change of the data. The final results are plotted in FIGURE 4b. The curve for cycles to fail from actual 220ºC reflow passes is almost identical to the 220ºC CITC cycles to fail curve.

As if the lead-free situation were not complex enough, the magnitude of the deformations at lead-free temperatures is so significant that they often trigger different and competing failure mechanisms along the life curve. While this fact does not invalidate the curves, it could have design implications, and it certainly sheds light on the lead-free challenge. For example, FIGURE 5 is a life curve for a high-Tg epoxy constructed with two different and independent CITC coupons – one with the daisy chain stitch external (top and bottom surfaces) and one with the stitch on the nearest two internal planes, top and bottom. The two coupon types yield exactly the same cycles to failure at 150º and 175ºC, and the same failure mode (center barrel crack). But they diverge slightly at higher temperatures – the external coupon, with lower life in this case, fails at the external rim, while the internal coupon lasts a little longer but fails at the inner plane connection, at least at 260ºC.

  

 

As a second example, FIGURE 6 compiles results of the same experiment for a high-Tg filled resin. The same three failure mechanisms compete again, this time with the inner plane failing slightly earlier than the rim at 260ºC, but both differing from the center barrel crack seen at conventional reflow temperatures. Note that none of the curves differs significantly, and a repeat of either test could yield a different result by material, depending on a number of factors, but these examples are otherwise provided as one more illustration of the complexity of qualifying laminate boards for lead-free assembly.

  

 

‘Invisible’ Delamination

Thermally induced internal delamination is one of the original and ever-present failure mechanisms for laminate substrates. The root cause is generally linked to the explosive vaporization of entrapped moisture at high temperatures, especially during reflow. The alternate names (blistering, popcorning) not only reflect the mechanism but also hint at another fortunate attribute – when they occur within laminates of any thickness, they create a measurable opening within the substrate, almost always visible at the surface as a raised or discolored area.

However, as lead-free evaluations continued, users and suppliers began to discover another form of delamination with clear distinctions from this classic, visible form. The new type appears only between vias, not open areas; occurs within resin, not at glass interfaces; and is highly dependent on the aspect ratio and grid of those vias. That is, there is no visible difference between a board module site that has this delamination versus one that does not. While most believe that both mechanisms are due to expansion/vaporization of trapped moisture with temperature, the unique nature of “invisible delamination” is so striking as to demand a greater understanding. Figure 7 shows a unique photo of delamination found by x-ray that vividly answers why this mechanism is invisible – despite the large rupture between the two vias, there is otherwise little or no change to the surrounding material, planes or dimensions. But this answer only raises new questions – how does the material within the rupture disappear, seeming to violate “conservation of mass,” and how does this happen with little effect to surrounding laminate?

Whatever the root cause, the important question is, How does one control this unique lead-free failure mechanism, given that the severity is a function of material selection, peak reflow temperature, board thickness, hole size, hole grid, process history including moisture content, and board design/construction? Clearly, such a multidimensional problem demands an evaluation approach that is also multidimensional. That is, classifying a material as lead-free compatible based on a single coupon passing a 5 x 260ºC reflow test is no longer a viable method.

The real surprise and complexity of HDI is not only how many different via structures and combinations it adds to the design mix today, but also how strongly dependent the link is between specific designs and reliability. In BGA technology of the 1980s and ’90s, two different products may have had a different number of wiring vias on a different thickness board, but the failure modes and overall reliability were quite predictable, even if challenging at high aspect ratios. And once a design space was qualified, specific products did not have to be requalified to know they would work. But the large number of via types and constructions available, often with mixed materials within the composite, leads to unprecedented complexity, especially when combined with the narrow margins of lead-free assembly. The key to quality and reliability assurance is to know the weakest link for any specific product, but for HDI designs, that is often difficult to predict: Will it be the microvias, through vias, or a laminate failure mode induced by the vias?

Fortunately, there are some rules of thumb that apply to this design versus reliability question.

Conclusions

If the past few decades have seen such an increase in complexity and failure mechanisms, both laminate and copper related, what will be the expectation for future technology? All via structures of future technology will be smaller and less through-via-like. Interestingly, for all the failure mechanisms reviewed, smaller is better! For example, FIGURE 8 shows an example of a leading-edge board constructed with z-interconnect, which in some cases is the only available means to meet electrical (no via stubs) or wiring requirements. While the z-interconnect itself is a challenge, the product shown so far appears to be resistant to the mechanisms discussed. There are no through vias, and the through-via-like structures that remain are small enough, and not on a dense enough grid, for the product to escape both invisible delamination and eyebrow cracks, at least with the materials and parameters evaluated so far.

The comfortable qualification margins of past products may no longer be possible with today’s HDI designs and lead-free reflow. Reflow and via life requirements may need to be tailored for a specific product with the smart and innovative use of coupon tests. Design for reliability today involves balancing the potential risks of via failure against those of laminate failure, based on material choice, via size and grid, and the mix of through vias versus compliant HDI structures.

 

Kevin Knadle is a senior advisory reliability engineer with Endicott Interconnect Technologies Inc.

Acknowledgments

The author acknowledges the contributions of colleagues including Ron Lewis, Bob Japp, Bob Harendza, John Lauffer, Bill Rudik, Voya Markovich, Roy Magnuson, Anish Bramhandkar, Jim Stack (EI); Wayne Rothschild (IBM); Binghamton University IEEC.

References
1.    Andrews, P., Parry, G., Reid, P., “Learning from Microvia Failure in Lead-Free Assembly,” Printed Circuit Design and Manufacture, June and July 2006.
2.    Rothschild,W., Kuczynski, J., “Lessons Learned about Laminates during Migration to Lead-Free Soldering,” Apex, March 2007.
3.    Ehrler, S., “The Compatibility of Epoxy-based Printed Circuit Boards with Lead-free Assembly,” Circuitree, June 2005.
4.    Knadle, K.T, Ferrill, M.G., “Failure of Thick Board Plated Through Vias with Multiple Assembly Cycles—The Hidden BGA Reliability Threat”, SMTA Journal of Surface Mount Technology, vol. 10, no. 4, October 1997.
5.    Knadle, K, “Proof is in the PTH – The Critical Link between PTH Processes and PCB Reliability,” Endicott Interconnect Technical Symposia, July - October 2003.
6.    Knadle, K.T., Jadhav, V.R., “Proof is in the PTH – Assuring Via Reliability from Chip Carriers to Thick Printed Wiring Boards, ECTC Conference, 2005.
7.    Knadle, K.T. “Analysis of Transient Thermal Strains in a Plated Through Hole using Current Induced Heating and Transient Moire Interferometry,” master’s thesis, Binghamton University, 1995.
8.    Ramakrishna, K., Sammakia, B., “Analysis of Strains in Plated Through Holes During Current Induced Thermal Cycling of a Printed Wiring Board”, 7th Symposium on Mechanisms of Surface Mount Assemblies, 1995.
9.    IPC-TM-650, Method 2.6.26, “DC Current Induced Thermal Cycling Test.”

How to calculate PCB trace width and differential pair separation, based on the impedance requirement and other parameters.

Impedance control is about accuracy. If one or two parameters are not carefully determined, we can lose the advantage of the more accurate parts of the process (for example, the use of expensive field solver programs). With simulators and calculators, we can determine impedance from trace width (used for analysis) or trace width from impedance (used for design). The calculations also depend on other parameters, but the role of width and impedance determines the usage. To get the best accuracy, perform fully frequency-dependent impedance control. Even if we use the simpler and less expensive frequency-independent design flow, it is worth understanding the simplifications we make.

The proposed design flow: Calculate PCB trace width and differential pair separation based on the impedance requirement and other parameters (FIGURE 1).



Input parameters needed:

  • Impedance requirement
  • Materials and thicknesses (PCB fab stock)
  • Dielectric Dk and Df at a given frequency
  • Soldermask data, Dk and Df at a given frequency
  • Etching compensation data from PCB fab
  • Signal knee frequency
  • Copper and plating thicknesses.

We need a frequency-dependent 2D field solver program, like the Polar Instruments Si9000 or AppCAD RLGC. A frequency-independent (less expensive) field solver, like the Polar Si8000 or TNT-MMTL, can introduce 1% to 5% error.

The field solver program itself is not enough. We also need to do additional calculations, such as Dk and Df at the signal’s knee frequency (this can be done in an Excel calculator) and the CAD_width and CAD_separation from the field-solver results.

All parameters have tolerances, including the calculation itself, the manufacturing and the measurements. To achieve a reasonable accuracy, we need to minimize all the tolerances so that in total, they fall within specified limits (usually 10% or 15%). Even if a few parameters have loose tolerances, keep tight tolerances where possible. For example, for a worst-case calculation: Parameter A has 1% (simplified case, truncated distribution) tolerance, Parameter B has 1% to 10% (10% is with less expensive equipment) tolerance, Parameter C is fixed at 10%. If we choose 10% for Parameter B, then the total worst case tolerance will be 21%. If we choose 1%, then the total will be only 12%, which is much lower. If we specify our nominal impedance with a loose tolerance (choosing 90-ohms nominal for a 100-ohms diffpair) manufactured at +/-10%, together they will result in a 90 ohms +/-10% (81 to 99 ohms) range, which is outside of our original specification (100+/-10% = 90 to 110 ohms).
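
The worst-case stack-up in that example is simple enough to script. The short Python sketch below just adds the individual tolerances, which is the same arithmetic used in the paragraph above; a statistical (RSS) stack-up would be less pessimistic, but worst case is what is described here.

# Worst-case tolerance stack from the example above (values in percent).
def worst_case(*tolerances):
    return sum(tolerances)

print(worst_case(1, 10, 10))   # Parameter B at 10% -> 21% total
print(worst_case(1, 1, 10))    # Parameter B at 1%  -> 12% total

# Nominal-value choice matters too: a 90-ohm nominal built at +/-10% spans 81 to 99 ohms,
# which falls outside a 100-ohm +/-10% (90 to 110 ohm) specification.
nominal, tol = 90.0, 0.10
print(nominal * (1 - tol), nominal * (1 + tol))   # 81.0 99.0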

The PCB manufacturer measures the test coupons on each panel, not each trace on each board. The dielectric and copper thicknesses are not perfectly equal everywhere on the panel, so the real traces will also have a deviation from the test coupon measurement results. This can be minimized by providing similar copper pattern and density on the coupons as the board design.

It is common for manufacturers, when there is an error in one parameter (for example, under-etching), to modify another, unrelated parameter (for example, the Dk) to push the calculation results close to the measured values. If we try to correct the modified parameter, we will get a result different from the measurements, which means our calculations will be incorrect.

Dielectric materials. Materials have a part number (for example, Isola IS410), but that number does not specify the material for an impedance calculation. An exact material specification example would be: Isola IS410, 2116 glass style, 125-um finished thickness prepreg, 67% resin content. The materials are available in a few specific thicknesses, with each thickness variant having a different dielectric constant and loss tangent; therefore, we need to know the exact Dk and Df. A common mistake is to use the Dk and Df data from the material datasheet, which can differ by up to 20% from the values of the chosen thickness variant. It is worth building a material library, since it is hard to get those material Dk and Df values for each thickness. Normally, these are measured or calculated based on the resin content.

Dk is a function of frequency, and the material manufacturers specify the Dk and Df data at a certain fixed frequency (usually 1 MHz, 1 GHz or 5 GHz). Both change strongly with frequency, so derive their values at the signal’s knee frequency. The slope of the Dk curve depends on the Df value; a higher Df leads to a bigger change in Dk over frequency. The compensation can be done with analytical equations (in an Excel sheet) based on the wideband Debye model.
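
A minimal Python sketch of that compensation is shown below. It fits a wideband Debye (Djordjevic-Sarkar style) model to a single datasheet Dk/Df point and then evaluates Dk and Df at another frequency. The pole frequencies and the overall formulation are my assumptions about a typical implementation, not the exact equations of any particular vendor tool or Excel sheet.

import cmath, math

# Assumed pole frequencies for the wideband Debye model: 10 kHz and 1 THz.
W1 = 2 * math.pi * 1e4
W2 = 2 * math.pi * 1e12

def fit_wideband_debye(dk_ref, df_ref, f_ref):
    """Fit (eps_inf, k) so the model reproduces the given Dk/Df at f_ref."""
    w = 2 * math.pi * f_ref
    log_ref = cmath.log((W2 + 1j * w) / (W1 + 1j * w))
    eps_imag = dk_ref * df_ref              # loss part of the complex permittivity
    k = eps_imag / (-log_ref.imag)          # scale factor of the dispersion term
    eps_inf = dk_ref - k * log_ref.real
    return eps_inf, k

def dk_df(eps_inf, k, f):
    """Evaluate Dk and Df at frequency f from the fitted model."""
    w = 2 * math.pi * f
    eps = eps_inf + k * cmath.log((W2 + 1j * w) / (W1 + 1j * w))
    return eps.real, -eps.imag / eps.real

eps_inf, k = fit_wideband_debye(dk_ref=4.20, df_ref=0.020, f_ref=1e9)  # datasheet point
print(dk_df(eps_inf, k, 1e9))    # reproduces roughly (4.20, 0.020)
print(dk_df(eps_inf, k, 10e9))   # Dk drops slightly at a higher knee frequency, Df stays nearly flat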

Etching compensation. The etching compensation has to be done at two points in the calculation. For both, we need some values from the PCB manufacturer, since it is a manufacturer-dependent value.

With some simplification, the PCB signal trace cross-section can be modeled as a trapezoid. Before etching, the manufacturer creates an acid-resistant pattern on the copper surface, where the track widths are exactly the same as in the PCB artwork files. Let’s refer to the trace width in the design files as CAD_width. During etching, the final trace will be narrower than the CAD_width, and the topside width (Top_width) will be narrower than the bottomside width (Bottom_width). This has the opposite effect on the trace separation, which is important for differential pairs. Which one is the top/bottom? During etching, the copper foil is already applied to the surface of one of the dielectric layers. The other dielectric layer will be added after the etching. So, the side of the trapezoid touching the existing dielectric layer (core or earlier prepreg) will be the wider one, and it is usually called the Bottom_width (FIGURE 2).



We need to get two values from the manufacturer for a given copper thickness:

We need the difference between the Top_width and the Bottom_width, referred to as “lower trace width etch factor” in Polar Instruments’ terminology. We will call it Etch_Factor_1 and provide this to the field-solver program.5

We need the difference between the Bottom_width and the CAD_width; let’s call it Etch_Factor_2.

These are statistically measured average values. If the manufacturer does not provide these numbers, then we can assume that:

Etch_Factor_1 = Etch_Factor_2 = copper_thickness * 0.6, so both depend on the copper thickness.

Polar Instruments Si8000 software deals only with Etch_Factor_1, so we have to calculate the final CAD_width and CAD_separation manually:

CAD_width = Bottom_width + Etch_Factor_2

CAD_separation = Bottom_separation - Etch_Factor_2

Some manufacturers calculate the final CAD_width, some don’t. For this reason, before setting up trace widths in the CAD design program, ask the fab house if it does this final compensation. If it does, set up the Bottom_width for the CAD program; if it doesn’t, then set up the CAD_width (or photomask width) for the layout.
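As a rough sketch of the arithmetic above (the copper_thickness * 0.6 fallback and the example dimensions are only illustrative assumptions; the real etch factors come from the fab):

# Illustrative etch-compensation arithmetic. Etch_Factor_2 should come from
# the manufacturer; the 0.6 * copper_thickness fallback is an assumption.
def cad_dimensions(bottom_width, bottom_separation, copper_thickness,
                   etch_factor_2=None):
    if etch_factor_2 is None:
        etch_factor_2 = 0.6 * copper_thickness       # fallback estimate
    cad_width = bottom_width + etch_factor_2         # trace drawn wider in CAD
    cad_separation = bottom_separation - etch_factor_2
    return cad_width, cad_separation

# Example: 35-um (1-oz) copper, field-solver result of 100/150-um width/gap.
print(cad_dimensions(100.0, 150.0, 35.0))            # -> (121.0, 129.0)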

Buildup order. The copper layer is always between two dielectric layers (inner) or on top of a dielectric layer (outer). For the inner layers, before etching, the copper foil is already on the surface of one of the dielectrics, which is already hard (the core, or an earlier prepreg in a buildup-type microvia stackup). After the circuit is etched, the next prepreg (dielectric) layer is applied and cured. The result is that the copper pattern is embedded into the second (soft, prepreg) layer. The wider part of the trace cross-section is on the surface of the hard layer. Which dielectric is the upper one and which the lower one in the structure view is based not on the board orientation or layer number, but on the core/prepreg or buildup order (FIGURE 3).



Copper coverage. The copper coverage is the ratio of the remaining copper to the total area of a given layer. It affects the final thickness of the prepregs. If less copper remains, less copper is embedded into the prepreg, and less volume is added to the prepreg's volume. The prepreg flows slightly during the lamination process and fills the gaps between the traces horizontally. The board surface area is constant, so if a layer's volume increases, its thickness also increases.

Naming conventions. The finished thickness is the thickness of a prepreg layer when it is laminated between two 100% covered copper layers. Since no copper is embedded in that case, the finished thickness can be used for the prepreg volume calculation. Because that name is already taken, call the resulting prepreg thickness in the stackup the "final thickness." Note that we measure the prepreg final thickness not from the top of the traces, but from one hard/full layer surface (core, earlier prepreg, or ground/power plane) to the next. The Polar software calls this the "isolation distance."6

Final prepreg thickness for buildups:

Final_thickness = finished_thickness + coverage * copper_thickness

For core-prepreg sandwiches:

Final_thickness = finished_thickness + coverage1 * copper_thickness1 + coverage2 * copper_thickness2

Both sides of the prepreg have embedded copper patterns. FIGURE 4 illustrates the copper patterns for different stackups.
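A minimal sketch of this bookkeeping, with assumed example numbers (the real finished thickness and coverage figures come from the actual stackup):

# Final (isolation) prepreg thickness from finished thickness, copper coverage
# and copper thickness; the example numbers are assumptions only.
def final_thickness_buildup(finished, coverage, copper):
    return finished + coverage * copper

def final_thickness_core_prepreg(finished, cov1, cu1, cov2, cu2):
    return finished + cov1 * cu1 + cov2 * cu2

# Example: 100-um finished prepreg over one embedded layer with 40% coverage
# and 35-um copper -> 114-um isolation distance.
print(final_thickness_buildup(100.0, 0.40, 35.0))
# Example: the same prepreg between two etched layers (40% and 70% coverage).
print(final_thickness_core_prepreg(100.0, 0.40, 35.0, 0.70, 35.0))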



Plating. For outer copper layers where drilled holes terminate, the manufacturer increases the copper thickness with copper and other metal plating to create the plated through-holes (PTH) and to make the outer surfaces easily solderable. This increases the layer's thickness and must be taken into account in the impedance calculations. The plating thickness measurement data can be requested from the manufacturer. If we change manufacturers during the product's lifetime, the plating thicknesses also change, so all impedances must be recalculated. With some PCB manufacturing processes, the plating also produces a more complex cross-section, resembling a mushroom rather than a trapezoid.

Frequency dependence of the characteristic impedance. The characteristic impedance is defined by the well-known equation:

Z0 = sqrt( (R + j2πfL) / (G + j2πfC) )

Here, j is the imaginary unit, f is the frequency, and R, L, G and C are per-unit-length parameters (each also frequency dependent) derived by solving the electromagnetic differential equations (this is what a field solver does). The result, Z0, is a complex number at every frequency. We take the magnitude of this complex number, which is why we speak of the "impedance magnitude."

The trace impedance depends on frequency because of the following effects: Dk is frequency dependent (affecting C and G), and the skin effect (affecting L and R).

At very low frequencies, below roughly 30 MHz, the impedance increases rapidly with decreasing frequency. Above that region, the slope can be positive or negative, depending on the cumulative result of all the parameters. In most digital circuits we do not care about the impedance at such low (below 30 MHz) frequencies. In the 10 GHz to 100 GHz region, it increases rapidly again. FIGURE 5 illustrates impedance magnitude vs. frequency for a 100-μm wide microstrip on a 100-μm thick dielectric.
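To make the equation concrete, here is a small sketch with made-up, frequency-independent per-unit-length values for a roughly 50-ohm line; a field solver would supply frequency-dependent R(f), L(f), G(f) and C(f).

# Impedance magnitude |Z0(f)| from per-unit-length RLGC values. The constants
# below are rough, made-up numbers for a ~50-ohm line, used only to show the
# low-frequency rise described in the text.
import cmath, math

def z0_mag(f, R=5.0, L=300e-9, G=1e-5, C=120e-12):
    w = 2 * math.pi * f                              # angular frequency
    return abs(cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C)))

for f in (1e6, 30e6, 1e9, 10e9):
    print(f, round(z0_mag(f), 1))                    # ohms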



Frequency dependence of the digital signals. Digital signals are wide-band signals; however, most of the signal's energy is concentrated in a not-too-wide frequency band. For 8b10b-encoded signals, such as PCI Express and SATA, this frequency band has a lower limit at one-tenth of the data rate (FIGURE 6). The highest significant frequency component of a digital signal is at the knee frequency, F_knee = 0.5 / Rise_time. To minimize signal integrity problems, it makes sense to provide the best-matched termination at the knee frequency.
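A small helper illustrating the band of interest for the 8b10b case described above; the data rate and the 100-ps rise time in the example are assumed values:

# Rough band of interest for an 8b10b-coded serial link: lower limit at
# one-tenth of the data rate, upper limit at the knee frequency 0.5/t_rise.
def band_of_interest(data_rate_bps, rise_time_s):
    f_low = 0.1 * data_rate_bps
    f_knee = 0.5 / rise_time_s
    return f_low, f_knee

# Example: a 2.5-Gbps lane with an assumed 100-ps rise time
# -> roughly 250 MHz to 5 GHz.
print(band_of_interest(2.5e9, 100e-12))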



The rise time is normally slower (lower knee frequency) at the receiver than at the transmitter, due to losses and attenuation in the interconnect. If we have to choose where to achieve the better match, at the transmitter or at the receiver, we have to answer that question before starting the impedance calculation. Because of the different rise times, the two ends of the signal trace also see different impedances. Ideal termination is obtained when the terminating resistor's resistance is equal (neglecting the complex nature of the impedances) to the local characteristic impedance.

Surface roughness. The surfaces of the copper and dielectric layers are not perfectly flat and smooth; they have a roughness, usually a few micrometers deep. At signal frequencies where the skin depth (skin effect) is comparable to or smaller than the surface roughness, the roughness increases the effective trace length and the series resistance of the trace [R(f)] (FIGURE 7). In this way it also affects the characteristic impedance.7
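To see roughly where this starts to matter, here is a small sketch comparing the copper skin depth with an assumed 3-um roughness; the resistivity and roughness values are generic assumptions, not measured data.

# Copper skin depth vs. an assumed surface roughness: the roughness starts to
# matter once the skin depth shrinks to the same order of magnitude.
import math

def skin_depth(f, rho=1.68e-8, mu=4e-7 * math.pi):   # copper resistivity, mu0
    return math.sqrt(rho / (math.pi * f * mu))

roughness = 3e-6                                      # assumed 3-um roughness
for f in (100e6, 500e6, 2e9, 10e9):
    d = skin_depth(f)
    print(f, round(d * 1e6, 2), "um", "<= roughness" if d <= roughness else "")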

Soldermask. The soldermask must be taken into account for outer layer microstrip calculations. It has a Dk and Df similar to the other dielectrics in the stackup. These parameters can be obtained from the soldermask datasheet.

Soldermask thickness. We deal with multiple soldermask thicknesses and provide both the thickness on top of the copper traces and the thickness between the traces. We can also provide the conformal coating thickness, if any.

Other effects. There are several other effects on the characteristic impedance that we could analyze, but there is no established method for taking them into account in our calculations during the design process.

Fiber weave effect. The PCB dielectric materials are not homogeneous; they are a mix of glass fibers and resin that is not perfectly distributed. These two components have very different dielectric constants. If a PCB trace runs parallel to the glass fiber direction, the location of the trace relative to the glass fibers determines the effective Dk around the trace, and the impedance depends on that relative location. The imperfect glass-resin distribution then produces the same dielectric constant offset (and impedance offset) along the whole length of the trace, which is the worst case. However, if the traces run at an angle to the glass thread direction, the error appears as a fluctuation along the length with a mean value equal to the nominal dielectric constant, which is the desired behavior. FR-4 materials have glass fiber threads in two directions, just like the yarns in a fabric. To keep the traces at an angle to both thread directions, route them at 45˚ to both fiber directions.8

Geometry roughness. The trace width, dielectric thickness and all edges and surfaces in the geometry are neither perfectly dimensioned nor smooth. On a longer trace, we effectively observe the mean value of these otherwise statistical geometry parameters and dimensions.

Resin flow. Since the PCB dielectrics are made of glass fiber and resin, the resin flows slightly and fills the gaps between traces during the lamination process. As a result, the space between traces on the same layer is filled almost entirely with resin. Some areas are therefore richer in resin and others richer in glass, so the dielectric constant varies from area to area. The FR-4 dielectric constant is the combined result of the dielectric constants of the glass fibers and the resin, but in areas that are no longer a mix, only resin, the dielectric constant differs considerably from the nominal value given for the material.9

References
2.    Istvan Novak and Jason R. Miller, "Frequency Domain Characterisation of Power Distribution Networks," p. 106.
3.    "Modeling Frequency-Dependent Dielectric Loss and Dispersion for Multi-Gigabit Data Channels," Simbeor Application Note #2008_06, September 2008.
4.    An Excel sheet for recalculating the Dk data to different frequencies: www.buenos.extra.hu/iromanyok/E_r_frequency_compensation.xls. It is based on equations from Istvan Novak's demonstration calculator, "Causal Frequency Dependent Dielectric Constant and Loss Tangent Dielectric Model Parameters," http://electrical-integrity.com.
5.    Polar Instruments, www.polarinstruments.com.
6.    AP507, "Calculating Dielectric Height with Speedstack," Polar Instruments application note, www.polarinstruments.com/support/stackup/AP507.html.
7.    AP8155, "Surface Roughness Effect on PCB Trace Attenuation/Loss," Polar Instruments application note.
8.    Application Note 528, "PCB Dielectric Material Selection and Fiber Weave Effect on High-Speed Channel Routing," Altera, www.altera.com/literature/an/an528.pdf.
9.    AP148, "The Effect of Etch Taper, Prepreg and Resin Flow on the Value of the Differential Impedance," Polar Instruments application note, www.polarinstruments.com/support/cits/AP148.pdf.

Istvan Nagy is a hardware design engineer with Concurrent Technologies.
