The "founder" of TMI asks: How much is more analysis worth?

Just because something *can* be done doesn’t mean that it *should* be done.

A few customer encounters this past month caused an issue to ricochet around in my mind like a ball in a 1970s pinball machine. I’m referring to a trap we’ve all fallen into: analysis paralysis.

Three interrelated definitions I have for analysis paralysis are worth enumerating:

- The condition of being indecisive while overanalyzing alternatives. (Classic analysis paralysis.)
- Allowing a project to mushroom into something bigger than it needs to be to get the job done. (This column is a good example.)
- Using data from the most expensive tools you own just because you have the tools or the data (e.g., it’s expensive and took a lot of time, so it must be good).

It’s not that analysis or expensive tools aren’t good, but deciding how much of each to employ is an optimization process.

Relative to the above, I can’t and won’t lecture about overanalysis or relying on “too much information” as if I have a solid handle on it myself. When non-engineers say to me, “That’s TMI,” I say, “I invented TMI.”


A methodology for selecting the right material and the right price point.

When I started writing this column a couple years ago, I wondered how much I’d have to say. An experienced media guy told me to watch my inbox for topics and questions that may be of general interest. That turned out to be excellent advice. Here’s one such example.

“What is the best laminate for a loss budget of x dB for y inches? I was thinking in terms of Panasonic Megtron 6 or something like it.”

Megtron 6 is an excellent material, but it’s not cheap and it’s not the only horse in the race. My response was to focus on a loss and material-planning *methodology* rather than making a firm material recommendation.

Why we care. Everything that improves material performance – in particular, reductions in loss – comes at a price. *Loss* versus *cost* is a classic optimization problem. Designers want to pay just enough to meet loss requirements, but not more than they need to.

In the past, speeds were slow, layer counts were low, dielectric constants (aka Dk or Er) and loss tangents (aka dissipation factor, or Df) were high, design margins were wide, copper roughness didn’t matter, and glass-weave styles didn’t matter. We called dielectrics “FR-4,” and their properties didn’t matter much.


The impedance implications of the trapezoidal trace.

Until recently, I thought those who believed in rectangular traces were about as common as those who believe in square waves and a flat earth. Lately, though, I’ve come to realize it’s not as clear-cut as I thought, not only for newbies but in general. Over the past 25 years, I’ve acquired a good number of books on PCB design and signal integrity, and you wouldn’t know from reading most of the industry literature that traces were anything but rectangular. Interesting, right?

If you’ve read previous “Material Matters” columns, you may recognize the following cross-section from our Z-solver software. Among other things, it shows that the base of a trace, facing the core dielectric, is wider than the side of the trace that faces the prepreg. As such, the trace trapezoids face both up and down in a multilayer stackup, with no relationship to the layer number or to whether the trace is on the top or bottom half of the board. For this reason, some of us – though not everyone – avoid using terms like “top” or “bottom” with regard to trapezoidal traces.


Weight is still used as a determinant for copper thickness. Why?

Sometimes my columns tie to issues or stackups that appear in my inbox each week. I’m occasionally asked why 0.6 mils (15µm) is often used for the thickness of 0.5-oz. copper, rather than 0.7 mils (18µm), and similarly why 1.2 mils (30µm) is often used for 1-oz. copper instead of 1.4 mils (36µm). If you’re curious about the details, or if none of these numbers seems familiar, here’s a quick primer. The thickness parameter “t” in FIGURE 1 shows the thickness we’re interested in here.
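Those numbers trace back to copper foil being specified by weight per square foot. As a sanity check, here’s a back-of-the-envelope sketch (using published copper density; nominal and finished thicknesses vary by vendor and processing):

```python
# Copper "weight" is mass per area: 1-oz. copper is 1 oz of copper
# rolled out over 1 square foot. Thickness follows from density.
OZ_G = 28.3495          # grams per avoirdupois ounce
FT2_CM2 = 929.0304      # square centimeters per square foot
CU_DENSITY = 8.96       # g/cm^3, copper at room temperature

def copper_thickness_um(oz_per_ft2):
    """Foil thickness in microns for a given copper weight."""
    thickness_cm = (oz_per_ft2 * OZ_G) / (CU_DENSITY * FT2_CM2)
    return thickness_cm * 1e4  # cm -> um

print(copper_thickness_um(1.0))   # ~34 um (~1.34 mil) for 1-oz. foil
print(copper_thickness_um(0.5))   # ~17 um for 0.5-oz. foil
```

The raw math lands around 34µm (1.34 mils) for 1-oz. foil, which sits between the 36µm nominal and the 30µm finished numbers the column discusses.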


Trace separation; length parallelism; stackup: Does one stand out?

It’s been some time since I’ve seen an article on crosstalk, so I decided to take the opportunity to walk through the subject in a soup-to-nuts overview for those in the PCB design community who may be interested in why crosstalk-savvy PCB designers and hardware engineers use various design rules for controlling crosstalk. Along the way, we’ll identify which design tweaks provide the most leverage for controlling far-end crosstalk.

Crosstalk is unwanted noise generated between signals. It occurs when two or more nets on a PCB are coupled to each other electromagnetically, even though they are not connected conductively. Such coupling can arise any time two nets run next to each other for any significant length. When a signal is driven on one of the lines, the electric and magnetic fields it generates cause an unexpected signal to appear on the nearby line as well, as shown in FIGURE 1.


Most boards will work just fine. But what if they don’t?

Over the past year, I’ve written a good bit about glass-weave skew (GWS) and next-generation loss requirements, using PCI Express guidelines as a means of tracking what higher frequencies do to eye patterns. This month, we’ll combine important elements of both these technology series, with just a bit of review in order to make this column one that can be read as-is.

The problem with human behavior is that many of us wait for some sort of catastrophic event before we course-correct. When should we get serious about glass-weave skew, as opposed to ignoring it and hoping it doesn’t turn around and bite us in the field at some point? (A near-worst-case scenario.)

When I was marketing signal-integrity software in the 1990s, many engineers would appear on my radar *reactively*, playing whack-a-mole after spinning multiple prototypes or field failures. Over time, the list of possible causes grew to include crosstalk, loss in all its forms, and eventually power integrity. I’ve noticed many of today’s hardware teams are sort of on cruise control relative to the “fiber-weave effect” as a design concern, so my objective here is to explore the concept of whether designers should worry about it *proactively*, given the potential impact of seemingly random field failure in production.


{!guest}

Background. Practically speaking, glass-weave skew and the fiber-weave effect (FWE) are the same thing. Or, more accurately stated, the fiber-weave effect *causes* glass-weave skew. Semantics aside, the effect arises when one signal in a differential pair sees a different micro-environment than the other signal in the pair: “E” glass has a Dk of around 6.8, while resin has a Dk of around 3.0, though it varies from one resin system to the next. FIGURE 1 shows that if one signal runs primarily adjacent to glass (P) in the glass-resin dielectric mixture, while the other differential signal runs across a mixture of glass and resin (N), the effective Dk will be different for the two signals, resulting in impedance and propagation-velocity variation as well.
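To get a feel for how much the micro-environment can matter, a crude volume-fraction mixing estimate is instructive. This is only a sketch – real weaves call for a field solver – and the resin fractions below are hypothetical values chosen for illustration:

```python
import math

DK_GLASS, DK_RESIN = 6.8, 3.0   # approximate Dk values from the text
C_MM_PER_PS = 0.2998            # speed of light in vacuum, mm/ps

def dk_eff(resin_fraction):
    """Crude linear rule-of-mixtures estimate of effective Dk."""
    return resin_fraction * DK_RESIN + (1 - resin_fraction) * DK_GLASS

def delay_ps_per_inch(dk):
    """Propagation delay per inch in a homogeneous dielectric."""
    return 25.4 * math.sqrt(dk) / C_MM_PER_PS

# Hypothetical micro-environments for the two halves of a pair:
glass_rich = dk_eff(0.40)   # trace mostly over a glass bundle
resin_rich = dk_eff(0.70)   # trace mostly over a resin window
skew = delay_ps_per_inch(glass_rich) - delay_ps_per_inch(resin_rich)
print(glass_rich, resin_rich, skew)  # ~5.28, ~4.14, ~22 ps/inch
```

Even a fraction of this worst-case difference, accumulated over a few inches, eats a multi-gigabit skew budget quickly.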

Lee Ritchey provides TDR results showing these effects, while contrasting two different glass styles: 1080 and 3313 mechanically spread glass (FIGURE 2). Notice how much more uniform the impedance is as it travels across the glass weave when signals run parallel to both the weave (navy blue) and the fill directions (purple and green plots) compared to 1080 glass. This results in very low skew. Weaves known to cause skew in differential pairs include 106, 1080, 2116, and 7628 glass.

The 1080 glass exhibits a 5Ω swing. With the 3313MS glass, impedance variation is much less significant, with the variation parallel to the weave direction greater than that parallel to the fill.

Each differential serial-channel standard and speed has its own tolerance for skew. Most standards or chip manufacturers offer guidance on skew tolerance, but generically, a channel’s tolerance for skew is roughly 25% of the bit stream’s unit interval (UI). For example, a 1Gbps (500MHz) signal would have a UI of 1000ps. Using 25% as a guideline, that represents a 250ps skew tolerance. That’s a pretty wide window, which is why most engineers didn’t need to worry about GWS 20 years ago. Fast-forward to designing at 10Gbps (5GHz): the unit interval will be 100ps, and the skew tolerance will decrease proportionally, to around 25ps.

PCIe example. We could talk about any number of bus standards, but I prefer to use PCI Express, since it’s one many of us are familiar with. Doing the same math as above, TABLE 2 shows that as you progress from PCIe 3.0 to PCIe 4.0 and PCIe 5.0, the tolerance for skew from all sources goes from 31ps, which is tight to begin with, down to 8ps. At any of these bus speeds, hardware designers can no longer ignore the prospect of glass-weave skew sneaking in and semi-randomly compromising otherwise well-planned designs!
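The arithmetic behind those tolerances is straightforward to reproduce. A quick sketch, using the 25% rule of thumb from above (a guideline, not a normative spec limit):

```python
def skew_tolerance_ps(gbps, fraction=0.25):
    """Rule-of-thumb skew budget: a fraction of the unit interval."""
    ui_ps = 1000.0 / gbps   # 1 Gbps -> 1000 ps per bit
    return fraction * ui_ps

for gen, rate in (("PCIe 3.0", 8), ("PCIe 4.0", 16), ("PCIe 5.0", 32)):
    print(gen, round(skew_tolerance_ps(rate)), "ps")
# PCIe 3.0 -> 31 ps, PCIe 4.0 -> 16 ps, PCIe 5.0 -> 8 ps
```

Note how each doubling of the data rate halves the skew budget, which is exactly the 31ps-to-8ps squeeze the table describes.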

Next, we’ll explore what actually happens to otherwise-pristine eye patterns.

Glass-Weave Skew’s Impact on Eyes

PCI Express 3.0. FIGURE 3 shows simulation results for an 8-Gbps signal using a material that was successful on platforms that used PCIe 3.0. The blue keep-out region doesn’t have any bits encroaching on it, which is what we want. This is simply intended as a high-level example of the interplay between frequency, eye mask, eye diagram and skew.

According to Table 2, if we introduce 45ps of fiber-weave-effect-induced skew into this differential signal – one that was already near the edge, as shown in Figure 3 – the eye pattern should be compromised. FIGURE 4 shows this is indeed the case. The 25% unit interval rule-of-thumb seems to hold true here. The actual amount of glass-weave skew can be significantly more, depending on the interconnect length and the semi-random alignment between the two lines in a differential pair compared to the underlying glass fabric.

PCI Express 4.0. Keeping with this theme, let’s consider a more expensive, lower-loss material for the next-generation requirement. We’ll use 16Gbps and the PCIe 4.0 eye mask. FIGURE 5 shows the result. Note the vertical scale was changed, adapting to the tighter keep-out requirements for PCIe 4.0 compared with 3.0.

Using guidance from Table 2, if we introduce just 16ps of fiber-weave-effect-induced skew into this differential signal – one that was already near the edge, as shown in Figure 5 – the eye pattern should be compromised, as FIGURE 6 shows. The same amount of skew that was perfectly acceptable for PCIe 3.0 – 16ps – is absolutely *unacceptable* for PCIe 4.0.

PCI Express 5.0. The same simulation exercise could be performed for PCIe 5.0 and higher frequencies, although the task of producing acceptable eye patterns at the receiver gets much tougher and of course the tolerance for skew decreases, as expected.

Parting thoughts. One of the problems with the fiber-weave effect is prototypes may work just fine. And 95% of the signals on 95% of the boards may work just fine. Systematic elements tied to glass-weave skew result from most trace routing running parallel to the weave and fill directions in the adjacent glass weave. And there’s a systematic element tied to frequency effects, design margins and the pitch of the differential pairs as they relate to the adjacent glass. More of these effects are controllable than hardware designers realize, in my observation.

And mitigating the risk begins with the realization that this may be something you need to be concerned about as speeds increase. The frequency guidelines offered here hopefully answer the question I posed at the top.

For glass-weave skew mitigation tips, I covered the means by which glass-weave skew can be controlled in a previous series in PCD&F. Part 2 surveyed the options, and Parts 3 and 4 drilled into more detail, including glass styles and differential pitch. See pcdandf.com for more details.

References

1. Lee Ritchey, Speeding Edge, “Minimizing Skew in High Speed Differential Links,” December 2015.

has more than 25 years’ experience with signal-integrity software and PCB materials. He is director of everything at Z-zero (z-zero.com).

*Register now for PCB West, the leading conference and exhibition for the printed circuit board industry! Coming this September to the Santa Clara Convention Center. pcbwest.com*

{/guest}

Understanding key differences between time and frequency domains.

As March approaches each year, I can count on the bullfrogs around our neighbor’s pond to be out in force, memories of days coaching baseball and softball, my wife’s birthday, and on March 14, “Pi Day,” which has been celebrated by geeks around the globe since 1988. I take the day seriously due to pi’s prevalence in almost every field of science, from astronomy to electromagnetics to physics, and probably several other fields I’m not even thinking of. How did pi find its way into so much science, and what are the implications for electromagnetics?

Before we go into details regarding the time and frequency domains, it’s beneficial to discuss the “unit circle” and radians. A unit circle is simply a circle with a radius of 1 (regardless of units). The circumference of a unit circle is 2π, meaning one full cycle spans 2π (about 6.28) radians. This is illustrated in FIGURE 1.
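These relationships are easy to check numerically. A quick sketch using Python’s math module:

```python
import math

radius = 1.0
circumference = 2 * math.pi * radius   # 2*pi for the unit circle
print(circumference)                   # ~6.283 radians per full cycle

# A quarter turn is 90 degrees, or 2*pi/4 radians:
assert math.isclose(math.radians(90), math.pi / 2)
```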


{!guest}

2π is the period or circumference of the unit circle. Without the mathematical relationships tied to the circle and pi, it would be very difficult for signal-integrity simulation software to model the behavior of signals before circuit boards are built and physically measured. In other words, we’d only know what was going to happen after it already happened.

Angular frequency’s connection to electromagnetics is discussed below. For now, just realize that the number of radians we sweep through per unit time is the angular frequency, typically represented in radians per second.

The time domain. We’re naturally more familiar with the time domain, which we deal with in everyday life, but the frequency domain can provide valuable insight into signal-integrity effects, impedance and loss. To understand this connection, we need to discuss both the time and frequency domains. Dr. Eric Bogatin^{1} does an excellent job of describing the differences and distinctions, so I’ll leverage that information with his permission.

Using relationships well-known in the time domain, the clock frequency shown in FIGURE 2, F_{clock}, represents the number of cycles per second the clock goes through, and is the inverse of the clock period: F_{clock} = 1/T_{clock}.

The math is easy if we represent frequency in gigahertz and the clock period in nanoseconds. The signal in Figure 2 has a frequency of 1GHz. Similarly, a 10GHz signal has a period of 0.1ns.
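Expressed as a snippet, with frequency in gigahertz and period in nanoseconds (a minimal sketch):

```python
def freq_ghz(period_ns):
    """Clock frequency (GHz) from clock period (ns): F = 1/T."""
    return 1.0 / period_ns

print(freq_ghz(1.0))   # 1 ns period  -> 1 GHz
print(freq_ghz(0.1))   # 0.1 ns period -> 10 GHz
```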

It’s important to note the time domain is the *only* domain that’s *real*. What we describe as the “frequency domain” below connects the time domain to the title of this article.

The frequency domain. As mentioned, the frequency domain provides valuable insight into signal-integrity effects, impedance and loss. If you’ve been around hardware design for long, you’ve no doubt heard about the “frequency domain,” which is where pi really starts to appear. Bogatin^{1} points out, “The most important quality of the frequency domain is that *it is not real*. It is a mathematical construct. The only reality is the time domain. The frequency domain is a mathematical world where very specific rules are followed.” (Emphasis mine.) This is the part of the world where you can use “imaginary numbers” and people won’t think you’re crazy.

In the frequency domain, everything’s a sine wave, and sine waves are everything. Any waveform in the time domain can be completely characterized by a combination of sine waves. And mathematically, we know all about sine waves.
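That claim can be demonstrated numerically. Here’s a sketch that approximates a square wave by summing its odd sine-wave harmonics, the classic Fourier-series construction:

```python
import math

def square_from_sines(t_over_T, n_harmonics):
    """Approximate a +/-1 square wave at time t (as a fraction of
    the period T) by summing odd sine-wave harmonics:
    s(t) = (4/pi) * sum over odd n of sin(2*pi*n*t/T) / n."""
    total = 0.0
    for k in range(1, n_harmonics + 1):
        n = 2 * k - 1                       # odd harmonics only
        total += math.sin(2 * math.pi * n * t_over_T) / n
    return 4.0 / math.pi * total

# Mid-way through the "high" half-cycle, the sum approaches +1:
print(square_from_sines(0.25, 1000))
```

With a thousand harmonics the sum sits within a fraction of a percent of the ideal flat top; the more sine waves we keep, the closer we get, which is the whole point of the frequency-domain view.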

Sine waves typically provide a straighter path to an answer because of the types of electrical problems we often encounter in signal integrity. Transmission lines, in a simplified sense, can be represented as networks of resistors (R), inductors (L), and capacitors (C). As Bogatin^{1} notes, if we “send any arbitrary waveform in, more often than not, we get waveforms out that look like sine waves and can more simply be described by a combination of a few sine waves.”

No new information is added when switching from the time domain to the frequency domain, but certain things, including but not limited to scattering parameters (S-parameters, a topic for a future column) live quite happily in the frequency domain. Results from an SI simulator can be viewed in either the frequency or time domain. To illustrate the simplicity of the frequency domain, compare the plots in FIGURE 3.

The figure is an excellent illustration of how much easier it is to describe things in the frequency domain. If the time-domain plot represents hundreds or thousands of data points, the same sine wave in the frequency domain can simply be described with a frequency (f) and amplitude (A).

“But what about pi?” you might ask. Well, sine waves have amplitude, frequency (and period), as well as phase. The frequency (f, measured in hertz) is the number of complete cycles per second made by the sine wave. Angular frequency is measured in radians per second. As shown in Figure 1, radians describe fractions of a cycle, and there are 2 × π radians in one complete cycle. As a physicist friend recently said, “We humans can devise other angular systems to our taste, like say 360°, because, hey, the number divides really well; but they are arbitrary. When we finally meet up with the guys from Zeta Reticuli, they are unlikely to have 360° circles, but they will agree that a circle or cycle is 2π.” That’s because the ratio of the circumference of a circle to its radius is the same in any Euclidean space.

The Greek letter omega (ω) is typically used to refer to the angular frequency, measured in radians per second. The sine-wave frequency and the angular frequency are related by ω = 2πf, where ω is the angular frequency in radians/second, and f is the sine-wave frequency in Hz. Once we’ve transformed the frequency into radians/second, the mathematical relationships of complex systems become much easier, as we will discuss further below. Rather than blowing past it, make sure you understand and memorize this relationship between angular frequency and time-domain frequency, as ω is extremely prevalent in the frequency domain.
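In code, the conversion is a single multiply (a minimal sketch):

```python
import math

def angular_frequency(f_hz):
    """omega = 2 * pi * f, in radians per second."""
    return 2 * math.pi * f_hz

print(angular_frequency(1e9))  # 1 GHz -> ~6.28e9 rad/s
```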

Impedance and reactance. As mentioned, transmission lines can be represented by a network of parasitic resistance (R), conductance (G), inductance (L), and capacitance (C). Each of these *impedes* current flow. We characterize these factors that impede current flow as follows:

- Resistance (R) is the impedance to current flow, represented by an ideal resistor, in ohms. Resistance has no relationship to frequency.
- Reactance (designated by X), also represented in ohms, is the impedance to current flow from an ideal inductor (L) or capacitor (C). Reactance depends on signal frequency, as noted in the relationships below.
- Impedance (designated by Z) is represented by the combination of resistance and reactance, making it a function of frequency through the reactance elements.

The impedance relationship is Z_{0} = √((R + jωL) / (G + jωC)), where G is conductance. R and G represent the “real” part of impedance, while jωL and jωC represent the “imaginary” part of impedance – inductive and capacitive reactance in the frequency domain. Without going deep into what’s called “imaginary math,” the value of j is the square root of -1. The square root of -1 doesn’t exist among the real numbers, hence the term “imaginary.”
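Python’s built-in complex arithmetic makes this “imaginary math” concrete. The sketch below computes a characteristic impedance from per-unit-length R, L, G and C values; the RLGC numbers are hypothetical, chosen so the lossless case lands at 50Ω:

```python
import cmath

def char_impedance(R, L, G, C, f_hz):
    """Z0 = sqrt((R + j*w*L) / (G + j*w*C)), with w = 2*pi*f.
    R, L, G, C are per-unit-length transmission-line values."""
    w = 2 * cmath.pi * f_hz
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

# Lossless case (R = G = 0): Z0 reduces to sqrt(L/C), purely real.
z0 = char_impedance(0.0, 2.5e-7, 0.0, 1.0e-10, 1e9)
print(z0)  # essentially 50 + 0j

# Add some series resistance and the impedance picks up a small
# imaginary (reactive) part at this frequency:
print(char_impedance(5.0, 2.5e-7, 0.0, 1.0e-10, 1e9))
```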

Jenny, I’ve got your number. I memorize pi as either 22/7 (simple enough) or 3.14159-ee-ine, rhyming with the Tommy Tutone song, “867-5309/Jenny.” If you don’t know the tune, I recommend getting out more and Googling it. It’s a cool song!

We could go much further here to discuss conductor loss and dielectric loss in the frequency domain as well. Again, angular frequency, pi, and imaginary arithmetic are involved. I’m running long here, so I’ll save that discussion for another day, and will simply close by wishing everyone a happy Pi Day!

References

1. Eric Bogatin, *Signal and Power Integrity – Simplified*, Prentice Hall, first edition, 2004.



{/guest}

The relative permittivity for FR-4 is just that: relative.

*Ed.*: This is Part 3 of a three-part series on preparing for next-generation loss requirements.

Last month, in Part 2 of this series, I outlined the means by which insertion-loss requirements are determined. Here, I’ll suggest a better method for obtaining accurate Df numbers without having to go to the trouble of building test boards.

A longtime PCB industry technologist asked me recently, “What’s a good Dk (dielectric constant) number for FR-4?” As interest in signal integrity (SI) was growing roughly 25 years ago, it struck me that many SI practitioners considered FR-4 to have monolithic properties. The question reinforced that some still hold that view. One engineer might say the relative permittivity (ϵ_{r}) of FR-4 is 4.3. Another would say 4.1. A third always uses 4.0. As I read up on it, I realized it varies with frequency, resin content (as a percentage, with the inverse being the glass percentage), and the resin system. At lower frequencies, static numbers for vanilla FR-4 were probably fine for impedance calculations and signal integrity, but those days are far behind us.


{!guest}

Looking at the hundreds of materials in the Z-planner software library, there’s a clear relationship between Df and Dk values. FIGURE 1 and the bullet list below summarize this relationship:

- “High-loss” materials – typically defined as materials with dissipation factors (Dfs) greater than 0.020 – have Dks greater than 4.0.
- “Standard-loss” materials, with Dfs between 0.015 and 0.020, typically have Dks greater than 4.0, with some standard-loss materials in the 3.75-4.0 range.
- “Mid-loss” materials, with Dfs between 0.010 and 0.015, follow a similar pattern to standard-loss laminates for Df and Dk.
- “Low-loss” materials, with Dfs between 0.005 and 0.010, typically have Dks between 3.5 and 4.0.
- “Ultra-low-loss” materials, with Dfs below 0.005, typically have Dks between 3.0 and 3.5.

So far, so good, but this is where the discussion gets interesting. I wrote a good bit about this in this space between December 2018 and February 2019, so I’ll try to avoid duplication. While unpacking from a pair of trade shows last month, I recalled a discussion with a laminate manufacturer who mentioned they use four different fixtures to measure Dk and Df up to 20GHz.

The values represented above come from eight different manufacturers, using a smattering of 12 different IPC test methods measuring Dk and Df. As a result, there’s no way to correlate Dk and Df values directly across the different laminate vendors that use different test methods.

To make matters even more confusing, laminate manufacturers may use:

- One test method for datasheets and another for Dk/Df tables.
- One test method at 1GHz and another test method above 1GHz.
- One test method for Dk and another test method for Df.

PCB fabricators typically use their own Dk fudge factors, based on actual circuit boards, in which they attempt to remove copper effects in order to reverse-engineer Dk and Df values. Multi-gigabit SerDes signals are often planned using Dk values at 1GHz. And we haven’t even discussed temperature impacts, or the fact that half the Dk values from laminate manufacturers are measured in the x-y plane, which a signal will never see!

In this environment, Dr. Eric Bogatin introduced me to Dr. Don DeGroot of CCN Labs. Through meetings across multiple industry conferences, we found we share an interest in developing an “apples-to-apples” dielectric-characterization methodology – one that takes the best practices from the most common IPC test methods for dielectric characterization and combines them with a commercial stackup-design solution.

Measurement comparisons. As part of our journey toward the “ideal” dielectric-measurement system, our goal was to gain insight into how closely *published* copper-clad laminate (CCL) manufacturers’ table values for Dk(f) and Df(f) correlated to our *calibrated measurements* of the dielectric’s Dk and Df from 1 to 20GHz. What we found was that within specific CCL manufacturers, published Dks varied by +/-10% from our measurements – a 20% variation from minimum to maximum. For signal-integrity purposes, it would be advantageous to remove this additional source of uncertainty, both during new product introduction (NPI)/prototyping activities and in volume production.

Dk and impedance. To provide an idea of the impedance implications associated with accurate Dk values, consider a symmetrical differential stripline model with 5-mil thick dielectrics and 5-mil wide traces.

Published: Incorporating a *published* Dk of 3.74 produces a 50.8Ω simulated single-ended impedance and a 97.2Ω differential impedance on 12-mil spacing.

Measured: Using the *measured* Dk value at 10GHz for a laminate in our study, the simulated single-ended stripline impedance result was 48.5Ω, with a 92.8Ω differential impedance.

Depending on frequency and other factors, the design may be able to survive a 4.5Ω differential impedance gap, but this difference will be in addition to other tolerances and manufacturing variations you must account for, which can pose problems if all the variance works in the same impedance direction. And impedance mismatches – assuming you were targeting 100Ω differentially – cause rise-time degradation that contributes to eye closure, as discussed last month. SI simulations based on the questionable Dk values will be inaccurate. My view is giving away this accuracy when it’s so easily avoidable is not good design practice.
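The direction and rough size of that gap follow from stripline impedance scaling roughly as 1/√Dk when geometry is held fixed. A first-order sketch (the “measured” Dk of 4.1 below is a hypothetical value for illustration, not the study’s actual measurement):

```python
import math

def rescale_impedance(z_old, dk_old, dk_new):
    """First-order stripline estimate: Z scales as 1/sqrt(Er)
    when the trace geometry is held fixed."""
    return z_old * math.sqrt(dk_old / dk_new)

z_published = 50.8   # simulated using the published Dk of 3.74
z_adjusted = rescale_impedance(z_published, 3.74, 4.1)
print(round(z_adjusted, 1))  # ~48.5 ohms, a drop of over 2 ohms
```

A field solver is still the right tool for real stackups, but this scaling shows why an optimistic published Dk always pushes the simulated impedance high.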

Df and insertion loss. One of the more significant findings in our research is the degree to which *published* Df values tend to diverge from our calibrated stripline-resonator results. These differences vary in magnitude, but always in the same direction: Our *calibrated* stripline-resonator measurements were always higher than vendor-published values. The Df differences were particularly striking at 1GHz, ranging from 33% for one material to 200% for another common material.

To get an idea of the propagation-loss implications for underestimating Df, consider the same stripline configuration noted above, assuming copper foil with R_{z}=2µm roughness on the laminate and processing on the prepreg side that results in R_{z}=1.5µm.

Published: Incorporating a *published* Df of 0.006 produced an insertion loss of 0.72dB/in at 10GHz.

Measured: Using the *measured* Df value of 0.010 at 10GHz for a laminate in our study resulted in an insertion loss of 0.88dB/in at 10GHz, as shown in FIGURE 2.

Multiply these values by a 10" interconnect length and we’re talking about a 1.6dB difference. That’s not a long signal path, and the unplanned loss delta is enough to cause headaches, especially for longer run lengths. There’s a cost element as well. In this example, you paid for 0.006 and received 0.010. That’s a *caveat emptor* moment. The only foolproof way for engineers or PCB fabricators to know they’re getting the loss performance they’re paying for is to measure dissipation factors at frequencies of interest on their own test benches and in the production environment.
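Spelled out (a quick sketch using the numbers above):

```python
published_db_per_in = 0.72   # from the published Df of 0.006
measured_db_per_in = 0.88    # from the measured Df of 0.010
length_in = 10.0             # a modest interconnect length

# Unplanned extra loss accumulated over the full route:
delta_db = (measured_db_per_in - published_db_per_in) * length_in
print(delta_db)  # ~1.6 dB of loss budget silently consumed
```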

Parting thoughts. As engineers and PCB designers prepare to implement next-generation technologies, we no longer have reason to employ multiple dielectric-characterization methods or fixtures when moving from one frequency to another, to use different methods for Dk and Df, to rely on in-plane measurements that a signal will never see, or to use different test setups among laminate manufacturers, PCB fabricators and OEMs.

At DesignCon, we unveiled the first set of low- and ultra-low loss laminate measurements from this system, called “Z-field,” shown in TABLE 1. The system itself, a combination of hardware and software, is shown in FIGURE 3. Results should be similar to what would be expected from a well-calibrated Bereskin Stripline system (now IPC-TM-650, no. 2.5.36), while obviating the need for IPC-TM-650, no. 2.5.5.5, 2.5.5.9 and 2.5.5.13. Game-changing!


{/guest}

Can signal-integrity test vehicle results be accurately simulated?

*Ed.*: This is Part 2 of a three-part series on preparing for next-generation loss requirements.

Here in Part 2 of the series, I’ll outline the means by which insertion-loss requirements are determined. In Part 3, I’ll suggest a better method for obtaining more accurate Df numbers without having to go to the trouble of building test boards.

As I stated in last month’s column, if you want to stay on top of the parameters that contribute to loss, there are a lot of factors to juggle. Frequency, copper weight, resin system, glass characteristics, dielectric thickness, trace width, copper roughness, and fabricator processing all contribute to the discussion if you’re savvy, driving fast, with both eyes open.


{!guest}

Component manufacturers will typically specify a loss budget for a chipset. There are multiple server platforms, of course, but Intel’s PCI Express (PCIe) trends provide a good example of the performance jumps seen across today’s interconnect standards. TABLE 1 shows how PCIe speeds have changed in recent years, from PCIe 3.0 to PCIe 5.0.

It is also possible to derive the budget. The relationship for estimating an interconnect loss budget is:

Attenuation budget (dB) = 20 × log (V_{RX, min} / V_{TX, min})

This loss figure reveals attenuation requirements before employing pre-emphasis or equalization.
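As a sketch of how that budget falls out (the voltage levels here are hypothetical stand-ins for a chipset’s datasheet values):

```python
import math

def attenuation_budget_db(v_rx_min, v_tx_min):
    """Loss budget in dB: 20 * log10(VRX,min / VTX,min).
    A negative result means the channel may attenuate the
    signal by that many dB before the receiver's floor."""
    return 20.0 * math.log10(v_rx_min / v_tx_min)

# Hypothetical example: 800 mV minimum launch, 100 mV receiver floor.
print(attenuation_budget_db(0.100, 0.800))  # about -18.1 dB
```

Dividing that budget by the planned route length yields the dB-per-inch target that drives laminate selection.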

Some signal-integrity software solutions include these guidelines, represented as eye masks defining minimum and maximum “keep-out regions” for received signals. At a glance, an eye mask will show whether an interconnect is acceptable from a signal-quality standpoint across many bit transitions. FIGURE 1 shows a PCI Express Gen 5.0 eye mask from Mentor’s HyperLynx software. It’s important to observe that the inner keep-out region, represented by Top Min in the figure, correlates to the V_{RX, min} column in Table 1. From PCIe 3.0 to 5.0, these requirements have narrowed significantly.

FIGURE 2 shows simulation results for an 8 Gbps signal over a 24" transmission line using a material that was successful on platforms that used PCIe 3.0. The loss tangent or Df for this material is 0.012. The blue keep-out region doesn’t have any bits encroaching on it, which is what we want. Obviously vias, connectors and copper roughness come into play as well. This is simply intended as a high-level example of the interplay between frequency, the eye mask, and the eye diagram. Increase Df above 0.012 and bits begin encroaching on the eye mask’s inner keep-out region.

Keeping with this theme, let’s consider a more expensive, lower-loss material for the next-generation requirement. We’ll use 16 Gbps and the PCIe 4.0 eye mask, both corresponding to Table 1. To make this work, I used a material with a Df of 0.008, and I had to change the transmission-line length to 15", which was right on the edge of what would work. The additional factors noted above must also be considered, but we’ve learned a few things that get us in the ballpark from a dielectric-selection standpoint. FIGURE 3 shows the result. Note the vertical scale was changed, adapting to the tighter keep-out requirements with PCIe 4.0 versus those of 3.0. The same simulation exercise could be performed for PCIe 5.0 and higher frequencies, although the task of producing acceptable eye patterns at the receiver gets much tougher.
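A rough sanity check of the two scenarios above can be done with a first-order dielectric-loss approximation, alpha_d ≈ 2.3 · f(GHz) · Df · √Dk dB/in. This ignores conductor loss, roughness, vias and connectors, which, as noted above, also come into play, and the Dk value is an assumption for illustration:

```python
import math

def dielectric_loss_db_per_inch(f_ghz: float, df: float, dk: float) -> float:
    """First-order dielectric-loss approximation:
    alpha_d ~= 2.3 * f(GHz) * Df * sqrt(Dk) dB/in.
    Conductor loss, roughness, vias and connectors are ignored."""
    return 2.3 * f_ghz * df * math.sqrt(dk)

DK = 3.7  # assumed dielectric constant for both materials (illustrative)

# Figure 2 scenario: 8 Gbps NRZ -> 4 GHz Nyquist, Df = 0.012, 24" line
gen3 = dielectric_loss_db_per_inch(4, 0.012, DK) * 24
# Figure 3 scenario: 16 Gbps NRZ -> 8 GHz Nyquist, Df = 0.008, 15" line
gen4 = dielectric_loss_db_per_inch(8, 0.008, DK) * 15

print(f"PCIe 3.0 case: {gen3:.1f} dB   PCIe 4.0 case: {gen4:.1f} dB")
```

Under these assumptions, the two cases land at comparable end-to-end dielectric loss, around 4 to 5 dB, which is consistent with needing both a lower-Df material and a shorter line to hold the eye open at the higher rate.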

In Part 3 of this series, I’ll suggest a better method for obtaining more accurate Df numbers without having to go to the trouble of building test boards.

has more than 25 years’ experience with signal-integrity software and PCB materials. He is director of everything at Z-zero (z-zero.com);{/guest}

]]>Can signal-integrity test vehicle results be accurately simulated?

*Ed.*: This is Part 1 of a three-part series on preparing for next-generation loss requirements.

There are a lot of factors to juggle to stay on top of the parameters that contribute to loss. Frequency, copper weight, resin system, glass characteristics, dielectric thickness, trace width, copper roughness, and fabricator processing all contribute to the discussion if you’re savvy and driving fast, with both eyes open.

If your frequencies aren’t increasing, there’s no need to worry. But if your timing windows are getting chopped in half year-over-year, read on.

Background. Several years ago, I marketed laminates for servers. Older generations bumped up frequencies incrementally, but then we ended up dealing with frequencies that *doubled* from one generation to another, with downward pressure on costs.

{guest}

*To continue reading, please log in or register using the link in the upper right corner of the page.*

{/guest}

{!guest}

There are multiple server platforms, of course, but a quick review of Intel’s PCI Express (PCIe) trends shows performance jumps going from 8Gbps (4GHz) to 16Gbps (8GHz) to 32Gbps (16GHz) with PCIe 5.0. Those are incredible jumps, particularly when also trying to hold down material costs. The world would be easier if no one had to pay for the performance improvement, but loss and cost are intimately intertwined. And it’s not just Intel and PCIe. Multiple interconnect standards have transitioned from incremental speed increases to doubling generation-over-generation.
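The bit-rate-to-frequency mapping in those parentheses follows directly from NRZ signaling, where one full cycle carries two bits, so the Nyquist (fundamental) frequency is half the bit rate:

```python
# PCIe generations noted above use NRZ signaling, so the Nyquist
# (fundamental) frequency is simply half the bit rate.
PCIE_GBPS = {"3.0": 8, "4.0": 16, "5.0": 32}

def nyquist_ghz_nrz(bit_rate_gbps: float) -> float:
    """Nyquist frequency of an NRZ signal: one full cycle per two bits."""
    return bit_rate_gbps / 2

for gen, rate in PCIE_GBPS.items():
    print(f"PCIe {gen}: {rate} Gbps -> {nyquist_ghz_nrz(rate):g} GHz")
```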

To accommodate these jumps, it’s common in the server world to build test boards with defined geometries, tracking all the relevant parameters noted above across multiple laminate vendors, resin systems and fabricators – but the process takes about six months from concept to completion. In today’s design environment, who can wait that long? A lot of designers, I would expect, don’t have the luxury of long server-ecosystem lead times, where the entire system architecture is the long pole in the project-schedule tent.

For the purpose of reeling in schedules and narrowing the solution space, I’ve been focused on developing tools for making tradeoffs early in the system-design process – frequency, interconnect loss budgets, and the design knobs designers control, such as resin system, cost, copper roughness and trace length.

My big question. The big question, which I can’t fully answer before simulating, is whether I could have predicted what I already know from SITVs (signal-integrity test vehicles), based on frequency, resin-system properties, copper characteristics and so on.

I know, before performing any simulations, which products successfully met the loss requirements in TABLE 1. My interest is in whether simulation alone would have predicted that outcome.

SITVs are designed to emulate typical design configurations, while isolating the relative performance of 3- and 4-mil cores, both in microstrip and stripline configurations, using the same test vehicles (TVs). For simplicity, this column focuses on striplines, but the same process applies to microstrip-signal requirements.

Backwards engineering platform A (legacy). Platform A had an insertion-loss requirement of 0.48dB/inch for the platform’s low-loss boards. Built into this target, of course, are typical interconnect lengths for the platform. Test vehicles for many different competitive mid-to-standard-loss laminate systems were created across multiple PCB fabricators, to find the sweet spot for cost and loss. If you have a lot of money and time, that’s certainly one way to do it.

My first objective, using the specified SITV cross-section, was to determine which Df characteristics would be required to meet the insertion loss (IL) goal. FIGURE 1 shows that at 4GHz a laminate with a loss tangent of 0.013 would meet the 0.48dB/in. target with a little margin. This Df number was determined by entering all the SITV cross-section data, including the Dk from one of the materials under consideration, along with copper roughness, and then determining what Df value met the IL threshold. This is helpful going in: we’re looking for resin systems with actual Dfs at 0.013 or slightly lower.
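The trial-and-error step can be illustrated with a closed-form estimate rather than a field solver. This is only a sketch: the 2.3·f·Df·√Dk dielectric-loss approximation stands in for the solver, and the conductor-loss allowance (copper plus roughness) is a hypothetical number I’m supplying, not a value from the SITV data.

```python
import math

def df_for_target(il_target_db_in: float, f_ghz: float, dk: float,
                  conductor_db_in: float) -> float:
    """Invert alpha_total = alpha_cond + 2.3*f*Df*sqrt(Dk) for Df.
    A stand-in for the field-solver trial-and-error loop;
    conductor_db_in (copper + roughness loss) is a supplied assumption."""
    return (il_target_db_in - conductor_db_in) / (2.3 * f_ghz * math.sqrt(dk))

# Illustrative inputs: 0.48 dB/in target at 4 GHz, assumed Dk = 3.8, and a
# hypothetical 0.25 dB/in conductor-loss allowance for the SITV trace.
df = df_for_target(0.48, 4.0, 3.8, 0.25)
print(f"Required Df ~ {df:.4f}")
```

With these assumed inputs the estimate lands near the 0.013 the field solver produced, but the answer is quite sensitive to the conductor-loss assumption; a proper 2-D field solver with the full cross-section remains the right tool for real planning.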

My second objective was to take one of the laminate systems that was successful on Platform A, with known IL test results, to see whether vendor-published Df numbers would produce the same insertion loss. The vendor-published Df for Laminate A was 0.008 at 4GHz. For this material and the construction shown, the simulated IL using the published Df was fine, but the measured insertion loss, at 0.485dB/in., was just beyond the 0.48dB/in. low-loss requirement, which tells us the actual Df may be a good bit closer to 0.013 at 4GHz. Toward that end, the test vehicles indeed taught us something. As I recall, the laminate in question was bumped down to the mid-loss applications, while slightly more expensive materials were used for the low-loss applications.

This wasn’t a super-high-tech laminate, but the process certainly scales to higher speeds, as we will see below.

Low-loss requirements at 8GHz. Platform B (new) shows a low-loss IL (8GHz) requirement of 0.85dB/in. for a stripline. This guideline is based on typical interconnect lengths for the platform, frequency and the receiver’s tolerance for loss. Using the same SITV cross-section noted above and Dk/Df values from one of the vendor materials proposed for this application, Laminate B, we can see in FIGURE 2 that the laminate-vendor-published Df at 0.007, along with other factors such as the Rz=1.3µm copper roughness, resulted in an insertion loss of 0.60dB/in. This is more than enough margin against the 0.85dB/in. target, which is a good thing. However, in hindsight, the SITV-measured insertion loss was 0.645dB/in., meaning the actual Df was a little bit higher. Trial-and-error with the field solver reveals a Df (8GHz) of 0.008 may be more realistic for this particular laminate. Simulation also showed a Df of roughly 0.013 was needed to meet the IL target at 8GHz.
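The trial-and-error Df adjustment can also be approximated in closed form. Since dielectric loss scales roughly linearly with Df, the gap between measured and simulated IL maps to a Df correction. The Dk below is an assumed value, and this first-order shortcut is no substitute for re-running the field solver:

```python
import math

def df_correction(il_measured: float, il_simulated: float,
                  f_ghz: float, dk: float) -> float:
    """First-order Df adjustment: dielectric loss scales ~linearly with Df,
    so delta_Df ~= delta_IL / (2.3 * f * sqrt(Dk))."""
    return (il_measured - il_simulated) / (2.3 * f_ghz * math.sqrt(dk))

# Laminate B numbers from the text; Dk = 3.6 is an assumed value.
df_published = 0.007
delta = df_correction(il_measured=0.645, il_simulated=0.60,
                      f_ghz=8.0, dk=3.6)
print(f"Adjusted Df ~ {df_published + delta:.4f}")
```

Under these assumptions the adjusted value comes out close to the 0.008 the field-solver trial-and-error suggested for this laminate.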

Low-loss requirements at 12.89GHz. The process is similar but gets more expensive at higher frequencies, requiring we work with sharp pencils. 12.89GHz is the PAM4 Nyquist frequency for a 100 Gbps signal. I like to start these simulations with a material that’s been proposed for this space, so I’ll change the Dk and Df accordingly as a starting point, modeling Laminate C. FIGURE 3 shows the results, with a vendor-published 0.005 Df at 10GHz. The 0.74dB/in. result, using 1.1µm copper (Rz), is well within the 1.25dB/in. low-loss target, but this is just a first pass with datasheet Df numbers. To meet the requirement for a 12.89GHz Platform B signal, simulation shows we need a 0.013 Df (12.89GHz) material.

Using hindsight, we’re able to backwards-engineer the effective Df from actual, fabricated SITVs. Of course, we wouldn’t typically have measured IL in the planning phase, when we’re selecting materials, unless the PCB fabricator had collected it for another project as part of its laminate offering. Our main purpose here, however, is to guide how we may need to interpret vendor Df numbers when framing the solution space for a new platform. The fabricated SITV’s measured IL was 0.684dB/in. for this configuration. In this case, we learned vicariously that the published Df number for this material was reasonably accurate. We also learned Laminate C, a PPE resin system at more than twice the price of the next step down, was overkill for the low-loss target. The good news is we’ve found a potential material for the ultra-low-loss boards. The question becomes whether we can stretch the lower-priced Laminate B discussed in the 8GHz section to cover this application, and that’s exactly what happened in practice.

Keep in mind we’re now talking about 12.89GHz. It’s a completely different league than the 4GHz we started with, and that’s a key point in this column.

Conclusion. The point here was to demonstrate a methodology by which a PCB fabricator, design team or laminate vendor could convert interconnect loss requirements into Df requirements and project a material’s insertion-loss performance without spending several months – from laminate production to PCB fabrication to testing – and tens of thousands of dollars making SITVs. Which makes the most sense: spending $100,000 and waiting six months for answers, or spending a fraction of that and simulating what to expect prior to SITV fabrication?

Along the way, we saw that actual Df values sometimes run a good bit higher in practice than the published vendor numbers, in some cases by as much as 0.004. With that word of caution, it’s helpful to know a good 2-D field solver can be employed at any point in the design cycle without going to the trouble of building and testing test vehicles.

In Part 2 of this series, I’ll outline the means by which insertion-loss requirements are determined. In Part 3, I’ll suggest a better method for obtaining more accurate Df numbers without building test boards.

Finally, on Jan. 29, I’m hosting an expert panel session at DesignCon to discuss glass-reinforced and PTFE dielectrics that can support the needs of 28, 56, 112 or 128Gbps, along with developing a system for winnowing the list of laminate possibilities from different vendors, or from the same vendor once you’ve chosen a laminate. I won’t be the smartest guy in the room that day by any stretch, but feel free to join us as we finally nail down the answers to this quintessential design concern – or at least be entertained while some of the experts slug it out! A webinar will follow, so if the subject interests you, send me an email for an invitation.

has more than 25 years’ experience with signal-integrity software and PCB materials. He is director of everything at Z-zero (z-zero.com);{/guest}

]]>