2010 Issues


Radiated emissions must be designed out at the beginning.

If your last product passed FCC certification and shipped on time, pat yourself, your EMC engineer and your design team on the back. You accomplished something that is really hard and doesn’t usually happen by accident.

Of the various EMC certification tests, FCC Part 15 Class B, which applies to consumer products, is one of the most stringent. In the region around 100 MHz, the radiated emissions from a fully functioning product, measured 3 meters away in a 120 kHz bandwidth, must be less than about 100 µV/m.

To put this in perspective, what do you think is the maximum power a radio station could transmit, into a 120 kHz bandwidth, and still pass this FCC test? Is it 1 W? 1 mW? 1 µW?

The answer is shocking. A radio station would have to radiate less than 10 nW of power into a 120 kHz bandwidth in order to pass certification. That is hard.

The most common reason products fail this test is radiation from common currents on external cables. For a cable 1 meter long, a common current of only 3 µA is enough to radiate at levels that fail a certification test.

When you consider that a 1 V signal, driving into a 50 Ω line, is a current of 20 mA, you see that the common currents must be less than 0.01% of the signal currents. This is why passing EMC tests is difficult.
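
Both numbers are easy to sanity-check with a few lines of arithmetic. The sketch below is illustrative only: it assumes an isotropic radiator for the power estimate and the widely used short-cable approximation for common-mode emission, E ≈ 1.257×10⁻⁶·(I·f·L)/d, with the limit, distance and cable length quoted above; exact values depend on the measurement setup.

```python
import math

E_LIMIT = 100e-6   # V/m, approximate FCC Part 15 Class B limit near 100 MHz
DIST    = 3.0      # m, measurement distance
Z0      = 377.0    # ohms, impedance of free space

# (1) Total power an isotropic radiator could emit and still meet the limit
power_density = E_LIMIT ** 2 / Z0                       # W/m^2 at 3 m
total_power = power_density * 4 * math.pi * DIST ** 2   # spread over a 3 m sphere
print(f"Allowed radiated power ~ {total_power * 1e9:.1f} nW")    # ~3 nW

# (2) Common current on a 1 m cable that radiates at the limit near 100 MHz
f_hz, length_m = 100e6, 1.0
i_common = E_LIMIT * DIST / (1.257e-6 * f_hz * length_m)
print(f"Common current at the limit ~ {i_common * 1e6:.1f} uA")  # ~2-3 uA
```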

I have yet to encounter a single large system company that does not have a horror story to tell about a product that worked great, passed all the functional tests, but either was never able to pass FCC certification or took so long to fix an EMI problem that its release was late and it missed the market window.

One doesn’t pass an FCC test by accident. You pass by designing radiated emissions out of the product right from the beginning and, in instances where they can’t be designed out, by adding filters and shielding to minimize their impact on the certification test.

Don’t expect to learn how to design a product to pass an EMC certification test by following a list of 10 habits. But, if you want a list of topics to use as a guide to begin the discussions in your design team, here are my recommendations for the Top 10 Habits to increase the probability of passing an EMC certification test:

  1. Ground bounce drives common currents on external cables. Minimize ground bounce in all the components of the system.
  2. Use shielded cables. The cable shield should be an extension of the enclosure, not connected to the ground planes of circuit boards. Cable connectors should make a 360° connection between the shield and enclosure.
  3. All control wires and cables that leave the board, even if just routed inside the enclosure, should be routed with an adjacent return conductor. Use as long a rise time as can be afforded for all signals that leave the board. Increase rise times with filters.
  4. Use ferrites around the outside of external cables to suppress common currents.
  5. Minimize mode conversion in all differential channels that leave the enclosure.
  6. Add common signal chokes to all differential signals that leave the enclosure.
  7. The largest source of noise above 50 MHz that gets into the power and ground network is signals passing through the power and ground cavity. Manage this noise with return vias, differential signaling and decoupling capacitors adjacent to signal vias. Smart stackup design can enable the use of return vias.
  8. Design the stackup so that power and ground planes are on adjacent layers, with as thin a dielectric as possible, and preferably close to the board surfaces.
  9. Plan on using a spread spectrum clock generator to smear the first harmonic of all signals into a wider bandwidth. The FCC receiver has a 120 kHz bandwidth. Spreading the spectrum of each harmonic over 1.2 MHz reduces the average power detected in the FCC test by 10 dB (see the short check after this list).
  10. Enclosure design is not about designing enclosures; it is about designing apertures and seams.
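
The 10 dB figure in habit 9 comes straight from the ratio of the spreading bandwidth to the receiver bandwidth. A minimal check, assuming each harmonic’s energy is spread uniformly over 1.2 MHz while the receiver integrates only its 120 kHz window:

```python
import math

rbw = 120e3      # Hz, FCC measurement receiver bandwidth
spread = 1.2e6   # Hz, width over which each harmonic is smeared

reduction_db = 10 * math.log10(spread / rbw)
print(f"Average detected power drops by ~{reduction_db:.0f} dB")  # ~10 dB
```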

If you weren’t aware of these guidelines when you designed and built your last product, you may have been lucky and dodged a bullet. Don’t rely on luck for your next design. Bring up these topics in the next design review. Have SI engineers and EMC engineers explain what they mean. If the concepts, or how to implement them, are still not clear, read a book, find an expert or take a class.

Dr. Eric Bogatin is a signal integrity evangelist with Bogatin Enterprises, and has authored six books on signal integrity and interconnect design, including Signal and Power Integrity – Simplified, published in 2009.

Although final performance is measured in the time domain, a detour may be the faster route to a signal integrity solution.

Do you speak frequency domain? Especially if you are a digital designer, you should consider learning the language of the frequency domain. The time domain has special significance because it is the real world. This is the domain in which we live our lives, where we build our intuition, and where we measure digital performance. If the time domain is the real world, why would we ever want to leave it for the frequency domain?

For signal integrity problems related to lossy transmission lines, working in the frequency domain can often help us find solutions to eye closure problems.

Sometimes, taking a detour through the frequency domain can bring an acceptable answer more quickly than staying in the time domain.

In this brief introduction to solving problems in the frequency domain, we will look at how to improve signal quality in high-speed serial links like PCI Express (PCIe) by speaking the language of the frequency domain.

Rise time in the frequency domain. If the time domain is the real world, what does this make the frequency domain? The frequency domain is not the real world; it’s a mathematical construct. As such, it has certain very special rules that must be followed. One rule is that only sine waves can be used to describe signals.
Each sine wave is described by a frequency, an amplitude and a phase. While the phase of a sine wave is important, we usually focus attention on the amplitude of the sine wave.

Any waveform in the time domain can be translated into the frequency domain using the Fourier Transform. While it is important to have done a Fourier Transform by hand at least once in your life, after that, it’s usually more important to get the answer as quickly as possible. Every version of SPICE can perform a Fourier Transform of any arbitrary time domain waveform.

The hidden assumption with these translations, typically called a Discrete Fourier Transform, or DFT, is that the waveform in the time domain is repetitive. The repeat frequency is either the clock frequency or the inverse of the total simulation time. The repeat frequency has special significance. It is the lowest frequency that will appear in the spectrum and is called the fundamental frequency.

Every frequency component that appears in the frequency domain is a multiple of this fundamental and is called a harmonic. The collection of all the harmonic components is called the spectrum.

A very important waveform in the time domain and the frequency domain, to which most other waveforms are compared, is an ideal square wave. In the time domain, an ideal square wave has a zero psec rise time and a 50% duty cycle.

Its spectrum has three important features:

  • Only multiples of the fundamental appear.
  • The amplitude of each harmonic drops off like 1/n, the harmonic number.
  • The even harmonics, i.e., 2nd, 4th, 6th, etc., have zero amplitude.

For an ideal square wave, the frequency components continue with their pattern to infinite frequency, always dropping off inversely with frequency or harmonic number. But, if the rise time is finite, this is not the case. After some frequency, which we call the bandwidth, the amplitudes of the frequency components drop off much more quickly than 1/f for an ideal square wave.

Figure 1 shows a 1 GHz ideal clock square wave spectrum and the spectrum of a clock with a rise time that is 5% of the clock period. The relationship between the 10-90 rise time, RT, and the bandwidth, BW, is roughly approximated as BW = 0.35/RT. For this example, we would expect the bandwidth to be about 0.35/0.05 nsec = 7 GHz. This is about where the amplitude begins to drop off below the ideal square wave spectrum.
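
These relationships are easy to explore numerically. The following sketch uses illustrative parameters of my choosing: a 1 GHz, 50% duty-cycle clock sampled over one repeat period, and a linear-edge approximation of the 5% rise time. It shows the 1/n drop-off of the odd harmonics and the knee near 0.35/RT.

```python
import numpy as np

f_clk = 1e9                    # 1 GHz clock; the repeat (fundamental) frequency
period = 1.0 / f_clk
n_pts = 4096                   # samples across one period of the repetitive waveform
t = np.arange(n_pts) * period / n_pts

def clock(t, period, rise):
    """50% duty-cycle clock with linear edges of width `rise` (0-100%)."""
    tt = np.mod(t, period)
    v = np.zeros_like(tt)
    edge_up = tt < rise
    v[edge_up] = tt[edge_up] / rise
    v[(tt >= rise) & (tt < period / 2)] = 1.0
    edge_dn = (tt >= period / 2) & (tt < period / 2 + rise)
    v[edge_dn] = 1.0 - (tt[edge_dn] - period / 2) / rise
    return v

for name, rise in [("ideal (0 psec)", 1e-15), ("5% of period", 0.05 * period)]:
    spectrum = 2 * np.abs(np.fft.rfft(clock(t, period, rise))) / n_pts
    odd = spectrum[1:12:2]     # 1st, 3rd, 5th, ... harmonic amplitudes
    print(name, np.round(odd, 3))

# The ideal clock's odd harmonics fall as 2/(n*pi): 0.637, 0.212, 0.127, ...
# The finite-rise-time clock tracks them at low harmonics, then begins to fall
# below the 1/n envelope around 0.35/0.05 nsec = 7 GHz, the knee frequency.
```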

Every frequency component of an ideal square wave is significant in contributing to that 0 psec rise time. However small the amplitude may be at very high frequency, it is still important. Remove any of them and you won’t get the 0 psec rise time.

When the frequency component of a real waveform has an amplitude significantly smaller than the equivalent ideal square wave, it won’t be large enough to contribute to the rise time and can be ignored. The bandwidth of a real waveform is the highest sine wave frequency component that is significant. Decrease its bandwidth in the frequency domain and its rise time in the time domain will increase.

A fundamental measure, in the frequency domain, of the rise time of a signal in the time domain is the frequency at which the harmonics begin to drop off more quickly than 1/f. For this reason, the bandwidth is often referred to as the knee frequency. The lower the knee frequency in the frequency domain, the longer the rise time of the signal in the time domain.

Signal propagation on real interconnects. How do real interconnects like traces on a circuit board, or coax or twin-ax cables, affect signals? In the time domain, we can evaluate the behavior of a single bit as it propagates down a transmission line. Figure 2 shows what a single 1 bit would look like traveling down a 0.003˝ wide FR-4 transmission line for a 1 Gbps signal, initially, after 30˝ and after 60˝.



The single 1 bit starts out with a 1 nsec unit interval, and a very fast rise time. As it propagates down the transmission line, the waveform is dramatically affected. That single bit spreads out into adjoining bits. This crosstalk between one bit and other bits is called inter-symbol interference or ISI. It contributes to the collapse of the eye. Figure 3 shows the received eye of a pseudo random bit stream (PRBS) signal at the three locations above.



How the single bit spreads out is a very important metric of the behavior of the interconnect. Yet, as viewed in the time domain, the exact shape of the pulse is complicated and difficult to describe in a simple way.

Here is where the frequency domain offers a simpler description. The term that describes how a sine wave differential signal is affected by an interconnect is the differential insertion loss, sometimes referred to by its S-parameter designation, SDD21. This is also called the transfer function of the interconnect. This measurable property describes what a sine wave with an amplitude of 1 looks like when it comes out of the transmission line. The differential insertion loss for these interconnect paths is shown in Figure 4.

Every single interconnect has a similar SDD21 response. The amplitude coming out is always less than the amplitude going in, and the amplitude of SDD21 drops off with increasing frequency. On a log amplitude scale, measured in dB, the differential insertion loss generally drops off nearly linearly with frequency. A single figure of merit, the attenuation in dB/inch per GHz, characterizes most interconnects.

If we send an ideal square wave signal into a real interconnect, its spectrum will be multiplied by this transfer function. The SDD21 tells us how each frequency component will be attenuated. Higher frequencies get attenuated more than lower frequencies. This pushes the knee frequency of the signal’s spectrum to lower frequency, and increases the rise time of the signal.
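
As a concrete illustration of multiplying a spectrum by the transfer function, the sketch below applies an assumed loss that is linear in frequency (about 0.2 dB/inch/GHz over 30 inches, values I chose for illustration, not taken from the figures) to the odd harmonics of an ideal 1 GHz square wave.

```python
import numpy as np

loss_db_per_in_per_ghz = 0.2        # assumed figure of merit for the channel
length_in = 30.0                    # interconnect length, inches

n = np.arange(1, 22, 2)             # odd harmonic numbers of a 1 GHz clock
f_ghz = n * 1.0                     # harmonic frequencies, GHz
amp_in = 2 / (np.pi * n)            # ideal square-wave harmonic amplitudes

sdd21_db = -loss_db_per_in_per_ghz * length_in * f_ghz   # attenuation, dB
amp_out = amp_in * 10 ** (sdd21_db / 20)                 # spectrum x |SDD21|

for fi, ai, ao in zip(f_ghz, amp_in, amp_out):
    print(f"{fi:4.0f} GHz  in {ai:.3f}  out {ao:.4f}")
# The higher harmonics are attenuated far more than the fundamental, which
# pushes the knee frequency down and stretches the rise time at the far end.
```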

This frequency-dependent attenuation of typical interconnects will cause rise time degradation, which will smear one bit into adjacent bits, result in ISI and cause collapse of the eye.

It’s not the attenuation of the interconnect that degrades the rise time; it is the frequency dependence of the attenuation. After all, if we take all the frequency components and just attenuate each of them the same amount, we will still have the same spectral shape of the bit sequence coming out. The frequency at which the knee occurs will be unchanged; the rise time of the signal will be unchanged, and there will be no ISI and no collapse of the eye.

Fixing the eye collapse in the frequency domain. We can fix the ISI in real interconnects by either flattening the insertion loss curve of the interconnect, or by changing the spectrum of the signal going into the interconnect so that when it comes out, it preserves the 1/f shape.

Designing an interconnect with a flatter response is tough. The root cause of the frequency-dependent loss is the combination of the skin-depth-dominated copper series resistance and the laminate material’s dielectric loss. For example, certain Gore cable assemblies use a very thin signal conductor, which has a frequency-dependent resistance much flatter than that of a typical copper-core cable.

Likewise, laminates such as Rogers’ RO4350 have a lower dissipation factor and a flatter response dielectric loss curve. Both these interconnect design features reduce ISI with a flatter differential insertion loss response.

It is also possible to flatten an interconnect’s transfer function by adding some extra frequency-dependent gain at the receiver. If the interconnect has more attenuation at higher frequency, why not add gain that increases at higher frequency to compensate and flatten the overall response? Figure 5 shows how this works.

When we add frequency-dependent gain, we equalize the response of the interconnect across a wide frequency range. We call this process equalization. The ideal equalizer has a gain curve that is the exact inverse of the attenuation curve. With a flat response, the spectrum of the transmitter is preserved at the receiver; the knee frequency is the same as the transmitter; the rise time is the same, no additional ISI and no additional collapse of the eye.
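
A minimal numerical picture of this, reusing the same assumed linear-loss channel as the earlier sketch (about 0.2 dB/inch/GHz over 30 inches) and an idealized equalizer whose gain curve is the exact inverse of the attenuation:

```python
import numpy as np

f_ghz = np.arange(1, 22, 2) * 1.0        # odd harmonics of a 1 GHz clock, GHz
channel_db = -0.2 * 30.0 * f_ghz         # assumed |SDD21| of the interconnect, dB
equalizer_db = -channel_db               # ideal equalizer: inverse gain curve

overall_db = channel_db + equalizer_db   # channel + equalizer, in dB
print(overall_db)                        # all zeros: a flat overall response
# With a flat response, every harmonic arrives at its transmitted amplitude,
# so the knee frequency and rise time are preserved and no extra ISI is added.
```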

Finally, the spectrum of the signal at the transmitter can be pre-distorted to add extra high-frequency components. The interconnect will attenuate the high-frequency components anyway. If we add extra high-frequency components at the transmitter, by the time they travel through the interconnect, they will be attenuated away, leaving the 1/f spectrum of a short rise time signal. The shorter the rise time, the less the ISI, and the lower the eye collapse.

One way of adding high-frequency components is called pre-emphasis. Wherever a 1 bit begins, extra energy is added. Likewise, we could obtain the same pre-distorted spectrum by taking out low-frequency amplitudes. Whenever there is a string of more than one bit with the same value, reduce the signal level. This is called de-emphasis (Figure 6).

Both pre- and de-emphasis result in the same distorted shape in the transmitted signal spectrum. When either of these distorted signals travels through an interconnect, if optimized for the interconnect, the signal will come out the other end with a short rise time, less ISI and a more open eye.
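
The de-emphasis idea maps directly to a few lines of code. Below is a toy sketch, assuming a PCIe-style scheme in which the first bit after a transition is sent at full swing and repeated bits are attenuated by roughly 3.5 dB; the numbers and the function name are illustrative, not from the article.

```python
import numpy as np

def de_emphasize(bits, de_emphasis_db=3.5):
    """Map a 0/1 bit stream to +/- levels, attenuating repeated bits."""
    full = 1.0
    reduced = full * 10 ** (-de_emphasis_db / 20)   # ~0.67 for 3.5 dB
    levels, prev = [], None
    for b in bits:
        sign = 1.0 if b else -1.0
        # The first bit after a transition keeps full swing; repeats shrink.
        levels.append(sign * (full if b != prev else reduced))
        prev = b
    return np.array(levels)

print(np.round(de_emphasize([1, 1, 1, 0, 1, 0, 0, 1]), 2))
# [ 1.    0.67  0.67 -1.    1.   -1.   -0.67  1.  ]
```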

By looking in the frequency domain, we can see how the techniques mentioned manipulate the frequency domain spectrum of the received signal. We engineer it to look more like an ideal square wave’s spectrum and recreate the shortest rise time to minimize its ISI and keep the eye open so that each bit can be detected as its true 0 or 1 value.

Even though final performance is always measured in the time domain, sometimes detouring through the frequency domain may be a faster route to a signal integrity solution.

Many of the principles described in this article are covered in great detail in papers that can be downloaded from bethesignal.com.

Eric Bogatin, Ph.D., is a signal integrity evangelist with Bogatin Enterprises (bethesignal.com).

The sudden unintended acceleration problems in Toyota’s vehicles have touched off a firestorm of controversy over the cause(s). Accusations of problems with the electronic throttle system were quickly followed by emphatic denials by the automaker. Then on Feb. 23, Toyota’s top US executive testified under oath before Congress that the automaker had not ruled out electronics as a source of the problems plaguing the company’s vehicles. Subsequently, other company officials denied that was true.

To confuse matters more, a professor of automotive technology claims to have found a flaw in the electronics system of no fewer than four Toyota models that “would allow abnormalities to occur.” Testifying before Congress, David W. Gilbert, a Ph.D. with almost 30 years’ experience in automotive diagnostics and troubleshooting, said the trouble locating the problem’s source could stem from a missing defect code in the affected fleet’s diagnostic computer.

Prof. Gilbert said his initial investigation found problems with the “integrity and consistency” of Toyota’s electronic control modules to detect potential throttle malfunctions. Specifically, Prof. Gilbert disputed the notion that every defect would necessarily have an associated code. The “absence of a stored diagnostic trouble code in the vehicle’s computer is no guarantee that a problem does not exist.” Finding the flaw took about 3.5 hours, he added. (A video of Prof. Gilbert’s test at his university test track is at www.snotr.com/video/4009.)

It took two weeks for the company to strike back. In early March, Toyota claimed Prof. Gilbert’s testimony (http://circuitsassembly.com/blog/wp-content/uploads/2010/03/Gilbert.Testimony.pdf) on sudden unintended acceleration wasn’t representative of real world situations. In doing so however – and this is important – Toyota made no mention (at least in its report) about Prof. Gilbert’s more important finding: the absence of the defect code. Was Toyota’s failure to address that an oversight? Or misdirection?

Second in concern only to the rising death toll is Toyota’s disingenuous approach to its detractors. Those who follow my blog realize I’ve been harping on this for several weeks. But why, some readers have asked.

The reason is subtle. Electronics manufacturing rarely makes international headlines, and when it does, it’s almost always for the wrong reasons: alleged worker abuses, product failures, (mis)handling of potentially toxic materials, and so on. The unfolding Toyota story is no different.

Yet it’s important the industry get ahead of this one. Planes go down over oceans and their black boxes are lost to the sea. Were the failures brought about by conflicts between the cockpit navigational gear and on-board satellite entertainment systems? When cars suddenly accelerate, imperiling their occupants, was it a short caused by tin whiskers that left the driver helpless? It’s vital we find out.

In the past four years, NASA’s Goddard Space Flight Center, perhaps the world’s premier investigator of tin whiskers, has been contacted by no fewer than seven major automotive electronics suppliers inquiring about failures in their products caused by tin whiskers. (Toyota reportedly is not one of them, but word is NASA will investigate the incidents on behalf of the US government.) These are difficult, painful questions, but they must be examined, answered, and the results disseminated. Stonewalling and misdirection only heighten the anxiety and fuel accusations of a cover-up.

We as an industry so rarely get the opportunity to define just how exceedingly difficult it is to build a device that works, out of the box, as intended, every time. A Toyota Highlander owner has a satellite TV monitor installed into his dash, then finds certain controls no longer work as designed. A Prius driver’s car doesn’t start when he’s using his Blackberry. It’s impossible for an automotive company to predict and design for every single potential environmental conflict their models may encounter.

The transition to lead-free electronics has been expensive and painful for everyone – even for those exempt by law, because of the massive infiltration of unleaded parts in the supply chain. And despite no legal impetus to do so, some auto OEMs have switched to lead-free. Moreover, to save development time and costs, automakers are quickly moving to common platforms for entire fleets of vehicles, dramatically expanding the reach of any given defect. We do not yet know if lead-free electronics is playing a role in these catastrophic failures. But if tin whiskers or some other electronics-related defect is the cause, or even a cause, of these problems, we need to know. If we are inadvertently designing EMI in, we need to know.

Toyota’s PR disaster could be a once-in-a-lifetime chance for the electronics industry to reposition itself. What we build is important and life-changing. This is a chance to take back our supply chains from those whose single purpose is cost reduction, and to redefine high-reliability electronics as a product worth its premium. And it’s a chance to explore whether wholesale industry changes are conducted for the good of the consumer, or for short-term political gain. It’s a tragic reminder that science, not opinion, must always win, and that moving slowly but surely is the only acceptable pace when designing and building life-critical product.

‘Virtually’ great. A big “thank you” to the 2,600-plus registrants of this year’s Virtual PCB conference and exhibition. It was the best year yet for the three-year-old show, confirming once again that there’s more than one way for the industry to get together. The show is available on-demand through May 4; be sure to check it out at virtual-pcb.com.

Setting up the proper feedback loop saves time and cost in re-spins.

When considering electronics product development, design and manufacturing often are thought of as separate processes, with some interdependencies. The processes seldom are considered as a continuum, and rarely as a single process with a feedback loop. But the latter is how development should be considered for the purposes of making competitive, fast-to-market, and lowest cost product.

Figure 1 shows the product development process from design through manufacturing to product delivery. This often is a segmented process, with data exchange and technical barriers between the various steps. These barriers are being eliminated and the process made more productive by software that provides for better consideration of manufacturing in the design step, more complete and consistent transfer of data from design to manufacturing, manufacturing floor optimization, and the ability to capture failure causes and correct the line to achieve maximum yields. The goal is to relay the failure causes as improved, best-practice DfM rules and prevent failures from happening.



The process starts with a change in thinking about when to consider manufacturing. Design for manufacturability should start at the beginning of the design (schematic entry) and continue through the entire design process. The first step is to have a library of proven parts, both schematic symbols and component geometries. This forms the base for quality schematic and physical design.

Then, during schematic entry, DfM requires communication between the designer and the rest of the supply chain, including procurement, assembly, and test through a bill of materials (BoM). Supply chain members can determine if the parts can be procured in the volumes necessary and at target costs. Can the parts be automatically assembled or would additional costs and time be required for manual operations? Can these parts be tested using the manufacturer’s test equipment? After these reviews, feedback to the designer can prevent either a redesign or additional product cost in manufacturing.

The next DfM step is to ensure the layout can be fabricated, assembled and tested. Board fabricators do not just accept designer data and go straight to manufacturing. Instead, fabricators always run their “golden” software and fabrication rules against the data to catch hard failures and to determine adjustments that avoid soft failures that could decrease production yields.

So, for the PCB designer, part of the DfM is to use this same set of golden software and rules often throughout the design process. This practice not only could prevent design data from being returned from the fabricator or assembler, but, if used throughout the process, could ensure that design progress is always forward, with no redesign necessary.

A second element in this DfM process is the use of a golden parts library to facilitate the software checks. This library contains information typically not found in a company’s central component libraries, but rather additional information specifically targeted at improving a PCB’s manufacturability. (More on this later.)

When manufacturing is considered during the design process, progress already has been made toward accelerating new product introduction and optimizing line processes. Design for fabrication, assembly and test, when considered throughout the design function, helps prevent the manufacturer from having to drastically change the design data or send it back to the designer for re-spin. Next, a smooth transition from design to manufacturing is needed.

As seen in Figure 2, ODB++ is an industry standard for transferring data from design to manufacturing. This standard, coupled with specialized software in the manufacturing environment, serves to replace the need for every PCB design system to directly produce data in the formats of the target manufacturing machines.



Time was, design systems had to deliver Gerber, machine-specific data for drill, pick-and-place, test, etc. Through standard data formats such as ODB++ (and other standards such as GenCAM), and available fab and assembly optimization software, the manufacturing engineers’ expertise can be capitalized on. One area is bare board fabrication. If the designer has run the same set of golden rules against the design, there is a good chance no changes would be required, except ones that might increase yields. This might involve the manufacturer spreading traces, or adjusting stencils or pad sizes. But the risk here is that the manufacturer does not understand tight tolerance rules and inadvertently affects product performance. For example, with the emerging SERDES interconnect routing that supports data speeds up to 10 Gbps, matching trace lengths can be down to 0.001˝ tolerance. Spreading traces might violate these tolerances. It is important the OEM communicate these restrictions to the manufacturer.

From an assembly point of view, the manufacturer will compare received data (pad data and BoM) to a golden component library. Production engineers rely on this golden library to help identify BoM errors and footprint, or land pattern, mismatches prior to first run. For example, the actual component geometries taken from manufacturer part numbers in the BoM are compared to the CAD footprints to validate correct pin-to-pad placement. Pin-to-pad errors could be due to a component selection error in the customer BoM. Although subtle pin-to-pad errors may not prevent parts from soldering, they could lead to long-term reliability problems.
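
To make the idea concrete, here is a hypothetical sketch (invented names; not any vendor’s actual API) of the kind of pin-to-pad check described above, comparing the CAD footprint against the geometry stored for the manufacturer part number in a golden component library.

```python
from dataclasses import dataclass

@dataclass
class Footprint:                 # geometry taken from the CAD data
    pin_count: int
    pitch_mm: float

@dataclass
class GoldenPart:                # geometry stored in the golden library
    mfr_part_number: str
    pin_count: int
    pitch_mm: float

def check_pin_to_pad(cad: Footprint, golden: GoldenPart, tol_mm: float = 0.05):
    """Return a list of mismatches between the CAD pads and the golden geometry."""
    issues = []
    if cad.pin_count != golden.pin_count:
        issues.append(f"{golden.mfr_part_number}: pin count {cad.pin_count} "
                      f"vs {golden.pin_count}")
    if abs(cad.pitch_mm - golden.pitch_mm) > tol_mm:
        issues.append(f"{golden.mfr_part_number}: pitch {cad.pitch_mm} mm "
                      f"vs {golden.pitch_mm} mm")
    return issues

# A pitch mismatch like this one is exactly the kind of subtle error that may
# still solder but cause long-term reliability problems.
print(check_pin_to_pad(Footprint(48, 0.50), GoldenPart("XYZ-48QFN", 48, 0.40)))
```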

Designers have DfT software that runs within the design environment and can place test points to accommodate target testers and fixture rules. Final test of the assembled board often creates a more complex challenge and usually requires a manufacturing test engineer to define and implement the test strategy. Methods such as in-circuit or flying probe testing require knowledge of test probes, an accurate physical model of the assembly (to avoid probe/component collisions), and the required test patterns for the devices.

Manufacturing Process Optimization

When a new assembly line is configured in preparation for future cost-efficient production – high-mix, high-volume or both – simulation software can aid in the process. Many manufacturers are utilizing this software to simulate various line configurations combined with different product volumes and/or product mixes. The result is an accurate “what if?” simulation that allows process engineers to try various machine types, feeder capacities and line configurations to find the best machine mix and layout. Using line configuration tools, line balancers and cycle-time simulators, a variety of machine platforms can be reviewed. Once the line is set up, this same software maintains an internal model of each line for future design-specific or process-specific assembly operations.

This has immediate benefits. When the product design data are received, creating optimized machine-ready programs for any of the line configurations in the factory, including mixed platforms, can be streamlined. A key to making this possible lies in receiving accurate component shape geometries, which have been checked and imported from the golden parts library. Using these accurate shapes, in concert with a fine-tuned rule set for each machine, the software auto-generates the complete machine-ready library of parts data offline, for all machines in the line capable of placing the part. This permits optimal line balancing, since any missing part data on this or that machine – which can severely limit an attempt to balance – are eliminated. The process makes it possible to run new products quickly because missing machine library data are no longer an issue. Auto-generating part data capability also makes it possible to quickly move a product from one line to another, offering production flexibility.

Programming the automated optical inspection equipment can be time-consuming. If the complete product data model – including all fiducials, component rotations, component shape, pin centroids, body centroids, part numbers (including internal, customer and manufacturer part numbers), pin one locations, and polarity status – is prepared by assembly engineering, the product data model is sufficiently neutralized so that each different AOI platform can be programmed from a single standardized output file. This creates efficiency since a single centralized product data model is available to support assembly, inspection and test.

Managing the assembly line. Setting up and monitoring the running assembly line is a complex process, but can be greatly improved with the right software support. Below is a list of just a few elements to this complex process:

  • Registering and labeling materials to capture data such as reel quantity, PN, date code and MSD status.
  • Streamlining material preparation and kitting area per schedule requirements, plus real-time shop floor demand, including dropped parts, scrap and MSD constraints.
  • Feeder trolley setup (correct materials on trolleys).
  • Assembly machine feeder setup, verification and splicing.
  • Manual assembly, and correct parts on workstations.
  • Station monitoring: capturing runtime statistics, efficiency, feeder and placement errors, and OEE calculations.
  • Call for materials to avoid downtimes.
  • Tracking and controlling production work orders to ensure visibility throughout the process.
  • Enforcing that the correct process sequence is followed, and that any test or inspection failure is corrected to “passed” status before product proceeds.

Collecting failure data. It is inevitable some parts will fail. By capturing and analyzing these data, causes can be determined and corrected. Figure 3 shows certain areas where failures are diagnosed and collected using software. One benefit of software is the ability to relate, in real-time, test or inspection failures with the specific machine and process parameters used in assembly and the specific material vendors and lot codes used in the exact failure locations on the PCB.



Earlier, a set of DfM rules in the PCB design environment that reflects the hard and soft constraints to be followed by the designer was discussed. If these rules were followed, we could ensure that once design data reached manufacturing, they would be correct and not require a redesign. What we did not anticipate was that producing a product at the lowest cost and with the most efficiency is a learning process. We can learn by actually manufacturing this or similar designs, and determine what additional practices might be applied to DfM to incrementally improve the processes.

This is where the first feedback loop might help. If we capture data during the manufacturing process and during product failure analysis, this information can be used to improve DfM rule effectiveness. Continuous improvement of the DfM rule set based on actual results can positively influence future designs or even current product yields.

The second opportunity for feedback is in the manufacturing process. As failures are captured and analyzed (Figure 3), immediate feedback and change suggestions can be sent to the processing line or original process data models. Highly automated software support can reconfigure the line to adjust to the changes. For example, software can identify which unique machine feeder is causing dropped parts during placement. In addition, an increase in placement offsets detected by AOI can be immediately correlated to the machine that placed the part, or the stencil that applied the paste, to determine which tool or machine is in need of calibration or replacement. Without software, it would be impossible to detect and correlate this type of data early enough to prevent yield loss, rework or scrapped material.
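
As a small, hypothetical illustration of that kind of correlation (invented data and machine names), grouping AOI placement-offset records by the machine that placed each part makes the outlier stand out.

```python
from collections import defaultdict

# (reference designator, placement machine, measured offset in mm) from AOI
aoi_records = [
    ("R12", "placer_2", 0.18),
    ("C45", "placer_2", 0.22),
    ("U3",  "placer_1", 0.04),
    ("R77", "placer_2", 0.19),
    ("C9",  "placer_1", 0.05),
]

offsets_by_machine = defaultdict(list)
for refdes, machine, offset_mm in aoi_records:
    offsets_by_machine[machine].append(offset_mm)

for machine, offsets in sorted(offsets_by_machine.items()):
    print(machine, "mean offset:", round(sum(offsets) / len(offsets), 3), "mm")
# placer_2 shows a systematically larger offset, flagging it for calibration
# before the drift turns into yield loss, rework or scrap.
```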

Design through manufacturing should be treated as a continuum. One starts with the manufacturer’s DfM rules, followed and checked by the designer’s software, the complete transfer of data to the manufacturer using industry-recognized standards, the automated setup and optimization of the production line, the real-time monitoring and visibility of equipment, process and material performance, and finally the capture, analysis and correlation of all failure data. But the process does not end with product delivery, or even after the sale support. The idea of continuum is that there is no end. By capturing information from the shop floor, we can feed that to previous steps (including design) to cut unnecessary costs and produce more competitive products. 

John Isaac is director of market development at Mentor Graphics (mentor.com). Bruce Isbell is senior strategic marketing manager at Valor Computerized Systems (valor.com).

Ed.: At press time, Mentor Graphics had just completed its acquisition of Valor.


Bringing tutorial-level instruction to designers’ desktops.

Last month UP Media Group produced our third Virtual PCB trade show and conference for PCB design, fabrication and assembly professionals. Every year the event has grown and shown more promise as a format for bringing together the industry.
 
We originally started researching virtual trade shows in the late 1990s, but the technology was just not ready. There simply was no platform for doing a virtual show in a way that we felt would appeal to our audience.

About four years ago, things changed. Editor-in-chief Mike Buetow called our attention to a company that had a good grasp of what we needed. For those of you who don’t know Mike, he is a thorough and hard-charging editor, with a good handle on most areas of the fabrication and assembly sides of the printed circuit board market. Once Mike set us on the path, UPMG’s Frances Stewart and Alyson Skarbek turned the concept of Virtual PCB into a must-attend event. This year we had a record registration of over 2,600 people from literally every corner of the globe. Those who missed the two-day live event can view the on-demand version, available through May 4.

I take this as another indicator of the power of the Internet. And though we at UPMG are, at heart, print kind of people, we realize the Internet has a huge and still-evolving role to play in how we interact with our readers and advertisers.

This leads me to our latest project. The PCB Design Conferences have always been one of my favorite projects. As an old PCB designer at heart, I realized from my own experience how much we need to learn about every aspect of design, and to stay in touch with technology that sometimes changes almost daily. A couple months ago, a friend turned me on to a software platform that allows us to take the virtual experience to a new level. This new project is called Printed Circuit University. PCU’s primary mission is to help PCB designers, engineers and management stay abreast of technology and techniques. We’ll accomplish this through short flash presentations, webinars, white papers, resource links and blogs by some of the most interesting people in the industry. You’ll be able to post questions for peers to comment, and share experiences and opinions on just about any subject that has to do with circuit boards.

Then there is the Design Excellence Curriculum. The DEC is a program we developed around 15 years ago for the PCB Design Conference. Over the years, thousands of PCB designers and engineers have taken the courses and furthered their knowledge of specific areas of PCB design. Today, the Web allows us to reach a greater number of people than ever, so we’ve decided to bring the DEC to Printed Circuit University.

How does it work? Think of a college curriculum where to get a degree in a specific subject you complete core classes like English and math that round out your general knowledge, and then follow a field of study that builds knowledge in the area you want to pursue. Core classes are on subjects that build the foundation for what every good designer should know: PCB fabrication, assembly and test; dimensioning and tolerances; laminates and substrates; electrical concepts of PCBs.

After passing the core courses, you’ll be able to choose from a series of classes on specific subjects such as flex design, signal integrity, RF design, advanced manufacturing, packaging and a host of others designed to build your knowledge in that area. After the core classes, a person can study as many fields as they want, to gain more and more knowledge of all types of PCB design.

The platform we’ve settled on for PCU is the same used by universities and online learning institutions such as Penn State, Tennessee Tech, University of Iowa and many more. The platform establishes a real-time, online classroom where you can see the instructor and ask questions. You’ll be able to view presentation materials, and the instructor can even switch to a white board view to illustrate a point. Sessions will be archived for review at any time, and when you have completed a course of study, you’ll take an exam to demonstrate that you were listening in “class.”

We expect to launch Printed Circuit University at the end of June, so stay tuned. If you have comments or questions, please email me. In the meantime, stay in touch and we’ll do the same.

Pete Waddell is design technical editor of PCD&F (pcdandf.com).

When personality enters the equation, process management seems to vanish.

Some things just seem to get to me, and one is the concept of “process management.” It’s not that I don’t like or believe in it. Over the years I have spent tons of money successfully working to improve processes, and have seen firsthand the benefit improved process management can yield. There really is a solid return on investment when improvements are made and productivity improves.

But what gets me is that process management seems to only matter when applied to “things” vs. “people.”

Everyone – regardless of industry or job function – loves to tout improvements derived from applying a process improvement to a “thing.” This may include manufacturing equipment – hardware or software – end-products – newly developed or longstanding favorites – and even bricks and mortar (or elimination of same). However, when the process that needs improvement exclusively involves personnel, and someone’s name is identified with said “process improvement,” watch everyone shy away.

By identifying names, I am referring to when the process improvement involves people who work with people (versus equipment), and when the processes that those people should be developing or following are associated with a small number of colleagues, which means those involved are easily identified. Usually, those people work in externally focused departments (sales, purchasing, administrative functions such as HR and accounts payable and receivable). The process management challenge is to improve the level of service, support or value-add that would create a solid relationship in processes that have heavy interpersonal versus machine-driven interaction. And it is not just the supplier/customer relationship. In any situation that involves people – supplier, customer, consultant, employee – friend or foe – when personality enters the equation, process management seems to vanish.

Some examples: When someone evaluates a plating process and comes up with an optimal chemistry, preferred timing and sequence, or improved equipment or configuration, they tout their process management skills and the resulting improvement as a major victory. However, when dealing with the timing and sequence of involving people in, say, periodic capability updates with customers, or the frequency and “configuration” of communication between key suppliers and purchasing, or even just being more visible with key decision-makers at specific customers or suppliers, those involved often shy away from dealing with the situation. Far worse is if there is a problem, almost everyone will assume the ostrich position: put their head in the sand and hope no one notices the deafening silence caused by a lack of process or management.

This is not to say initiatives don’t take place in people-driven processes. But too often these efforts manifest themselves as software-focused “interfaces” or “portals” – creating a cyber impression of involvement and progressive process management to improve how people are dealt with – without actually helping the managers tasked with dealing with those people. Others will periodically take momentary actions of heroic brilliance led by an employee who tries to assist. Yet, if the action is not embraced with appropriate reward and recognition, and then adopted and implemented as a true process improvement, it is not truly process management.

When process improvement exclusively involves people, rather than focusing on process improvement, we tend to deal with only the most serious problems, and even then not in the context of process improvement, but in a superficial, expeditious way.

Using the most basic definition, we are all job shops that cannot create demand or inventory product for future customers, but must react to customers’ specific technological as well as volume needs, responding only when they want it. What differentiates us all – for better or worse – is the level of support and service we offer; the relationships we have with our suppliers, and our abilities to pull together diverse subcontract capabilities to satisfy a pressing customer need. In short, what differentiates us is our ability to interact with … people.

And yet, we tend to not focus as much on process improvement in the very areas that could and will best differentiate us. Yes, cutting-edge software or web portals may help, but do they really address what customers want? We might have the most technologically savvy staffs, but if they don’t want – or know how – to interact with customers consistently and effectively, what is their use? We may be able to identify customers’ or suppliers’ problems, but if we explicitly or implicitly cop an attitude that communicates that we are not the ones who should help, how can our role be viewed as value-add?

While most companies have undergone extraordinary measures to improve internal design and manufacturing process management, I would bet that if the folks at ISO really focused on processes that involve suppliers and customers, most companies would not qualify for certification. Which brings me to the importance of embracing all the processes that link your company with the people to which you are most trying to provide value: customers. Only by making sure you apply the best available processes and people management so you are consistent in how well you treat all the related people – suppliers through customers – can you begin to focus on providing the value-adding process management they all seek.

Disney and Apple seem to get this. Disney parks are consistent in their customer focus and process management to make sure that all customers (“guests”) feel they had the best experience and received the greatest value for their money. Ditto with Apple; its retail stores emphasize service, support and the total “Apple” experience, and hence, the value-add of buying its products and much of its brand loyalty.

In our industry, the highest level of value-add is in supporting engineers who create cutting-edge technology with the often-invaluable manufacturability input we can offer. It’s about making the buying experience easy and seamless for that engineer or the buyer who is under pressure to cut all costs. Value-add is building the personal relationship of being the company that can “make it happen.” And yet, when resources are committed, investments made, and HR reviews given, the people side of process management is all too often the area ignored or neglected.

If only managers and workers realized how much their attitude, commitment and over-the-top involvement mean to the bottom line of their customers, as well as their colleagues. Going above and beyond is great, but when that attitude, skill set and commitment are part of the process management – process improvement – then true value-add will be provided to customers and suppliers.

Peter Bigelow is president and CEO of IMI (imipcb.com). His column appears monthly.
