Advanced Materials
“Process Development for Micro-Electronics Packaging with Direct Printed Additive Manufacturing”
Authors: R. X. Rodriguez, K. Church, Ph.D., and X. Chen.
Abstract: Using additive manufacturing as an alternative for packaging has not been taken seriously, but there is an opportunity to demonstrate the significant advantages of true 3D electronic packages by permitting the package to be the printed circuit board and by utilizing direct print and bare die approaches to print and structure diverse electronics. In order to build a device by implementing an additive, layering process, materials must be studied and characterized for repeatability and reliability of mechanical and electrical performance. This paper summarizes a study of some of the materials and the process used to build 3D structures using DPAM. Also described are variables that are adjusted to optimize a build with a DPAM process. A carbon nanotube – photopolymer composite, conductive ink, conductive adhesive and standalone photopolymer were tested for their individual electrical resistances, as well as the resistances when layered on each other. A single layer (~100µm thick) was dispensed then tested for its electrical resistance. The final build consists of three layers with a surface-mounted active component placed and electrically epoxied in place. (SMTA International, October 2012)
Component Packaging
“3D Packaging for High Computing with Wide IO Processor-Memory Interface”
Authors: Ilyas Mohammed, Ron Zhang and Rajesh Katkar.
Abstract: Current interconnect technologies for package-on-package such as stacking with smaller sized solder balls, using solder-filled laser-drilled vias in the mold cap, or using organic interposers are not practically achieving high IO requirements, since the aspect ratios of these interconnects are limited. To address the gap in PoP interconnect density, a wirebond-based package stacking interconnect technology called Bond Via Array is presented that enables reduced pitch and a higher number of interconnects in the PoP perimeter stacking arrangement. The main technological challenges are identified and research results explained. The main challenges were forming freestanding wirebonds, molding the package while exposing the tips of the wirebonds, cleaning the wire tips and package stacking. Assembly results showed that the wire tips were within the desired positional accuracy and height, and the packages were stacked without any loss of yield. (SMTA International, October 2012)
Semiconductor Fabrication
“High-Density Integration of Carbon Nanotubes via Chemical Self-Assembly”
Authors: Hongsik Park, et al.
Abstract: Carbon nanotubes have potential in the development of high-speed and power-efficient logic applications. However, for such technologies to be viable, a high density of semiconducting nanotubes must be placed at precise locations on a substrate. Here, the authors show that ion-exchange chemistry can be used to fabricate arrays of individually positioned carbon nanotubes with a density as high as 1 × 10⁹ cm⁻² – two orders of magnitude higher than previous reports. With this approach, the authors assembled a high density of carbon-nanotube transistors in a conventional semiconductor fabrication line and then electrically tested more than 10,000 devices in a single chip. The ability to characterize such large distributions of nanotube devices is crucial for analyzing transistor performance, yield and semiconducting nanotube purity. (Nature Nanotechnology, Oct. 28, 2012)
Solder Joint Reliability
“Impact of Electrical Current on the Long-Term Reliability of Fine-Pitch Ball Grid Array Packages with Sn-Ag-Cu Solder Interconnects”
Author: Tae-Kyu Lee, Ph.D.
Abstract: The interaction between electrical current and long-term reliability of fine-pitch BGAs with Sn-3.0Ag-0.5Cu (wt.%) solder ball interconnects is investigated. In this study, 0.4mm fine-pitch packages with 300µm-diameter SnAgCu solder balls are used. Electrical current was applied under various conditions to two different package substrate surface finishes to compare the effects of chemically unmixed and mixed joint structures: a Cu/SAC305/Cu structure and a NiAu/SAC305/Cu structure, respectively. To study the thermal impact on thermal fatigue performance and long-term reliability, the samples were thermally cycled from 0° to 100°C with and without current stressing. Based on Weibull plots, characteristic lifetime was degraded for the mixed joint structure, but little degradation was observed for the unmixed joint structure. The microstructure evolution was observed during constant current stressing and current stressing during thermal cycling. Accelerated intermetallic precipitation depletion at the package-side interface was observed in NiAu/SAC305/Cu structures due to current stressing, which was identified as the potential reason for the degradation in the thermal cycling performance. (Journal of Electronic Materials, Oct. 28, 2012)
This column provides abstracts from recent industry conferences and company white papers. Our goal is to provide an added opportunity for readers to keep abreast of technology and business trends.
The state of reliability of today’s complex and costly electronics and photonics systems should be checked (monitored) and, if possible and feasible, even managed on a continuous basis. This is particularly true for systems where failure-free operation is especially important. The objective of technical diagnostics (TD)1,2 is to recognize, in a continuous fashion and without taking apart the object of interest, its technical state and its ability to continue to perform in the expected (specified) fashion. TD establishes the links between the observed (detected) signals, the so-called “symptoms of faults” (SoF), and the underlying hidden state (“health”) of the device or system of interest. The TD effort is naturally focused on the most vulnerable elements (weakest links) of the design and can use failure-oriented accelerated test (FOAT) data3 obtained in the design stage.
TD is an important part of reliability engineering and encompasses a broad spectrum of problems associated with obtaining, processing and assessing diagnostic information, including diagnostic models, decision-making rules and algorithms. TD provides the input for the subsequent prognostics and health monitoring/management (PHM) effort.4 TD has to devise solutions and recommendations (“educated guesses”) under conditions of uncertainty and with limited information. Therefore, the TD methods, techniques and algorithms are based, as a rule, on the probabilistic risk management and applied probability bodies of knowledge; they are supposed to quantify, on a probabilistic basis, the obtained information (signals, SoF) and to assist in deciding whether the device or system of interest is still sound or has become faulty. There is always the possibility that the interpretation of the obtained SoF signal might be a false alarm or might lead to a missing-a-target decision. The statistical theory of decision-making, which is widely employed in radar engineering and is part of TD, can be effectively used to avoid a false-alarm or missing-a-target mistake.
The objective of the analysis that follows is to show how the statistical Bayes formula (theorem) for updating beliefs5-10 can be used to interpret the TD (SoF) information and to determine whether the device (system) of interest is still sound (healthy) or has become faulty, and to use this information to identify the faulty element, if any. Then a reliability-physics-oriented Boltzmann-Arrhenius-Zhurkov (BAZ) model11-13 can be employed to estimate the remaining useful lifetime (RUL)4,14 of a damaged (faulty) device. When the PDfR concept is used,2,15 such an assessment will lead, of course, to different RUL predictions, depending on the level of the allowable probability of failure (PoF).
Bayes formula in TD problems. Bayes’ theorem for updating beliefs is widely used in many areas of applied science, engineering, economics, game theory, medicine and even law. In this section we interpret the Bayes formula in application to TD problems.
Let an event S be the observed (detected) signal (SoF) – e.g., a measured elevated off-normal temperature, an elevated leakage current, a drop in the output light intensity, or elevated amplitudes (power spectrum) of the induced vibrations – and let the events Di, i = 1, 2, 3…, be the possible malfunctions: diagnosed deviations from the normal operation conditions of the product (system) elements that might be responsible for the observed symptom(s). It is assumed that one and only one of the product elements is damaged to an extent that its detected off-normal performance has manifested itself in the observed symptom. Simultaneous failure (damage) of two or more of the system’s elements is deemed extremely unlikely and is excluded from consideration.
Let the typical probabilities P(Di) of failure of the particular elements be known from the accumulated experience with the type of device or system in question. The problem of interest can be formulated this way: the event (signal) S is observed for the given device (system) in operation. What is the probability that it is the system’s particular i-th element that has become faulty and is therefore responsible for the detected symptom?
The Bayes formula

P(Di/S) = P(Di)P(S/Di) / Σj P(Dj)P(S/Dj)   (1)

enables one to determine the posterior probability P(Di/S), after the symptom S has been detected, from the prior probability P(Di) of the system’s state, known from previous experience. The Bayes formula can be obtained from the complete probability formula

P(S) = Σj P(Dj)P(S/Dj)   (2)

and the relationship

P(DiS) = P(Di)P(S/Di) = P(S)P(Di/S).   (3)
Formula (2) reflects the postulate that if a system has several possible and incompatible ways to arrive at the state S from the states Dj, the probability of such an event can be found as the sum of the conditional probabilities of occurrence of each of these ways. Formula (3) expresses the probability of the simultaneous occurrence of the symptom S and the system condition (diagnosis) Di.
As follows from the Bayes formula, the posterior probabilities of all the possible states sum to unity:

Σi P(Di/S) = 1.
The Bayes method is simple, easy to use and effective, and is widely employed in many applied problems. Its shortcomings are the large volume of required input information and the “suppression” of rare (seldom-encountered) diagnoses.
Example 1. Let it be established from experience with the given devices or systems that 90% of the devices do not fail during the designated time of operation, and that the symptom S, an increase in temperature by 20°C above the normal level, is encountered in 5% of the sound devices. The probabilities P(D1) and P(D2) of the sound condition D1 and the faulty condition D2 in the general population of devices are P(D1) = 0.9 and P(D2) = 0.1, respectively. The conditional probabilities are P(S/D1) = 0.05 and P(S/D2) = 0.95. Let us determine the probability that the device in which the increase in temperature is detected is sound. The Bayes formula yields

P(D1/S) = P(D1)P(S/D1) / [P(D1)P(S/D1) + P(D2)P(S/D2)] = (0.9 × 0.05) / (0.9 × 0.05 + 0.1 × 0.95) = 0.045/0.14 ≈ 0.32.
Thus, the probability that the device is still sound has decreased because of the detected increase in the observed temperature, from 0.90 to 0.32.
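For readers who want to experiment with these numbers, here is a minimal Python sketch (ours, not the authors’; the helper name bayes_posterior is our choice) that implements formula (1) and reproduces Example 1:

# A minimal sketch of Bayes formula (1):
# P(Di/S) = P(Di)P(S/Di) / sum_j P(Dj)P(S/Dj).

def bayes_posterior(priors, likelihoods):
    """Posterior probabilities P(Di/S) from priors P(Di) and likelihoods P(S/Di)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # complete probability P(S), formula (2)
    return [j / total for j in joint]

# Example 1: D1 = sound (prior 0.90), D2 = faulty (prior 0.10); the 20 C
# temperature rise is seen in 5% of sound devices and 95% of faulty ones.
print(bayes_posterior([0.90, 0.10], [0.05, 0.95]))
# -> [0.3214..., 0.6785...]; P(sound/S) drops from 0.90 to about 0.32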
Example 2. The TD instrumentation has detected two deviations (SoF) from normal operation conditions: an increase in temperature by 20°C in the region of the heat sink location (symptom S1) and an increase in the power of the vibration spectrum by 20% (symptom S2). It has been established that these symptoms might be due to the possible malfunction of one of two pieces of hardware: the heat sink (state D1) and/or the vibration damping equipment (state D2). It has been established also that the symptom S1 (increase in temperature) is not observed at the normal operation condition (state D3), while the symptom S2 is observed in 5% of such cases (devices). Based on the existing experience of employing the devices of interest, it has been established that 80% of the devices do not fail during the specified time of operation, 5% of the devices are characterized by the state D1 (malfunction of the heat sink), and 15% are characterized by the state D2 (malfunction of the vibration damping system). It also has been established that the symptom S1 is encountered in the state D1 in 20% of the devices and in the state D2 in 40% of the devices, and that the symptom S2 is encountered in the state D1 in 30% of the devices and in the state D2 in 50% of the devices. This information can be conveniently presented in the form of a diagnostic matrix (Table 1).

TABLE 1. Diagnostic matrix.

State Di                              P(Di)    P(S1/Di)    P(S2/Di)
D1 (heat sink malfunction)            0.05     0.2         0.3
D2 (damping equipment malfunction)    0.15     0.4         0.5
D3 (normal operation)                 0.80     0           0.05
Let us determine first the probabilities of the device states when both symptoms, S1 and S2, have been detected. Assuming that the symptoms are independent within each state, the Bayes formula (1) yields

P(D1/S1S2) = (0.05 × 0.2 × 0.3) / (0.05 × 0.2 × 0.3 + 0.15 × 0.4 × 0.5 + 0.8 × 0 × 0.05) = 0.003/0.033 ≈ 0.09.
Similarly, we find: P(D2/S1S2) = 0.91; P(D3/S1S2) = 0. Determine now the probability of the device state if the observations indicate that there was no increase in temperature (the symptom S1 does not take place), but the symptom S2 (increase in the power spectrum of the induced vibrations) was detected. The absence of the symptom S1 means that the opposite event S̄1 takes place, so that P(S̄1/Di) = 1 − P(S1/Di). Changing the probability P(S1/Di) in the diagnostic matrix to P(S̄1/Di), we find

P(D1/S̄1S2) = (0.05 × 0.8 × 0.3) / (0.05 × 0.8 × 0.3 + 0.15 × 0.6 × 0.5 + 0.8 × 1.0 × 0.05) = 0.012/0.097 ≈ 0.12.
Similarly, we obtain: P(D2/S̄1S2) ≈ 0.46; P(D3/S̄1S2) ≈ 0.41. Determine now the probabilities of the device states when none of the symptoms takes place. By analogy with the above calculations, we find

P(D1/S̄1S̄2) = (0.05 × 0.8 × 0.7) / (0.05 × 0.8 × 0.7 + 0.15 × 0.6 × 0.5 + 0.8 × 1.0 × 0.95) = 0.028/0.833 ≈ 0.03.
Similarly, we obtain: P(D2/S̄1S̄2) ≈ 0.05; P(D3/S̄1S̄2) ≈ 0.91. The calculations indicate that when both symptoms S1 and S2 are observed, the state D2 (the vibration damping equipment is malfunctioning) might occur with the probability 0.91. When none of these symptoms is observed, the normal state D3 is characterized by the probability 0.91 and is the most likely one to occur. When the symptom S1 (elevated temperature) is not observed, while the symptom S2 (elevated vibrations) is, the probabilities of the states D2 (the damping system is not working properly) and D3 (both the heat transfer and the vibration damping hardware work normally) are 0.46 and 0.41, respectively. Since these probabilities are close, additional information based on observations and/or modeling might be needed to obtain more accurate diagnostic information.
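The same arithmetic for all three evidence patterns can be checked with a short Python script. This is our sketch, not the authors’ code; like the diagnostic-matrix method itself, it assumes the symptoms are conditionally independent within each state, P(S1S2/Di) = P(S1/Di)P(S2/Di):

priors = {"D1": 0.05, "D2": 0.15, "D3": 0.80}   # heat sink, damper, normal
p_s1 = {"D1": 0.20, "D2": 0.40, "D3": 0.00}     # P(S1/Di), temperature rise
p_s2 = {"D1": 0.30, "D2": 0.50, "D3": 0.05}     # P(S2/Di), vibration rise

def posterior(s1_observed, s2_observed):
    """P(Di/evidence); each argument is True (symptom seen) or False (absent)."""
    joint = {}
    for d in priors:
        l1 = p_s1[d] if s1_observed else 1.0 - p_s1[d]
        l2 = p_s2[d] if s2_observed else 1.0 - p_s2[d]
        joint[d] = priors[d] * l1 * l2
    total = sum(joint.values())
    return {d: v / total for d, v in joint.items()}

print(posterior(True, True))    # D2 ~ 0.91: the damper is the prime suspect
print(posterior(False, True))   # D2 ~ 0.46, D3 ~ 0.41: the ambiguous case
print(posterior(False, False))  # D3 ~ 0.91: most likely healthy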
Boltzmann-Arrhenius-Zhurkov (BAZ) model. The Bayes formula (1) does not require any information about the physical nature of the obtained signals. When there is reason to believe that the combination of elevated temperature and stress (not necessarily mechanical) can lead to a malfunction of a device or a system, additional information about the possible source of the deviation of the system’s state from the normal operation conditions can be obtained by using the Boltzmann-Arrhenius-Zhurkov (BAZ) model11-13

τ = τ0 exp[(U0 − γσ)/(kT)],   (4)

which enables one to evaluate the mean time to failure (MTTF) τ from the known applied stress σ (not necessarily mechanical) and the absolute temperature T. Here τ0 is the time constant; U0 is the (stress-independent) binding (activation) energy; k = 1.3807 × 10⁻²³ J/K is Boltzmann’s constant; and the factor γ is the material (device) constant that is a measure of the vulnerability of the material to the applied stress and is measured in units of energy per unit stress, so that the product γσ is measured in units of energy.
The BAZ model proceeds from the rationale that although the process of damage accumulation is temperature dependent, it is driven primarily by the external loading, whatever its nature. In other words, the model is based on the experimentally observed fact that the breakage of the chemical bonds in a material under stress is due primarily to this stress, while temperature plays an important, but not a prevailing, role. Since the BAZ model contains three empirical parameters – the activation energy U0, the parameter γ characterizing the level of disorientation of the molecular structure of the material, and the time constant τ0 – three failure-oriented accelerated test (FOAT) series should be conducted to determine these parameters.
Let FOATs characterized by the absolute temperatures T1, T2 and T3 and the applied stresses σ1, σ2 and σ3 be run until failure, and let the measured times to failure (TTF) be t1, t2 and t3, respectively. Based on the observed percentages of failed devices, the probabilities of failure (PoF) were established as Q1, Q2 and Q3, respectively. Assuming that the BAZ model and the exponential law of reliability are applicable, the PoF can be defined as

Q = 1 − exp{−(t/τ0) exp[−(U0 − γσ)/(kT)]},   (5)

where t is time in operation. Applying equation (5) to the three FOAT conditions, the time constant τ0 in the BAZ model can be found from the transcendental equation

(T1n1 − T2n2)/(σ2 − σ1) = (T1n1 − T3n3)/(σ3 − σ1),   (6)

where the following notation is used:

ni = ln[ti/(τ0qi)], qi = −ln(1 − Qi), i = 1, 2, 3.   (7)

Then the (stress-free) activation energy U0 and the loading factor γ in the BAZ model can be computed as

γ = k(T1n1 − T2n2)/(σ2 − σ1), U0 = kT1n1 + γσ1.   (8)
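Under our reading of equations (5) through (8), the three FOAT relations ln(ti/qi) = ln τ0 + (U0 − γσi)/(kTi) are linear in ln τ0, U0 and γ, so the parameters can also be extracted with a direct linear solve rather than iteration. The following Python sketch is ours, not the authors’; fit_baz is a hypothetical helper, and the scaling of U0 and γ by k is our choice, made to keep the system well conditioned:

import numpy as np

K_BOLTZMANN = 1.3807e-23  # J/K, as in equation (4)

def fit_baz(T, sigma, t, Q):
    """Extract (tau0, U0, gamma) from three FOAT points.

    T     -- absolute temperatures T1..T3, K
    sigma -- applied stresses sigma1..sigma3 (any consistent unit)
    t     -- measured times to failure t1..t3, s
    Q     -- observed probabilities of failure Q1..Q3
    The three (T, sigma) pairs must differ enough for the system
    to be nonsingular.
    """
    T, sigma, t, Q = map(np.asarray, (T, sigma, t, Q))
    q = -np.log(1.0 - Q)    # q_i = -ln(1 - Q_i), as in equation (7)
    b = np.log(t / q)       # ln(t_i/q_i) = ln tau0 + (U0 - gamma*sigma_i)/(k*T_i)
    # Unknowns [ln tau0, U0/k, gamma/k]; dividing by k keeps the entries O(1).
    A = np.column_stack([np.ones(3), 1.0 / T, -sigma / T])
    ln_tau0, u0_k, gamma_k = np.linalg.solve(A, b)
    return np.exp(ln_tau0), u0_k * K_BOLTZMANN, gamma_k * K_BOLTZMANN

def baz_mttf(tau0, U0, gamma, T, sigma):
    """Mean time to failure from the BAZ model, equation (4)."""
    return tau0 * np.exp((U0 - gamma * sigma) / (K_BOLTZMANN * T))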
Example 3. Let, e.g., the FOATs carried out until half of the population fails (Q1 = Q2 = Q3 = 0.5) yield the measured times to failure and, through equations (6) to (8), the parameters τ0, U0 and γ. Then, for the operation temperature of T = 110°C = 383 K, the BAZ formula (4) yields an MTTF measured in years at the normal operation conditions. The MTTF decreases, but still remains on the order of years, for the 20°C increase in temperature, and drops to only days for the 20% increase in the power of the vibration spectrum. As is evident from the obtained data, although the faulty damping hardware could result in a significantly lower lifetime than the defective heat-removal hardware, the damage in the damping hardware, based on the Bayes formula prediction, is much less likely than that in the heat sink.
Example 4. Let us define the remaining useful lifetime (RUL) as the time between the moment when the diagnostic instrumentation has detected the malfunction (in the case in question, it is the heat sink) and the moment when the PoF reaches the allowable level Q. Assuming that the exponential law of reliability is valid (this law is characterized by the largest entropy and, hence, is the most conservative one), we find the RUL as

t = −τ ln(1 − Q) ≈ τQ,

where τ is the predicted MTTF of the faulty device. Assuming, e.g., Q = 10⁻³, we conclude that although the RUL is only about 6.6 × 10⁻³ years = 2.4070 days, this time is sufficient to replace the faulty heat sink or the heat spreader, or to fix the damage. If, however, the specified allowable level of the PoF is as low as Q = 10⁻⁵, the expected RUL becomes as short as about 6.6 × 10⁻⁵ years = 34.6 min.
In this case there is not much that can be done to restore the system’s reliability and to maintain it at a high level.
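The RUL arithmetic of Example 4 is easy to reproduce. In the Python sketch below (ours, not the authors’), the 6.6-year MTTF of the faulty device is back-computed from the article’s 2.407-day RUL at Q = 10⁻³ and is therefore an assumption:

import math

def remaining_useful_life(mttf_years, allowable_pof):
    # Exponential law of reliability: Q = 1 - exp(-t/tau)  =>  t = -tau*ln(1 - Q)
    return -mttf_years * math.log(1.0 - allowable_pof)

MTTF_FAULTY_YEARS = 6.6  # assumption: back-computed from 2.407 days at Q = 1e-3

for q in (1e-3, 1e-5):
    rul = remaining_useful_life(MTTF_FAULTY_YEARS, q)
    print(f"Q = {q:g}: RUL = {rul:.2e} years = {rul * 365.25:.2f} days "
          f"= {rul * 365.25 * 24 * 60:.0f} min")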
Conclusion
By combining the statistical Bayes formula and the physical BAZ model, one can obtain useful diagnostic information about the state of an electronic or photonic device or a system, subjected to the temperature-vibration bias. This information can be used as input data for the subsequent PHM effort. The suggested methodology is viewed as part of the general PDfR approach and can be used in the analysis, design and operational reliability assurance of electronic and photonic devices and systems, when reliability is imperative and its quantification is therefore a must.
References
1. H. Czichos, ed., “Handbook of Technical Diagnostics,” Springer, 2013.
2. E. Suhir, “Applied Probability for Engineers and Scientists,” McGraw-Hill, 1997.
3. E. Suhir and R. Mahajan, “Are Current Qualification Practices Adequate?” CIRCUITS ASSEMBLY, April 2011.
4. M.G. Pecht, “Prognostics and Health Management of Electronics,” John Wiley, 2008.
5. S.B. McGrayne, “The Theory That Would Not Die: How Bayes’ Rule Cracked The Enigma Code, Hunted Down Russian Submarines, & Emerged Triumphant from Two Centuries of Controversy,” New Haven: Yale University Press, 2011.
6. T. Bayes and R. Price, “An Essay Towards Solving a Problem in the Doctrine of Chances,” communicated by Mr. Price in a letter to John Canton, Philosophical Transactions of the Royal Society of London, vol. 53, 1763.
7. L. Daston, “Classical Probability in the Enlightenment,” Princeton University Press, 1988.
8. S.M. Stigler, “Who Discovered Bayes’ Theorem?” The American Statistician 37(4), 1983.
9. A.W.F. Edwards, “Is the Reference in Hartley (1749) to Bayesian Inference?” The American Statistician, 40(2), 1986.
10. S.E. Fienberg, “When Did Bayesian Inference Become ‘Bayesian?’ ” Bayesian Anal., Jan. 2006.
11. S. N. Zhurkov, “Kinetic Concept of the Strength of Solids,” Int. J. of Fracture Mechanics, vol. 1, no. 4, 1965.
12. E. Suhir, R. Mahajan, A.E. Lucero and L. Bechou, “Probabilistic Design-for-Reliability Concept and Novel Approach to Qualification Testing of Aerospace Electronic Products,” IEEE Aerospace Conference, March 2012.
13. E. Suhir, “Predicted Reliability of Aerospace Electronics: Application of Two Advanced Probabilistic Concepts,” IEEE Aerospace Conference, March 2013, to be presented and published.
14. E. Suhir, “Remaining Useful Lifetime (RUL): Probabilistic Predictive Model,” International Journal of PHM, vol. 2(2), 2011.
15. E. Suhir, “Probabilistic Design for Reliability,” Chip Scale Review, vol. 14, no. 6, 2010.
Ephraim Suhir, Ph.D., is Distinguished Member of Technical Staff (retired), Bell Laboratories’ Physical Sciences and Engineering Research Division, and is a professor with the University of California, Santa Cruz, the University of Maryland, and ERS Co. Laurent Bechou, Ph.D., is a professor at the University of Bordeaux IMS Laboratory, Reliability Group. Alain Bensoussan is EEE senior parts engineer, Thales Alenia Space France.
The technology arc in the interconnect manufacturing industry is once again in motion. Twenty years ago the United States and Western Europe were the sources of innovation in manufacturing processes.
New equipment and technologies were developed in cooperation with leading American and European fabricators such as IBM, Hewlett Packard and Cray Research. Independents such as Advanced Circuits for wireless and Photocircuits in automotive circuit boards led the way in developing new manufacturing processes. Today, all of those names are history in the interconnect manufacturing business.
The arc then shifted to Japan, where IC packaging substrates and high-density consumer electronics became the technology drivers. Designs for fine pitch rigid/flex, organic packages, and notebook computers drove high density interconnect with a wave of specialty technologies. Embedded components began to be used in consumer products. Japanese companies are still driving much of the technology, but are doing it in an accelerating manner offshore.
Today, Taiwan is set to take the torch. The first-tier Taiwanese PCB manufacturers are now in discussions with key customers on pushing closer to 10µm lines and spaces for interconnect substrates containing active components. Shorter signal paths, profile reduction and product security all play a factor in this drive. If you were to ask an American or European fabricator to build such product, they would look at you as if you were a Martian. In Taiwan, with ready capital, government support and an already outstanding technology base, they would ask, “How many sets?”
The distance between Taoyuan County, where much of the interconnect industry is located, and the Hsinchu Science Park, where much of the R&D and Taiwan’s formidable semiconductor industry resides, is less than 70 miles. This does wonders for technology development.
But the greatest drivers are still external. It is the North American OEMs driving the technology, and the leading European, American, and Japanese suppliers who are at the fore in materials, process and technology. But it is Taiwanese companies synthesizing these ingredients into the next generation of technology.
Business conditions. Businessmen talk. Information is a commodity, and the first question after catching up is usually “how’s business?”
From reports all over Asia, business is very soft. The top-tier interconnect manufacturers are off by 15 to 20%, while many others are down by 25 to 30%. A few, especially commodity manufacturers in China, are down as much as 40%. International materials suppliers are down 15 to 20%, and a few are holding their own simply by partnering with the right circles. Small-hole drill bit usage in Asia is down 25%, while solder mask usage is down as much as 30%.
Several new plants are being designed and built, including a very large project at KCE. AT&S’ new HDI plant in Chongqing is just about to come online, while Ibiden is expanding in Malaysia. There is a strong trend of Japanese manufacturers moving new production away from China because of the current political environment.
The tension between China and Japan is also affecting current business. There have been reports of Japanese components, machinery and materials mysteriously tied up in customs, which is now affecting production and deliveries to end-users. This could be a double-edged sword if this trend continues and factories are forced to slow or stop production because of shortages.
There is the Apple Circle, the Samsung Circle and the Intel Circle. If you know the secret handshake, your company is doing okay, but even then there is the risk of allowing too much of one’s capacity to be driven by a few customers. Forecasts for 2013 are more positive, but there is a long way yet to go. The European economic slowdown and uncertain demand in North America are also impacting the industry, while weak automotive sales in China and consumer conservatism in that market have not been able to take up excess capacity.
Technology improvements. The most striking news in the technology of the interconnect was sub-10µm lines and spaces.
A very few companies have been manufacturing limited quantities of 15µm line/space product, primarily for packaging applications, but now there is a roadmap including both 10µm geometries and embedded actives.
This will be another phase shift in the materials and equipment of interconnect manufacturing, as the requirements approach those of the semiconductor field of 30 years ago. A number of imaging equipment manufacturers have been developing or delivering stepper systems, while wet-process suppliers have been inching toward these demands as well. Completely new processes will have to be developed, and it is likely that process chemistry will also require ultra-pure formulations to prevent contamination.
The influence of 3D IC packaging on the interconnect was also a major topic of discussion. Chip-scale packaging is going mainstream, and all of the factors of substrate stability, 35µm pitch, and how to process organic substrates for 3D applications are at the top of the list. TSMC and other fabs will begin production in the first quarter 2013, so the issues are real and immediate. Design tools for 3D IC substrates are in place, and several interconnect manufacturers have established dedicated production lines, but the next year will be one of challenges as volumes rise.
Thermal, mechanical and electrical integrity are more important than ever, as stacked chips present challenges to the interposer. This then goes back to integrating passive devices within the package as well, and the issue of known good die becomes critical. The interconnect is becoming the device.
3D IC is also shaking up the OEM world. Fabless semiconductor companies are finding themselves at a distinct disadvantage, and a number of business models have been changing. Suddenly, major corporations such as Qualcomm, Broadcom and AMD are more vulnerable and constricted in their ability to move rapidly with new designs.
There were fewer Westerners than ever at this year’s TPCA, and yet to understand the means of production, this is one of the best venues in the world. If we do not understand how the product is built, we cannot build the next generation. There has been a move toward reshoring, or returning production to North America and Europe. The reasons range from a less competitive China to quality to being closer to end-users. But if the infrastructure does not exist for a critical technology such as the interconnect, what happens?
A final note. John Mitchell, the new president of IPC, was everywhere at TPCA, meeting with as wide a range of stakeholders as possible. He is a listener and a doer and has already had a very positive impact on the direction of the organization. In times when the challenges are great, it takes cooperation and collaboration to make significant progress. It is good to see that the suggestions of stakeholders are being considered and addressed and to see real world results.
Matthew Holzmann is president of CGI Americas (cig-americas.com).
Over the years, the ESDA has regularly advised applying a simple principle to static control: keep it simple and keep it focused. Although technology, processes and materials have continued to evolve, the design and implementation of effective static control programs continue to be based on the following five concepts:
Design in protection. Our first principle is to design devices, products and assemblies to be as robust as is reasonable to the effects of ESD. Whenever possible, use less-static-sensitive devices or provide appropriate input protection on those ESD-sensitive (ESDS) devices that are used. The paradox here is that advancing product technology means further miniaturization and higher operating speeds, which often make devices more susceptible to ESD and preclude the installation of protection devices at the chip level. However, the more ESD control that is built into product design, the fewer problems later.
Eliminate and reduce generation. Obviously, product design isn’t the whole answer. You cannot get away from ESDS devices and other products that are sensitive to some form of electrostatic charge and discharge, but the generation and accumulation of electrostatic charge can be reduced to below the threshold of problems in most processes.
Start by reducing or eliminating static-generating processes or materials, such as common plastics, from the work environment. ANSI/ESD S20.20-2007, “Protection of Electrical and Electronic Parts, Assemblies and Equipment (Excluding Electrically Initiated Explosive Devices),” and IEC 61340-5-1:2007, “Electrostatics – Part 5-1: Protection of electronic devices from electrostatic phenomena – General requirements,” both require a careful review of the process areas to remove any unnecessary insulators. Because ESD does not occur between materials kept at the same potential or at zero potential, processes and materials in the work environment should be kept at the same electrostatic potential. Typically, these conductive or dissipative materials should be electrically bonded to the same common point ground, such as a verified utility or earth ground. In addition, provide ground paths via wrist straps, flooring or work surfaces to safely reduce charge generation and accumulation on all personnel and on conductive or dissipative items that enter the defined electrostatic protected area (EPA).
Dissipate and neutralize. Because all generations of static cannot be totally eliminated, our third principle is to safely dissipate or neutralize those electrostatic charges that do occur. Proper grounding and the use of conductive or dissipative materials play major roles. For example, workers who carry a charge into the work environment can rid themselves of that charge by attaching a grounded wrist strap or by stepping on a grounded ESD floor mat while wearing ESD control footwear. The charge goes to ground rather than being discharged into a sensitive part.
For some objects, such as common plastics and other insulators, grounding cannot remove an electrostatic charge. Typically, ionization is used to neutralize charges on these insulating materials. The ionization process generates negative and positive ions that are attracted to opposite charges on the surface of a charged object, thereby effectively neutralizing the charge. Use of dissipative materials for work surfaces and at the point of contact to a device can also be effective.
Protect products. Our fourth principle is to prevent discharges that do occur from reaching susceptible parts and assemblies. One way is to provide parts and assemblies with proper grounding or shunting that will dissipate any discharge away from the product. A second method is to package and store or transport susceptible devices in proper packaging and materials handling products. These materials may effectively shield the product from charge and discharges and reduce the generation of charge caused by any product movement within the container.
Proper packaging and materials handling not only provide protection within our own facilities and process, but may also provide protection when the final product reaches the consumer.
Measure and improve. Finally, measure your processes and environment. Don’t ask whether a process will create an ESD hazard and then guess at the answer; measure it. For example, use a fieldmeter to detect the presence of an electrostatic field that could pose an ESD threat. Measure the resistance to ground of personnel and process equipment. Measure the voltage accumulation on personnel and components in the work environment. Measurement is a win/win situation. Identify and quantify those areas that really do need static protection, so focus can be on those areas that are of the greatest concern. In addition, you can identify areas that do not pose an ESD hazard, saving you the expense of providing unneeded protection.
These five principles form the basis for effective static control programs. They aid in the selection of appropriate materials and procedures to use in effectively controlling ESD. In most circumstances, effective programs will involve all these concepts. No single procedure or product will do the whole job. In developing control programs, identify the devices that are susceptible, their level of susceptibility, and those operations that pose an ESD hazard to them. Then ask which of these concepts will protect these devices. Finally, select and implement the combination of procedures and materials that accomplishes the task.
References
1. ESD Association, 7900 Turin Road, Bldg. 3, Rome, NY 13440, 315-339-6937, esda.org.
2. IEC – International Electrotechnical Commission, iec.ch.
This column is written by The ESD Association (esda.org).
Good news, CAD vendors: Your customers don’t hate you. So says an October survey conducted by PRINTED CIRCUIT DESIGN & FAB. The two-week-long survey asked hundreds of printed circuit board designers, design engineers and other layout specialists how they use and think about various CAD tools.
The survey was spurred by a question posed by veteran PCB designer Bill Brooks, who asked, “What sort of data is available to benchmark the top four CAD tools out there?” What we learned might surprise some readers.
Despite the occasional rant, designers actually do like their CAD vendors, with the heavy majority giving high marks for service. But when it comes to selecting a tool, some of the features heavily pushed by vendors – autorouting stands out – are not as important as such fundamental, if less sexy, features as the parts library or even the GUI.
First, some notes about the demographics: Of the 451 designers who completed the survey, 86.2% consider themselves to be “experienced,” while 11.3% are in the “intermediate” stage of their career. Only 2.4% are “novices.”
Most of the respondents primarily design multilayer boards (four layers and up). Nearly 44% say they typically design “simple” multilayer boards, while 40.1% design “complex” multilayer boards. Smaller percentages say they typically design single- or double-sided through-hole and surface-mount boards (Figure 1). A large percentage (79.4%) say they design mostly low-volume/prototype boards (under 500 boards built per design per month), while 20.6% design high-volume boards.
Almost a third of respondents work for large firms; 32.9% say their company has more than 1,000 employees. Another 8.2% work for firms with 500 to 999 employees; 16.2% say 100 to 499, while 20.9% say their company employs 10 to 99. Only 13.6% of those responding work for firms employing two to nine employees; 8.2% employ one employee.
FIGURE 1. Type of board typically designed (n = 451).
While the designers’ companies have a large number of staff, the number of PCB design tool users within each firm (not just at their site) is small. Forty-five percent of respondents say the company has a total of two to five design tool users, and 20.4% say only one user. Another 12.1% say six to 10; 8.5% have 11 to 20; 6.7% say 21 to 50, and 2.2% have 51 to 100 design tool users in the firm; 5.1% say more than 100 users.
Schematic users in the company vary from one (14.1%); two to five (33.9%); six to 10 (12.7%); 11 to 20 (12.1%); 21 to 50 (8.9%); 51 to 100 (4.7%), and more than 100 (13.6%).
Circuit simulation users within the company are few, with about 70% having five or fewer. Some 9.4% of firms have six to 10; 6% have 11 to 20; 4.1% have 21 to 50; 1.9% have 51 to 100, and 8.2% have more than 100.
While the three largest CAD tool providers by revenue are Mentor Graphics, Zuken and Cadence, the respondents didn’t match up so neatly. Indeed, Cadence, Mentor Graphics and Altium users were most represented, with about 63% using OrCAD or Allegro, 31.7% using PADS, and 39.2% using Altium Designer (Figure 2). Another 19% use Expedition, while 8% use Board Station. (Users were allowed to list more than one tool.)
FIGURE 2: CAD tools currently used (respondents could choose more than one answer).
Just under 10% of the respondents indicated they use Zuken’s CR 5000, CADStar or CR 8000. Other CAD software such as CadSoft Eagle PCB, DipTrace, Intercept Technology Pantheon, and CADInt PCB Freeware received small percentages.
Not surprisingly, board designers are busy. When asked how many new designs they complete per year, 21.4% indicated more than 25 (Figure 3), and another 13% design 15 to 25 boards.
FIGURE 3: New board designs per year (not including respins).
Slightly more than half the responding firms (53.5%) have librarians who manage their CAD libraries, according to the survey, and slightly more designers (53.3%) design PCBs with high density interconnects than those who don’t.
Designers Like Their CAD Vendors …
The main reason for the survey was to gauge how designers feel about their primary tools – and tool provider.
One might not believe this from user message boards, but designers are generally happy with their primary vendor’s customer service. They were asked to rate the service on a scale of 1 to 7, with 1 being “poor” and 7 being “excellent.” Nearly 54% of respondents rated their vendor a 5 or 6, and another 11.5% of designers gave them a 7. Just 16% rated their vendor as a 3 or less.
A little more than half of survey respondents (52.8%) say the primary tool vendor’s customer/technical support is worth the expense. Designers say they most often use vendor support annually (40.4%) or monthly (32.1%), with few using it weekly (7.8%); 19.6% say “never.”
Designers were asked to estimate the total cost of ownership to produce successful PCB designs (purchase price, maintenance, training, libraries, process development, interfaces, etc.). It was basically an even split between one to two times and three to four times the base purchase price (Figure 4).
FIGURE 4: Total cost of ownership to produce a successful PCB design, relative to the tool base price.
Designers would prefer to own their design tools outright (69%), with 40.4% of those saying they’d also like the option to upgrade. Slightly more than 31% of respondents say they’d prefer an annual subscription that includes maintenance.
When asked how much primary tool vendors charge for support and maintenance, the most common response was six to 10% of the tool price (Figure 5). However, few charge less than that: A combined 17.2% charge 5% or less for support and maintenance or don’t offer it.
FIGURE 5: Typical maintenance and support add-on charges.
A substantial number of designers use their primary EDA/CAD tool for analog design (87.4%), and a significant 92% use their primary CAD tool for digital design.
While almost half of respondents (49.8%) say they don’t design RF/microwave boards, one-third of those who do say they use their primary CAD tool for such designs. Other responses include Mentor’s HyperLynx (8.8%), Agilent ADS (4.8%), AWR Microwave Office (3.2%), and CST Microwave Studio (1.2%).
Designers aren’t quick to install upgrades, with 68.8% waiting to see if others have problems with the patches first.
… But Not Necessarily Their Autorouters
Designers were asked to rate a series of criteria in terms of their importance in a PCB design tool. Here’s how the answers stacked up:
Most designers say design reuse is either very important (37.8%) or moderately important (38%). Another 16.9% were “neutral.”
Designers are divided on the importance of concurrent design, or different areas of the board designed in parallel. Only 17.1% say it’s very important, while 24.5% believe it to be moderately important, and 23.1% say it’s of little importance. The “not considered” category received 11.5% of responses.
Constraints (high-speed design rules, impedance, skew, topology, etc.) are high on the designers’ list, with 59.2% saying they are very important. Another 25.7% say they are moderately important. The rest of designers are “neutral” on the subject (9.7%), think they aren’t important (4.3%), or don’t consider them (1.1%).
A small number of the designers who responded to the survey find autorouting speed to be a very important feature of their CAD software (6.1%). However, 22.6% say it is moderately important, while 25.7% are neutral on the topic. Another 21% say it is of little importance, and 24.6% say it is “not considered.”
Autorouting accuracy is a somewhat different story, as 28.9% find it to be very important; 16.9% say moderately important; 16.3% are “neutral”; 12% say it is of little importance, and 26% responded “not considered.”
About 68% of designers say analysis features are either very important (27.6%) or moderately important (40.5%). Another 21.5% are neutral, while 10.4% either don’t believe they are important or don’t consider them.
DfM/DfT/DfA support is a priority, as more than 75% of designers list it as either very important (38.7%) or moderately important (36.4%). Less than 10% say it’s not important or don’t consider it.
Respondents find ease of design visualization another top priority, with 87.5% saying it’s either very important (51.7%) or moderately important (35.8%). A small percentage are neutral on the topic (11.1%), and only 6 of the 451 designers say it’s not important or “not considered.”
Also a priority is the ability to integrate with other EDA tools. Nearly 77% of designers say this is very important (40%) or moderately important (36.9%). Only 5.6% responded with “of little importance” (4.1%) or “not considered” (1.4%). Another 17.6% are neutral.
More than 91% of designers find ease of data importing/exporting to be very important (65.9%) or moderately important (25.3%).
Library creation is also paramount, with more than 93% listing it as very important (75.1%) or moderately important (18.2%).
Placement is another key element in PCB design software. Nearly 90% of designers say it’s very important (61.6%) or moderately important (28.2%).
A whopping 96.2% of respondents think user interface is very important (75.4%) or moderately important (20.8%).
Yet another important feature is manual/interactive routing, with 89.4% finding it to be very important; 8.3% say moderately important.
Design rule definitions also top the list, with 97.3% of survey respondents saying they are very (78.6%) or moderately important (18.7%). Design rule verification is also a high priority, with “very important” (82.7%) and “moderately important” (14.4%) eclipsing other responses.
Answers for mechanical and documentation capabilities were split between very important (46.3%) and moderately important (43.8%). Only 1.8% find them to be of little importance, while 7.2% are neutral.
What is apparent is that designers tend to agree on the important features they want out of their CAD tools, specifically DfM/DfT/DfA support, ease of design visualization, integration with other EDA tools, user interface, data importing and exporting, and library creation, to name a few. We did not ask how often designers change tools; that might be something to ask next time. It will be interesting to see if these numbers shift over time.
FIGURE 6. PCB design experience.
FIGURE 7. Number of employees.
FIGURE 8. Firms’ PCB design tool users.
FIGURE 9. Firms’ schematic users.
FIGURE 10. Firms’ circuit simulation users.
FIGURE 11. Company librarians who manage CAD libraries.
FIGURE 12. Volume of boards designed.
FIGURE 13. HDI used to design PCBs.
FIGURE 14. Primary EDA/CAD tool used for analog design.
FIGURE 15. Primary CAD tool used for digital design.
FIGURE 16. Preference for tool ownership.
FIGURE 17. Frequency of primary tool vendors’ customer/technical support.
FIGURE 18. Is primary tool vendors’ support worth the expense?
FIGURE 19. Upgrade installation.
FIGURE 20. RF/microwave design software tool.
FIGURE 21. Ratings for primary vendors’ customer service (1 = poor, 7 = excellent).
FIGURE 22. Satisfaction with vendors’ customer service (1 = poor, 7 = excellent).
Chelsey Drysdale is senior editor of PCD&F (pcdandf.com).
The answer to the question “Is it ever okay to put open vias in BGA pads?” is simply “no.”
It’s no, no, no, no, not ever!!! That makes it easy. No technique to worry about. No tolerances. Nothing. Just don’t put an exposed via in a BGA pad. The only options are to put the via between the pads, with a complete solder mask dam between the pad and the via, or to have the vias filled and plated over at the board house. It doesn’t matter if the BGA is sub-millimeter pitch or larger-than-millimeter pitch. Nothing but metal is allowed on the BGA pad.
Now, other components offer more flexibility and thus require some choices and guidelines. Reader Andy B. asked about large components, such as voltage regulators, where the manufacturer has recommended vias to connect the thermal pad to the ground plane, or to additional thermal area on the back side of the PCB.
The easy answer is to just treat it like a QFN and read our various suggestions1 surrounding that form factor. Having the extra room does allow for additional flexibility, but if the vias are open, they still run the risk of sucking solder to the other side of the PCB. You can sometimes get away with really tiny vias.2 But it’s not best practice.
It’s really a matter of tradeoffs. Some say to never fill or cap the via because doing so might impede the thermal transfer. Well, power chip manufacturers, you shouldn’t rely on unbuildable design to meet product specs. Vias can be filled with thermally conductive material. Or the via can be capped with solder mask.2 Just make the via cap as small as possible – 100 to 125µm larger than the via.
Finally, segment the paste stencil layer. Solder paste on top of an open via or even on top of a masked via could be asking for trouble. In this power MOSFET footprint (Figure 1), there are four vias (which will be capped) between the openings of the stencil under the part. The original copper stops at the blue dashed lines. I’ve added the extra thermal area as a copper pour with vias to give additional cooling area, another technique sometimes used with power parts. The vias in the extra thermal area do not need to be filled, as they are in the solder mask covered area, not the paste covered area.
Ed.: Read Duane’s blog each week at circuitsassembly.com/blog.
References
1. Duane Benson, “Large Via In Pad,” June 11, 2010, circuitsassembly.com/blog/?p=992.
2. Duane Benson, “Open Small Vias,” Sept. 21, 2011, circuitsassembly.com/blog/?p=2195.
Duane Benson is marketing manager at Screaming Circuits (screamingcircuits.com). His column appears bimonthly.