Technology and market drivers for computing, networking and storage.

Ed.: This is the first of an occasional series by the authors of the 2017 iNEMI Roadmap. This information is excerpted from the Roadmap, which is available now from iNEMI (http://community.inemi.org/content.asp?contentid=51).

Three segments of high-end systems that place demands on technology are computing, networking and storage. The industry is converging, and there is significant overlap in technology, bandwidth demands, power constraints, thermal requirements and environmental conditions. This month, we look at areas where each segment is unique.

Computing. In a high-end computing environment, the first priority is optimizing the performance of the processor chip. Microprocessor chips are still increasing the number of cores they hold to maximize computing throughput. Cooling capability is critical to maximizing the frequency of the cores and minimizing the power demand. Tight integration of components, including processor, memory and I/O bridge chips, is a key consideration in minimizing both latency and the power applied to signaling. The number of channels and the bit rate of the signals connecting processors to each other, to DRAM, and to I/O bridge chips must scale together and with the processing performance of the microprocessors.

Thus, three primary signal interfaces define the development of the computer subsystem. Processor-to-processor buses are proprietary to maximize frequency and minimize power and latency. Processor-to-DRAM interfaces are defined by DDR standards to permit optimal interoperability among suppliers, and processor-to-I/O bridge connections use standards such as PCI-E or InfiniBand.

Networking. In the networking space, the hardware challenges are different for enterprise and data center applications. In enterprise networking, where the interface is primarily from the networking equipment to users, the challenge is to drive lower-cost hardware. On the other hand, in the data center space, where the interface is from one system of networking equipment to another, the challenges are to drive the highest possible bandwidth and performance at reasonable cost.

In the enterprise networking space, the drive for lower cost is pushing technologies toward higher integration on the ASIC (SoC), resulting in fewer power rails, fewer PCB layers, lower-cost PCB material, high-speed channel routing with minimal discontinuities and crosstalk, real-estate savings through dense routing, and fewer components on the system board. In the next five to 10 years, we will see higher system integration on the ASIC and in packaging to drive lower cost and higher performance. SiP, PoP, 2.5D/3D packaging and Si photonics technologies will dominate this space.

In the data center networking space, the challenge is to drive higher data rate transfer from machine to machine at reasonable cost. This is driving technologies for improving the performance of system components, such as ASICs with high-speed SerDes (serializers/deserializers) scaling from 28 to 56 to 112Gbps, integration with Si photonics, optical packaging and interconnect, optical backplanes, cable backplanes, backplane connectors, optical transceivers, and PCBs with high-performance materials. We foresee this being achieved in the next five to 10 years.
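To illustrate why the SerDes rate progression matters, the sketch below totals the aggregate bandwidth of a hypothetical switch ASIC at those per-lane rates; the 256-lane count is an assumed figure chosen for the example, not a roadmap value.

```python
# Illustrative only: aggregate switch-ASIC bandwidth as SerDes rates scale.
# The 256-lane count is an assumption for the example, not a roadmap figure.
LANES = 256

for gbps_per_lane in (28, 56, 112):
    total_tbps = LANES * gbps_per_lane / 1000  # Gb/s -> Tb/s
    print(f"{LANES} lanes x {gbps_per_lane} Gb/s = {total_tbps:.1f} Tb/s aggregate")
```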

Storage. Storage systems are growing quickly in capacity to handle the fast-growing amount of data generated. Spinning disk drives provide the bulk of data storage, and system implementations are evolving to provide higher resiliency, ensuring the data are always available, and improved redundancy, ensuring the data are never lost. Data signaling interfaces such as SAS, which transport data to and from storage, are increasing in data rate, measured in GT/s. Flash is quickly making advances in high-end storage, where its low latency and low power compared to disk fill a key role in meeting the demands on new storage systems.
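For readers less familiar with GT/s figures, the short sketch below shows how a raw transfer rate translates into usable bandwidth once line-coding overhead is accounted for; the 12GT/s rate and 8b/10b encoding are example values chosen for illustration, not roadmap figures.

```python
# Illustrative only: converting a raw transfer rate (GT/s) into usable
# bandwidth after line-coding overhead. The 12 GT/s rate and 8b/10b
# encoding are example values, not roadmap figures.
def usable_gbps(gt_per_s, data_bits=8, coded_bits=10):
    """Payload bandwidth per lane after encoding overhead."""
    return gt_per_s * data_bits / coded_bits

print(usable_gbps(12))  # -> 9.6 Gb/s of payload per lane
```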

Technology Challenges

The packaging technologies that will be integrated into high-end systems will be those that can be developed successfully with acceptable cost and acceptable risk of adoption.

TSV. Through-silicon vias are enabling 2.5D silicon interposers and 3D chip stacking, providing high-density interconnect and, therefore, high-bandwidth capability between components. Glass interposers, with through-glass vias (TGV) providing advanced connectivity, may also be a factor for some applications. Memory modules using these technologies have already been introduced, and applications will expand.

Advanced packaging. System-in-package (SiP) and package-on-package (PoP) technologies provide the capability to optimize cost and function in a package. Integration of voltage regulation and silicon photonics with processor or bridge chips will increase. High-end systems will adopt these advanced package technologies because more interconnect pins, more memory and more cores placed in close proximity enable high-bandwidth interconnect within the existing power envelope.

Optics. Optical interconnect will continue to be used more broadly. First, transceivers and active optical cables (AOC) will see wider use for in-frame communication, potentially replacing copper interconnect in backplanes or cables when the cost, power and bandwidth tradeoffs justify the switch to optical. Integrating optical devices into packaging to reduce trace length and, thus, the power demand of high-bandwidth interfaces will require advanced packaging and will leverage SiP and PoP components for increased integration at the package level.

Si photonics. The desire for higher levels of optical integration will favor the adoption of silicon photonics. System-level tradeoffs among cost management, integration density and power limits must be carefully considered as development of silicon photonics is pursued.

Electrical connectors for packages and cards. Electrical interconnection will continue to be the dominant interconnection for short-reach communication. Signaling standards under development are in discussion to go beyond 50 Gb/s per channel. Electrical connectors for PCB and cable communication delivering low insertion loss, flat impedance profiles and minimal crosstalk will maximize the reach of copper interconnect at an acceptable bit error rate. The speed of adoption of the higher data rates will depend on the ability to equalize the channels within the existing power envelope, while the channel cost-performance, as measured in $/Gb/s, is reduced over time. Cost-performance is strongly impacted by bandwidth density. Bandwidth density can be expressed as channels × Gb/s/channel per unit area for a package on a PCB, or channels × Gb/s/channel per unit length for card-edge interconnection. The ground pins required to shield signals and provide a continuous return path increase the effective number of pins per channel. Thus, even where the channels per unit area or per unit length remain constant, the number of pins may increase to effectively shield the signals.
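A minimal sketch of that bookkeeping follows. The connector geometry and signal-to-ground ratios are hypothetical values chosen for illustration, not roadmap data.

```python
# Rough sketch of the bandwidth-density and pin-count accounting described
# above. All geometry and signal:ground ratios are hypothetical values
# chosen for illustration.
def edge_bandwidth_density(channels, gbps_per_channel, edge_length_mm):
    """Bandwidth density for card-edge interconnection, in Gb/s per mm."""
    return channels * gbps_per_channel / edge_length_mm

def pins_per_channel(signal_pins, signal_to_ground_ratio):
    """Effective pins per channel once shielding ground pins are counted."""
    grounds = signal_pins / signal_to_ground_ratio
    return signal_pins + grounds

# Example: 64 differential channels at 56 Gb/s across a 100 mm card edge.
print(edge_bandwidth_density(64, 56, 100), "Gb/s per mm")  # -> 35.84

# A differential pair (2 signal pins) with a 2:1 signal:ground ratio needs
# 3 pins per channel; tightening the shielding to 1:1 pushes that to 4.
print(pins_per_channel(2, 2))  # -> 3.0
print(pins_per_channel(2, 1))  # -> 4.0
```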

Low-loss electrical for packages and cards. Reduced dielectric loss materials are increasingly used for high-speed electrical channels, and demand for those materials will increase as speeds above 50 Gb/s per channel are adopted. However, low-loss electrical channels also require attention to the processing and design of all the elements of packages and printed circuit boards. Copper roughness, via stubs, antipad size and shape, and internal via and PTH design are all as important as the loss characteristics of the dielectric material. Coreless packages and thin laminates for improved via and PTH design will significantly reduce discontinuities for high-speed channels. The footprint at the electrical connector will require special design to avoid becoming the bandwidth-limiting factor in a package-to-board, backplane or cable interconnection. This footprint design includes via or PTH diameter, length and stub; antipad size and shape; routing escapes from the vias or PTHs; and land sizes. Reference plane gaps, holes and interconnections to PTHs that create return path discontinuities are also part of the channel design.
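To put the dielectric-loss point in rough numbers, the sketch below applies a common signal-integrity rule of thumb (not taken from the roadmap) for stripline dielectric attenuation, roughly 2.3 × f(GHz) × dissipation factor × √Dk in dB/inch; the material values used are typical, illustrative figures.

```python
# A common signal-integrity rule of thumb (not from the roadmap): stripline
# dielectric attenuation is roughly 2.3 * f[GHz] * Df * sqrt(Dk) dB/inch.
# Material values below are typical, illustrative figures only.
import math

def dielectric_loss_db_per_inch(freq_ghz, df, dk):
    return 2.3 * freq_ghz * df * math.sqrt(dk)

# Compare a standard laminate with a low-loss laminate at 14 GHz, roughly
# the Nyquist frequency of a 56 Gb/s PAM4 channel.
for name, df, dk in (("standard laminate", 0.020, 4.0),
                     ("low-loss laminate", 0.004, 3.5)):
    print(f"{name}: ~{dielectric_loss_db_per_inch(14, df, dk):.1f} dB/inch at 14 GHz")
```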

Efficient power distribution. To efficiently address these technology challenges, power efficiency must also continue to improve. The channel shielding requirements demand a greater number of layers and vias for the high-speed channels. Improving power efficiency demands lower-impedance power distribution: less resistance for lower I²R loss and less inductance for faster regulation. This creates a trend toward more metal and toward placing regulation closer to the loads, competing for space with short-reach signaling and increased signal shielding. These trends also leverage the aforementioned advanced packaging concepts of TSV, SiP and PoP and are part of the economic driver for adopting this technology.
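A minimal sketch of the I²R argument follows, assuming illustrative load power, voltages and path resistance (none of these figures come from the roadmap): for a fixed load power, distribution loss scales with the square of the delivered current, which is why converting from a higher voltage close to the load pays off.

```python
# Minimal sketch of the I^2*R argument. Load power, voltages and path
# resistance are assumed, illustrative values only.
def distribution_loss_w(load_power_w, supply_voltage_v, path_resistance_ohm):
    """Power dissipated in the distribution path for a given load."""
    current_a = load_power_w / supply_voltage_v
    return current_a ** 2 * path_resistance_ohm

# A 200 W load fed through the same 1 mOhm path at 1 V versus 12 V.
print(distribution_loss_w(200, 1.0, 0.001))   # -> 40.0 W lost at 1 V
print(distribution_loss_w(200, 12.0, 0.001))  # -> ~0.28 W lost at 12 V
```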

Ed.: Work has begun on the 2019 Roadmap. iNEMI membership is not required to participate in the roadmap. Those interested may visit iNEMI at http://community.inemi.org/2019_rm.

Dale Becker is chief engineer of electronic packaging integration at IBM and chair, High-End Systems Product Emulator Group (PEG) chapter of the 2017 iNEMI Roadmap.
