Press Releases

CAMBRIDGE, UK – Artificial Intelligence is transforming the world as we know it. From the victory of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the remarkable generative abilities of OpenAI's ChatGPT, the complexity of AI training algorithms is growing at a startlingly fast pace, with the amount of compute needed to run newly developed training algorithms appearing to double roughly every four months. To keep pace with this growth, hardware for AI applications is needed that is not just scalable – allowing for longevity as new algorithms are introduced while keeping operational overheads low – but is also able to handle increasingly complex models at a point close to the end-user.

Drawing from the “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge” reports, IDTechEx predicts that the growth of AI – both for training and inference within the cloud and for inference at the edge – is due to continue unabated over the next ten years, as our world and the devices that inhabit it become increasingly automated and interconnected.

The why and what of AI chips

The notion of designing hardware to fulfill a certain function, particularly if that function is to accelerate certain types of computations by taking control of them away from the main (host) processor, is not a new one; the early days of computing saw CPUs (Central Processing Units) paired with mathematical coprocessors known as Floating-Point Units (FPUs). The purpose was to offload complex floating-point mathematical operations from the CPU to this special-purpose chip, which could handle such computations more efficiently, thereby freeing the CPU up to focus on other things.

As markets and technology developed, so too did workloads, and so new pieces of hardware were needed to handle these workloads. A particularly noteworthy example of one of these specialized workloads is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).

Just as computer graphics called for a different type of chip architecture, the emergence of machine learning has created demand for another type of accelerator, one capable of efficiently handling machine learning workloads. Machine learning is the process by which computer programs use data to make predictions based on a model and then optimize that model to better fit the provided data by adjusting the weightings used. Computation therefore involves two stages: training and inference.

The first stage of implementing an AI algorithm is the training stage, in which data is fed into the model and the model adjusts its weights until it fits the provided data appropriately. The second stage is the inference stage, in which the trained AI algorithm is executed and new data (not seen during training) is classified in a manner consistent with the training data.
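The two stages described above can be illustrated with a minimal sketch. This toy example fits a line by gradient descent (training) and then applies the fitted model to unseen input (inference); the data, learning rate, and model form are illustrative assumptions, not anything drawn from the reports.

```python
def train(xs, ys, lr=0.01, epochs=2000):
    """Training stage: repeatedly adjust the weights (w, b) until the
    model y = w*x + b fits the provided data."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x_new):
    """Inference stage: apply the trained model to new data."""
    return w * x_new + b

# Train on points lying on y = 2x + 1, then infer on an unseen input.
w, b = train([0, 1, 2, 3], [1, 3, 5, 7])
prediction = infer(w, b, 10)  # close to 21 for a well-fitted model
```

Note how the training loop performs the same small computation over and over; real training algorithms repeat such updates millions of times, which is precisely why the training stage dominates compute requirements.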

Of the two stages, training is the more computationally intense, given that it involves performing the same computations millions of times (training for some leading AI algorithms can take days to complete). As such, training takes place within cloud computing environments (i.e., data centers), where large numbers of chips capable of the parallel processing required for efficient algorithm training are used. CPUs process tasks in a serialized manner: one execution thread starts once the previous thread has finished, and large, numerous memory caches are used to minimize latency so that most of a thread's running time is dedicated to processing. Parallel processing, by comparison, involves multiple calculations occurring simultaneously, with lightweight execution threads overlapped such that latency is effectively masked. The ability to compartmentalize a problem and carry out multiple calculations simultaneously is a major benefit for training AI algorithms, as well as in many instances of inference. By contrast, the inference stage can take place within both cloud and edge computing environments. The aforementioned reports detail the differences between CPU, GPU, Field Programmable Gate Array (FPGA), and Application-Specific Integrated Circuit (ASIC) architectures and their relative effectiveness in handling machine learning workloads.
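The serial-versus-parallel distinction can be sketched as follows: the same workload is split into independent partitions that can be processed simultaneously and then combined. The partitioning scheme and worker count here are illustrative assumptions, and note that Python threads do not actually speed up CPU-bound work (because of the interpreter lock); the sketch shows only the structural idea of compartmentalizing a computation.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum_of_squares(values):
    """Serialized processing: handle one item at a time, in order."""
    total = 0
    for v in values:
        total += v * v
    return total

def parallel_sum_of_squares(values, workers=4):
    """Parallel processing: partition the data, process the chunks
    concurrently, then combine the partial results."""
    chunk = max(1, len(values) // workers)
    partitions = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(serial_sum_of_squares, partitions)
    return sum(partials)

data = list(range(1000))
result = parallel_sum_of_squares(data)  # same value as the serial version
```

Because the partitions are independent, each one can run on separate hardware; on a GPU, thousands of such partitions execute at once, which is the property that makes parallel architectures well suited to training.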

Within the cloud computing environment, GPUs currently dominate and are predicted to continue to do so over the next ten-year period, given Nvidia’s dominance in the AI training space. For AI at the edge, ASICs are preferred, given that chips are more commonly designed with specific problems in mind (such as object detection within security camera systems). Digital Signal Processors (DSPs) also account for a significant share of AI coprocessing at the edge, though it should be noted that this large figure is primarily due to Qualcomm’s Hexagon Tensor Processor (found in its modern Snapdragon products) being a DSP. Should Qualcomm redesign the HTP such that it strays from being a DSP, the forecast would skew heavily in favour of ASICs.

AI as a driver for semiconductor manufacture

Chips for AI training are typically manufactured at the most leading-edge nodes (where nodes refer to the transistor technology used in semiconductor chip manufacture), given how computationally intensive the training stage of implementing an AI algorithm is. Intel, Samsung, and TSMC are the only companies that can produce 5 nm node chips. Out of these, TSMC is the furthest along with securing orders for 3 nm chips. TSMC has a global market share for semiconductor production that is currently hovering at around 60%. For the more advanced nodes, this is closer to 90%. Of TSMC’s six 12-inch fabs and six 8-inch fabs, only two are in China, and one is in the USA. The rest are in Taiwan. The semiconductor manufacture part of the global supply chain is therefore heavily concentrated in the APAC region, principally Taiwan.

Such a concentration comes with a great deal of risk should this part of the supply chain be threatened in some way. This is precisely what occurred in 2020, when a number of complementing factors (discussed further in the “AI Chips: 2023–2033” report) led to a global chip shortage. Since then, the largest stakeholders (excluding Taiwan) in the semiconductor value chain – the US, the EU, South Korea, Japan, and China – have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances arise that results in an even more severe chip shortage. This is shown by the government funding announced by these major stakeholders in the wake of the global chip shortage. These government initiatives aim to spur additional private investment through the lure of tax breaks and part-funding in the way of grants and loans. While many of the private investments displayed pictorially below were made prior to the announcement of such government initiatives, others have been announced in their wake, spurred on by the incentives these initiatives offer.

A major reason for these government initiatives and additional private spending is the potential to realize advanced technology, of which AI is a prime example. The manufacture of advanced semiconductor chips fuels national and regional AI capabilities. The possibilities for autonomous detection and analysis of objects, images, and speech are so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots), and to models of national governance and security, that the development of AI hardware and software has become a primary concern for government bodies that wish to be at the forefront of technological innovation and deployment.

Growth of AI chips over the next decade

Revenue generated from the sale of AI chips (including the sale of physical chips and the rental of chips via cloud services) is expected to rise to just shy of US$300 billion by 2034, at a compound annual growth rate (CAGR) of 22% from 2024 to 2034. This revenue figure incorporates the use of chips for the acceleration of machine learning workloads at the edge of the network, at the telecom edge, and within data centers in the cloud. As of 2024, chips for inference purposes (both at the edge and within the cloud) comprise 63% of revenue generated, with this share growing to more than two-thirds of the total revenue by 2034.
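For readers who want to relate the two headline numbers: given the ~US$300 billion 2034 figure and the 22% CAGR, the implied 2024 base can be back-calculated. The 2024 base below is our own arithmetic from those two quoted figures, not a number stated in the report.

```python
# Invert the compound-growth relation: final = base * (1 + cagr) ** years.
def implied_base(final_value, cagr, years):
    """Back-calculate the starting value from an ending value and a CAGR."""
    return final_value / (1 + cagr) ** years

# ~US$300 billion in 2034 at 22% CAGR over 10 years implies a 2024 base
# of roughly US$41 billion.
base_2024 = implied_base(300e9, 0.22, 10)
```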

This is in large part due to significant growth at the edge and telecom edge, as AI capabilities are harnessed closer to the end-user. In terms of industry verticals, IT & Telecoms is expected to lead the way for AI chip usage over the next decade, with Banking, Financial Services & Insurance (BFSI) close behind, followed by Consumer Electronics. Of these, the Consumer Electronics vertical is expected to generate the most revenue at the edge, given the further rollout of AI into consumer products for the home. More information regarding the industry vertical breakout can be found in the relevant AI reports.

For more information regarding key trends and market segmentations with regard to AI chips over the next ten years, please refer to the two reports: “AI Chips: 2023–2033” and “AI Chips for Edge Applications 2024–2034: Artificial Intelligence at the Edge”.

WEST CHICAGO – Anaya Vardya, president of American Standard Circuits Sunstone, announces that his company has signed a licensing agreement with Precision Circuit Technologies (PCT), an operating unit of LCP Medical Technologies LLC. ASC Sunstone is now a licensed manufacturing partner for PCT’s high-performance, high-density, multilayer Laminated Liquid Crystal Polymer (LCP) technology.

This technology will give ASC Sunstone the ability to provide customers with PCBs featuring:

  • Up to 30 layers of LCP with sequential laminations
  • Sub-25-micron lines and spaces
  • Additive processing with 1-micron control of circuit geometries for 1-2% impedance control
  • Low-loss performance beyond 120 GHz wireless and 112/224/448 Gbps high-speed digital
  • Solid-copper, full-metal stacked vias through all layers
  • High-speed flex, rigid-flex, and rigid boards, RF/MW modules, and advanced semiconductor package substrates

This relationship establishes ASC Sunstone as an LCP production technology leader with a reliable manufacturing process that has been lacking in the industry. ASC and PCT will jointly support customers with dedicated applications and engineering support to provide sophisticated, complex, high-performance circuits designed to meet next-generation system speeds and circuit density requirements. The material set performance, design-rule-based engineering, and precision control of circuit geometries provide unmatched capabilities compared to ABF, polyimide, and other laminate constructions. No other technology and material set can provide this level of density and performance across flex, rigid-flex, rigid boards, and package substrates, and ASC will be the US standard.

This agreement enhances ASC’s position as a supplier of Ultra High-Density products in the RF/Microwave, Flex, Medical and Digital markets using advanced materials and creates a synergy with LCP Medical’s mutual customer relationships.


As a leading manufacturer of technology interconnect solutions, ASC Sunstone can now offer its customers a new set of solutions based on the unique properties of LCP materials, both in the focused markets it serves today and in the new opportunities those materials open up.

When making the announcement, Mr. Vardya said, “As a longtime admirer of Jim Rathburn’s technology and contributions to our industry, it is a true pleasure to finally be working with him. We feel that being able to offer our customers the high level of technology that LCP materials provide is another giant step in our effort to be the leading independent PCB fabricator on the market today. LCP technology combined with our Ultra HDI capabilities makes ASC the go-to company for companies looking for tomorrow’s solutions today.”

Added James Rathburn, founder and president of LCP Medical Technologies LLC, “We feel that American Standard Circuits is the perfect partner for Precision Circuit Technologies’ PCB Liquid Crystal Polymer technology. They have all the experience, equipment, and, most importantly, the desire not only to produce this technology of the future but also to take it to market. We expect great things to come out of this partnership going forward, not only for our company and ASC Sunstone but for the industry as a whole.”

LYON, FRANCE – Yole Group is taking part in Chiplet Summit 2024, which takes place February 6-8 at the Santa Clara Convention Center.

The Chiplet Summit program is now available on the event website.

Chiplets improve chip yields and reduce costs while still providing the performance of a large monolithic chip. Designers can mix and match chiplets, use the process technologies best suited to specific functions, take advantage of chiplet IP, and simplify moves to new process nodes, avoiding wafer waste and manufacturing defects. Chiplets are the key to producing the extremely high-density, high-performance chips required for today’s networking, storage, AI/ML, analytics, media processing, HPC, and virtual reality applications.

Chiplet Summit covers the latest architectures, development platforms, and applications. It includes pre-conference seminars, keynotes, annual updates, and paper and panel sessions. It covers all aspects of chiplet development, including design, interconnect, packaging, integration, and testing.

At Chiplet Summit 2024, Tom Hackenberg, Principal Analyst, Computing, and Ying-Wu Liu, Technology & Cost Analyst, Computing, from Yole Group will present a “Market Research Update” in the leadoff plenary, “Chiplet Markets Are Rising: Where and When?”

In addition, Tom Hackenberg heads an expert panel on market research in the closing session, “Chiplets in 2028 and How We Got There,” which will discuss market trends, and he will provide a keynote for the Open Compute Platform Track. The panel discussion will be moderated by Bill Wong, Technology Editor for Electronic Design. In addition to Tom, panelists include John Shalf from Lawrence Berkeley Laboratory, Bapi Vinnakota from the Open Compute Project, and Jawad Nasrullah from Palo Alto Electron.

Be sure to meet Yole Group’s computing analysts at Chiplet Summit 2024, and book a meeting with them now.

ATLANTA — ECIA is pleased to announce the 2024 Executive Conference Core Committee. Previously announced Chair Ken Bellero, Schaffner, and Co-Chair Maryellen Stack, Sager Electronics, have finalized the core committee of returning and new members, who have already hit the ground running for the best conference yet.

  • Pam Berigan, Sager Electronics
  • Lori Bruno, LuscomBridge
  • Tobi Cornell, Kruvand Associates
  • Robert Derringer, Crouzet
  • Juliet Fajardo, TDK-Lambda
  • Heather Fulara, DigiKey
  • Bob Garcia, Ferrari Technical Sales
  • Nicolle Ladouceur, ROHM Semiconductor

“I am really looking forward to working with this talented group of people,” commented Stephanie Tierney, ECIA’s Director of Member Communications and Member Engagement. “2023 was a record-breaking conference on many levels: sponsorships, attendance, survey responses and more were all ahead of the year before. This group has already vowed to surpass those records, and it will be fun working with them to meet the ambitious goals they have set for the 2024 Conference,” she continued. “All I can say is, ‘Watch out for another great event planned by this fantastic committee!’”

Subcommittees will be announced soon. For more information, go to https://www.eciaexecconference.org/. Sponsorship opportunities have been released and are filling up fast!

WASHINGTON – Printed circuit boards are prioritized in a December 12, 2023 report from the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party:

  • Of the report’s 150 recommendations, #1 is to strengthen domestic manufacturing of printed circuit boards and reduce reliance on adversary nations for microelectronics

Department of Defense:

The recently released National Defense Industrial Strategy focuses on creating resilient supply chains including microelectronics. The DoD has identified the need to expand domestic production and invest in a skilled workforce.

Congress has also obligated approximately $150 million for investments in microelectronics using the Defense Production Act. Two recent awards include:

  • $46.2 million to GreenSource Fabrication LLC via the Defense Production Act Investment Program. The award will enhance existing production capabilities at a manufacturing facility for state-of-the-art integrated circuit (IC) substrates, high-density interconnect (HDI) and ultra-high-density interconnect (UHDI) boards, and advanced packaging.
  • $39.9 million via the Defense Production Act Investment (DPAI) Program to Calumet Electronics Corporation to enhance capabilities to produce High-Density Build-Up (HDBU) substrates, which include High-Density Interconnect Printed Circuit Board (PCB) cores and HDBU build-up layers.

While these actions are helpful, America needs to make a sustained, robust investment in manufacturing the microelectronics that power all aspects of modern life. The Printed Circuit Board Association of America and IPC have jointly called on Congressional appropriators to increase DPAI funding for microelectronics.

“Report after report coming out of Washington highlights our overreliance on Asia for microelectronics and the pressing need to make more printed circuit boards and substrates in America,” said PCBAA Executive Director David Schild. “The current situation puts our national and economic security at risk. Congress and other agencies of government must act to reduce our dependency on foreign nations for the components that power national security and critical infrastructure systems.”

SAN DIEGO, CA – Pre-registration is now open for Chiplet Summit's second annual event on February 6-8 at the Santa Clara Convention Center. Hundreds of registrants and many key exhibitors will be at the premier showcase for chiplet technology. All major chip makers have adopted chiplets as their approach to producing devices at leading-edge nodes.

The event will cover the latest architectures, development platforms and methods, and applications. Expert panels will discuss best choices, likely breakthroughs, optimization, and long-term trends. Pre-conference seminars will focus on chiplet basics, packaging methods, interfaces, design methods, working with foundries, AI in chiplet design, and the Open Chiplet Economy. Other features include a high-powered superpanel on “How Can Chiplets Accelerate Generative AI Applications?”, an expert table session (with beer and pizza), and an annual update of technologies and markets.

Industry-leading keynotes offer designers insight into trends and roadmaps. Speakers represent Applied Materials, Synopsys, Micron, Alphawave Semi, Hyperion Technologies (introducing a new packaging technology with larger packages and a 1000W power budget), and Open Compute Project (OCP). Their focus is on methods for designing processors, coprocessors, communications devices, and graphics and AI chips.

“We now see the full impact of chiplets. They allow designs to include off-the-shelf components as well as drop-ins from older process nodes,” said Chuck Sobey, Summit General Chair. “They are ideal for applications such as AI that demand more processing power, faster response time, and the ability to handle more data. Chiplet Summit will help designers make the right decisions for current projects and future needs.”

Chiplet Summit also features innovative products from industry leaders such as Applied Materials, Synopsys, Alphawave Semi, Open Compute Project, Achronix, Arm, Cadence, and Siemens EDA.

Visit Chiplet Summit:
ChipletSummit.com

