Google plans to collaborate with Marvell Technology to develop two AI inference chips, speeding up its move away from reliance on Broadcom.

According to Omdia research, Google is in talks with Marvell Technology to develop two custom chips aimed at improving AI model efficiency. The move signals Google's accelerated effort to diversify its AI hardware suppliers, reduce its dependence on long-term partner Broadcom, and lower costs.

The two chips are a new Tensor Processing Unit (TPU) dedicated to AI inference and a Memory Processing Unit (MPU) designed to work alongside existing TPUs. Google plans to produce around 2 million MPUs, with design work expected to be completed as early as next year. Inference chips are primarily responsible for running pre-trained models and are central to powering commercial products such as autonomous agents.

Broadcom is currently Google's sole long-term design partner for TPUs and charges a hefty fee for each chip produced. As TPU demand grows, Google is mitigating risk by bringing in Marvell Technology and MediaTek. Marvell Technology previously helped the startup Groq design inference chips and has technical expertise in high-performance memory management and network interfaces.
