Not a Downgraded Version: Nvidia's Groq Inference Chip Expected to Enter China in May, Potentially Bypassing GPU Export Restrictions

According to 1M AI News monitoring, Reuters cited two sources saying that Nvidia is preparing to launch the Groq 3 LPU inference chip in the Chinese market, with availability expected as early as May. The sources emphasized that the chip is "not a downgraded version or one specifically tailored for the Chinese market." This marks the first product Nvidia has brought to China since acquiring AI inference chip company Groq for roughly $17 billion in late 2025, and it forms part of a China chip strategy separate from the previously approved resumption of H200 GPU production.

The Groq 3 LPU is an inference-specific coprocessor with 500 MB of on-chip SRAM and inference bandwidth of up to 150 TB/s, but relatively low floating-point throughput, which makes it poorly suited to model training. According to an analysis in Electronic Engineering Journal, this architecture may put its peak performance below the thresholds of current U.S. export controls (TPP < 21,000 and DRAM bandwidth < 6,500 GB/s), thereby sidestepping the restrictions that apply to GPUs such as the H200. In Nvidia's original plan, however, the Groq LPU is meant to be paired with the Vera Rubin GPU, which cannot be exported to China (at a recommended ratio of roughly 25:75), so the Chinese version would need to be made compatible with other systems for independent operation, and its real-world performance remains to be seen.
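The threshold logic described above can be sketched as a simple check. This is an illustration only: the two limit values are the figures reported in the article, the function name is hypothetical, and the sample TPP number is an assumption (the article gives no TPP figure for the chip).

```python
# Hypothetical sketch: checking a chip's specs against the reported U.S.
# export-control thresholds (TPP < 21,000 and DRAM bandwidth < 6,500 GB/s).
# The limit values come from the article; everything else is illustrative.

def under_export_thresholds(tpp: float, dram_bandwidth_gbps: float) -> bool:
    """Return True if both metrics fall below the reported control thresholds."""
    TPP_LIMIT = 21_000          # Total Processing Performance threshold
    DRAM_BW_LIMIT_GBPS = 6_500  # DRAM bandwidth threshold in GB/s
    return tpp < TPP_LIMIT and dram_bandwidth_gbps < DRAM_BW_LIMIT_GBPS

# Example: an LPU-like part with an assumed low TPP and negligible DRAM
# bandwidth (its 150 TB/s figure is on-chip SRAM bandwidth, not DRAM).
print(under_export_thresholds(tpp=5_000, dram_bandwidth_gbps=0))  # → True
```

Note that the 150 TB/s figure cited for the chip is on-chip SRAM bandwidth; the control threshold, as reported, targets DRAM bandwidth, which is one reason an SRAM-heavy design could fall under the limit.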
