Samsung, Micron, Intel Compete for Nvidia's Major Orders: Samsung Manufactures LPU, Micron Mass-Produces HBM4

NVIDIA GTC 2026 is not just a product launch event but also a reshuffling of the supply chain landscape. As details of NVIDIA’s next-generation AI platform Vera Rubin gradually come into focus, the roles of the three major chip giants—Samsung, Micron, and Intel—are also emerging.

According to TrendForce, two supply chain developments drew the most attention: NVIDIA CEO Jensen Huang publicly confirmed for the first time that the Groq 3 LPU is being manufactured by Samsung, and Micron announced that its HBM4 entered mass production in the first quarter of 2026, dispelling earlier rumors that it had been excluded from the Vera Rubin supply chain. Both announcements directly affect the competitive landscape of the HBM market and the bargaining power of its suppliers.

Meanwhile, Intel also confirmed its partnership with NVIDIA at this event, stating that its Xeon 6 processors will support the computing power of the DGX Rubin NVL8 system. Looking further ahead, Wccftech reports that Intel may participate as a foundry for the packaging of NVIDIA’s 2028 next-generation Feynman GPU.

Samsung secures LPU foundry order, Huang confirms in person

Groq 3 is one of the most anticipated releases at this GTC. The high-speed, inference-oriented LPU will be integrated into the Vera Rubin platform, with shipments expected to begin in the second half of 2026. According to Korea’s Chosun Ilbo, Huang publicly confirmed at the event that Groq 3 will be produced at Samsung’s foundry, continuing the foundry agreement with Samsung that was already in place before NVIDIA’s $20 billion acquisition of Groq last year.

Technically, Groq 3’s design logic differs significantly from mainstream AI accelerators. According to Tom’s Hardware, each Groq 3 LPU contains 500MB of SRAM—super-fast memory typically used for CPU and GPU caches.

Although this capacity is much smaller than the 288GB HBM4 in Rubin GPUs, its bandwidth reaches approximately 150TB/s, far exceeding the 22TB/s provided by HBM4. For bandwidth-intensive AI inference tasks, this design is expected to greatly enhance inference performance.
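A rough back-of-the-envelope sketch shows why that bandwidth gap matters for inference: the time to stream a fixed amount of data scales inversely with bandwidth. The 150 TB/s and 22 TB/s figures below are from the article; the 500 MB working-set size simply mirrors the LPU’s SRAM capacity and is used here only for illustration.

```python
# Back-of-the-envelope: time to stream data at a given bandwidth.
# Bandwidth figures are from the article; the data size is illustrative.

SRAM_BW_TBPS = 150   # Groq 3 LPU on-chip SRAM bandwidth (per the article)
HBM4_BW_TBPS = 22    # Rubin GPU HBM4 bandwidth (per the article)

def stream_time_us(data_gb: float, bandwidth_tbps: float) -> float:
    """Microseconds to move `data_gb` gigabytes at `bandwidth_tbps` TB/s."""
    return data_gb / (bandwidth_tbps * 1000) * 1e6

# Streaming a 500 MB working set (the LPU's full SRAM capacity) once:
data_gb = 0.5
print(f"SRAM: {stream_time_us(data_gb, SRAM_BW_TBPS):.2f} us")   # ~3.33 us
print(f"HBM4: {stream_time_us(data_gb, HBM4_BW_TBPS):.2f} us")   # ~22.73 us
print(f"ratio: {SRAM_BW_TBPS / HBM4_BW_TBPS:.1f}x")              # ~6.8x
```

For bandwidth-bound inference, where each generated token requires re-reading the working set, that roughly 6.8x gap in transfer time translates fairly directly into token throughput.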

Samsung securing this foundry order means its role in NVIDIA’s supply chain extends from HBM4 memory supply to logic chip foundry, strengthening its strategic position on the Vera Rubin platform.

Micron HBM4 mass production begins, SK Hynix faces premium pressure

Micron officially announced at this event that its 36GB 12-layer stacked HBM4 has entered mass production for NVIDIA’s Vera Rubin platform in Q1 2026. The product features pin speeds over 11 Gb/s and bandwidth exceeding 2.8 TB/s, a 2.3x improvement over HBM3E, with over 20% better power efficiency. Additionally, Micron has begun sending 48GB 16-layer stacked HBM4 samples to customers, with single-chip capacity increasing by 33% compared to the 12-layer version.
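The quoted bandwidth figure is consistent with the quoted pin speed. As a sanity check, per-stack bandwidth is pin speed times interface width: the 11 Gb/s pin speed is from the article, while the 2048-bit per-stack interface width is an assumption taken from the JEDEC HBM4 standard rather than stated in the text.

```python
# Sanity check: per-stack HBM4 bandwidth from pin speed x interface width.
# Pin speed is the article's figure; the 2048-bit interface width is the
# JEDEC HBM4 per-stack figure, assumed here rather than stated in the text.

pin_speed_gbps = 11       # Gb/s per pin (article figure)
interface_bits = 2048     # data pins per HBM4 stack (JEDEC spec assumption)

bandwidth_gbs = pin_speed_gbps * interface_bits / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s ~= {bandwidth_gbs / 1000:.1f} TB/s")
# 2816 GB/s ~= 2.8 TB/s, consistent with the article's ">2.8 TB/s"
```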

The significance of this progress extends beyond Micron’s own technology roadmap. According to Joseilbo.com, Micron’s accelerated mass production will reduce the concentration of HBM suppliers, putting greater pressure on incumbent vendors in shipment allocation and pricing negotiations. The report notes that the core impact is not a direct bite out of SK Hynix’s market share but an erosion of the monopoly premium that formed during peak HBM demand.

Samsung also faces more direct competition. Joseilbo.com points out that although Samsung has publicly advanced HBM4 production to demonstrate its technological strength, Micron’s large-scale supply to NVIDIA’s Vera Rubin platform may shift the industry benchmark from “the ability to mass-produce” to “actual adoption scale,” posing a new challenge for Samsung.

Intel’s dual-track strategy and Feynman packaging collaboration emerge

Intel’s presence at this GTC is also notable. Intel officially confirmed that its Xeon 6 processors will support NVIDIA’s DGX Rubin NVL8 system. According to Tom’s Hardware, this product offers a 2.3x increase in memory bandwidth over the previous generation, providing scalable high-performance AI compute for next-generation GPU workloads.

Looking further ahead, Wccftech reports that NVIDIA intends to work with Intel as a foundry partner, leveraging Intel’s advanced packaging technologies, including EMIB, to package the Feynman GPU launching in 2028. Notably, the Feynman GPU die itself is expected to be manufactured on TSMC’s 1.6nm process, with Intel’s involvement limited mainly to the packaging stage.

The Feynman platform will also introduce 3D chip stacking technology, potentially marking NVIDIA’s first use of 3D stacking in GPU products. In terms of memory, NVIDIA plans to equip Feynman with customized HBM rather than standard next-generation HBM products, further strengthening its competitive edge in AI data center platforms.

Risk Disclaimer

Market risks are present; invest cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should evaluate whether any opinions, views, or conclusions herein are suitable for their particular circumstances. Invest at your own risk.
