Nvidia is entering the AI large model market
Source: Beijing Business Today
If someone asks who the biggest winner of the AI era is, the answer is almost undoubtedly NVIDIA. With its scarce, high-demand H100 chips, it has been like the person selling shovels during a gold rush—watching AI companies around the world fight fiercely while quietly reaping huge profits, its market value soaring. The latest financial documents show that NVIDIA plans to invest a total of $26 billion over the next five years to advance the development of open-source AI large models. In other words, NVIDIA is no longer content with just selling shovels; it is now digging for gold itself.
Major Investment
On March 12, according to financial documents submitted by NVIDIA to the U.S. Securities and Exchange Commission (SEC), the company will invest a total of $26 billion (about 178.8 billion RMB) over the next five years to advance open-source AI large model research. NVIDIA is also officially shifting from a “chip manufacturer” to a “full-stack top AI research lab” strategy.
According to the plan, the $26 billion investment is not focused on developing a single model but covers the entire industry chain of open-source AI large models. The funds will be deployed in stages over the next 18 to 24 months, with the first self-developed open-source AI models expected to be released between late 2026 and early 2027.
In comparison, this scale of investment far exceeds the $3 billion spent training GPT-4 by OpenAI. Technologically, NVIDIA has chosen an “open weight” approach—an intermediate path between OpenAI’s fully closed-source model and Meta’s fully open-source Llama series.
Specifically, NVIDIA will publish key model parameters (weights), allowing companies and developers to download for free and run or fine-tune on their own devices or private clouds, meeting enterprise needs for data privacy, customization, and cost control. However, training data and code may not be fully disclosed.
Andy Konwinski, founder of the nonprofit Laude Institute and a computer scientist focused on promoting AI openness, describes NVIDIA’s investment as a milestone signal. “They are at the intersection of many open and closed AI projects,” Konwinski said. “This is an unprecedented statement of their commitment to openness.”
Industry analysts also point out that open-source strategies have longer-term commercial significance for NVIDIA. When releasing models, NVIDIA will publish weights and technical details, facilitating startups and researchers to modify and innovate based on its technology. This helps build a developer network around NVIDIA’s hardware ecosystem, further strengthening the market stickiness of its chips.
Competing with OpenAI
Since launching its first Nemotron model in November 2023, NVIDIA has successively released specialized models for robotics, climate modeling, and protein folding. Bryan Catanzaro, NVIDIA’s Vice President of Deep Learning Research, also revealed that NVIDIA recently completed pre-training a 550-billion-parameter model. The company’s core model development will focus on multi-modal, multi-domain frontier large models covering language, code, scientific computing, and intelligent agents.
Recently, NVIDIA launched a new generation open-source large language model, Nemotron 3 Super, designed for enterprise multi-agent systems, with a total of 128 billion parameters (only 12 billion active during inference) and native support for a 1 million token long context window. Unlike mainstream API access, NVIDIA has opened model weights, pre-training/post-training datasets, and the complete training scheme.
At 128 billion parameters, Nemotron 3 Super is roughly comparable to the largest version of OpenAI’s GPT-OSS. NVIDIA claims it scored 37 points on the AI Index comprehensive benchmark, compared to GPT-OSS’s 33 points.
Notably, NVIDIA also acknowledges that some Chinese models have scored higher than this. In addition, NVIDIA says Nemotron 3 Super was evaluated on a new benchmark called PinchBench, which specifically measures a model’s ability to control OpenClaw; in that test, Nemotron 3 Super ranked first.
On the technical side, NVIDIA has disclosed multiple innovative methods used in training this model, including architectures and training techniques to improve inference, long-context processing, and reinforcement learning responsiveness.
Catanzaro said, “NVIDIA is placing much greater emphasis on open-source model development than ever before, and we are making significant progress.”
On the ecosystem front, NVIDIA has partnered with major cloud service providers and hardware vendors such as Google Cloud Vertex AI, Oracle Cloud Infrastructure, Dell Technologies, and HPE. Integration with Amazon AWS Bedrock and Microsoft Azure is also underway. Software companies like CodeRabbit, Factory, and Greptile, as well as life sciences organizations Edison Scientific and Lila Sciences, have announced plans to incorporate this model into their intelligent workflows.
Redefining the Roadmap
For a long time, NVIDIA’s core strength has been in hardware chips, holding over 80% of the global AI chip market share. However, its influence in AI models has been relatively weak, with standards and training paradigms largely defined by OpenAI, Meta, and others.
This move into developing top-tier open-source models aims to define the technical route of AI models from the bottom up, making NVIDIA’s hardware architecture and software stack the industry standard, and driving demand for computing power through open-source models. If Nemotron becomes the mainstream foundational model for enterprise AI agents, large-scale deployment will still rely heavily on NVIDIA’s GPUs—while promoting openness at the model level, it also consolidates hardware demand.
Financial analysts predict that if NVIDIA can secure just 10% of the foundational model market while maintaining its hardware dominance, this could generate up to $50 billion in additional annual revenue within three years. Bryan Catanzaro stated that promoting the open-source ecosystem aligns with NVIDIA’s core interests, and that this massive investment is a strategic decision based on long-term industry insight rather than merely following the trend.
On Tuesday, NVIDIA CEO Jensen Huang also published a rare long-form blog post about AI—the seventh since 2016—systematically explaining the underlying logic of the AI industry. In the article, Huang defined a “five-layer architecture” of AI. He stated that the AI industry is still in its very early stages; despite billions of dollars invested, the true potential of AI has yet to be fully realized, and ongoing investments worth trillions of dollars are needed to improve the underlying infrastructure.
Huang pointed out that AI has become one of the most powerful forces shaping the world today. It is not just a single smart application or model but a fundamental infrastructure like electricity and the internet—operating on real hardware, energy, and economic foundations, capable of transforming raw materials into scalable intelligence. In the future, every company will use AI, and every country will build AI infrastructure.
Regarding concerns about job displacement caused by AI, Huang believes AI will not reduce jobs but will create many new employment opportunities, especially in infrastructure and skilled trades. The workforce needed for AI infrastructure—electricians, plumbers, steelworkers, network technicians, installers, and operators—is large, highly skilled, and well paid, and currently in short supply. Rather than causing unemployment, AI is filling significant worldwide labor shortages among truck drivers, nurses, accountants, and other professions.
Beijing Business Today Reporter Zhao Tianshu
Disclaimer: This message is reproduced from Sina’s partner media. Sina publishes this article to share information and does not endorse or verify its views. The content is for reference only and does not constitute investment advice. Investors operate at their own risk.
Editor: Gao Jia