Nvidia Plans to Launch Data Center Infrastructure in Space - ForkLog: Cryptocurrencies, AI, Singularity, Future

# Nvidia Plans to Launch Space-Based Data Center Infrastructure

Nvidia announced the development of a computing platform for orbital data centers. CEO Jensen Huang revealed this at GTC 2026.

“Computing in space is the final frontier, and it’s already here. As satellite constellations expand and we venture deeper into the galaxy, intelligence must be where data is generated,” said the entrepreneur.

In a press release, Nvidia stated that several companies will use the Vera Rubin Space-1 module, which includes IGX Thor and Jetson Orin, in space missions. The chips are specially designed “for conditions with severe size, weight, and power constraints.”

Huang emphasized that the company is working with partners on a new computer for orbital data centers, though significant technical challenges remain at this stage.

“There is no convection in space—only radiation. So, we need to figure out how to cool such systems. Many engineers are working on this,” he said.

The construction of data centers to meet the growing demand for AI is linked to rising electricity prices. One solution is to place computing power in space—where there is unlimited space and constant solar energy. However, high launch costs remain a significant obstacle.

In February, SpaceX filed a request with the U.S. Federal Communications Commission to deploy a constellation of 1 million satellites for data centers in orbit.

The project involves creating a low Earth orbit (LEO) network of data centers connected via laser links. The filing uses bold phrases like “the first step toward a Type II civilization on the Kardashev scale.”

In 2026, California startup Aetherflux plans to deploy low-orbit satellites acting as solar mini-farms that beam energy from space to Earth via lasers, using SpaceX rockets for launch.

In November 2025, Google announced its intention to create a satellite system in Earth’s orbit to harvest solar energy and power data centers. In the same month, research group 33FG estimated that by 2030, orbital AI computing will be cheaper than on Earth.

## Trillion-Dollar Orders

Huang stated that the expected order volume for Blackwell and Vera Rubin chips will reach $1 trillion by 2027.

Last year, the company estimated potential revenue from these two chip generations at $500 billion. However, after last month’s earnings report, CFO Colette Kress noted that growth in 2026 could exceed previous estimates.

According to Huang, demand for Nvidia solutions is increasing from startups and large corporations.

“Gaining more computing power allows for generating more tokens and increasing revenue,” he said.

## Autonomous Vehicles

Nvidia is expanding partnerships in autonomous vehicle development. The company announced new agreements with Hyundai Motor, Nissan Motor, Isuzu, BYD, and Geely.

The agreements concern the Drive Hyperion platform, which helps automakers develop and integrate driver-assistance features and Level 4 autonomous driving capabilities.

“We have long been working on self-driving cars. The ChatGPT moment for autonomous vehicles has already arrived,” Huang stated.

Currently, there are no vehicles on the market that can operate completely without human control. However, companies like Waymo already offer Level 4 taxi services.

Most current driver-assistance systems operate at Level 2, where the driver must continuously monitor the vehicle.

Drive Hyperion includes model training in data centers, large-scale simulations, and onboard computing systems. Current clients of the platform include Aurora Innovation, Nuro, Sony Group, Uber, Stellantis, and Lucid Group.

## Other Announcements

At GTC 2026, Huang introduced the Groq 3 Language Processing Unit (LPU), the first chip from startup Groq, which Nvidia acquired in December 2025 for $20 billion. Shipments are expected in Q3.

He also announced the Groq 3 LPX server rack, consisting of 256 LPUs. It is designed to work alongside the Vera Rubin system, with deliveries expected later in 2026. Huang said the rack can increase token-to-watt efficiency for Rubin by 35 times.

“We combined two processors with completely different characteristics: one for high throughput, the other for low latency. This doesn’t negate the need for a lot of memory. So, we will simply add many Groq chips to expand its available capacity,” Huang explained.

Nvidia also showcased a prototype called Kyber, a next-generation server architecture. It will consist of 144 GPUs arranged vertically to increase computational density and reduce costs.

Kyber will be part of the Vera Rubin Ultra system, with deliveries scheduled for 2027.

Nvidia’s CEO also introduced a developer toolkit to create and test new AI systems on company hardware. He demonstrated the NemoClaw stack, designed specifically for OpenClaw.

Recall that in March, Huang dismissed the idea of AI as a “job killer.”
