"How much computing power to buy? All of it": OpenAI co-founder says $110 billion still can't meet demand; pretraining has shifted toward joint cost optimization

BlockBeatNews

According to 1M AI News monitoring, OpenAI co-founder Greg Brockman said in an interview that AI programming capability took a step-change leap in December 2025. He measured progress with a test prompt he had kept for years: asking the AI to build a website that had taken him several months to complete when he first learned to program. Throughout 2025, the task still required multiple rounds of prompting and about four hours to finish; by December, a single prompt was enough, and the quality was good. He said the new model pushed the AI from “able to complete about 20% of tasks” to “about 80%,” a leap that forced everyone to “reorganize their workflows around AI.”

As for where the $110 billion in funding goes, Brockman likened buying compute to hiring salespeople: as long as the product has a scalable sales channel, each additional salesperson brings in more revenue. Compute, in his view, is not a cost center but a revenue center. He recalled a conversation with his team on the eve of ChatGPT’s launch: “They asked, ‘How much compute should we buy?’ I said, ‘All of it.’ They said, ‘No no no, seriously, how much should we buy?’ I said, ‘No matter how we build, we can’t keep up with demand.’” That judgment still holds today, he said, and compute procurement now has to be locked in 18 to 24 months in advance.

On how that compute is used, Brockman revealed that OpenAI no longer simply chases the largest possible pretraining run. Instead, it treats pretraining capability and inference cost as a joint optimization target: “You don’t necessarily want to go as large as possible, because you also have to account for the many downstream inference use cases. What you really want is the optimal trade-off between intelligence and cost.” But he explicitly pushed back on the claim that “pretraining no longer matters.” In his view, the smarter the base model, the more efficient the subsequent reinforcement learning and inference stages become, and large-scale centralized training still “absolutely” requires NVIDIA GPUs.
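To make that trade-off concrete, here is a toy sketch, not OpenAI's actual method: it assumes a Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta for model size N and training tokens D, prices training at roughly 6ND FLOPs and inference at roughly 2N FLOPs per token, then searches for the model size that hits a fixed quality target at the lowest combined training-plus-inference cost. The constants follow published scaling-law fits, while TARGET_LOSS and the lifetime inference volume T_INF are purely illustrative assumptions.

```python
# Toy sketch (not OpenAI's method): pick model size N and training tokens D
# to reach a target loss at minimum TOTAL compute, where
#   total = training FLOPs (~6*N*D) + lifetime inference FLOPs (~2*N per token).
# Loss curve L(N, D) = E + A/N**ALPHA + B/D**BETA, Chinchilla-style fit
# (constants in the spirit of Hoffmann et al., 2022; all figures illustrative).

E, A, B = 1.69, 406.4, 410.7      # loss-curve constants (illustrative)
ALPHA, BETA = 0.34, 0.28

TARGET_LOSS = 1.95                # desired model quality (assumption)
T_INF = 5e12                      # expected lifetime inference tokens (assumption)

def tokens_needed(n_params: float) -> float:
    """Training tokens D required for a model of size N to reach TARGET_LOSS."""
    residual = TARGET_LOSS - E - A / n_params**ALPHA
    if residual <= 0:
        return float("inf")       # this N cannot reach the target at any D
    return (B / residual) ** (1 / BETA)

def total_flops(n_params: float) -> float:
    """Training (~6ND) plus inference (~2N per token) FLOPs."""
    d = tokens_needed(n_params)
    return 6 * n_params * d + 2 * n_params * T_INF

# Grid-search model sizes from 1B to 1T parameters.
sizes = [10 ** (9 + i / 20) for i in range(61)]
best = min(sizes, key=total_flops)
print(f"cheapest N = {best:.3g} params, "
      f"D = {tokens_needed(best):.3g} tokens, "
      f"total = {total_flops(best):.3g} FLOPs")
```

Under these assumptions, a large expected inference volume pulls the optimum toward a smaller model trained on more tokens, which is precisely the “don’t necessarily go as large as possible” point: the cheapest model to train is not the cheapest model to serve.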
