Elon Musk's statement implies Claude Opus may have 5 trillion parameters, ten times the size of Grok 4.2.

Crypto World News reports that on April 10, a reply by Elon Musk on the X platform unexpectedly sparked heated speculation about the parameter scale of Anthropic’s flagship model.

In response to a user’s follow-up asking about the parameter count of Grok 4.2, Musk confirmed: “0.5 trillion total parameters. Currently, Grok is half of Sonnet and one-tenth of Opus. In terms of its size, this is a very powerful model.” If Grok 4.2 is “one-tenth of Opus,” as Musk said, then Claude Opus would have approximately 5 trillion parameters and Claude Sonnet would have about 1 trillion.
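To make the extrapolation explicit, the short sketch below works through the arithmetic implied by Musk's stated ratios. All figures are in trillions of parameters and are estimates derived from his reply, not official Anthropic or xAI data.

```python
# Extrapolating model sizes from Musk's stated ratios.
# All values are estimates derived from his X reply, not official figures.

GROK_4_2_PARAMS_T = 0.5  # "0.5 trillion total parameters"

# "Grok is half of Sonnet and one-tenth of Opus"
sonnet_estimate_t = GROK_4_2_PARAMS_T * 2   # ~1 trillion
opus_estimate_t = GROK_4_2_PARAMS_T * 10    # ~5 trillion

print(f"Claude Sonnet ~ {sonnet_estimate_t:.1f}T parameters (estimate)")
print(f"Claude Opus   ~ {opus_estimate_t:.1f}T parameters (estimate)")
```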

It is worth noting that Anthropic has never publicly disclosed the parameter scale of any of its models. The figures above are only estimates derived from Musk’s remarks and are not official data.

Meanwhile, Musk revealed that xAI’s Colossus 2 supercomputing cluster is currently training 7 models simultaneously, with the largest reaching 100 trillion parameters, adding, somewhat cryptically: “There are still some catch-up efforts to be made.” If the extrapolation holds, Claude Opus, at 5 trillion parameters, would top the current roster of known deployed models, while the 100 trillion-parameter model being trained by xAI would become an important variable in the next round of the AI arms race.
