Verification: yesterday we inferred the V4 architecture from the TileKernels core code; three key points are now confirmed, one remains unconfirmed.

According to Beating monitoring, DeepSeek open-sourced the TileKernels core kernel library yesterday, and we inferred V4's core architectural components from the production-grade kernels it contains. Today, with the release of the V4 model card, we verified those inferences one by one, as follows:

- mHC (Manifold-Constrained Hyper-Connections): yesterday we speculated that V4 does not use ByteDance's original Hyper-Connections but DeepSeek's improved mHC. The model card confirms that V4 uses Manifold-Constrained Hyper-Connections. Hit.
- MoE architecture with Top-k expert routing: TileKernels includes the complete MoE dispatch and combine kernels. The model card confirms that V4 is an MoE model. Hit. (A minimal routing sketch follows this list.)
- FP4 + FP8 mixed precision: the library includes FP4 and FP8 quantization kernels. The model card confirms that weights are stored in mixed FP4 + FP8. Hit. (A toy quantization sketch also follows.)
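For readers who want to see what Top-k expert routing means mechanically, here is a minimal, self-contained PyTorch sketch. It is generic and illustrative: the expert count, top_k value, and gating function are assumptions, not TileKernels' kernels or V4's actual configuration.

```python
import torch
import torch.nn.functional as F

def topk_route(hidden, gate_weight, top_k=2):
    """Generic top-k MoE routing: score every token against every
    expert, keep the top_k experts per token, and renormalize the
    kept gate probabilities. Illustrative sketch only; production
    MoE dispatch kernels fuse this with token permutation on-GPU."""
    # hidden: (num_tokens, d_model); gate_weight: (num_experts, d_model)
    logits = hidden @ gate_weight.t()                  # (tokens, experts)
    probs = F.softmax(logits, dim=-1)
    gate_vals, expert_ids = probs.topk(top_k, dim=-1)  # per-token picks
    gate_vals = gate_vals / gate_vals.sum(-1, keepdim=True)
    return expert_ids, gate_vals                       # the dispatch plan

# Example: route 4 tokens of width 8 across 16 hypothetical experts.
tokens = torch.randn(4, 8)
gate_w = torch.randn(16, 8)
ids, gates = topk_route(tokens, gate_w)
print(ids.shape, gates.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```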
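Likewise, here is a toy illustration of quantized weight storage: blockwise absmax scaling into PyTorch's FP8 (e4m3) dtype. The block size and API shape are assumptions, not TileKernels' FP4/FP8 kernels; FP4 is omitted because PyTorch has no native FP4 dtype.

```python
import torch

def blockwise_fp8_quant(w, block=128):
    """Quantize a weight tensor to FP8 (e4m3) in contiguous blocks,
    storing one absmax-derived scale per block. A generic sketch of
    quantized storage, not TileKernels' actual kernel."""
    assert w.numel() % block == 0, "pad weights to a block multiple"
    w = w.reshape(-1, block)
    # 448 is the largest finite value representable in float8_e4m3fn.
    scale = (w.abs().amax(dim=1, keepdim=True) / 448.0).clamp(min=1e-12)
    q = (w / scale).to(torch.float8_e4m3fn)
    return q, scale

def blockwise_dequant(q, scale):
    return q.to(torch.float32) * scale

# Round-trip a random weight and inspect the quantization error.
w = torch.randn(4, 256)
q, s = blockwise_fp8_quant(w)
err = (blockwise_dequant(q, s).reshape_as(w) - w).abs().max()
print(f"max abs error: {err:.4f}")
```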

The only miss was Engram (the conditional memory module). Yesterday we had already noted that the V4 specifications disclosed by Yifan Zhang made no mention of Engram, and we hedged our wording accordingly. The V4 model card likewise does not mention Engram.

The model card also reveals new components that TileKernels did not cover: a hybrid attention mechanism (CSA + HCA) is the core of V4's major leap in long-context efficiency. At a 1M-token context, inference FLOPs are only 27% of V3.2's, and the KV cache is only 10%. For training, V4 has switched to the Muon optimizer.
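Muon is a publicly documented optimizer (Keller Jordan et al., 2024): it smooths each 2-D weight's gradient with momentum, then approximately orthogonalizes the update via a Newton-Schulz iteration before applying it. The sketch below uses the public reference coefficients; the learning rate, momentum, and any V4-specific modifications are assumptions.

```python
import torch

def newton_schulz(G, steps=5, eps=1e-7):
    """Approximately orthogonalize G (push its singular values toward 1)
    using the quintic Newton-Schulz iteration from the public Muon
    reference implementation."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)           # Frobenius-normalize first
    transposed = G.shape[0] > G.shape[1]
    if transposed:                     # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(weight, grad, momentum_buf, lr=0.02, beta=0.95):
    """One Muon update for a single 2-D weight: momentum accumulation,
    then orthogonalized descent. Hyperparameters are illustrative."""
    momentum_buf.mul_(beta).add_(grad)
    weight.add_(newton_schulz(momentum_buf), alpha=-lr)

# Example: one update on a random 64x32 weight.
w = torch.randn(64, 32)
g = torch.randn(64, 32)
buf = torch.zeros_like(w)
muon_step(w, g, buf)
```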
