Just realized something that's been bothering me about the whole Cursor situation. You know that $29.3B AI coding tool everyone's obsessed with? Turns out the brain powering Composer 2 isn't what you think it is.



So last week, developers started digging into the API responses and found something interesting in the model path: kimi-k2p5-rl-0317-s515-fast. Kimi K2.5. That's Moonshot AI's open-source model from China. Not exactly hidden in the fine print, but definitely not advertised either.
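The discovery amounts to reading the model identifier out of the API payload. A minimal sketch of that kind of provenance check, with a hypothetical response shape (the real Cursor API format isn't public):

```python
import json

# Hypothetical payload shape; the actual response format is an assumption.
response_body = json.dumps({
    "choices": [{"message": {"content": "..."}}],
    "model": "kimi-k2p5-rl-0317-s515-fast",
})

payload = json.loads(response_body)
model_id = payload.get("model", "")

# Crude provenance check: does the served model id mention a known base model?
known_bases = ["kimi", "deepseek", "qwen", "llama"]
matches = [b for b in known_bases if b in model_id.lower()]
print(matches)  # ['kimi']
```

Nothing sophisticated, which is the point: the identifier was sitting in plain sight for anyone who looked at the raw response.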

Cursor's VP of Developer Education acknowledged it a couple days later, saying about 25% of the computing power comes from the Kimi platform, the rest from their own training. Called the omission in the blog post "a mistake." Except this is the second time. When Composer 1 launched, people noticed it used DeepSeek's tokenizer—also never mentioned. At what point does it stop being a mistake?

Here's the thing though: using Kimi K2.5 is actually smart. The model is solid at code generation, it's open-source so acquisition costs are basically zero, and for a company focused on product layer and toolchain integration, it makes total business sense. The problem isn't the technical choice. It's the silence.

But there's a compliance issue people aren't talking about. Kimi K2.5 uses a modified MIT License with one specific requirement: if a commercial product has over 100M monthly active users or $20M+ in monthly revenue, you have to prominently display "Kimi K2.5" in the UI. Cursor's annualized revenue is reportedly around $2B, which works out to roughly $167M a month, more than 8x the threshold. The requirement is clear. It's been ignored.
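The back-of-envelope math, using the figures from the post:

```python
# Rough check of the license-threshold claim; revenue figure is the post's estimate.
annual_revenue = 2_000_000_000          # ~$2B annualized
monthly_revenue = annual_revenue / 12   # ~$166.7M/month
license_threshold = 20_000_000          # modified MIT clause: $20M+ monthly revenue
ratio = monthly_revenue / license_threshold
print(round(ratio, 1))  # 8.3
```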

I'm not a lawyer, but this matters because the software industry spent two decades learning to respect open-source licenses. We went from GPL lawsuits to SBOMs becoming standard practice. AI model licensing is probably still in early stages of that same journey. If companies can skip something as straightforward as adding a label, what about the harder stuff—data flows, model auditability, cross-border compliance?

There's a concept called "Trust Tax" that applies here. Users paying $20/month for what they think is cutting-edge proprietary tech, then finding out it's a free open-source model with tweaks? That trust cracks. Especially when Cursor already had pricing drama with the "Unlimited" Pro plan where people burned through monthly credits in three days.

The real question is what users actually pay for. If it's model capabilities, just call the Kimi API directly—way cheaper. If it's product experience and toolchain integration, then be clear about that instead of implying everything's self-developed. Apple doesn't pretend to manufacture their own chips. TSMC makes them. Nobody feels cheated because they know what they're actually paying for.

What's actually interesting here is the bigger structural shift: Chinese open-source models are becoming the invisible foundation of global AI applications. DeepSeek, Tongyi Qianwen (Qwen), Kimi—these are quietly powering stuff all over the world. Hugging Face's CEO literally said China's open source is "the biggest force shaping the global AI technology stack." Not exaggerating.

For enterprise users though, this creates a real problem. Your developers are routing code through models whose origins you don't even know. In regulated industries—finance, healthcare, government—that's a compliance nightmare. Data sovereignty, cross-border regulations, all of it becomes unclear. Some people call it "Shadow AI," like how Shadow IT used to be. Developers embed these models into IDEs and pipelines while security teams have no idea.

The software industry eventually solved this with SBOMs (Software Bill of Materials): a list of the components you use, their versions, and their known vulnerabilities. AI needs the same thing, and an AI-BOM is already being discussed in security circles. It should cover: the base model, training data sources and processing, the fine-tuning method, deployment details, and data flows.
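What might such an entry look like? A minimal sketch, with illustrative field names (this is not any existing AI-BOM schema, and the values below are stand-ins, not disclosed facts):

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative AI-BOM record; field names and values are assumptions, not a standard.
@dataclass
class AIBOMEntry:
    base_model: str
    base_model_license: str
    training_data: str
    fine_tuning: str
    deployment: str
    data_flows: list = field(default_factory=list)

entry = AIBOMEntry(
    base_model="kimi-k2.5",
    base_model_license="modified MIT (attribution required above usage thresholds)",
    training_data="undisclosed",
    fine_tuning="vendor post-training (details undisclosed)",
    deployment="vendor-hosted inference",
    data_flows=["user code -> vendor API"],
)
print(json.dumps(asdict(entry), indent=2))
```

Even a record this thin would answer the questions enterprise security teams currently can't.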

For developers choosing tools, this means auditing model sources the same way you audit dependency licenses. npm audit, pip check—those are standard. Model audit could be next. For AI vendors, proactively disclosing model sources isn't weakness, it's investing in long-term trust. First company to make AI-BOM standard might actually command a premium.
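A hypothetical "model audit" pass, analogous to `npm audit`, could flag license obligations mechanically. Everything here is illustrative (the tool, the fields, the revenue figure), a sketch of the idea rather than any real CLI:

```python
# Hypothetical model-dependency audit; names, thresholds, and figures are assumptions.
models_in_use = [
    {"name": "kimi-k2.5", "license": "modified-mit", "monthly_revenue_usd": 167_000_000},
    {"name": "in-house-model", "license": "proprietary", "monthly_revenue_usd": 167_000_000},
]

def audit(model: dict) -> list:
    """Return human-readable findings for one model dependency."""
    findings = []
    # Modified MIT clause described in the post: attribution above revenue threshold.
    if model["license"] == "modified-mit" and model["monthly_revenue_usd"] >= 20_000_000:
        findings.append(f"{model['name']}: prominent UI attribution required by license")
    return findings

report = [f for m in models_in_use for f in audit(m)]
print(report)  # ["kimi-k2.5: prominent UI attribution required by license"]
```

The check is trivial once the model dependency is declared; the hard part, as with SBOMs, is getting vendors to declare it at all.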

Bottom line: Kimi K2.5 is genuinely good. Moonshot's technical work deserves respect. Cursor's product expertise is real. The issue was never "a Chinese model was used." In an open-source ecosystem, good tech shouldn't have a national label. The issue is we weren't told. As these AI agents get woven deeper into our workflows, handling more code and data and decisions, we should at least know who's actually thinking behind the scenes.