Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Pre-IPOs
Unlock full access to global stock IPOs
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Still buying AI relay stations on Taobao? Whistleblower: At least dozens poisoned after Claude Code source code leak
Whistleblower’s Latest Research Reveals Security Risks Hidden in Commercial AI Intermediary Stations Following Claude Code Source Leak Incident
Claude Code Source Leak Whistleblower Uncovers Security Risks in AI Intermediary Stations
A recent research paper titled “Your Agent Is Mine” was published, with one of the authors being Chaofan Shou, the whistleblower who first exposed the Claude Code source leak incident.
This paper conducts a systematic security threat analysis of third-party API routers for large language models (LLMs), commonly known as intermediary stations, and reveals that these stations could become nodes for supply chain attacks.
What is an AI Intermediary Station?
Because calling LLMs consumes a large number of tokens, resulting in high computational costs, AI intermediary stations can cache repeated questions and background explanations, helping clients significantly reduce costs.
At the same time, these stations have automatic model allocation functions, dynamically switching between models with different billing standards and performance based on the difficulty of user questions, and can automatically switch to backup models if the primary server goes offline, ensuring overall service stability.
Intermediary stations are especially popular in China because the country cannot directly access certain overseas AI products, and due to enterprises’ demand for localized billing, these stations serve as important bridges connecting upstream models and downstream developers. Platforms like OpenRouter and SiliconFlow fall into this category.
However, seemingly cost-reducing and lowering technical barriers, intermediary stations hide significant security risks behind the scenes.
Image source: Research paper revealing AI intermediary supply chain attack risks
AI Intermediary Stations Have Full Access Rights, Becoming Supply Chain Vulnerabilities
The paper points out that intermediary stations operate at the application layer within network architecture, with full plaintext reading rights over JSON payloads during transmission.
Because there is a lack of end-to-end encryption integrity verification between clients and upstream model providers, intermediary stations can easily view and tamper with API keys, system prompt words, and model output tool invocation parameters.
The research team notes that as early as March 2026, the well-known open-source router LiteLLM was attacked via dependency confusion, allowing attackers to inject malicious code into the request processing pipeline, highlighting the vulnerability of this link.
Empirical Testing Shows Dozens of AI Intermediary Stations Exhibit Malicious Behavior
The research team purchased 28 paid intermediary stations on platforms like Taobao, Xianyu, and Shopify, and collected 400 free intermediary stations from public communities for in-depth testing. The results found that 1 paid station and 8 free stations actively injected malicious code.
Among the free stations tested, 17 attempted to use AWS bait credentials set up by researchers, and 1 directly stole cryptocurrencies from the researchers’ Ethereum wallets.
Further data shows that as long as intermediary stations reuse leaked upstream credentials or direct traffic to nodes with weaker security defenses, even seemingly normal stations can become part of the same attack surface.
During poisoning tests, the team found that these affected nodes processed over 2.1 billion tokens, exposed 99 real credentials in 440 sessions, and 401 sessions were fully autonomous, enabling attackers to inject malicious payloads directly and easily without complex trigger conditions.
Image source: Research paper testing over 400 intermediary stations, revealing dozens of malicious behaviors
Four Major Attack Techniques Revealed
The paper categorizes malicious intermediary station attacks into two main types and two adaptive evasion variants.
To evade routine security detection, attackers have further evolved dependency goal injection techniques, specifically altering package names in installation commands, replacing legitimate packages with malicious ones published in public registries with the same or confusing names, establishing persistent supply chain backdoors in target systems.
Another method involves conditional delivery, where malicious actions are triggered only under certain conditions, such as when request counts exceed 50 or when the system is in fully autonomous mode (YOLO mode), thus avoiding limited security checks.
Three Feasible Defense Measures
In response to supply chain poisoning attacks on AI intermediary stations, the paper proposes three practical defense strategies:
Call for Upstream Model Providers to Establish Cryptographic Verification
Although client-side defenses can currently reduce some risks, they cannot fundamentally address source identity verification vulnerabilities. As long as modifications by intermediary stations do not trigger client alerts, attackers can easily alter program semantics and cause damage.
To truly secure the AI agent ecosystem, upstream model providers must support cryptographic verification mechanisms. Only by cryptographically binding model outputs with the final commands executed on the client side can end-to-end data integrity be ensured, fully preventing supply chain risks from intermediary tampering.
Further Reading:
OpenAI’s Mixpanel Breach! Causing Data Leakage for Some Users, Beware of Phishing Emails
A Copy-Paste Error Causes 50 Million USD to Vanish! Crypto Address Poisoning Scam Reemerges—How to Prevent It