The nationwide "lobster-raising" craze is sweeping the internet, but the banking industry is collectively "ignoring" it; experts say OpenClaw's high system permissions inherently conflict with financial compliance requirements.
Open-source AI agents like OpenClaw (nicknamed "Lobster") have recently drawn widespread attention across industries, yet the banking sector remains cautious about the "lobster-raising" trend. An official at the head office of a joint-stock bank told Daily Economic News that the bank recently received risk alerts from regulators regarding "Lobster."
Before OpenClaw’s popularity surged, banks had already been exploring and applying intelligent agents. Many banks are actively promoting the use of these agents in frontline operations to improve efficiency.
As a highly regulated industry, how can banks balance innovation and compliance in the face of AI technological advances?
OpenClaw, named for its icon resembling a red lobster, is also called “Lobster.” Installing and deploying it is colloquially referred to as “raising lobsters.” Unlike purely conversational AI like ChatGPT, OpenClaw integrates communication software and large language models, enabling it to autonomously perform complex tasks such as file management, email handling, and data processing on users’ local computers. It acts like a “digital employee,” which has attracted many users to experiment with practical applications.
As OpenClaw continues to gain popularity, security concerns are increasingly in the public eye. Recently, the Ministry of Industry and Information Technology and the National Internet Emergency Center issued risk alerts, warning users to exercise caution due to potential security risks associated with OpenClaw.
Amid the "lobster-raising" craze, the banking industry has remained notably "calm." Industry insiders recently revealed that a joint-stock bank's head office received a regulatory risk alert about "Lobster," and an official from a state-owned bank told Daily Economic News that their bank has neither deployed nor studied OpenClaw.
Why are banks cautious about OpenClaw?
“Unlike conversational AI, OpenClaw as an agent needs access to local files, external APIs, and even system-level permissions. This end-to-end automation can easily trigger cyberattacks or lead to leakage of core transaction data, which conflicts with the strict regulatory and zero-tolerance policies of banks,” said Wang Peng, deputy researcher at Beijing Academy of Social Sciences, in an interview on March 16.
Gao Chengfei, general manager of the IP Business Department at Zhanyou Marketing Consulting, shared similar views: “OpenClaw’s high system permissions are inherently at odds with financial compliance requirements.”
Gao explained that OpenClaw defaults to high-level permissions such as local file access and API calls. While this can improve work efficiency, multiple medium- and high-risk vulnerabilities have been publicly disclosed. Its plugin functions lack effective security review mechanisms, posing risks of malicious exploitation—such as stealing online banking passwords or payment keys. More critically, its autonomous execution capabilities could cause errors like unintended fund transfers or purchasing investment products. Since AI technology still lacks full explainability, it’s difficult to determine responsibility after automated actions. Additionally, data generated during operation could be transmitted to third parties, raising compliance risks when involving sensitive information like credit data or loan approval materials.
Therefore, Gao believes that in the short term, OpenClaw is more suitable for small-scale pilots in non-core business scenarios. Large-scale deployment should wait until key issues such as security control, clear responsibilities, and algorithm explainability are resolved.
Wang Peng suggests that banks are unlikely to adopt open-source OpenClaw directly but will instead absorb its technical approach. Future implementations are likely to take the form of "private deployment in restricted environments": within internal networks, banks would use self-developed or customized solutions to apply intelligent agents in non-core but sensitive scenarios such as office automation and risk management.
It’s worth noting that even before OpenClaw’s rise, banks had already been exploring intelligent agents. Many are actively promoting their use in frontline operations to improve efficiency.
For example, Nanjing Bank has partnered with Volcano Engine to explore large-scale deployment of intelligent agents in financial scenarios. They have launched a one-stop intelligent agent workstation called HiAgent, which has already deployed over 20 high-quality agents. These are deeply integrated into areas such as office work, operations, business development, and risk management.
How effective are these implementations? For instance, corporate relationship managers often spend significant time gathering information across multiple systems and platforms before visiting clients. A "one-page" pre-visit intelligent agent can automatically consolidate data from internal and external sources, perform cleaning, merging, and quality checks, and generate a comprehensive, accurate pre-visit report in about five minutes, cutting preparation time from roughly two hours. It has become a key tool during peak marketing periods.
According to KPMG's recent 2026 China Banking Outlook Report, analysis of public tender information and case studies shows that from January to November 2025, the number of large-model projects at banks trended upward overall, with a small peak in August. In the first half of the year, projects focused mainly on knowledge Q&A, and applications were sporadic. From July onward, the number of intelligent agent projects surged, especially in October and November, when project types across the board related to intelligent agent applications.
So, how should banks balance innovation and compliance when exploring intelligent agent applications?
On March 16, Fu Yifu, a special researcher at Su Commercial Bank, told Daily Economic News that when promoting intelligent agents in frontline operations, banks need to innovate management mechanisms, test new technologies in controlled environments, and ensure risks are measurable and controllable. They should strengthen data privacy protections and algorithm audits, follow the principle of “least privilege,” and avoid excessive collection of customer information. Maintaining close communication with regulators and participating in industry standard development can help identify compliance red lines early. Additionally, banks should establish manual review processes for key decisions made by intelligent agents to prevent errors. Embedding compliance requirements throughout the R&D process and cultivating multidisciplinary talent will help banks safely unlock the innovative potential of intelligent agents.