Nationwide "lobster raising" craze, financial institutions keep a "cool" head
Recently, as the open-source AI agent "OpenClaw" (nicknamed "养龙虾," or "raising lobsters," after its red lobster icon) has surged in popularity online, authorities have issued a series of risk alerts.
On March 10, the National Internet Emergency Center issued a security risk warning regarding OpenClaw; previously, the Ministry of Industry and Information Technology also stated that “Lobster (OpenClaw)” can easily cause security issues such as cyberattacks and information leaks when configured improperly or by default.
It is worth noting that, compared to the widespread enthusiasm in the consumer market, feedback from financial institutions has been notably “calmer.”
Most Financial Institutions Have Not Deployed It
"Currently, we haven't tested integrating this agent into our operations; we're still cautious," said an employee at a city commercial bank in North China. "Some clients have asked about it, but our firm has neither adopted OpenClaw for business use nor allowed employees to use it," added an employee at a securities firm in the same region.
"Our department's office terminals prohibit deploying such open-source AI agents; personal phones are not monitored for now," said a risk-control staff member at a joint-stock bank in South China. He mentioned that some colleagues had installed OpenClaw on private devices to test it, concluded it carried certain security risks, and later uninstalled it.
From feedback across multiple banks and brokerages, it appears that financial institutions generally adopt a wait-and-see attitude toward open-source AI agents like OpenClaw.
"OpenClaw requires permissions including, but not limited to, access to the local file system, calls to external service APIs, and system-level and extension permissions, which go far beyond those of conversational AI. Both institutions and individuals should remain cautious," said a technical staff member in the fintech department of a joint-stock bank.
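The kind of gatekeeping this implies can be sketched as a simple allowlist check run before an agent is deployed. Everything below is illustrative: the `Permission` enum and `check_permissions` function are hypothetical names, not OpenClaw APIs, and the approved set reflects the conservative posture the staff member describes (conversational use only).

```python
# Hypothetical sketch: gating an agent's requested permissions against an
# institution-approved allowlist before the agent is allowed to run.
# All names here are illustrative, not actual OpenClaw interfaces.
from enum import Enum, auto

class Permission(Enum):
    READ_LOCAL_FILES = auto()
    CALL_EXTERNAL_API = auto()
    SYSTEM_EXTENSION = auto()
    CHAT_ONLY = auto()

# A conservative allowlist a risk-control team might approve:
# conversational use only, no file-system or system-level access.
APPROVED = {Permission.CHAT_ONLY}

def check_permissions(requested: set) -> tuple[bool, set]:
    """Return (allowed, denied): allowed is True only if every
    requested permission is on the approved allowlist."""
    denied = requested - APPROVED
    return (not denied, denied)

# An agent of the kind described would request far more than chat:
requested = {
    Permission.READ_LOCAL_FILES,
    Permission.CALL_EXTERNAL_API,
    Permission.SYSTEM_EXTENSION,
}
allowed, denied = check_permissions(requested)
print(allowed)      # False: the request exceeds the allowlist
print(len(denied))  # 3 permissions would each need explicit sign-off
```

The point of the sketch is that denial is the default: any permission not explicitly approved is surfaced for review rather than silently granted.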
"The core reason is that the financial industry operates under strict regulation and hard risk bottom lines. OpenClaw's current end-to-end automation capabilities are severely mismatched with financial compliance standards," said Wang Pengbo, senior analyst at Broadcom Consulting. He emphasized that the seriousness and security requirements of the financial sector are red lines that cannot be crossed, which sets it fundamentally apart from other fields.
Differentiated Development of Industry AI Applications
In fact, even before OpenClaw's popularity surged, the banking sector had already been exploring and applying AI agents. Industrial and Commercial Bank of China, Shanghai Pudong Development Bank, WeBank, and others have disclosed self-developed, enterprise-level AI agents applied to office work, customer acquisition, risk control, and other financial scenarios.
According to McKinsey's "2025 Global Banking Industry Annual Report," AI agents will in the future be integrated throughout the banking workflow: one agent performs a task and outputs results; a second reviews the output, identifies weaknesses, and suggests optimizations; a third submits the results for final human approval.
The report indicates that in human-AI collaboration, human “review” remains indispensable: humans must take ultimate decisions, oversee quality, handle anomalies, and manage risks and compliance.
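The three-stage workflow described above can be sketched in a few lines. This is an illustrative toy, not the report's implementation: the agent functions are stubs standing in for real model calls, and the key property is structural, namely that nothing is marked approved without an explicit human decision.

```python
# Illustrative sketch of the three-stage workflow described in the report:
# a worker agent, a reviewer agent, and mandatory human sign-off.
# The agent functions are stubs; a real system would call model backends.
from dataclasses import dataclass, field

@dataclass
class Draft:
    task: str
    output: str = ""
    review_notes: list = field(default_factory=list)
    approved: bool = False

def worker_agent(task: str) -> Draft:
    # Stage 1: perform the task and produce an output.
    return Draft(task=task, output=f"draft result for: {task}")

def reviewer_agent(draft: Draft) -> Draft:
    # Stage 2: review the output, flag weaknesses, suggest optimizations.
    if "draft" in draft.output:
        draft.review_notes.append("verify figures against source data")
    return draft

def human_approval(draft: Draft, approver_signs_off: bool) -> Draft:
    # Stage 3: a human makes the final decision; nothing ships without it.
    draft.approved = approver_signs_off and bool(draft.output)
    return draft

draft = human_approval(
    reviewer_agent(worker_agent("summarize loan portfolio risk")),
    approver_signs_off=True,
)
print(draft.approved)  # True only because a human explicitly signed off
```

Because `approved` defaults to `False` and is set only inside `human_approval`, the human stage cannot be skipped by accident, which mirrors the report's point that human review remains indispensable.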
This trend aligns with current industry practices.
"We observe that the current digital transformation at banks, consumer finance companies, and payment institutions mainly plays a supporting role rather than blindly pursuing full automation. This pragmatic approach fits finance's strongly regulated environment and current technological and business realities," Wang Pengbo said.
He pointed out that different financial institutions focus on different aspects of AI application: banks mainly use AI for risk approval, customer marketing, post-loan management, and intelligent customer service; consumer finance companies focus on using AI to optimize risk models, improve credit approval efficiency, and enhance post-loan collection accuracy; payment institutions mainly deploy AI for transaction fraud detection, anti-money laundering, real-time risk interception, and transaction monitoring.
“These areas are either non-core support functions or fields where AI can play a fundamental role with manageable risks. This approach avoids the compliance and security risks previously mentioned and sidesteps core conflicts in business openness,” he explained. From a financial industry perspective, the core value of open-source AI agents is cost reduction and efficiency improvement—automating repetitive, tedious processes like customer service responses, advertising writing, data entry, and basic compliance checks—saving labor costs and boosting productivity.
Innovation Advancing in Parallel with Compliance and Security
Even as AI technology makes financial institutions more efficient, it also raises technical concerns.
Recently, the Sichuan branch of the People's Bank of China imposed an administrative penalty on a bank for violating fintech management regulations, issuing a warning and a fine of more than 300,000 yuan.
Xue Hongyan, a special researcher at Suzhou Commercial Bank, said that financial institutions' concerns about open-source AI agents center on data privacy, regulatory compliance, and R&D costs.
"On data privacy, the high sensitivity of financial data conflicts with AI agents' need to collect large amounts of data, and vulnerabilities in open-source code can be exploited. On regulatory compliance, the opacity of AI models conflicts with requirements for traceability and auditability, and tracing the provenance of third-party components is difficult. On costs, the R&D spending needed for local adaptation, security hardening, and correcting errors caused by model hallucinations may exceed the expected benefits," Xue said.
It is foreseeable that the deep integration of AI agents will continue to advance alongside the digital transformation of banking.
For example, Nanjing Bank has partnered with external vendors to deploy a one-stop AI workstation called HiAgent, which has already implemented over 20 high-quality AI agents. The bank also launched the “Big Model Double Hundred Plan,” aiming to fully empower frontline operations with AI and train frontline staff to become heavy users of AI.
Wang Pengbo believes that for open-source AI agents to enter core financial scenarios, six issues must be addressed:
1. Algorithms must be interpretable and traceable, with no black boxes, meeting strict regulatory and security standards.
2. Responsibilities and accountability boundaries must be clearly defined, matching the seriousness of the financial industry.
3. The shortcomings of large AI models must be addressed: reduce common errors, deepen intelligence, and ensure instructions are executed accurately.
4. Data compliance must be ensured, protecting sensitive user information from leaks.
5. Commercial interests must be balanced, finding a middle ground between open-source openness and institutions' core interests so that organizations are motivated to open up environments and APIs.
6. The right of manual intervention must be retained to prevent irreversible risks.
Layout: Liu Junyu
Proofreading: Liao Shengchao