Cryptocurrency Industry Outlook 2026: 17 Key Transformations from Payments to Privacy

Part One: Upgrading Payment and Financial Infrastructure
Stablecoin trading is experiencing explosive growth, reshaping traditional finance
Last year, stablecoin transaction volume reached $46 trillion, a staggering figure: more than 20 times PayPal’s annual transaction volume, nearly three times the total global volume of Visa, the largest payment network, and rapidly approaching the scale of the US ACH electronic transfer network.
Transaction speed is no longer a bottleneck: on-chain settlement completes within a second, at a cost of less than one cent. The real challenge lies in the “last mile”: how to seamlessly connect stablecoins to the real-world financial system.
New generation startups are filling this gap. They use cryptographic verification technologies to enable users to effortlessly transfer local account balances into digital assets; integrate regional payment networks for QR code transfers and real-time clearing; and even build global interoperable wallets and card platforms, making stablecoins a daily payment tool.
These innovations collectively drive one shift: digital dollars are moving from the fringes of the financial market into the mainstream payment layer. Employees can receive cross-border salaries in real time, merchants can accept global settlements without bank accounts, and applications can deliver value to users instantly. Stablecoins are evolving from mere trading tools into the foundational clearing layer of the internet.
Tokenization of physical assets requires more “native” design
Tokenizing traditional assets has become a trend, but most approaches remain superficial. US stocks, commodities, and index funds are packaged into tokens, yet they do not fully leverage the native features of blockchain.
The real opportunities lie in perpetual contracts and other synthetic products—they offer deep liquidity and are easier to implement. The leverage mechanisms of perpetual contracts are transparent and easy to understand, making them highly compatible with crypto markets. Emerging market stocks are especially suitable for “perpetualization” (some stock options markets already have liquidity surpassing spot markets).
But the question is: should we “perpetualize” assets or “tokenize” them? Both paths are feasible, but by 2026, we will see more asset management approaches that are inherently native to crypto.
Another trend worth noting is the emergence of truly “native stablecoin issuance” rather than simple tokenization. As stablecoins become mainstream, those lacking strong credit foundations will struggle: they resemble “narrow banks,” holding only ultra-safe assets, which makes it hard for them to become the backbone of the on-chain economy.
The breakthrough lies in building on-chain credit infrastructure. Emerging asset managers and curated protocols are beginning to offer loans collateralized by off-chain assets and settled on-chain. The problem is that most of these loans are originated off-chain and only then tokenized, which adds cost. The ideal model is to originate loans directly on-chain, reducing administration costs, accelerating settlement, and expanding access. Standardization and compliance remain challenges, but the industry is actively exploring solutions.
Decades-old banking legacy systems are entering a wave of modernization
The truth about core banking systems is often surprising: trillions of dollars of assets worldwide still run on mainframe systems built in the 1960s-70s, programmed in COBOL, with data flowing through batch files rather than APIs.
Second-generation core banking systems (like Temenos GLOBUS) emerged in the 1980s-90s but are now outdated, with slow upgrade cycles. These systems manage critical accounts such as deposits, collateral, and liabilities. Although validated and trusted by regulators, their cumbersome technical debt and complex compliance costs hinder innovation—adding real-time payment features can take months or even years.
The advent of stablecoins and on-chain assets is changing all that. Not only have stablecoins found product-market fit, but traditional financial institutions are embracing them to an unprecedented degree. Tokenized deposits, on-chain Treasuries, and bonds enable banks, fintechs, and asset managers to launch new products and serve new clients without rewriting those outdated yet stable legacy systems.
Stablecoins open a new avenue for innovation in traditional finance.
Rebuilding payment infrastructure in the era of intelligent agents
As AI agents become widespread, business operations will shift from user click-driven to backend autonomous processes, redefining capital flow requirements.
In a world driven by intent rather than commands, AI agents need to recognize needs, execute commitments, and trigger transactions—value flow speed must match information flow.
This is where blockchain, smart contracts, and on-chain protocols come into play. Smart contracts can already settle global USD transactions within seconds. By 2026, new primitives built on HTTP 402 (“Payment Required”) will make settlement programmable and real-time: agents can make permissionless, instant payments for data, GPU computing power, or API calls, with no invoices, reconciliation, or batch processing.
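To make this concrete, here is a minimal sketch of such a flow in TypeScript. The response shape, the X-Payment-Receipt header, and the payStablecoin helper are illustrative assumptions, not an established standard.

```typescript
// Hedged sketch of an HTTP 402 flow: on "Payment Required", the agent pays
// and retries with proof of payment attached.
async function fetchWithPayment(url: string): Promise<Response> {
  let res = await fetch(url);
  if (res.status === 402) {
    // The server advertises its price and receiving address (assumed shape).
    const { amount, asset, payTo } = await res.json();
    const receipt = await payStablecoin(payTo, amount, asset);
    // Retry with proof of payment attached; the header name is illustrative.
    res = await fetch(url, { headers: { "X-Payment-Receipt": receipt } });
  }
  return res;
}

// Stub standing in for an on-chain stablecoin transfer; a real implementation
// would sign and submit a transaction and return its hash for verification.
async function payStablecoin(to: string, amount: string, asset: string): Promise<string> {
  return `paid ${amount} ${asset} to ${to}`;
}
```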
Software updates can embed payment rules, credit limits, and audit trails—without fiat currency integration, merchant verification, or financial institutions. Prediction markets will settle in sync with event developments, traders can trade freely, and global payments will clear instantly.
When value flows like data packets across the internet, “payment streams” will no longer be a separate business layer but a fundamental network behavior. Banks will evolve into internet pipelines, assets into infrastructure. At that point, money is essentially information routed over the internet: the internet will not merely support the financial system, it will be the financial system.
Democratization of wealth management: from the elite to everyone
Historically, personalized wealth management has been a privilege of high-net-worth individuals—offering customized investment advice and cross-asset portfolios that are costly and complex.
Asset tokenization changes this game. Through crypto rails, AI strategies and protocols can build and dynamically adjust personalized portfolios in real time at low cost. This goes beyond the passive management of “robo-advisors”: today, anyone can access actively managed investment services.
In 2025, traditional institutions increased their crypto exposure (directly or via products), but this is just the beginning. By 2026, platforms designed for “wealth growth” rather than “wealth preservation” will emerge, and fintech firms and leading trading platforms will compete for market share on the strength of their technology. Meanwhile, DeFi tools like yield aggregators can automatically allocate assets to the lending markets with the best risk-adjusted returns, forming the core yield of a portfolio.
Parking idle liquidity in stablecoins rather than fiat, or in tokenized (RWA) money market funds rather than traditional products: such micro-adjustments can significantly boost returns. Retail investors can also more easily access less liquid private-market assets such as private credit, pre-IPO equity, and private equity. Tokenization unlocks the potential of these markets while meeting compliance and reporting requirements.
The ultimate value lies in a diversified tokenized portfolio containing bonds, stocks, private investments, and alternative assets, which can automatically rebalance without manual cross-platform transfers—a qualitative leap in efficiency.
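As a rough illustration of that rebalancing step, the sketch below computes the buy/sell deltas needed to restore target weights; the asset names and targets are hypothetical.

```typescript
// Minimal sketch of the rebalancing logic an on-chain portfolio could run
// automatically. Positive delta = buy, negative = sell.
interface Holding { asset: string; value: number; target: number } // target weight, 0..1

function rebalanceOrders(holdings: Holding[]): { asset: string; delta: number }[] {
  const total = holdings.reduce((sum, h) => sum + h.value, 0);
  return holdings.map(h => ({ asset: h.asset, delta: h.target * total - h.value }));
}

// Example: a tokenized mix of bonds, stocks, and private credit (illustrative).
const orders = rebalanceOrders([
  { asset: "T-BILL-TOKEN", value: 6000, target: 0.5 },
  { asset: "EQUITY-INDEX-TOKEN", value: 3000, target: 0.3 },
  { asset: "PRIVATE-CREDIT-TOKEN", value: 1000, target: 0.2 },
]);
console.log(orders); // deltas a smart contract or agent would execute
```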
Part Two: Infrastructure of AI and Agent Layers
From “Know Your Customer” to “Know Your Agent”
The bottleneck constraining the AI agent economy is shifting from intelligence to identity verification. In financial services, “non-human identities” already outnumber human employees 96 to 1, yet these identities remain “ghosts without accounts.”
The missing key infrastructure: KYA (Know Your Agent). Just as humans need credit scores to obtain loans, AI agents require cryptographic signatures and verifiable certificates to execute transactions—certificates must link to authorized entities, operational limits, and accountability chains.
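A minimal sketch of what such a credential might look like follows; every field name here is an assumption for illustration, since no KYA standard exists yet.

```typescript
// Hedged sketch of a "Know Your Agent" credential: it binds an agent to an
// accountable principal, operational limits, and an expiry, all under the
// principal's signature.
interface AgentCredential {
  agentId: string;          // stable identifier for the agent
  principal: string;        // the authorized entity accountable for it
  spendLimit: { asset: string; perTx: number; perDay: number };
  allowedActions: string[]; // e.g. ["pay-api", "trade-spot"]
  expiresAt: string;        // ISO timestamp
  signature: string;        // principal's cryptographic signature over the above
}

// A counterparty would verify the signature against the principal's known
// public key, then enforce the limits before executing a transaction.
function withinLimits(cred: AgentCredential, action: string, amount: number): boolean {
  return cred.allowedActions.includes(action) &&
         amount <= cred.spendLimit.perTx &&
         new Date(cred.expiresAt).getTime() > Date.now();
}
```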
Until this mechanism matures, institutions will keep blocking agents at the firewall. The KYC infrastructure built over the past decade must now be adapted to KYA within months.
AI is reshaping academic research paradigms
As a mathematical economist, I spent considerable effort earlier this year teaching AI models to understand my research workflows. By year-end, I could give abstract instructions as if guiding a PhD student, and sometimes the model produced entirely new, correct answers.
This trend goes beyond personal experience. AI’s application in academic research is increasingly widespread, especially in logical reasoning: existing models not only support scientific discovery but can even independently solve Putnam problems (one of the most difficult university-level math competitions).
Which disciplines benefit most, and how to use these tools remains an open question. But I predict AI research will give rise to and reward a new class of scholars: those who can predict relationships between concepts and quickly derive conclusions from fuzzy answers. These answers are not always accurate but can point in the right direction.
Ironically, this is somewhat like harnessing the “hallucinations” of models: given room to think, a sufficiently capable model may produce absurd conclusions, but it can also produce breakthroughs, much like the nonlinear, non-obvious leaps of human creativity.
This reasoning requires new workflows: not just single-agent interactions but nested model systems. Multi-layer models help researchers evaluate ideas from preliminary models, gradually eliminate noise, and reveal core value. I have used this method to write papers; others use it to search patents, create art, or (regrettably) find vulnerabilities in smart contracts.
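A minimal sketch of such a nested workflow, with hypothetical askGenerator and askCritic stubs standing in for real model calls:

```typescript
// Hedged sketch of a nested-model workflow: a cheap "generator" model drafts
// many ideas, a stronger "critic" model scores them, and only the best survive.
async function refineIdeas(prompt: string, keep: number): Promise<string[]> {
  const drafts = await askGenerator(prompt, 20); // many rough ideas
  const scored = await Promise.all(
    drafts.map(async d => ({ idea: d, score: await askCritic(d) })),
  );
  // Gradually eliminate noise: keep only ideas the critic rates highly.
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, keep)
    .map(s => s.idea);
}

// Stubs standing in for calls to actual models.
async function askGenerator(prompt: string, n: number): Promise<string[]> {
  return Array.from({ length: n }, (_, i) => `idea ${i} for: ${prompt}`);
}
async function askCritic(idea: string): Promise<number> {
  return idea.length % 10; // placeholder score
}
```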
But running such systems requires better model interoperability and mechanisms to identify and fairly reward each model’s contribution—precisely what cryptography can help solve.
The “invisible tax” facing open networks
The surge of AI agents imposes an invisible tax on open networks, fundamentally threatening their economic foundation.
The core issue is the widening gap between two internet layers: the content layer (ad-supported) and the execution layer. Currently, AI agents draw data from ad-driven websites to provide convenience to users but systematically bypass revenue channels supporting content creation (ads, subscriptions).
To protect open networks and foster diverse content for AI development, we need large-scale deployment of technical and economic solutions—new sponsorship models, attribution systems, innovative financing mechanisms, etc.
Existing AI licensing agreements have proven to be stopgaps, often compensating for only a small part of lost revenue. Networks need new economic models that let value flow automatically.
The key change ahead is a shift from static licensing to real-time, usage-based billing. This means testing and deploying systems, perhaps built on blockchains, that enable micropayments and precise provenance, automatically rewarding the contributors whose data feeds AI agents.
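A rough sketch of the metering-and-attribution step such a system would need; the event schema and the equal revenue split are simplifying assumptions:

```typescript
// Hedged sketch of per-request usage metering with revenue attribution: each
// AI agent request is logged against the sources it drew on, and payouts are
// split proportionally among them.
interface UsageEvent { requestId: string; sources: string[]; pricePaid: number }

function attributePayouts(events: UsageEvent[]): Map<string, number> {
  const payouts = new Map<string, number>();
  for (const e of events) {
    const share = e.pricePaid / e.sources.length; // equal split for simplicity
    for (const s of e.sources) {
      payouts.set(s, (payouts.get(s) ?? 0) + share);
    }
  }
  return payouts; // amounts a settlement contract could pay out as micropayments
}
```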
Part Three: Privacy, Security, and Trust
Privacy will become the strongest competitive barrier in the crypto space
Privacy is a necessary condition for global on-chain finance, yet it is missing from almost all existing blockchains. Most chains treat privacy as a patch applied after deployment rather than a core design feature.
Now, however, privacy alone is enough to differentiate a chain. More importantly, privacy creates network lock-in: call it “privacy network effects.” This matters especially in an era when chain performance is converging.
With cross-chain bridges, migrating assets between chains is trivial when all data is public. But with private data the situation reverses: bridging tokens is simple; bridging secrets is extremely difficult. Moving in and out of private zones always carries risks of de-anonymization via chain monitoring, mempools, or network traffic. Whenever data crosses the boundary between private and public chains, metadata such as transaction timing and size can leak, making tracking easier.
Compared to increasingly homogeneous chains (where competition for block space drives fees to zero, erasing differences), privacy chains can establish stronger network effects. After all, if a general-purpose public chain lacks a developed ecosystem, killer apps, or distribution advantages, users and developers have no reason to choose it or stay loyal: users of public chains can easily transact with users of any other chain, so the choice barely matters.
But privacy chains change the game: once entered, migrating out becomes harder, and privacy leaks are more likely—creating a “winner-takes-all” effect. Because privacy is critical for most applications, a few privacy chains may dominate the entire crypto market.
The future of communication: not only quantum-resistant but also decentralized
The world is preparing for the quantum era. Many communication applications (like certain social media or messaging tools) have adopted quantum-resistant standards. The problem is: almost all mainstream communication apps rely on private servers managed by a single organization.
These servers are ideal targets for governments: they can be shut down, backdoored, or forced to hand over data. If a government can shut down the servers, or if the company holds the private keys or simply owns the servers outright, what is the point of quantum cryptography?
Private servers demand “trust me,” while no private servers mean “trust no one.” Communication needs protocols that are open and trustless.
This is achieved through network decentralization: no private servers, no reliance on a single application, everything open-source and secured with the strongest available cryptography (including quantum resistance). In an open network, no individual, enterprise, NGO, or government can deprive us of communication.
Even if a government shuts down an app, within a day, 500 new versions will emerge. Even if nodes are shut down, blockchain-based economic incentives will immediately fill the gap. When people control data and identities via private keys as they do money, everything changes.
Applications may come and go, but users always control their data and identities—even if they do not own the app itself. This is not only about quantum resistance and cryptography but also about ownership and decentralization. Both are indispensable; otherwise, we are just building superficially unbreakable but easily shut-down systems.
Privacy as a service
Behind every model, agent, and automation process is a fundamental element: data. Today, most data flows (inputs and outputs) are opaque, mutable, and hard to audit.
That may be acceptable for some consumer applications, but in finance, healthcare, and other industries, sensitive data must stay private. This is also the main obstacle to institutional tokenization of RWAs.
How to promote security, compliance, autonomy, and global interoperability while protecting privacy?
The key is access control: who controls sensitive data? How does it flow? Who (or what) can see it? Without access control mechanisms, privacy-conscious users must rely on centralized platforms or build their own systems—costly, time-consuming, and limiting the advantages of on-chain data management.
With the emergence of autonomous agents (browsing, trading, decision-making), users and institutions need cryptographic verification mechanisms rather than “trust me” approaches.
Therefore, I believe in “privacy as a service”: new technologies providing programmable native data access rules, client-side encryption, and decentralized key management, precisely controlling who can decrypt under what conditions and when—all implemented on-chain.
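As a rough sketch of the client-side half of this idea, using the standard Web Crypto API; the AccessPolicy shape is an illustrative assumption, and real decentralized key management is abstracted away:

```typescript
// Hedged sketch: data is encrypted client-side before it ever touches a chain
// or server, and an access policy describes who may decrypt it.
interface AccessPolicy { allowedRoles: string[]; notBefore?: string; notAfter?: string }

async function encryptForPolicy(plaintext: string, policy: AccessPolicy) {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  // In a decentralized key-management design, `key` would be split or wrapped
  // so that only parties satisfying `policy` can reconstruct it.
  return { ciphertext, iv, policy, key };
}
```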
Combined with verifiable data systems, privacy protection will become a core part of internet infrastructure rather than an application patch, becoming a truly critical foundational layer.
Evolving from “Code is Law” to “Rules are Law”
Recently, several well-established DeFi protocols have been hacked, despite strong teams, rigorous audits, and years of stable operation. This reveals an unsettling reality: current industry security standards are still based on case-by-case and experiential approaches.
To mature, DeFi security must evolve from reactive responses to proactive design, shifting from “do your best” to principle-based approaches:
In the static phase—before deployment (testing, auditing, formal verification)—this means verifying global invariants rather than just handpicking local ones. Many teams are developing AI-assisted proof tools to help write technical specifications and state invariants, significantly reducing manual proof costs.
In the dynamic phase—post-deployment (monitoring, real-time enforcement)—these invariants can be transformed into dynamic safeguards—the last line of defense. These safeguards are encoded as conditions, and each transaction must satisfy them in real-time. This way, we no longer assume all vulnerabilities will be discovered—instead, we enforce critical security properties directly in code, and any violating transaction is automatically reverted.
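A minimal sketch of such a runtime guard, using a constant-product AMM invariant as the illustrative example (the invariant and pool model are assumptions, not any specific protocol’s rule):

```typescript
// Hedged sketch of runtime invariant enforcement: every state transition is
// checked against a global invariant and reverted on violation.
interface PoolState { reserveX: number; reserveY: number }

// Global invariant: a swap must not decrease the product of reserves
// (fees make it grow; a drain attack would shrink it).
function invariantHolds(before: PoolState, after: PoolState): boolean {
  return after.reserveX * after.reserveY >= before.reserveX * before.reserveY;
}

function applySwap(state: PoolState, tx: (s: PoolState) => PoolState): PoolState {
  const next = tx(state);
  if (!invariantHolds(state, next)) {
    throw new Error("Invariant violated: transaction reverted"); // last line of defense
  }
  return next;
}
```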
This is not just theoretical. In retrospect, almost every known exploit would have tripped such checks, which would have prevented the attack.
Thus, the once-popular concept of “code is law” has evolved into “rules are law”: even novel attack vectors must satisfy the system’s security requirements, making remaining attack surfaces either trivial or extremely difficult.
Part Four: Emerging Applications and Cross-Domain Innovation
The evolution of prediction markets toward mainstream, diversification, and intelligence
Prediction markets are gradually becoming mainstream. In the coming year, combined with crypto and AI, their scale will expand, scope broaden, and operations become smarter—yet this also creates new challenges for startups.
First, a surge in new contracts. We can now bet not only on major elections or geopolitical events but also price more granular outcomes and complex multi-criteria events. As new contracts integrate into the information ecosystem (already happening), key social questions arise: how to evaluate this information and optimize contract design to make it more transparent, auditable, and open. This is where crypto’s advantages lie.
To handle the explosion of contracts, new resolution mechanisms are needed. Centralized resolution (did an event happen? how is it confirmed?) is critical but contentious; cases like the Zelensky and Venezuelan election markets exposed its limitations.
To address such cases and expand prediction markets into more practical applications, decentralized governance mechanisms and large language models acting as oracles will help establish facts in disputes. AI has already demonstrated remarkable predictive potential. AI agents operating on these platforms can scan global trading signals, profit from short-term trades, uncover new cognitive dimensions, and improve event forecasts. These agents are not merely political pundits; they can analyze strategies to better understand the factors driving complex social events.
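A minimal sketch of how LLM judges might feed a resolution process; askModel is a hypothetical stand-in for real model APIs, and the supermajority threshold is illustrative:

```typescript
// Hedged sketch of LLM-assisted dispute resolution: several independent model
// "judges" vote on a disputed outcome, and the market resolves only on a
// supermajority; otherwise it falls back to governance.
type Verdict = "YES" | "NO" | "UNRESOLVED";

async function resolveDispute(question: string, judges: string[]): Promise<Verdict> {
  const votes = await Promise.all(judges.map(j => askModel(j, question)));
  const yes = votes.filter(v => v === "YES").length;
  const no = votes.filter(v => v === "NO").length;
  const threshold = Math.ceil((judges.length * 2) / 3); // illustrative supermajority
  if (yes >= threshold) return "YES";
  if (no >= threshold) return "NO";
  return "UNRESOLVED"; // falls back to decentralized governance
}

// Stub standing in for a call to an actual model.
async function askModel(judge: string, question: string): Promise<"YES" | "NO"> {
  return question.length % 2 === 0 ? "YES" : "NO"; // placeholder
}
```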
Will prediction markets replace polls? No, but they can enhance polls (poll data as market input). As a political scientist, I am most interested in how prediction markets can collaborate with rich polling ecosystems, but we need to leverage AI and crypto to improve survey experiences, ensuring respondents are real humans, not bots.
The rise of wager-based media
The “objectivity” of traditional media has long been questioned. The internet has empowered everyone to voice opinions, and more operators, practitioners, and creators are directly engaging with the public. Their viewpoints reflect their interests, and audiences—counterintuitively—respect and even appreciate their honesty.
The innovation lies not in the growth of social media but in crypto tools that enable public, verifiable commitments. AI has made unlimited content generation cheap and easy: any viewpoint or identity (real or virtual) can now be produced from words alone (human or machine), so words by themselves no longer prove anything.
Tokenized assets, programmable locks, prediction markets, and on-chain histories provide a more solid trust foundation: commentators can express opinions and simultaneously prove their stakes. Podcasters can lock tokens to demonstrate they are not market speculators. Analysts can link predictions to on-chain settlement, creating auditable track records.
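A minimal sketch of the underlying commit-reveal primitive: publish a hash of the prediction now, reveal the text later, and anyone can verify the two match. The salting scheme here is a standard technique, not any particular platform’s implementation:

```typescript
// Hedged sketch of commit-reveal for verifiable predictions.
import { createHash, randomBytes } from "node:crypto";

function commit(prediction: string): { commitment: string; salt: string } {
  const salt = randomBytes(16).toString("hex"); // prevents guessing short predictions
  const commitment = createHash("sha256").update(salt + prediction).digest("hex");
  return { commitment, salt };
}

function verify(prediction: string, salt: string, commitment: string): boolean {
  return createHash("sha256").update(salt + prediction).digest("hex") === commitment;
}

// Usage: publish `commitment` today (on-chain or anywhere tamper-evident);
// later reveal `prediction` and `salt` so anyone can check them.
const { commitment, salt } = commit("ETH closes the year above $5,000");
console.log(verify("ETH closes the year above $5,000", salt, commitment)); // true
```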
I call this early form “wager-based media”: media that not only acknowledge conflicts of interest but can prove them. In this model, credibility comes not from false neutrality or hollow statements but from the willingness to openly and verifiably assume risk. Wager-based media will not replace other forms but will complement them, providing a new signal: not “trust me because I am neutral,” but “see the risk I bear; you can verify it.”
Crypto as a new infrastructure beyond blockchain applications
For years, SNARKs (zero-knowledge proofs) have been confined to blockchain applications. The cost is enormous: generating a proof takes work equivalent to hundreds of thousands of times the computation itself. Amortized across thousands of verifying nodes, that makes sense; in other fields, it is impractical.
This will change. By 2026, zkVM proving overhead will drop to roughly 10,000 times the underlying computation, with memory usage reduced to hundreds of MB, making it feasible to run provers even on smartphones at minimal deployment cost.
The 10,000-fold figure is critical because GPU performance is roughly 10,000 times that of a laptop CPU. By the end of 2026, a single GPU will be able to generate proofs for CPU computations in real-time. This could unlock a long-standing academic vision: verifiable cloud computing.
If you run ordinary CPU workloads in the cloud today, you will be able to obtain cryptographic proofs that your computation was performed correctly, at a reasonable price, without needing to know anything about GPUs or zero-knowledge and without touching legacy systems. The provers themselves will be GPU-optimized and require no changes to your code.
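A rough sketch of the interface such a service might expose; the type names are assumptions, and the verifier body is a placeholder for a real SNARK verifier:

```typescript
// Hedged sketch of verifiable cloud computing: the client receives a result
// plus a succinct proof and checks it cheaply, without re-running the program.
interface ProvenResult<T> {
  output: T;
  proof: Uint8Array;   // succinct proof from the provider's GPU prover
  programHash: string; // identifies exactly which program was run
}

// Verification is orders of magnitude cheaper than re-execution, so it can
// run on a laptop or phone. This stub stands in for a real SNARK verifier.
function verifyResult<T>(r: ProvenResult<T>, expectedProgramHash: string): boolean {
  return r.programHash === expectedProgramHash && r.proof.length > 0; // placeholder check
}
```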
Less trading, more building
Treating trading as a way station rather than the destination should be the business philosophy of crypto companies. Yet today, aside from stablecoin and infrastructure firms, almost every successful crypto company has pivoted toward trading or plans to.
But what does the industry look like if every crypto company becomes a trading platform? Widespread convergence would mean cutthroat competition, leaving only a few winners.
It also means that rushing into trading causes companies to miss the chance to build more resilient, durable business models. I sympathize with founders fighting for survival, but chasing quick product-market fit comes at a cost.
This problem is especially acute in crypto: token speculation often drives founders toward immediate gratification rather than long-term market fit, a kind of marshmallow test. Trading itself is harmless and an important market function, but it should not be the end goal. Founders who focus on the “product” and its fit with the market have a higher probability of success.
Improved regulation and technological synergy will unlock the full potential of blockchain
Over the past decade, one of the biggest obstacles to blockchain in the US has been legal uncertainty. Securities laws are often misused and selectively enforced, forcing founders to adopt frameworks designed for traditional companies rather than blockchain.
For years, companies prioritized minimizing legal risk over product strategy, with engineers taking a backseat and lawyers dominating. This led to strange phenomena: founders discouraged from transparency, token distributions made arbitrarily (to avoid legal issues), governance being superficial, organizational structures designed for compliance rather than efficiency, and token designs avoiding economic value or even business models.
Worse, projects operating at the legal margins often outcompeted honest builders.
But regulatory oversight of crypto market structures is imminent and may remove these distortions next year. If legislation passes, it will encourage transparency, establish clear standards, and provide explicit pathways for fundraising, token issuance, and decentralization—replacing the current “regulatory roulette.”
The passage of stablecoin legislation has already triggered explosive growth; a regulatory framework for market structure will bring even greater change, this time at the level of network ecosystems. In other words, such regulation will enable blockchains to truly operate as networks: open, autonomous, composable, neutral, and decentralized.