Daniil and David Liberman: AI is not just a battle of models, but a battle of computational infrastructure

Author | Gonka.ai

Preface: Amid the ongoing global discussion on AI, industry focus often centers on model capabilities, technological breakthroughs, and regulatory frameworks. But beneath these conversations, a more fundamental question is gradually emerging: Who controls the computational infrastructure that underpins AI? In a dialogue at the Unlockit Conference, Daniil and David Liberman, co-creators of the Gonka protocol, futurists, entrepreneurs, and investors, presented a core idea: AI has never been a neutral technology; the infrastructure behind it determines who AI ultimately serves. They see the future of AI as not just a technological race but a long-term contest over control of the infrastructure.

The true foundation of AI: not models, but compute power

Centralized AI infrastructure seems inevitable only because people have stopped questioning its underlying assumptions.

For a long time, most discussions about AI have focused on models, ethics, or regulation. But beneath these layers lies a more decisive factor—compute power. Who owns the compute resources, who controls access to them, and under what conditions they can be used—these ultimately determine how AI functions and whom it serves.

Viewing AI from this perspective makes the current landscape hard to ignore. OECD studies and other public data show that advanced AI compute is increasingly concentrated among a few cloud providers and limited countries. This creates a growing “compute gap”—the disparity between those who can access infrastructure and those who cannot.

This concentration is no accident. Today, access to advanced GPUs is controlled by a handful of providers and is increasingly influenced by national priorities. As a result, compute is expensive, capacity is limited, and distribution is uneven geographically. All this is happening at a critical moment when AI is becoming a foundational element of science, industry, and society.

Meanwhile, current decentralization efforts do not automatically solve this problem. Many decentralized systems still allocate significant compute to consensus and security overhead, and incentive mechanisms often reward capital rather than actual computational contribution. This discourages hardware providers and slows innovation at the infrastructure level.

This is where our thinking begins to diverge. We are neither driven by ideology nor opposed to centralized players; we start from a practical question: if efficiency, access, and contribution could be aligned rather than in conflict, what would AI infrastructure look like?

This question leads us to a model in which most compute is used for genuine AI work rather than system overhead; participation and governance are determined by verified computational contribution, not capital; and access to global GPU resources is permissionless by design. In practice, these assumptions are continually tested through open discussions, including real-time collaborations with GPU operators, developers, and researchers, such as in our Discord community.
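To make the contrast concrete, here is a toy sketch of contribution-weighted governance: voting weight is derived from verified compute contributed, not from capital staked. This is an illustrative model only, not Gonka's actual mechanism; all names and figures are hypothetical.

```python
# Toy model: governance weight proportional to verified compute contribution.
# Hypothetical example, not the Gonka protocol's real accounting.

def governance_weights(participants):
    """Assign each participant a voting weight proportional to the
    compute they have verifiably contributed; capital staked is ignored."""
    total = sum(p["verified_flop_hours"] for p in participants)
    if total == 0:
        return {p["name"]: 0.0 for p in participants}
    return {p["name"]: p["verified_flop_hours"] / total for p in participants}

participants = [
    {"name": "gpu_operator_a", "verified_flop_hours": 600, "capital_staked": 10},
    {"name": "gpu_operator_b", "verified_flop_hours": 300, "capital_staked": 500},
    {"name": "investor_c",     "verified_flop_hours": 100, "capital_staked": 5000},
]

weights = governance_weights(participants)
# Weight tracks contributed compute, not stake: the large capital holder
# with little verified compute ends up with the smallest voice.
```

In a capital-weighted system the same three participants would be ranked in the opposite order, which is exactly the inversion of incentives the paragraph above describes.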

AI has never been just software. It has always been infrastructure. And infrastructure choices often lock society into decades-long development trajectories. Placing this infrastructure under the control of a few companies or nations is not a neutral technical outcome but a structural decision with long-term economic and geopolitical consequences. If intelligence is to become truly abundant, its supporting infrastructure must be designed from the outset to foster abundance.

The true measure of decentralized AI success

The main challenge is that you’re not just debating with people—you’re debating with “default assumptions.”

Mainstream tech communities tend to optimize for short-term effectiveness: speed, capital efficiency, centralized control, and scaling through integration. These choices are reasonable locally, but once they become default, they’re rarely questioned. Challenging these assumptions feels like speaking a different language—not because the ideas are extreme, but because they threaten established incentives within careers, companies, and strategies.

Timing makes it even harder. Centralized systems often appear highly successful before their long-term costs become apparent. Massive investments and infrastructure expenditures are obvious, but deeper costs—such as increased dependency, reduced flexibility, concentration of pricing power, and systemic entrenchment—only reveal themselves later.

For us, success isn’t about winning a debate or replacing existing players. It’s much quieter: success is when decentralized infrastructure stops being a declaration and simply becomes the most practical choice—used not because people believe in decentralization but because it’s the best option.

Ultimately, true success occurs when the entire conversation shifts. When the question is no longer “Should AI be centralized?” but “Why did we think it had to be?” At that point, beliefs will evolve naturally without direct confrontation.

How do companies decide whether to go centralized or decentralized?

AI infrastructure is no longer just a technical issue; it’s becoming a strategic dependency.

For companies, centralized AI infrastructure creates lock-in that is difficult to reverse. Once critical systems depend on a few providers, control shifts from users to infrastructure owners. Over time, this affects pricing, access, innovation pace, and strategic flexibility.

The key decision point is strategic flexibility. Early on, centralized infrastructure may work well, but it tends to become a long-term dependency. Costs become harder to control, alternatives more difficult to adopt, and changing architecture at scale increasingly costly.

The critical moment often arrives earlier than most realize. Infrastructure choices are often locked in before their full consequences are clear. Once AI moves from experimental to operational infrastructure, changing underlying architecture becomes exponentially more expensive. The real decision point isn’t when centralized systems fail but when they still seem to work well. Exploring decentralized options early preserves options; waiting often means choices are already made.

Is it too late once you’re dependent on centralized infrastructure?

Rarely “too late,” but difficulty increases exponentially over time.

Once most systems rely on centralized AI infrastructure, the challenge shifts from technical to institutional. Workflows, incentives, budgets, compliance, and talent development all assume centralization as the norm. At that stage, changing isn’t just about migrating infrastructure; it’s about relearning ingrained habits, contractual patterns, and mindsets.

Research on infrastructure lock-in confirms this. Industry analyses show that after years of operating in centralized cloud environments, switching costs rise sharply—not linearly. This growth stems from long-term contracts, regulatory frameworks, deeply integrated internal processes, and highly specialized workforces. OECD studies also highlight that countries and organizations that don’t secure early access to AI compute face increasing disadvantages over time—losing competitiveness and the ability to choose alternative infrastructure models.

History shows infrastructure shifts rarely happen all at once. They usually start at the margins. New applications, participants, and constraints create pressure points where centralized systems become insufficient—costly, slow, restrictive, or fragile. These are often the moments when alternatives become viable.

Over time, what’s truly eroded is “choice.” The longer centralized infrastructure dominates, the fewer real options remain.

Dependence gradually solidifies, and decentralization shifts from an active design choice to a passive correction—one that is always more costly, complex, and difficult to control.

Thus, the real risk isn’t that it’s “too late.” The real risk is waiting until decentralization is no longer a choice but a forced response to systemic failure. Early exploration, even in parallel with centralized approaches, provides more room to shape outcomes proactively rather than reactively.

For the next generation, AI architecture will determine opportunity distribution

Future generations must understand that technology doesn’t become neutral simply because it advances.

Each generation inherits infrastructure choices made before, often unaware that these were deliberate decisions rather than inevitable outcomes. For future generations, AI will be as ubiquitous as electricity or the internet today. That’s why the underlying architecture is so critical—it not only defines what’s possible but also for whom.

They need to realize that access to intelligence can be organized in fundamentally different ways. It can be a shared foundation: open, abundant, and hard to monopolize. Or it can be enclosed, priced, and controlled, even if superficially convenient and efficient. Both paths can produce impressive technology, but only one preserves long-term freedom, resilience, and genuine choice.

They should also understand that centralization often arrives quietly—not through coercion but through convenience. The initial trade-offs seem minor: lower costs, faster deployment, easier coordination. But the consequences become apparent later—when changing course becomes costly or nearly impossible.

It’s equally important to recognize that infrastructure directly impacts social mobility. Seemingly neutral systems can reduce inequality at first, yet still lock in disparities for decades. As you may know, this is a key concern for us. Younger generations already face greater disadvantages than their predecessors. Current AI deployment methods do little to address this, and may even worsen it. From this perspective, architecture choices influence not just efficiency but also who gets the opportunity to experiment, build, and shape the future.

Most importantly, future generations must understand that these systems are still human-designed. They are not determined by fate, markets, or machines alone. Questioning default assumptions, asking who benefits from a given architecture, and insisting on maintaining options are not acts of resistance to progress—they are essential to keeping progress open.

Why share these stories on Unlockit?

Unlockit offers a space for discussion where conversations aren’t about hype, releases, or predictions, but about why people make certain choices. That’s important to us. Our story isn’t about a specific project or technology; it’s about recognizing structural patterns early and choosing not to accept them as inevitable.

Over the years, we’ve operated within mainstream systems: building companies, investing, collaborating with large organizations, and benefiting from centralized infrastructure. We understand how these systems work from the inside. At some point, we realized that repeating the same structures while hoping for different results rarely produces anything truly new. Instead of staying silent or packaging this insight as another success story, we choose to share it openly.

We come to Unlockit not only to reflect but also to share practical experiences that are relevant to different groups present. For entrepreneurs, these issues involve infrastructure control, dependency on providers, and scaling without losing flexibility. For investors, they concern long-term risks, infrastructure lock-in, and which models can create lasting value. For corporate and technical leaders, they relate to cost structures, reliability, regulatory constraints, and strategic freedom in a rapidly changing environment.

We aim to share an alternative approach—one that’s already operational in practice—not as a universal answer but as a different way of thinking: how to build AI infrastructure with less dependency, greater transparency, and more long-term options. Equally important, we want to hear feedback from those making real decisions at the business, capital, and institutional levels.

We believe these discussions shouldn’t be confined to insiders. When infrastructure decisions aren’t openly debated, they quietly become default choices. Unlockit provides a space for reflection before these choices become irreversible, making participation in this dialogue meaningful.

Ultimately, participating in Unlockit isn’t about explaining what we’re doing but about emphasizing why questioning default assumptions remains vital—especially in an era where technological progress seems rapid, powerful, and inevitable. It’s also about listening to those shaping the future of business, technology, and society.
