From Electric Telegraph to AI: Why Technology Promises Always Hide Real Losses

The history of technology shows a recurring pattern: every major innovation arrives with dazzling promises of a better future while leaving behind a trail of damage that is rarely discussed. From the electric telegraph in the 19th century to today’s artificial intelligence, technological shifts keep producing the same victims—those with the least bargaining power in society. Media theorist Douglas Rushkoff observes that this pattern repeats in the utopian narratives Silicon Valley leaders spin about AI. For Rushkoff, a professor at Queens College/CUNY and author of “Survival of the Richest” and “Team Human,” optimistic jargon about automation and a jobless future is merely a cover for the tech elite’s strategies to insulate themselves from the consequences they create.

Fear Behind the Optimism: Why Tech Billionaires Are Building Bunkers

In a recent interview with Arden Leigh on the Repatterning Podcast, Rushkoff sharply criticizes leading tech players, pointing to the stark contradiction between what tech billionaires say and what they quietly do. While figures like Mark Zuckerberg and Sam Altman are rumored to be building private bunkers, Elon Musk publicly promotes dreams of space colonization. “These billionaires don’t actually believe in the utopian scenarios they present to the public,” Rushkoff says. “They believe the technology they create can save them—not all of us.”

This approach reflects a deeper fear: anxiety that the system they are building will contribute to social and environmental collapse. To hide that fear, tech leaders craft a different narrative for public consumption. “What their actions—building bunkers, planning escapes to space—show is concrete proof that they don’t believe technology will save the world,” Rushkoff states. “They only believe it will save themselves, while the rest of us sink.”

Jobs Aren’t Disappearing, Just Changing—For the Worse

One of the most repeated claims about AI is that it will reduce the need for human labor. But Rushkoff rejects this simple narrative. In his view, what’s happening isn’t job reduction but transformation into forms that are less visible, lower-paid, and far more exploitative. “We’re not seeing a decrease in jobs,” Rushkoff says. “What we see is a degradation of skills and a decline in job quality.”

Robinhood CEO Vladimir Tenev and other tech leaders argue that AI will trigger a boom in new jobs. But Rushkoff exposes a fundamental irony in this claim: the infrastructure needed to make AI work relies on millions of human workers. From mining rare earth metals to massive data-labeling operations in facilities in China and Pakistan, AI systems are built on a hidden foundation of exploited labor. “You need thousands of people to mine rare earth metals,” Rushkoff explains. “You need tens of thousands to label billions of data points. There’s a huge labor infrastructure behind the scenes, but these jobs are the kinds we don’t want to acknowledge or pay fairly.”

This pattern closely resembles what happened with the electric telegraph during the 19th-century industrial revolution. The new technology shifted traditional jobs into lower-status, lower-wage forms, while tech pioneers told stories of progress and efficiency. Rushkoff warns that we are repeating the same history, only on a much larger scale.

Hidden Costs of AI: The Uncounted Labor

Lisa Simon, chief economist at Revelio Labs—a company that analyzes labor market trends—says the data already reflect the real impact of this shift. Jobs most exposed to automation have seen the largest declines in demand, especially at entry level. “We see this mainly in low-wage jobs, where there’s real potential to replace entire functions through automation,” Simon tells Decrypt. “And ironically, wages in these positions are growing the slowest.”

Beyond employment impacts, Simon points out that the environmental costs of AI infrastructure are often ignored in the benefits calculations. “I don’t think the environmental costs of these massive data centers are properly accounted for,” she says. Data centers running large AI models require enormous power, creating a significant carbon footprint and driving energy demand that spurs exploration of new resources. Again, history repeats itself: new technology tends to increase resource extraction and cheap labor exploitation, contradicting promises of efficiency and liberation.

Human Bifurcation: Winners and Losers in the AI Era

Vasant Dhar, professor at Stern School of Business and NYU’s Center for Data Science, describes a more nuanced scenario. Dhar argues that the outcome of AI transformation is unlikely to be pure utopia or dystopia, but something more complex: what he calls “human bifurcation.” In this scenario, AI “empowers some” with skills and positions to benefit from it, while “weakening others,” leaving them with AI as a “support, not an enhancer.”

“We will see a lot of job destruction,” Dhar says, adding that it’s still unclear what kinds of jobs will emerge to fill the void. This scenario differs from the optimistic tech narratives—there’s no smooth transition from old jobs to new ones. Instead, there’s a real risk of increasing inequality.

David Bray of the Stimson Center, a leading think tank focused on security and technology, warns against overly extreme extrapolations from either side. “The truth probably lies somewhere in the middle,” Bray tells Decrypt. But he admits that utopian narratives often oversimplify the actual complexity. “When I hear utopian visions, part of me is glad because they’re not spreading fear. But I worry they overlook things that need to be addressed beyond just the technology itself.”

Lessons from History: Why Governance Matters More Than Technology

If there’s one lesson from the history of technology—from the electric telegraph to AI—it’s that the real impact is determined not by the technology itself, but by the policy choices we make. Dhar emphasizes this clearly: “The outcome will depend entirely on governance, not just technological innovation. Will we regulate AI, or will AI regulate us?”

Simon, while optimistic about AI’s long-term potential, believes serious policy intervention is needed now. To maintain social cohesion amid shifting jobs and uneven distribution of benefits, governments may need to consider programs like universal basic income or more progressive redistribution models.

Rushkoff offers a more critical perspective, stressing that the core issue is the ideology behind AI promotion—what he calls a form of transhumanism that views the majority of humans as disposable. “They have a kind of religion,” Rushkoff says. “Where you and I are seen as a larval stage of human evolution. They imagine themselves flying away or uploading into the cloud, while we’re just fuel for their escape plans.”

Thus, the debate over AI isn’t just about technology or jobs. It’s about fundamental choices regarding who benefits from innovation, who bears the costs, and whether we will repeat centuries of exploitation or finally choose a different path.
