MIT: No need to panic about AI doomsday theories; verification ability is a scarce resource
Source: Bankless Podcast; Organized by: Felix, PANews
MIT economist Christian Catalini appeared on Ryan and David’s show to delve into his new paper “Some Simple Economics of Artificial General Intelligence.” The paper argues that the scarce resource in AI economics is no longer intelligence but verification: the human ability to check, assess, and confirm the correctness of AI outputs.
Christian elaborated on two cost curves reshaping various industries (automation costs and verification costs), explaining why entry-level jobs are the first to disappear and why even top experts are unknowingly cultivating their successors (the “coder’s curse”). He also illustrated three roles that are likely to remain during the transformation: directors, meaning creators, and responsibility underwriters.
PANews has organized the highlights of the conversation.
Host: I think many listeners may share my panic about AI. Why do you think people are worried about AI? Are their concerns reasonable?
Christian: We all feel the same way. This is a fast and transformative period of change, and the closer you are to the code, the sooner you may witness this acceleration, which has become very real in the past few months. This technology has accomplished things that many thought would take longer to achieve, and it’s a feeling we are all struggling to cope with. But I think the “apocalyptic” view is misguided; people often underestimate the potential of these tools. Yes, there will be an extremely difficult transition period, and the speed of job transformation is unprecedented in history. Nevertheless, if you leverage the best features of this technology and invest in it, the long-term outlook is mostly positive, even though the journey will be bumpy. Economics views work as a collection of tasks, some of which will be automated, which is good news. The key is how you retrain yourself and stay at the forefront.
Host: Who do you think will be the first to be impacted?
Christian: That’s an excellent question, and I have many different thoughts on it. First, when I say that those closest to the code will be the first to feel the impact, I mean they will be the first to experience how powerful this technology is. As the “Jevons Paradox” reveals, when something becomes more efficient, we tend to consume more of it, like writing more software. I believe programming will undergo a differentiation like many other professions, which we refer to in the paper as the “disappearing primary loop.” If you are a junior worker who has not yet acquired the “tacit knowledge” to distinguish between great products and mediocre ones, then AI can effectively replace you across various fields.
Everyone can now easily access a fairly decent marketer, junior programmer, or lawyer who can handle most situations; you only need to hire top lawyers for final verification at the end. On the other hand, even top experts, in the process of adopting AI, are unintentionally creating the labels, information, and digital footprints that will eventually automate their own work. Top laboratories are hiring elite talent from fields like finance to create assessment criteria and bake that domain expertise into large models. So I believe no single job is 100% safe; even physical labor, currently protected by the limits of robotic manufacturing capacity, will be affected as reward models make significant leaps in the coming years. Anything that happens in front of a screen can be tracked, replicated, and learned. The key question for every profession is: if I delegate as much work as possible to AI, where can I still add value?
In fact, there is a lot of “self-soothing” around “taste” and “judgment.” Those terms are very ambiguous. So in the paper we say there is no such thing as taste or good-versus-bad judgment; there is only the distinction between the “measurable” and the “immeasurable.” If something has already been measured, machines can replicate it. If something is still embedded in the weights of your brain, like how a top designer draws on thousands of hours of experience to decide what should be published and what shouldn’t, that is what we call “verification.” Verification is that final step: AI agents create the product, and you, as the decision-maker, judge whether it meets the market’s standards. As machines obtain better data, more of this will be automated; but unknown territory, or areas with no data at all, will still belong to humans for the next few years.
Host: That’s a profound insight. But I also think it’s natural for engineers to automate their own work. Is the impact on every industry the same?
Christian: We have enough evidence to show that the change will be uneven. You can think about it this way: is this job just a “packaging” of something that society fundamentally doesn’t need? For example, general consulting work, if it mainly involves repackaging, refining, and summarizing information that is already widely available, that clearly poses a risk. However, if it brings scarce domain expertise or is needed for political reasons, those positions will survive. Ask yourself whether this profession is profitable because it solves a complex problem or merely due to some artificial bottleneck.
Host: What does verification actually mean? I find it hard to break down my day’s work into cognitive tasks and verification tasks.
Christian: Agents have learned and measured everything in the web and in books, and because they are cheaper and scalable, they will replace the measurable parts. But what agents still lack is the unique set of neural weights in your brain, gained through your own experiences and struggles, which is what makes you a top expert. For example, early cryptocurrency participants, many from Argentina, Venezuela, and similar countries, who lived through hyperinflation, respond to assets in a completely different way. That internalized, unique way of measuring the world is still a huge advantage.
What is verification? It is the difference between your own measuring standards of the world and the standards that the agent possesses. Like a top editor who knows exactly which articles will resonate; or a top CTO who, faced with a massive codebase generated by AI, knows precisely which critical edge cases must be checked by humans, a part that machines cannot measure yet.
Host: Let me give an example. Suppose I see a video on X of Israel being bombed, but I figure out that it was generated by AI. I used my own judgment to spot the problem, and I might even prompt the model again to generate a more convincing video. Is that my “verification ability”?
Christian: That’s a great example. Furthermore, we might soon find ourselves in a world where, for most people, this video is indistinguishable from reality. The next step might be military experts noticing that the dynamics of the flames are off. Then, even military experts may not be able to tell at a glance, requiring AI to analyze physical principles and conduct simulation tests. Eventually, it might become completely indistinguishable, and at that point, we will have to rely on cryptographic infrastructure to confirm authenticity. The same goes for the medical field; edge cases ultimately require top radiologists, who use 20 years of experience and an understanding of the specific context of patients, to override AI’s judgments. This is the last thin layer of “filtering” we focus on. When we do this, we free up a lot of time. So, that’s the upside. We can do more with fewer resources. The cost of expensive things will decrease. Society as a whole will consume more of these things. I think that’s good news.
Host: But in your example, right now an ordinary person can do the verification; soon they won’t be able to, and it will take a military expert, and eventually even the expert won’t be able to verify and will have to rely on AI. Doesn’t this show that “verification” was valuable at first but will soon be automated by AI? So even “verification” itself isn’t safe?
Christian: Exactly. We call this the “coder’s curse” in the paper. The perfectly rational act of doing verification pushes the technological frontier forward and turns experience into data. We can’t stop it, because every lawyer and every practitioner is rationally trying to use AI. Verification really is a shrinking frontier.
Host: Even the last bastion of verification work is shrinking. When will we be able to stop feeling anxious?
Christian: First of all, some things are inherently immeasurable, like the so-called “status games” or things that humans ascribe meaning to. These areas will not be invaded by machines because they are characterized by human coordination and consensus. Cryptocurrency is somewhat similar; what matters is the consensus among humans about what has value. As measurable work areas shrink, we will invent many ways to make immeasurable work meaningful.
Host: AI can build a website in 10 seconds but may not be able to write a tweet that appeals to humans. Could this be one of the last remaining verification tasks?
Christian: Capturing attention and telling a truly novel joke is extremely difficult creative work, trying to break new ground that has never been measured before. We have evolved a strong ability to cope with unknown environments over a long survival journey. Those engaged in this kind of work are called “meaning makers.” For example, in the arts or culture, what is good depends on human consensus. Even when you use AI agents, you still have to set “intent.”
Host: Automation costs are decreasing exponentially. What about the “cost of verification”? Will it always be constrained by human biological limitations?
Christian: Currently, it is biologically constrained. Many companies are shipping large volumes of AI-generated code, but there simply aren’t enough humans to read and verify it all, which quietly accumulates risk.
Host: Can’t AI verify AI?
Christian: If AI can verify something correctly, then that verification step is itself automatable. After exhausting everything AI can verify, what remains is what AI genuinely cannot check, and that residue is the bottleneck where humans must intervene.
Host: If verification is the new scarce resource but is constantly shrinking, how should one work and invest in this economy?
Christian: We created a 2x2 matrix based on “automation costs” and “verification costs.” The lower left corner is replaced workers: automation is easy, verification is easy, and you definitely don’t want to be here. The other three quadrants are:
Meaning creators: automation is difficult, verification is difficult. They are dedicated to social consensus, status games, and human connections. For example, taste makers in the fashion industry, crypto KOLs on Twitter who create narratives and coordinate attention.
Responsibility underwriters: automation is easy, verification is difficult. They are top experts in their fields, such as elite lawyers, doctors, or venture capitalists. They leverage AI on a large scale but provide responsibility and verification services for final edge cases.
Directors: automation is difficult, verification is easy. The core here is “intent.” Like entrepreneurs, they deal with “unknown unknowns”: directing agents, setting direction, sensing deviations, and constantly correcting course.
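The 2x2 matrix above can be sketched as a simple classifier. This is an illustrative toy only: the `classify_role` function, the 0-to-1 cost scale, and the 0.5 threshold are all assumptions, not anything from Catalini’s paper.

```python
# Illustrative sketch of the 2x2 framework: classify a role by whether
# its tasks are cheap to automate and cheap to verify. Costs are on a
# hypothetical 0-1 scale; the threshold is arbitrary.

def classify_role(automation_cost: float, verification_cost: float,
                  threshold: float = 0.5) -> str:
    """Map an (automation_cost, verification_cost) pair to a quadrant."""
    easy_to_automate = automation_cost < threshold
    easy_to_verify = verification_cost < threshold
    if easy_to_automate and easy_to_verify:
        return "replaced worker"             # AI both does and checks the work
    if easy_to_automate and not easy_to_verify:
        return "responsibility underwriter"  # AI produces, expert signs off
    if not easy_to_automate and easy_to_verify:
        return "director"                    # human sets intent, checking is cheap
    return "meaning creator"                 # neither measurable: human consensus
```

The point of the sketch is only that the quadrant, not the job title, determines exposure: two very different professions with the same cost profile face the same pressures.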
Host: What should young graduates do who want to enter the workforce? On one side are worthless entry-level jobs, and on the other are top experts who need ten years of industry honing; there is a huge gap between the two. If AI can do entry-level tasks, how can young people grow to the other end?
Christian: The gap does exist. But the good news is that you can compress the learning time. You can skip traditional training steps. A junior engineer can now do the work of an entire team with the help of tools. Although they may make mistakes at first, as newcomers, they can question traditions from a very novel perspective, which is an advantage. They can realize ideas in ways that we could not at their age. There are pros and cons.
The past pathway of “getting a degree, finding an internship, and working hard to get promoted” no longer exists, which will bring a huge cultural shock. This is very difficult for recent graduates. If you are still in college, you still have time to clarify your direction. If you are in a predicament, my advice is to use these tools to create something. Your ambition should be 100 times greater than what we had at that age.
Host: Will the disappearance of a large number of “button-pushing” jobs lead to social chaos in the short term?
Christian: Society will always recreate “button-pushing” jobs when needed to maintain stability. But many people doing such work are actually capable of much more; they were just constrained by their environment. When physical labor was no longer necessary, we invented the gym; now, facing the liberation of mental labor, people will develop side hustles and creator economies to regain a sense of challenge. That’s also why I think universal basic income (UBI) is completely wrong; people need meaning and the drive toward self-actualization. Moreover, even if a large portion of your work is automated away, a junior employee who uses AI well as a super tool can now produce as much as an entire team did before.
Host: What advice do you have for companies and investors?
Christian: For companies, invest in verification infrastructure, providing “responsibility as a service” (not just providing agents but also underwriting consequences). Also, master “exclusive factual sources,” as AI can be easily fooled. Companies that can provide exclusive real data or in-depth assessments like Bloomberg hold tremendous value. For investors, in addition to investing in these, focus on “immeasurable” hardcore R&D. Traditional network effects may fail, and new network effects will be built on how you make your agents more reliable through better real feedback, because what people really want to buy is verified intelligence.
Host: Is cryptography useful in this verification process?
Christian: The underlying infrastructure built in the cryptographic field over the past decade is crucial. When we need to confirm the authenticity of identities and prevent accounts from being taken over, on-chain technologies like “proof of personhood” can provide strong verification. In addition, data provenance and cryptographic chains of custody are needed to give information generation and model compliance rigorous cryptographic guarantees.
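The provenance idea can be sketched in a few lines. This is a minimal illustration, not how any production provenance standard works: it uses a symmetric HMAC over a content hash as a stand-in for the asymmetric signatures (e.g. Ed25519) that a real system such as C2PA-style content credentials would use, and the key name is hypothetical.

```python
# Minimal content-provenance sketch: a publisher tags content with an
# HMAC of its SHA-256 digest; anyone holding the key can later check
# that the content is unmodified and came from that publisher.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher


def sign_content(content: bytes) -> str:
    """Produce a provenance tag: HMAC over the content's SHA-256 digest."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag issued at its origin."""
    return hmac.compare_digest(sign_content(content), tag)
```

In a real deployment the signature would be asymmetric, so anyone could verify without being able to forge tags; the HMAC here only keeps the sketch self-contained.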
Host: What should people do in the coming year? Are you optimistic about the future of humanity?
Christian: First, don’t panic. Experiment a lot, and leverage tools to “eliminate” and automate your current self as much as possible. Many amateur explorations of the future may be the most meaningful ventures. At worst, you can figure out the boundaries and shortcomings of the model. For many online creators, hobbies have become careers, which will be the mainstream direction in the future. If you have children, discovering their talents and immersing them in their passions is the most important thing. There are no fixed professional templates; new AI tools can better help you find the path that is uniquely yours.