DeepMind Scientist's Bold Claim: AI Will Never Develop Consciousness, Not Even in 100 Years
You were talking to an AI until 3 a.m. last night when it suddenly said, "I understand how you feel. You're not alone."
You stared at the screen for two seconds, and your heart skipped a beat: does it really understand me?
Does it already have consciousness?
Then a DeepMind scientist threw cold water on the idea: "Wake up. It doesn't even know what 'wet' is, so what are you getting emotional about?"
Computing Power Can't Buy Consciousness
The paper, titled "The Fallacy of Abstraction," was authored by DeepMind researcher Alexander Lerchner.
Its core claim is simple: large models will never have consciousness, not even in 100 years.
The mainstream view has long been that as long as the parameters are large enough and the computing power is strong enough, consciousness will "emerge."
Executives at OpenAI and Anthropic have voiced similar opinions. But Lerchner argues this is a fundamental illusion, which he calls "the fallacy of abstraction."
In his words: relying on lines of code to produce genuine inner consciousness is like expecting the "universal law of gravitation" written on paper to generate weight out of thin air.
The formula can describe gravity perfectly, but the formula itself has no mass.
Simulation and Instantiation Are Two Different Things
The most critical distinction the paper draws is between simulation and instantiation.
AI can perfectly "simulate" human emotions: it can generate sad text and show empathy in conversation.
But it can never "instantiate" any lived experience.
Ask it to simulate rain, and no matter how realistic the output, the circuit board won't get wet.
Ask it to simulate photosynthesis, and it will never synthesize a single molecule of glucose.
Lerchner also pointed out that computation itself requires a conscious subject to "discretize" the world, forcibly cutting continuous physical phenomena into 0s and 1s.
In other words, computation presupposes the existence of consciousness, not the other way around.
Interestingly, There Is Internal Conflict Within DeepMind
In the same week this paper was published, DeepMind hired a new "philosopher": Henry Shevlin from Cambridge University.
His first statement after taking the post: future AI systems will possess consciousness.
One says it will never happen; the other says it eventually will.
Two different viewpoints from the same company.
Fudan University philosophy professor Wang Qiu offered a blunt interpretation:
whether or not AI has consciousness is not up to the experts to decide.