A researcher at Google DeepMind has published a paper arguing that AI can never achieve consciousness, framing this not as a matter of time or scale but as a fundamental categorical distinction. The paper posits that computation exists only relative to a conscious agent who interprets physical states through symbols and meanings; absent such an agent, only non-conscious physical processes remain. The author labels the mistake of conflating the two the “Abstraction Fallacy.” This argument challenges the prevailing framework linking computation to consciousness, asserting that if consciousness were to arise in a system, it would be due to the system’s physical constitution rather than its algorithms. The paper emerges amid ongoing debate at DeepMind, a leader in developing advanced AI capabilities.

Google DeepMind: Google DeepMind is a leading AI research laboratory owned by Alphabet, specializing in developing advanced systems to solve complex scientific problems and pursue artificial general intelligence. In March 2026, the lab published a cognitive framework for measuring AGI progress by breaking down intelligence into human-comparable faculties. A researcher from Google DeepMind authored the paper ‘The Abstraction Fallacy,’ which argues that computational processes can simulate but not instantiate consciousness, forming the core of this news event.

```json
{
  "Internal Debate": "The discussion on AI consciousness originates from a researcher at DeepMind, a lab recognized for developing advanced AI models.",
  "Abstraction Fallacy": "The paper introduces the Abstraction Fallacy, highlighting the mistake of equating computational simulations with the actual physical processes they are meant to represent."
}
```