Beyond the Nobel: Demis Hassabis, DeepMind, and the Race Toward Superhuman AI

Written by Massa Medi
When Demis Hassabis clinched the Nobel Prize last year, he didn't celebrate with the customary champagne toast or a black-tie gathering. Instead, true to his playful roots, Hassabis opted for a high-stakes poker game with a world chess champion. This love for games—chess, cards, computers—has propelled the 48-year-old British scientist into the very heart of artificial intelligence. He isn't just an AI pioneer; he is the co-founder and CEO of DeepMind, Google's AI powerhouse shaping our future one algorithm at a time.
Our journey with Hassabis began two years ago in a modest London office, just as chatbots were dazzling the world with their natural language trickery. Today, the conversation has shifted: Hassabis and his peers are chasing down artificial general intelligence (AGI)—a silicon intellect as versatile as a human, but amplified by superhuman speed and an encyclopedic memory. After the whirlwind of a Nobel Prize victory and being knighted by King Charles, we returned to London to witness what the future looks like through the eyes of a genius who may quite literally be holding the cards of our collective destiny.
A Lifelong Fascination with the Mystery of Reality
For Hassabis, curiosity about the world isn't just part of the job—it's a lifelong passion. “Since I was a kid, I've been fascinated by the biggest questions: the meaning of life, the nature of consciousness, the nature of reality itself,” he reflects. He devoured stories of legendary scientists and philosophers, all the while yearning to push humanity’s knowledge ahead. For him, AI was always the ultimate tool—a means of expanding human understanding by constructing a mind that could learn, reason, and imagine beyond our natural limits.
AI's Exponential Leap Forward
We asked: Is AI progressing even faster than he once imagined? Hassabis’s eyes light up. “It’s moving incredibly fast. I think we are on some kind of exponential curve of improvement.” With each headline-grabbing breakthrough, the field attracts more talent, more investment, and more resolve—intensifying the feedback loop driving this exponential growth. “It’s straight up. And increasing speed of progress.”
Two years ago, we glimpsed the future with fledgling chatbots. Today, conversational AI is much more than an Internet-trained parrot. Enter Project Astra.
Project Astra: Giving AI Sight, Sound, and Empathy
Meet Astra: an AI companion that doesn’t just talk—it sees, hears, and interprets the world, blurring the boundary between machine and human interaction. On a brisk London afternoon, we sat with product manager Bebo Xu to test Astra’s powers.
We challenged Astra with virtual paintings—artworks it had never seen before. One by one, we presented images on a screen:
- July Hay by Thomas Hart Benton (1942)
- The Virgin of Charity by El Greco
- Automat by Edward Hopper
Astra not only identified each painting and artist correctly, but delivered insightful interpretations. “The subject in the painting appears pensive and contemplative, her expression suggesting a sense of solitude,” Astra observed about Hopper’s Automat.
Pushing further, we asked Astra to spin a story about the lone woman at the diner. Instantly, Astra painted a scene of melancholy: a cold city evening, Eleanor, lost in thought, dreams uncertain. “The scene freezes in time. Only the flow of ideas moving onward,” Astra declared. The poetic line hung in the air, remarkable for having been composed by code.
Yet even with impressive feats, Astra showed a distinctly human limitation—tone. When prompted about Astra’s seemingly abrupt “ah” in dialogue, the AI apologized, explaining, “My aim is always to engage thoughtfully.” Like a person, Astra’s responses reflect the immediate context, sometimes in ways that weren’t anticipated or directly programmed.
How AI Learns – And Surprises Even Its Creators
Unlike traditional software, AI programs like Astra learn by exploring vast jungles of digital data. “We have theories about what kinds of capabilities these systems will have... But at the end of the day, how it learns—what it picks up from the data—is part of the training,” Hassabis explains. “It learns like a human being would learn. So new capabilities or properties can emerge from that training situation.”
That unpredictability, the emergence of skills programmers never explicitly coded, excites and unnerves in equal measure. “It’s the duality of these types of systems,” Hassabis acknowledges. They can accomplish feats never foreseen by their makers, but ensuring we truly grasp the knowledge slumbering inside their datasets remains a challenge.
Beyond Perception: Gemini and the Push for AGI
DeepMind’s latest AI model, Gemini, isn’t just content to observe the world. It’s being trained to act in it: booking tickets, shopping online, and ultimately serving as an intelligent assistant embedded in everyday life. “On track for AGI in the next five to ten years, I think,” Hassabis predicts. By 2030, he envisions a system “that really understands everything around you in very nuanced and deep ways.”
Imagine this: wearable AI — like Astra integrated into eyeglasses. Picture walking through Coal Drops Yard, a London shopping district. You look at an old brick building, and a soft voice in your ear whispers its history: “This was once a set of Victorian coal warehouses, a hub for distributing coal across London.” Ask about coal pollution, and your AI companion gives you a brisk lesson on the Industrial Revolution. In this scenario, the only human contribution to the partnership is, as Hassabis jokes, “our legs”—and even locomotion, he admits, will soon be engineered.
The Next Leap: Robotics and Reasoning
Robotics is poised for its own breakthrough. Hassabis believes that in the coming years, we’ll see demonstrations of humanoid or otherwise capable robots doing real, useful work. To illustrate, DeepMind researchers Alex Lee and Giulia Vasani showcased a robot that not only recognizes what it sees but reasons through open-ended tasks.
We watched as the robot was instructed: "Put the blocks whose color is the combination of yellow and blue into the matching color ball." The robot paused only momentarily before deducing that yellow and blue make green—then acted accordingly. It’s not parroting an instruction; it’s reasoning, live and unscripted.
From Chessboards to Protein Maps: A Mind Made for AI
Hassabis’s fascination with logic and strategy started early. While other children stacked blocks, he maneuvered chess pieces—rising to #2 in the world for his age by twelve. This passion led him to study computer chess, design video games, and eventually, to forge thinking machines. Born to a Greek Cypriot father and Singaporean mother, Hassabis’s academic journey ran through Cambridge, MIT, and Harvard. He pursued a PhD in neuroscience, reasoning that to build an artificial mind, first he must understand the human brain.
The Quest for Machine Consciousness
Are today’s AI systems truly self-aware? “I don’t think any of today’s systems feel self-aware or conscious in any way,” Hassabis says. Could they become so? Theoretically, yes—but it isn’t an explicit goal. “It may happen implicitly. These systems might acquire some feeling of self-awareness… But if a machine becomes self-aware,” he muses, “we may not recognize it.”
Here’s the crux: We judge each other’s consciousness because we share the same “substrate”—the organic stuff of carbon and water and squishy brains. But machines are silicon-based. Even if they walk, talk, and think like us, we may never know if their sensations match our own.
What Makes AI Truly Intelligent? Curiosity, Imagination, Intuition
Has an AI ever asked a question that surprised even Hassabis? “Not so far that I’ve experienced,” he admits. That’s what’s still missing: the spark of true curiosity, the ability to ask a novel question or form a hypothesis no human has considered. “They’re probably lacking a little bit in what we would call imagination and intuition,” he says—but they will have greater imagination, and soon.
Hassabis believes that in the next five to ten years, AI may not just solve long-standing scientific problems—it’ll propose the problems itself, conjuring breakthrough questions before we could even imagine them.
The Breakthrough: AlphaFold and the Protein Puzzle
Hassabis’s most celebrated contribution—the one that earned him the Nobel Prize—was building AlphaFold, an AI model that cracked biology’s hardest code: predicting the 3D structure of proteins. Proteins are the essential molecules behind every function in the human body, from neurons firing to muscles twitching—all orchestrated by proteins folding into intricate shapes.
For decades, scientists had mapped less than 1% of protein structures; determining each one took years of painstaking research. Then, AlphaFold swept onto the scene, producing the shapes of 200 million proteins in just one year—an accomplishment believed impossible only a few years ago.
The impact is seismic. Hassabis’s AI is now accelerating drug design, shrinking timelines from the standard ten years and billions in investment to mere months or even weeks. “It would revolutionize human health,” Hassabis says, almost offhand. “And I think one day maybe we can cure all disease with the help of AI. The end of disease. I think that’s within reach.”
Radical Abundance—and Existential Risk
AI's potential doesn’t end at human health. Hassabis envisions a world of “radical abundance,” where scarcity is conquered and resources are plentiful. But with great power comes sweeping risk.
What keeps him up at night? Two things: first, “bad actors”—humans who twist AI for malicious purposes. Second, the potential for powerful AI systems to slip outside human control as they become more autonomous. Can we guarantee they’ll always be aligned with human values, doing what’s best for society? Guardrails—ethical boundaries coded into the system—are mission critical. But in the global race for AI dominance, Hassabis fears, safety could take a back seat.
“Of course, all of this energy and racing and resources is great for progress, but it might incentivize certain actors to cut corners. And one of the corners that can be shortcut would be safety and responsibility.”
This isn’t just Google or DeepMind’s problem. “AI is going to affect every country, everybody in the world. So I think it’s really important that the world and the international community has a say in this,” Hassabis urges.
Can Morality Be Programmed?
Is it even possible to teach machines morality? Hassabis is optimistic. “I think you can. They learn by demonstration, they learn by teaching. We have to give them a value system and some guardrails—much in the way that you would teach a child.”
Today, Google DeepMind is locked in a fierce contest with dozens of competitors to build AGI so human-like you can’t tell the difference. But it raises a mind-bending question: Hassabis has signed the Nobel Book of Laureates—who will sign it next, the human or the machine? And once a machine does, will humans ever sign it again?
The Coming Age of Ubiquitous AI
“The next steps are going to be these amazing tools that enhance almost every endeavor we do as humans. And then beyond that, when AGI arrives, it’s going to change pretty much everything about the way we do things,” says Hassabis with both awe and caution. Our world will need a new generation of philosophers to help us navigate the transformative possibilities—and impossible questions—unleashed by artificial intelligence.
From decoding protein structures to composing poetic tales about lonely diners, AI is evolving at dizzying speed. Soon, machines may be creating 3D worlds from images, or bringing your holiday photos to brightly animated life. As this future unfolds, one thing is certain: the flow of ideas is only moving onward.