Is AI Making Us Dumber? Navigating the Cognitive Costs of Automation in the Knowledge Age

Written by Massa Medi
“Thank. Thank you. Thank you.” — What once might have sufficed as a hasty email reply is now an entry point to a profound question in today’s tech-driven world. Imagine, instead, taking the time to type, “Dear John, I wanted to say thank you for taking the time to meet for lunch last week. Please feel free to use this as a reference in the future. All the best.”Even these small acts are shifting as artificial intelligence (AI) takes root in every facet of daily life.
Welcome to another episode of ColdFusion, where we examine a possible future. The year is 2035, and every office is powered by AI. That’s right: every email, presentation, and even playlist is generated by algorithms. Gone are the days when students needed to study the fundamentals; now, school is about mastering domain-specific AI tools and learning to prompt effectively. In this near-future vision, knowledge, answers, and even entertainment are just a prompt away.
Is This Progress — or Peril?
In decades past, such a scenario might have sounded like an episode straight out of The Jetsons, but as of 2025, it isn’t far-fetched. AI is part and parcel of just about every device we use today. So, the pressing question emerges: Are we slowly offloading our cognitive responsibilities? Is AI silently making us dumber?
To clarify, AI is already driving revolutions in science, physics, and medicine. The story today is about “consumer AI,” or, as some have labeled it, “AI slop”: the run-of-the-mill models populating our daily tools and feeds. The real issue is overuse. Don’t worry, the story isn’t all doom and gloom: we’ll close with practical ways to avoid falling into the mental quicksand of an over-automated life.
How Technology Changes Our Brain: The Google Maps Effect
Consider how technology has already rewired our minds. Take Google Maps, for example. A 2020 study found that, although GPS apps deliver real economic benefits, heavy users showed weakened spatial memory. Ironically, these individuals didn’t even realize their sense of direction had deteriorated, even when the data made it clear. Convenience, it turns out, often comes at a price.
And keep in mind: GPS isn't even AI; it's just an app. If mere map apps can diminish our memory, what might today's interactive AI systems be doing to our ability to think?
AI in Academia: When Tools Replace Skills
Around the same time as the GPS study, David Raffo, a professor and academic observer, noticed a sudden, dramatic improvement in his students’ written work during the lockdown. The leap in quality was so striking, it seemed unnatural. When he confronted his class, the truth surfaced: students were turning to AI writing tools.
“I realized it was the tools that improved their writing—not their writing skills.” —David Raffo
Raffo didn't shame his students. Instead, he called it a mixed bag: AI can help us work more efficiently and gather information, but, "Our mental and cognitive abilities are like muscles… they need regular exercise to remain strong and vibrant. It takes an extraordinary person to resist the temptation of easier answers."
The Risk of Mental Atrophy
This is echoed by Dr. Ann McKee, an Alzheimer’s researcher, on The Diary of a CEO podcast. She noted that half of people who reach 85 will show signs of Alzheimer’s pathology in their brains. The mitigation? Stay mentally active and challenge your mind: that “cognitive reserve” can hold off symptoms even when the disease is physically present.
Cognitive Offloading: The Downside of Delegation
To illustrate the pitfalls of AI-driven convenience, consider a simple, real-world example. During the episode, the host glances back at a bookshelf, then asks Gemini, Google’s AI, “Did you catch the title of the white book behind me?” Gemini nails it: Atomic Habits by James Clear.
This fun exchange underscores a deeper risk — one that’s easy to miss as tech seeps into our routines. The more we outsource recall and understanding to AI, the less we’re compelled to use our own brainpower.
Studies have measured the broader effects, too. Research on calculators and autocorrect has already documented the side effects: as students rely on spell check, their spelling and punctuation skills atrophy. Next comes the AI writing assistant, which doesn’t just suggest; it thinks for you.
Cognitive Offloading Goes Mainstream
As AI rolls into everything from data entry to customer service, so too does “cognitive offloading”: the practice of outsourcing mental labor to technology. Over 600 people, from different walks of life, were surveyed to see how AI impacts critical thinking. The results weren’t reassuring: heavy users grew dependent, relinquishing problem-solving and decision-making to AI rather than engaging their own minds. The upshot was weaker critical evaluation and diminished nuance in their conclusions.
When AI Replaces Judgement — Even in the Justice System
This isn’t just an academic worry. In 2023, Detroit police arrested Porcha Woodruff, eight months pregnant, after AI-driven facial recognition flagged her as the culprit in a robbery. The match rested on a years-old mugshot from an expired-license infraction, and Woodruff, visibly pregnant, could not plausibly have committed the crime. She suffered not just a wrongful arrest but dehydration and labor complications while in custody. The charges were ultimately dropped, but not before tremendous personal cost.
Detroit’s police face multiple lawsuits over similar wrongful arrests, all stemming from overreliance on facial recognition supplied by their vendor, DataWorks Plus. Technology marketed as a revolutionary aid became a shortcut around careful thinking. This is the dark side of convenience: errors slip by unnoticed, especially when automation is trusted by default.
“Algorithmic Complacency”: Letting the Internet Decide for Us
The pattern plays out every day, on social media and beyond. On platforms like X (formerly Twitter), people routinely ask Grok AI to explain even the simplest posts. Instead of thinking, users outsource basic comprehension.
This extends to how we consume content. As Alec Watson (Technology Connections) states, “Algorithmic Complacency” is when users let recommendation engines choose what they see, read, or watch — even when alternatives are available. Just recall how, two decades back, users searched for and bookmarked favorite sites manually. Today, platforms curate nearly everything, and we’re largely unaware of how rarely we select for ourselves.
Working smarter or sabotaging ourselves? While delegating repetitive tasks to AI is efficient, using it to handle all thinking makes us dull. Younger generations, who grew up clicking “recommend” or “auto-complete,” are particularly vulnerable; recent graduates bring these habits to the workplace, often hiding their weak skills behind AI tools.
From the Information Age to the Knowledge Age — Risks and Realities
The internet once offered raw data, now synthesized into bite-sized, AI-crafted knowledge. In theory, this should empower users — if that “knowledge” is reliable. But early missteps abound: Google’s AI Overviews have delivered everything from factual errors (“Obama was America’s first Muslim commander in chief,” or “Snakes are mammals”) to unsafe advice (“Eat a rock daily for health!”).
Still, usage rises: 70% of people trust AI news summaries, and 36% believe the models are consistently factually accurate. Yet, a BBC investigation revealed that more than half of AI-generated summaries were flawed.
Even simple editing tasks, like asking ChatGPT to “make this passage nicer,” can subtly alter the original meaning, and most users never catch the change. The underlying issue is deeper still: as Oxford researchers showed, when AI is repeatedly trained on other AI-generated content, quality declines with each pass, a phenomenon they dubbed “model collapse.” After only two rounds, output quality measurably drops; by the ninth, the text degenerates into nonsense. This degradation, while initially subtle, is insidious, and it hits content representing minority viewpoints or lesser-known subjects hardest.
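The mechanics behind model collapse can be illustrated with a toy simulation. This is not the Oxford experiment itself, just a sketch under one assumption: a crude stand-in "model" that over-samples its most common tokens. When each generation is trained on the previous generation's output, rare tokens (the minority viewpoints of the analogy) vanish first, and diversity steadily collapses.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy "corpus": 10 distinct tokens with a long tail.
# tok0 appears 10 times, tok1 nine times, ... tok9 just once.
corpus = [f"tok{i}" for i in range(10) for _ in range(10 - i)]

def next_generation(data, n_samples=200):
    """One training round: a mode-seeking 'model' that sharpens the
    distribution (weights proportional to count squared), whose output
    then becomes the next generation's training data."""
    counts = Counter(data)
    tokens = list(counts)
    weights = [counts[t] ** 2 for t in tokens]
    return random.choices(tokens, weights=weights, k=n_samples)

data = corpus
for gen in range(1, 10):
    data = next_generation(data)
    print(f"generation {gen}: {len(set(data))} distinct tokens remain")
```

Running this, the count of distinct tokens shrinks generation by generation: the rarest tokens drop out almost immediately, and after several rounds only the most common few survive. Real language models are vastly more complex, but the feedback loop, where each model's skewed output becomes the next model's training data, is the same shape.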
The Self-Devouring Internet and the “Dead Internet” Theory
According to Amazon Web Services, an estimated 60% of all internet content in 2024 was generated or translated by AI. This echo-chamber effect means the internet is slowly “eating itself,” with each AI-generated output becoming training fodder for ever more garbled content. This is central to the so-called “Dead Internet Theory,” which posits that most online content is now created by bots rather than humans.
We face two possible futures: either AI evolves rapidly enough to stabilize and improve knowledge, or we descend further into a swirling pool of “AI slop”—inaccurate, incomprehensible data.
Don’t Panic—Understand the Limitations
So, should we swear off AI? Not quite.
As Geoffrey Hinton, the so-called godfather of AI, cautions: “ChatGPT is an idiot savant. It doesn’t know the difference between truth and lies, because it is trained on inconsistent data and tries to predict what someone might say next.” Unlike humans, language models blend myriad opinions with no coherent worldview.
For generations raised in this always-on, always-assisted era, unquestioning trust in AI is dangerously easy. But as history (and professors everywhere) shows, surrendering foundational thinking skills carries long-term consequences.
The Office of the Future: AI Everywhere
Today, surveys show that Gen Z and younger Millennials use two or more AI tools every week at work. Businesses find that, especially for repetitive tasks like finding the “right tone” for emails or recalling dense meeting details, AI boosts efficiency. But these same users risk never strengthening their own creative or reasoning muscles. The result? Overreliance, atrophy, and a growing blind spot to AI error.
Travel Tip: Avoiding Pitfalls with the Right Tech
Speaking of smart tech, a quick aside: if you’re traveling, don’t gamble with getting connected overseas. Services like Saily from Nord Security let you grab an affordable, best-in-class eSIM covering over 150 countries. Setup is effortless: buy a plan in the app or on the website, activate, and you’re online, with no SIM swapping or hunting for WiFi. The host shares a story of a colleague who, after racking up excessive roaming charges in the U.S., switched to the service: a reminder that not all digital advances are negative, if used wisely. (And with the code ColdFusion, readers can snag 15% off.)
A Historical Perspective: Fear of Automation Isn’t New
The spreadsheet revolution of 1979, courtesy of Dan Bricklin and Bob Frankston’s VisiCalc, was a godsend for accountants: no more recalculating an entire ledger by hand over a single typo. Far from ending the need for human accountants, VisiCalc made those who understood the fundamentals vastly more productive. AI can, if handled responsibly, play a similar role.
The Path Forward: AI As a Companion, Not a Replacement
So, what’s the key? Use AI as a companion: a helpful tool that augments, rather than replaces, your thinking. Treat its answers with a healthy grain of salt. As Professor Thomas Dietterich explains: “Large language models are statistical models, not true knowledge bases. They answer at length, even if they have nothing useful to say, and rarely refuse a question.”
Until models can self-assess their competence, users must provide the discernment. Neural networks are limited by what they’ve been exposed to—their “knowledge” mirrors the data, including its gaps and biases.
Lessons From the Past—Calculators and Critical Thinking
A 1988 newspaper image, shown during the episode, depicts elementary school teachers protesting the use of calculators in the classroom—not demanding a blanket ban, but arguing that basic skills should come first. The same should apply to AI. It's a powerful tool, but we must ensure we understand the underlying concepts before outsourcing too much.
No matter how advanced AI gets, the human mind—its authentic experience and critical judgment—remains irreplaceable. Until the age of hypothetical AI overlords, our most valuable resource is our ability to think independently. As philosopher René Descartes famously put it: “I think, therefore I am.”
Final Thoughts: Guard Your Cognitive Freedom
The episode concludes with an invitation: “I’d be interested to hear what you think.” After a deep dive into the risks and realities of consumer AI, the message is clear: choose tech wisely, keep those mental muscles strong, and never let convenience dull your mind. If you enjoyed this deep dive, be sure to subscribe to ColdFusion for more explorations of technology, science, and the future of our digital world.
My name is Dagogo. Thanks for joining me for this extended episode. Until next time, keep thinking. ColdFusion out.