A 12-year-old asks ChatGPT to do her homework. A senior executive drafts a strategy presentation using Claude. A PhD student can’t recall the last time they memorized anything. Are we getting smarter—or just better at faking it?
In the age of generative AI, we’re witnessing a paradox at the heart of human cognition. On one hand, tools like ChatGPT and Claude seem to amplify our capabilities, offering instant information, fluid collaboration, and synthetic creativity. On the other, we may be eroding the very mental muscles we once depended on: memory, reasoning, and internal knowledge.
Three recent pieces encapsulate this tension. Barbara Oakley and colleagues, in their provocative essay “The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI,” argue that over-reliance on digital tools risks weakening our ability to think deeply. In a similar vein, MIT Media Lab’s Nataliya Kosmyna and colleagues, in their paper “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” report findings from an essay-writing experiment suggesting that whilst AI reduces immediate cognitive load, it may simultaneously diminish critical thinking and lead to decreased engagement in deep analytical processes. Meanwhile, cognitive scientist Andy Clark, in “Extending Minds with Generative AI,” takes a more optimistic view. Building on his theory of the extended mind, he suggests AI is simply the next natural layer in the evolution of human cognition.
What does this mean for how we learn, work, and grow in the age of artificial intelligence?
Oakley et al. make a powerful case: even in a world where information is at our fingertips, internal knowledge still matters. Drawing on neuroscience, they explain how memory isn’t just about storing facts—it’s the substrate for reasoning, intuition, creativity, and transfer of learning.
In their view, memory builds the mental "scaffolding" (schemas, engrams, procedural fluency) that allows us to solve new problems and think critically. Without this internal architecture, we become overly dependent on tools that supply answers but don’t cultivate understanding.
And the consequences may already be visible. IQ scores in many wealthy nations have been declining since the 1970s, a reversal of the long-standing Flynn Effect. Oakley’s team suggests that the decline in memorization and the overuse of cognitive offloading (relying on external tools for mental tasks) may be contributing factors.

The work of Oakley et al. sits in the tradition of Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains (2010). In what now seems like an earlier age, Carr argued that the internet was eroding our attention and memory, creating a "pancake mind" spread wide and thin. He explored how digital technologies, and the internet in particular, were reshaping the way we think, read, and remember, rewiring our brains in ways that prioritize speed, efficiency, and surface-level engagement at the expense of deep reading, memory consolidation, and critical thinking.
“What the Net seems to be doing is chipping away my capacity for concentration and contemplation.”
Carr drew on neuroscience to show that our brains are plastic—they change in response to how we use them. Just as musicians’ brains adapt to instruments or taxi drivers’ to navigation, our brains have adapted to the medium of the internet. Heavy use of online media leads to more neural pathways for scanning, multitasking, and fast switching—but fewer for reflection, synthesis, and deep memory.
One could argue that the internet, in the early years of the 21st century, was simply the latest technology to undermine reflection and synthesis. The arrival of CNN in 1980 can be seen as an earlier instance of the cognitive shifts Nicholas Carr later described in The Shallows. While Carr focused on the internet, the underlying concern, that media formats shape cognitive habits, finds an earlier expression in the transformation of news consumption driven by CNN and similar broadcasters. Pre-CNN, news was periodic, curated, and often read (newspapers, long-form journalism); with the arrival of rolling cable news it became continuous, visual, reactive, and compressed into soundbites and headlines. CNN rewarded speed, brevity, and emotional immediacy over context and depth.
In the MIT Media Lab study run by Nataliya Kosmyna and colleagues, 54 adults were randomly assigned to three groups: (1) an LLM group (using ChatGPT); (2) a Search Engine group (using Google); and (3) a Brain-only group (no external aids). Each performed essay-writing sessions under their assigned condition. Participants were monitored with electroencephalography (EEG) to gauge brain connectivity and engagement; their essays were analyzed for linguistic diversity and homogeneity and scored by both human teachers and an AI judge. Participants were also questioned on perceived ownership and recall of their own text.
The study provides empirical evidence that heavy reliance on AI assistants in writing tasks can accumulate “cognitive debt,” diminishing neural engagement, memory recall, and originality. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels, and the study concludes that AI tools may inadvertently contribute to cognitive atrophy through excessive reliance on AI-driven solutions and diminished prospects for independent problem-solving.
Unlike conventional search engines that present diverse viewpoints for user evaluation, LLMs provide synthesized, singular responses that may inadvertently discourage lateral thinking and independent judgment. This shift from active information seeking to passive consumption of AI-generated content can have profound implications for how we process and evaluate information.
These findings reinforce the warnings of Lindebaum and Fleming, who argue that excessive reliance on AI can erode human reflexivity: the critical self-awareness and contextual judgment that underpins responsible interpretation. In their critique of ChatGPT, they argue that outsourcing reflection and meaning-making to machines undermines the essence of scientific and managerial responsibility, and that “handling ourselves ‘reflexively’ is indispensable for nurturing the unique attributes of human knowledge in the AI age”.
This “cognitive offloading” phenomenon raises concerns about the long-term implications for human intellectual development and autonomy, and underscores the need for balanced, mindful AI integration that preserves critical cognitive processes.
Andy Clark’s work comes at AI from a somewhat different angle. In his influential theory of the extended mind (developed with philosopher David Chalmers), he proposed that the mind isn’t confined to the brain. Instead, it extends into the world through tools, language, gestures, and now AI. In his recent essay, Clark argues that generative AI isn’t eroding our cognition; it’s reshaping and expanding it.
He points to historical parallels. In Plato’s Phaedrus, Socrates laments the invention of writing, fearing it would lead to forgetfulness. Later, people feared calculators would ruin our maths skills. Yet in each case, new tools eventually became embedded in human cognition, altering how we think without necessarily degrading our capacity. To Clark, AI is a co-thinker: it enhances our ability to brainstorm, prototype, and reflect. The challenge, then, is not whether we use AI, but how we integrate it into our cognitive workflows. AI, like the internet and cable news before it, is not going anywhere, and so we need to understand its impact and shape our human cognitive response.
The science of "cognitive offloading," as explored by researchers like Risko and Gilbert (2016), shows that using external aids can enhance performance, but only when balanced. Overuse can reduce our ability to encode, retrieve, and synthesize information internally. The studies by Kosmyna and Oakley raise the alarm that current trends in education and knowledge work are skewing that balance. Instead of using AI to support memory, we are increasingly substituting it for memory. And that, they warn, risks atrophying the very skills that make us uniquely human.
But Clark’s vision is not naïve. He acknowledges the risks but insists they are manageable. His proposed solution is to treat AI not as a crutch, but as a partner. This means building systems where AI augments thinking without replacing the slow, effortful work of encoding knowledge. It means designing tools that encourage reflection, memory retrieval, and synthesis—not just instant answers.
Some educators and designers are already moving in this direction. Tools like Anki use spaced repetition to strengthen long-term memory. Andy Matuschak’s concept of "evergreen notes" moves away from static, one-time-use summaries and toward living, reusable, interconnected ideas that encourage the construction of deep, connected knowledge. In the age of generative AI and fast content, evergreen notes represent a practice of intentional knowledge-building. They support the internal memory structures that Barbara Oakley champions, while also embodying Clark’s extended mind by living outside the brain but within a system of meaning. AI-augmented note-taking apps can help build this kind of cognitive scaffolding, but only if we use them intentionally.
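To make the spacing idea concrete, here is a minimal Python sketch of the scheduling logic behind spaced-repetition tools. It is a deliberately simplified stand-in for real algorithms such as Anki’s SM-2, and the Card and review names are hypothetical: each successful recall roughly doubles the gap before a fact is seen again, while a lapse resets it, so effort concentrates on whatever is weakest in memory.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    """A single fact to remember, with its review schedule."""
    prompt: str
    answer: str
    interval_days: int = 1                       # gap until the next review
    due: date = field(default_factory=date.today)

def review(card: Card, recalled: bool) -> Card:
    """Update a card's schedule after a review attempt.

    Successful recall roughly doubles the interval, so practice migrates
    toward the hardest material; a lapse resets the interval to one day.
    """
    card.interval_days = card.interval_days * 2 if recalled else 1
    card.due = date.today() + timedelta(days=card.interval_days)
    return card

# Three successful reviews push a card out to an 8-day interval;
# a single lapse brings it straight back to tomorrow.
card = Card("Flynn Effect", "the long-run 20th-century rise in average IQ scores")
for recalled in (True, True, True, False):
    review(card, recalled)
print(card.interval_days, card.due)
```

The design choice is the point: the schedule rewards effortful retrieval rather than re-reading, which is exactly the kind of internal memory-building that instant AI answers tend to bypass.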
Can we chart a course between Clark, Kosmyna and Oakley? How do we navigate this tension between memory and extension? The answer may lie in embracing a "Hybrid Mind": a mode of cognition that actively balances internal development with external augmentation. This involves:
Using AI to practice, not bypass, recall: Before asking the model, try to remember.
Designing friction into workflows: Slow thinking is often better thinking.
Deliberately building schemas: Through retrieval practice, writing, teaching others.
Curating, not hoarding, information: Treat knowledge as a garden, not a warehouse.
The Hybrid Mind learns with AI, but never delegates thinking entirely; the sketch below shows one way such friction and recall practice might be wired into an everyday AI workflow.
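As a purely illustrative example, here is a hypothetical Python sketch of a recall-first wrapper: the user must write down an unaided attempt before the question is ever sent to a model. The ask_model function is a stub standing in for whichever assistant or API you actually use.

```python
def ask_model(question: str) -> str:
    """Stub standing in for a call to whichever AI assistant or API you use."""
    return "(model answer would appear here; replace this stub with a real call)"


def recall_first(question: str) -> str:
    """Add deliberate friction: demand an unaided attempt before any AI answer.

    Forcing retrieval first turns the assistant into a drill partner rather
    than a proxy, and makes the gap between what you knew and what it said
    visible.
    """
    attempt = input(f"{question}\nYour own answer first:\n> ").strip()
    if not attempt:
        return "No attempt, no answer: try to recall something first."
    return f"You wrote:\n{attempt}\n\nThe model wrote:\n{ask_model(question)}"


if __name__ == "__main__":
    print(recall_first("What does the reverse Flynn Effect refer to?"))
```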
In education, this means resisting the temptation to turn AI into a student’s proxy. Instead, use it as a tutor, a drill partner, a guide. Retrieval practice, spaced learning, and reflection still matter. Memorization isn’t medieval—it’s neurological. In work, it means being aware of what we’re offloading. Do we understand the slide deck our assistant generated? Can we explain our decisions without the tool in front of us? Do we know, or just know how to ask? This is about more than productivity. It’s about epistemic security: confidence that we genuinely grasp the knowledge we act on.
While educators wrestle with how AI changes learning, businesses face an equally urgent challenge: how AI reshapes thinking on the job. As generative AI automates research, writing, analysis, and even strategy formulation, there’s a risk that thinking becomes prompting. If employees rely on AI outputs without deeply understanding the inputs or reasoning, decision-making becomes brittle: fast, but shallow. Companies can reframe AI as a cognitive amplifier, not a surrogate. That means training people to collaborate with AI, to question it, edit it, and challenge it, not just accept what it says.
Professionals in finance, strategy, design, and law often learn by doing: repeating, revising, reflecting. If AI tools bypass this friction, junior staff may skip the internal scaffolding that senior experts once built. This risks developing a generation of workers who can operate tools but can’t reason under pressure or transfer skills across contexts. Businesses need to embed memory-building loops in workflows: ask employees to recall before prompting, to write summaries in their own words, and to teach others, all proven to strengthen internal cognition. AI often delivers fluent, confident answers, and this fluency breeds overconfidence and can blind us to alternatives (or indeed to the truth, since fluency does not necessarily equal truth). Companies need to cultivate epistemic humility, an awareness of what they don’t know, and a culture of slow thinking where appropriate.
Just as we once trained people in digital literacy, we now need cognitive literacy for the AI era. We need to understand when to trust the model, when to step back and reflect, and how to structure thinking before automation. Senior leaders need to model thoughtful, intentional use of AI, not just delegate tasks to it.
Are we outsourcing ourselves—or are we reimagining what it means to be smart? The answer may depend not on the tools we use, but on the habits we cultivate. Generative AI can extend our minds, but only if we continue to build the minds it extends. In the end, memory still matters. Not because we can’t Google facts, but because knowing things is the foundation for connecting ideas, generating insight, and thinking well. We must stop thinking of memory, reflection, and understanding as “academic” concerns. In the AI-shaped workplace, they’re competitive advantages.
Further Reading:
Carr, N. (2010) The Shallows: What the Internet Is Doing to Our Brains.
Clark, A. (2024) Extending Minds with Generative AI. Nature Communications.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I. & Maes, P. (2025) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.
Lindebaum, D. & Fleming, P. (2023) ChatGPT Undermines Human Reflexivity, Scientific Responsibility and Responsible Management Research.
Matuschak, A. (2019) ‘Why books don’t work’.
Oakley, B., Johnston, M., Chen, K., Jung, E. & Sejnowski, T. (2025) The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI.
Risko, E.F. & Gilbert, S.J. (2016) Cognitive Offloading. Trends in Cognitive Sciences.