‘I think, therefore I am’ … the value of metacognition in the age of AI
In the late 1970s, a developmental psychologist named John Flavell (1928-2025) introduced the term metacognition in a short paper titled ‘Metacognition and cognitive monitoring’. The prefix “meta” means “about,” “beyond,” or “above,” so metacognition literally means “thinking about thinking”.
Flavell was interested in how children learn — how they become aware of what they know, what they don’t know, and how they decide when to slow down, check, or change strategy. But while his work was rooted in developmental psychology, the idea has since traveled far beyond the classroom.
At first glance, developmental psychology seems like an unlikely place to look for insight into the modern workplace. But Flavell’s core idea turns out to be quietly central to one of the most pressing challenges facing knowledge work today.
Metacognition, as Flavell defined it, is the capacity to monitor and regulate cognitive activity as it unfolds. It is not intelligence. It is not expertise. It is not even reflection after the fact. It is the internal supervisory system that notices how thinking is going, assesses whether it is appropriate to the task at hand, and intervenes when something feels off.
For decades, this capacity operated largely in the background of professional life. Then AI arrived and quietly changed the conditions under which metacognition works. Before generative AI, thinking at work (drafting, analyzing, synthesizing, framing) took time. That effort carried information: difficulty, hesitation, and friction were signals. When something felt too hard, we paid attention. When something felt too easy, we grew suspicious. These sensations acted as crude but effective cues, helping us monitor whether we actually understood what we were doing.
Generative AI disrupts that relationship. It produces output at speed, with fluency and confidence, often wrapped in structures that look reassuringly familiar. The work that once required visible effort now appears instantly, polished and coherent. And in removing friction, AI removes many of the cues that metacognition has historically depended on.
In recent interviews we conducted with management consultants, participants didn’t describe AI as a threat to thinking. They described it as exciting, powerful, and oddly disorienting. Again and again, the language circled around uncertainty rather than resistance. People talked about being “at ground zero,” about needing to “learn how to ask,” about checking whether they had “missed something obvious.” These aren’t technical problems. They’re metacognitive ones.
What’s striking is that the anxiety isn’t about generating ideas. AI does that easily. The unease is about supervising those ideas—about knowing when to trust them, when to challenge them, and when to step back and think more slowly. People sense that something important has shifted, even if they don’t yet have the language for it.
“In many real-life situations, the monitoring problem is not to determine how well you understand what a message means, but to determine how much you ought to believe it” (Flavell, 1979)
Metacognition, Flavell argued, relies on indirect signals. We don’t have direct access to the quality of our own thinking. Instead, we infer it from cues like fluency, familiarity, and confidence. If something comes easily to mind, we tend to assume we understand it. If it sounds coherent, we assume it’s well-reasoned. These shortcuts usually work well enough. But they are vulnerable to distortion. AI is exceptionally good at generating exactly the kinds of cues that metacognition mistakes for quality.
One way to see what metacognitive monitoring actually does is to think about how we decide what level of explanation we’re prepared to accept. Take something ordinary like boiling a kettle.
At one level of explanation, getting hot water from a kettle is trivial. You fill it, switch it on, and pour. For most purposes, that explanation is perfectly adequate. It allows you to make tea and get on with your day.
But that same explanation is obviously incomplete. If you step one level down, you realize the kettle only works because it’s plugged into a power socket. Now electricity enters the story. Step down again and you’re talking about power stations, transmission grids, and fuel. Step down further and you find yourself digging coal out of the ground, or splitting atoms, or harvesting wind and sunlight.
None of these explanations is wrong. They are answers to different versions of the same question … how do you get hot water from a kettle? The skill lies in knowing which level of explanation is appropriate, and when a higher-level answer is no longer sufficient.
Metacognition is what lets us make that call.
Working with AI recreates this problem at speed. AI defaults to plausible, ready-to-use answers, even when the task demands something deeper. The danger is not that the explanation is false, but that we fail to notice when we need to ask for more.
Outsourcing cognition—using tools to store, retrieve, or process information—is not new. We’ve been doing it for centuries, from writing to calculators to spreadsheets. But generative AI extends cognitive offloading into areas that were previously central to professional identity: drafting arguments, structuring analysis, generating hypotheses, even simulating expertise.
Cognitive systems adapt to how they are used. When certain forms of effort are repeatedly outsourced, the skills associated with monitoring and regulating those efforts can weaken. If AI consistently handles first-pass thinking, pattern generation, and synthesis, humans are pushed into a supervisory role. But supervision is not automatic. It requires a well-developed metacognitive capacity—one that many professions have never explicitly trained. In knowledge work, this matters enormously.
Consulting provides a particularly sharp illustration. Long before AI, consulting depended on fast judgment under uncertainty. Expertise was never just about knowing more; it was about knowing when something was “good enough,” when a story held together, and when a line of reasoning would stand up in front of a client. Metacognition operated quietly beneath the surface, helping consultants sense when confidence was justified.
AI accelerates this dynamic while hollowing it out. Output arrives fully formed, often better structured than anything a human would produce under time pressure. The temptation is to move straight to judgment: to accept, refine, and present. But judgment without metacognitive monitoring becomes brittle. Confidence rises while calibration falls. Errors don’t disappear; they just become harder to spot.
Several of our interviewees articulated this tension without naming it. They described checking and rechecking, not because they distrusted the machine, but because they no longer trusted their own internal signals. When everything looks fluent, how do you know when to slow down? When output arrives instantly, how do you tell whether thinking has actually happened? This is the metacognitive crisis of the AI workplace.
Flavell never anticipated AI, but his framework makes something clear: metacognition is not a luxury add-on to thinking. It is what prevents thinking from drifting into autopilot. And autopilot, in an environment saturated with fluent machine-generated output, is where mistakes slip through unnoticed.
If cognitive offloading continues unchecked, we risk a subtle form of professional deskilling—not in the ability to produce content, but in the ability to evaluate it. Over time, this can lead to cognitive atrophy: a weakening of the mental muscles involved in monitoring, questioning, and regulating thought. People still appear productive. Output still flows. But the internal capacity to notice when thinking is shallow, misaligned, or incomplete begins to erode.
The response to this is not to reject AI, nor to romanticize effort for its own sake. It is to deliberately dial up metacognition in knowledge work. To treat thinking about thinking as a core professional capability rather than an incidental by-product of experience.
This means creating space for thinking … slowing down not because technology is slow, but because judgment demands it. It means developing new cues to replace the ones AI has displaced: cues that help us distinguish between fluency and understanding, between confidence and competence. Flavell’s insight was that good thinkers are not those who never make mistakes, but those who notice when they are likely to be making one. In the AI-shaped workplace, that capacity is under pressure. But it is also more valuable than ever.
The future of knowledge work will not be secured by those who can generate the most output with AI. It will belong to those who can still tell when that output deserves to be questioned. And that was Flavell’s point all along.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911.