Stupidity as strategy …
How organizations can be their own worst enemies
In the last post, we argued that metacognition — thinking about thinking — has become a critical capability in the age of AI. As generative systems remove friction from knowledge work, they also remove many of the cues that once helped us monitor whether our thinking was any good. Fluency and coherence are no longer reliable signals as outputs arrive fully formed.
The response, we suggested, has to begin at the individual level. People need to become more deliberate supervisors of their own cognition: more aware of when to slow down, when to question, and when not to trust what looks plausible. That argument still stands. But it is not the whole story.
Because even if individuals develop stronger metacognitive habits, the organizational environments they work in may actively undermine them. Thinking better is hard to sustain in systems that quietly reward not thinking too much. This is where a less comfortable body of work becomes relevant — one that shifts the focus away from individual capability and towards organizational design.
In 2012, Mats Alvesson and André Spicer published a paper with an intentionally provocative title: A Stupidity-Based Theory of Organizations. The paper draws attention to the ways organizations may (wittingly or unwittingly) actively discourage reflexivity or metacognitive questioning, even amongst skilled professionals. Their argument was not that organizations are full of stupid people. Quite the opposite. Their claim was that modern organizations are often highly effective at producing intelligent unthinking. They called this functional stupidity.
By “stupidity”, Alvesson and Spicer didn’t mean low intelligence or lack of skill. They meant the systematic suppression of three things: Reflexivity … questioning assumptions, roles, and goals; Justification … asking why something is being done; and Substantive reasoning … engaging with meaning rather than surface logic.
Functional stupidity is what happens when capable people learn — often quite rationally — not to ask certain questions, not to challenge certain narratives, and not to slow things down with too much thought. Importantly, this isn’t usually enforced through overt coercion. It emerges through norms, incentives, career signals, and the subtle social grammar of organizations. People learn what is welcomed, what is tolerated, and what is quietly penalized. And over time, they adapt.
This is where the argument intersects directly with metacognition. Metacognitive awareness allows an individual to notice when something feels off … when reasoning is thin, assumptions are shaky, or when confidence outpaces understanding. But acting on those signals requires a context in which doing so is safe and valued. Many organizations do not provide that context.
Most professionals recognize this pattern intuitively. There are meetings where questions are invited in theory but discouraged in practice. Strategies are presented as inevitable; decisions are framed as already settled. In these environments, metacognitive discomfort becomes something to manage privately rather than surface publicly. You might notice that a proposal doesn’t quite make sense — but you also notice that challenging it will slow things down, make you look difficult, or mark you as “not commercial”.
You may suppress your own judgement following the advice (of former U.S. Speaker of the House Sam Rayburn) that “If you want to get along, go along”. By conforming to the consensus and aligning with the organization’s narrative, you are maintaining harmony, avoiding conflict, and ensuring acceptance. So whilst metacognition is key to judgement, organizations and context may discourage metacognitive questioning or reflexivity – especially where authority, legitimacy or speed matter and where GenAI increases the pressure to accept outputs.
This is not a moral failure. It is an adaptive response to organizational reality. As Chris Argyris showed decades ago, highly educated professionals are often exceptionally good at avoiding learning when it threatens competence, status, or identity. Organizations, in turn, are very good at rewarding this avoidance. Functional stupidity keeps things moving. It greases the wheels. It reduces conflict. It creates the appearance of alignment. And in the short term, it can look like effectiveness.
The problem is what it crowds out. One of Alvesson and Spicer’s most uncomfortable insights is that critical thinking is often treated as a form of inefficiency. It introduces delay and complicates narratives. It unsettles collective confidence. From a managerial perspective, it can look like friction.
Generative AI intensifies this dynamic. When output is instant and fluent, the organizational temptation is to accelerate judgment rather than deepen it. Critical review becomes refinement, and the first draft quickly and effortlessly becomes the final output. The individual may feel a flicker of doubt — a metacognitive signal that says “this feels too neat” — but the system around them rewards speed, polish, and alignment. And over time, the signal is ignored. This is how deskilling happens without anyone noticing. Not because people stop thinking, but because they stop intervening in their own thinking.
Collective blindness
The literature on groupthink makes this pattern visible at the collective level. Irving Janis showed how cohesive groups suppress dissent not out of malice, but out of a desire to maintain harmony and momentum. Warning signs are reinterpreted, and critics are marginalised. Alternatives are dismissed as unrealistic or unhelpful. Functional stupidity is a spikier version of groupthink.
Harry Frankfurt’s short essay On Bullshit adds another layer. Bullshit, he argued, is not lying; it is indifference to truth. Much organizational language is not designed to be false or true, but to sound right, signal alignment, and keep things moving. When this kind of language dominates, metacognitive monitoring becomes harder, because coherence replaces substance as the primary cue. Again, AI acts as an accelerant. It is extremely good at producing bullshit in Frankfurt’s precise sense: fluent, confident, context-appropriate language that is untethered from concern for truth. Without strong individual metacognition and organizational permission to use it, this is a dangerous mix.
It would be comforting to frame this as a purely personal challenge: learn better prompts, develop stronger critical thinking habits, cultivate reflective distance from AI output. All of that matters. But it places the burden entirely on individuals operating inside systems that may quietly penalize exactly those behaviors. You can teach someone to notice when thinking is shallow. You can even help them articulate why something feels wrong. But if the organization rewards compliance, speed, and narrative coherence over challenge and sense-making, that capacity will wither.
Metacognition is not just a mental skill. It is a social one. It depends on norms, incentives, and expectations about what thinking is for. This is why Alvesson and Spicer’s work is such an important counterpoint. They remind us that organizations don’t just host thinking; they shape it.
The implication is not that organizations need to become more cerebral, academic, or endlessly reflective. That would be its own form of dysfunction and would probably lead to paralysis. The challenge is to stop actively getting in the way of individuals thinking (about thinking). That means loosening the grip of the “go along to get along” culture. It means recognizing that some friction is productive — that hesitation, challenge, and temporary uncertainty are not signs of failure but prerequisites for judgement.
James March captured this in his essay on “the technology of foolishness”. Organizations, he argued, need spaces where exploration, play, and even apparent foolishness are tolerated — because without them, learning collapses into exploitation of what already exists. In AI-shaped workplaces, this balance becomes critical. If humans are left only with the role of approving, polishing, and presenting machine-generated output, their distinctive value erodes. Not because AI is smarter, but because the system has decided that thinking is optional.
What would a different organizational bargain look like? It is not one where everyone questions everything all the time. It is one where metacognitive signals are allowed and taken seriously, and where the risks of not thinking critically are recognized. Where slowing down is sometimes legitimate. Where asking “what are we assuming?” is not career-limiting. Where disagreement is not automatically reframed as negativity. This is definitely not about vacuous value statements as wall posters, but everyday signals: whose contributions are noticed, what counter-points are encouraged in discussions, what kinds of questions get airtime, and which ones are quietly parked.
The deepest risk of AI in knowledge work is not mass automation or sudden displacement. It is something more insidious … the gradual erosion of our capacity to notice when thinking is sidelined. Individuals can and should develop stronger metacognitive habits. But unless organizations also change — unless they make space for those habits to be exercised — the effort will be unsustainable. Alvesson and Spicer’s point was never that organizations should become smarter in some abstract sense. It was that they should become less effective at suppressing intelligence.
Seen this way, polymathy stops being a personal aspiration and starts to look like a structural response to how modern organizations actually work. Polymathy is not about knowing many things. It is about moving between frames — shifting levels of explanation, connecting ideas across boundaries, and recognizing when a familiar story is no longer sufficient. That capacity depends on metacognition. You have to notice how you are thinking before you can choose to think differently.
But it also depends on permission. Highly specialized organizations reward fluency within a frame, not movement between them. Over time, this makes them efficient but cognitively inflexible. Functional stupidity thrives because polymathic behavior is inconvenient. It introduces doubt, slows decisions, and unsettles neat narratives. In other words, polymathy runs directly against the logic of “go along to get along.”
When machines can generate competent output across domains, narrow expertise becomes easier to replicate. What remains valuable is judgement: knowing which questions matter, which explanations are sufficient, and when confidence is misplaced. These are polymathic traits, and they rely on strong metacognitive supervision.
The implication is not that everyone should become a polymath. It is that organizations need more polymathic capacity than they currently allow. Individuals can cultivate it, but unless systems stop quietly penalizing cross-boundary thinking, it will not stick. In an AI-shaped workplace, polymathy will not flourish because people try harder. It will flourish only when organizations stop treating thinking as friction and start recognizing that it is the point.