Your Colleague Is An Algorithm
The rise of collective intelligence at work
“The company is rapidly remaking itself around artificial intelligence. We now have a workforce of 65,000, which is made up of 40,000 humans and 25,000 agents. Every employee will be enabled by at least one or more agents.” — Bob Sternfels, CEO, McKinsey & Company
Twenty-five thousand AI agents working inside one organization. A hybrid workforce where humans and machines collaborate to produce insight. It sounds extraordinary. A consulting firm quietly building a parallel workforce of digital analysts.
But if McKinsey really had created 25,000 artificial consultants, the consulting industry would already look very different. So the interesting question isn’t how McKinsey built 25,000 agents, but rather what those agents actually represent.
And once you start pulling on that thread, the story becomes much more interesting. It becomes a story about how intelligence itself is being reorganized inside organizations. Strip away the rhetoric and most of these agents are doing something fairly prosaic. They retrieve documents from internal knowledge bases; they summarize reports and interview transcripts; they generate charts and draft slide decks; they scan datasets and flag patterns; and they critique the structure of arguments.
In other words, they perform many of the cognitive tasks that junior consultants have traditionally performed. Nor is the underlying technology particularly exotic. Most organizations building these systems rely on the same large language models available to the rest of the world through platforms developed by companies such as OpenAI, Anthropic and Google. The models themselves are not McKinsey’s secret sauce, which raises an obvious question: if the AI is broadly available, what exactly is McKinsey gaining from deploying 25,000 agents? What’s the sustainable competitive advantage here? The answer lies not in the AI but in what the AI is connected to.
(there is also a discussion to be had about the ‘billable hours’ revenue model – but that is for another post)
Consulting firms have always possessed a potentially powerful asset: institutional knowledge. Every project generates knowledge. Market analyses, cost benchmarks, strategic frameworks, interview transcripts, financial models and slide decks accumulate year after year. Taken together, they form an enormous institutional memory — a record of how complex problems have been analyzed and solved across industries and geographies. For decades this archive has been one of the consulting industry’s most valuable assets.
AI can search archives by meaning rather than keyword. It can synthesize insights across dozens of engagements. It can recombine fragments of previous work into a coherent narrative within seconds. Seen this way, the real function of McKinsey’s agents is not to replace consultants (although it may well do that) but to unlock and operationalize institutional memory at scale. AI becomes the interface between people and decades of accumulated knowledge. And that shift matters far more than the headline number of agents.
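The difference between keyword search and meaning-based search comes down to comparing vectors rather than matching strings. The toy sketch below illustrates the idea: in a real system each document would be embedded by a model (from OpenAI, Google or an open-source sentence encoder), whereas here the document titles and their 3-dimensional vectors are hand-made stand-ins, and all names are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same
    # direction, ignoring their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical institutional archive: in practice an embedding model
# maps each document to a high-dimensional vector; these tiny vectors
# are illustrative stand-ins.
archive = {
    "2019 retail cost benchmark":      [0.9, 0.1, 0.0],
    "Interview notes, telecom merger": [0.1, 0.8, 0.2],
    "Pricing strategy framework":      [0.7, 0.2, 0.4],
}

def search(query_vec, k=2):
    # Rank documents by semantic closeness to the query vector,
    # not by shared keywords.
    ranked = sorted(archive.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query about store operating costs embeds near the cost and
# pricing documents, even if it shares no words with their titles.
print(search([0.8, 0.1, 0.2]))
# → ['2019 retail cost benchmark', 'Pricing strategy framework']
```

The point of the sketch is that relevance is computed geometrically, which is why a query phrased in entirely different words from the archived documents can still surface the right engagement history.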
It would be easy to treat this as a story about the consulting industry. In reality it is something much larger. Consulting simply provides a great example because the industry is built entirely on knowledge work. When the economics of thinking change, consulting feels the shock first. But the same pattern is emerging everywhere.
Law firms are using AI to review contracts and generate legal drafts. Financial analysts are deploying AI to synthesize research and model scenarios. Doctors are working alongside diagnostic systems that interpret medical images. Across the economy, the architecture of knowledge work is shifting.
Humans are no longer the only cognitive force inside organizations; cognition is no longer an exclusively biological phenomenon. Machines are becoming part of the thinking process. The traditional model was simple: a smart analyst gathers information, analyses it and produces insight, and organizations succeed by hiring enough capable people and coordinating their efforts.
Research on collective intelligence and group cognition shows that groups can display stable differences in performance across many tasks — differences that cannot be explained simply by the intelligence of the individuals involved. Some teams consistently outperform others because of factors such as social sensitivity, balanced participation and the ability to integrate diverse perspectives (Woolley et al., 2010). In other words, intelligence can exist at the level of the group. Historically that group intelligence emerged through human interaction: discussion, disagreement, persuasion and shared interpretation. Artificial intelligence changes the architecture of that system.
In a traditional meeting, the cognitive resources available to a group are limited to the knowledge, memory and attention of the people in the room. Each brings their own expertise and perspective. But when AI enters the system, the topology changes. Each participant effectively gains access to an additional cognitive node: Person A + AI, Person B + AI, Person C + AI.
This is not equivalent to adding another person to the room. Instead, the entire cognitive system expands. AI can retrieve relevant knowledge instantly. It can summarize complex material, generate alternative framings and simulate possible outcomes. It acts as an external cognitive resource that increases the working memory of the group and expands the space of ideas the group can explore.
This idea aligns with the extended mind thesis, which suggests that cognitive processes can extend beyond the brain into tools and environments that become functionally integrated into thinking (Clark & Chalmers, 1998).
Cognition becomes distributed across people, machines and information systems. This is why the concept of collective intelligence becomes so important in the AI era. The unit of performance is no longer the individual analyst, it is instead a networked system of humans and machines thinking together. As Thomas Malone argues in Superminds, intelligence increasingly emerges from systems in which people and computers combine their capabilities to solve problems together (Malone, 2018).
“Groups of people can act collectively in ways that seem intelligent, and computers can help make these groups smarter. […] The future will bring more and more combinations of people and computers working together in superminds.” — Thomas W. Malone
Human groups can be remarkably effective. They pool expertise, check each other’s blind spots and generate richer interpretations of complex problems. But they are also prone to predictable failure modes. Groupthink can suppress dissent. Status dynamics can silence quieter voices. Teams often focus discussion on information that everyone already knows while failing to surface unique insights held by individuals (Janis, 1972; Stasser & Titus, 1985). These dynamics can degrade the quality of decision-making even when the necessary information exists somewhere within the group.
Artificial intelligence has the potential to both improve and worsen these dynamics. AI can surface overlooked information, reduce cognitive load and allow teams to explore more counterfactual scenarios. But it also introduces new risks.
Teams may become over-reliant on algorithmic outputs, a phenomenon sometimes described as automation bias (Parasuraman & Riley, 1997), where humans fail to notice automation errors because they assume the system is correct. AI-generated narratives can appear so coherent that they discourage critical questioning. And if everyone relies on the same prompts and the same models, thinking can become homogenized. So whilst AI may increase the cognitive power of the system, it does not automatically make groups wiser. Whether this greater cognitive power produces better decisions depends on the humans inside the decision-making process and their judgement.
If machines can retrieve information, summarize material and generate first-pass analysis, what are humans to do?
Recent research hints at the answer. A field experiment studying consultants using GPT-4 found that AI significantly increased productivity and quality on many knowledge tasks — but also led to worse outcomes when people relied on it in areas where the model performed poorly. The authors describe this as navigating a “jagged technological frontier”, where human judgement remains essential (Dell’Acqua et al., 2023).
The skills that matter most are no longer the ones associated with the early stages of analysis, but rather those that shape and interpret the outputs of these new collective intelligence systems. We see three categories of capability becoming especially important.
Thinking: Clear thinking becomes the anchor of the entire system. Humans must frame problems, question assumptions and evaluate the outputs produced by AI systems. They must decide which signals matter and which patterns are noise. In other words, judgement becomes more valuable than information.
Creating: Creativity becomes central as well. AI can generate variations on existing ideas. But humans remain essential in reframing problems, connecting ideas across domains and asking the questions that unlock new perspectives. The most powerful prompts are not technical instructions but intellectual reframings.
Communicating: Finally, communication becomes decisive. Insight only matters if it influences action. Humans remain uniquely capable of constructing narratives, building trust and persuading others to move in a particular direction. Even the most sophisticated analysis still needs a story that elicits ‘lean-in’.
So what should we make of the claim that a consulting firm now has tens of thousands of AI agents inside its workforce? The headline is slightly misleading, but it points toward something real. Organizations are quietly evolving into collective intelligence systems — networks where human judgement and machine capability combine to analyze problems and make decisions.
But the future of knowledge work will not be defined by artificial intelligence replacing humans, it will be defined by how effectively humans learn to think, create and communicate inside systems where intelligence is distributed across people and machines.
The most valuable professionals of the next decade may not be those who know the most. They will be the ones who understand how to work inside these new cognitive systems — guiding them, questioning them and extracting insight from them. In other words, the real challenge of the AI era is not building smarter machines, but building smarter forms of collective intelligence.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Dell’Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper.
Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin.
Malone, T. W. (2018). Superminds: The surprising power of people and computers thinking together. Boston: Little, Brown Spark.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making. Journal of Personality and Social Psychology, 48(6), 1467–1478.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688.