Beyond the Noise: Why AI’s True Direction Lies in Collective Human–AI Intelligence
If you’ve been following the AI debate, it’s easy to feel lost in the noise.
Sundar Pichai, CEO of Google, has stated that “AI has the potential to be more transformative than electricity or fire”, whilst Reid Hoffman, co-founder of LinkedIn, outlined in a recent essay his view that “AI is going to reshape every industry and every job”.
But on the other side of the debate we have Prof. Emily Bender (Director, Computational Linguistics Laboratory, University of Washington), who famously framed LLMs as just “stochastic parrots” that “repeat without understanding”, and Gary Marcus (professor emeritus of psychology and neural science at New York University), who called LLM outputs “authoritative bullshit”. Both reflect the view of journalist, author, and entrepreneur Julia Angwin that “AI looks less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do [it] yourself”.
The dichotomies keep coming! David Sacks, White House AI and Crypto Czar, believes “AI models still heavily depend on human input, which undercuts fears of massive job displacement”, and Yann LeCun, Chief AI Scientist at Meta, has boldly claimed that “AI won’t take your job. But it will transform it and create new ones”. As a counterpoint, Mo Gawdat, ex-Google X executive, has stated that “The idea that AI will create new jobs is 100% crap”, and Dario Amodei, CEO of Anthropic, is on record as claiming that “AI could wipe out half of all entry-level white-collar jobs spiking unemployment to 10-20% in the next 1 to 5 years”.
How can we, as business leaders or simply as members of society, pick our way through this landscape? Are we to believe, as Geoffrey Hinton does, that – if we are not careful – the lived experience of AI will be one of further polarization and dispossession, or possibly something of a more existential threat if a digital superintelligence slips from human control? The other side of the debate sees AI curing disease, eliminating drudgery, and ushering in an era of abundance.
No wonder we are all confused. Is AI the end of the world, the dawn of a golden age, or just another over-hyped technology?
Part of the reason it’s so difficult to see the ‘direction of travel’ is that we’re looking in the wrong place. We fixate on AI as an individual technology — the chatbot, the coding assistant, the autonomous car. But the real transformation is more subtle, more systemic, and harder to capture in a headline. The true revolution is not AI replacing humans but AI changing how humans think together.
Recent research from MIT’s NANDA group captures this tension. They call it the GenAI Divide: nearly every large company is piloting AI, but only a tiny fraction are getting measurable value. The reason? Most systems don’t adapt, don’t learn with their users, and don’t reshape how people work together. It’s a powerful reminder that the future isn’t about sprinkling AI on top of business as usual — it’s about weaving it into the collective way humans think and act.
This gap between adoption and impact also explains why so much attention drifts back to Artificial General Intelligence (AGI) — because the present feels underwhelming compared to the hype. But let’s not get bogged down with the whole ‘AGI … are we there yet?’ debate. It might happen in five years, in twenty years, in fifty years, or never. Some believe it’s achievable once we have enough processing capability to turbocharge LLMs; others that it will arrive when we give LLMs more human context; still others that it will never happen with the current crop of models and will require a completely different approach to intelligence. And what is AGI anyway? Everybody has their own definition, and therefore their own test for whether or not it has been achieved – a test which they themselves administer.
The recent launch of GPT-5 was something of an underwhelming event, given the plateauing of performance, and may (MAY!) presage the limits of the capabilities of the current crop of AI models.
“In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like [Gary] Marcus seem increasingly moderate… Post-training improvements don’t seem to be strengthening models as thoroughly as scaling once did. A lot of utility can come from souping up your Camry, but no amount of tweaking will turn it into a Ferrari.” – Cal Newport, The New Yorker
For business leaders, the current crop of AI models offers quite enough challenges and runway to be getting on with, without investing too much mental bandwidth in the spectre of AGI.
Why are the signals so mixed?
The confusion isn’t accidental; it’s baked into the dynamics of AI development. First, the exponential pace of change means our cultural imagination is always running behind. A decade ago, deep learning breakthroughs were confined to research labs; now, generative AI is in the hands of schoolchildren. What’s ‘sci-fi’ one year becomes banal the next.
Second, different stakeholders push different stories. Tech CEOs want investors excited, regulators cautious, and consumers trusting. Policymakers want guardrails without stifling innovation. Academics warn of risks while also highlighting opportunities. Media outlets amplify extremes — catastrophe or salvation — because that’s what draws attention.
Third, we still look at AI through the lens of the individual tool rather than the collective shift. We ask: Can ChatGPT write my essay? Can Midjourney replace graphic designers? Rarely do we ask: What happens when whole groups of people start collaborating with AI at once? That’s where the real story lies, and it’s why we need to look not at individual tools but at the emergence of something bigger: a collective polymathic intelligence.
The hidden direction of travel: collective intelligence
Collective intelligence is not new. Human groups have always displayed a kind of ‘group IQ’. MIT’s Thomas Malone has shown that some groups consistently outperform others, not because they have the smartest individuals, but because of social sensitivity, balanced participation, and diversity of thought. In other words, intelligence emerges from the dynamics of the group.
Now add AI into the mix. Not just as a tool we query, but as a teammate, mediator, and amplifier of group cognition. This is the frontier of human–AI collective intelligence.
Examples are already appearing:
Medical swarms: In experimental studies, groups of radiologists connected via ‘swarm intelligence’ software made roughly 33% fewer diagnostic errors than individual radiologists and performed roughly 22% better than leading AI models when diagnosing pneumonia from chest X-rays. (In one controlled trial involving 50 cases, radiologists ‘thought together’ using a real-time swarm interface.) While not yet part of mainstream hospital practice, these experiments point toward a future in which human groups, mediated by AI, can outperform both solo experts and standalone models.
Civic assemblies: In experimental and pilot settings, AI has been used to structure citizen deliberations — helping scaffold debates, cluster viewpoints, propose compromise statements, and reduce deadlock. Platforms like Pol.is have already influenced real policy discussions; experimental systems such as conversational swarm intelligence and AI-facilitated mini-publics show how AI can help large groups converge more coherently. While not yet widespread, these experiments hint at a future where deliberation itself becomes a human–AI joint process.
Workplace teaming: The Brookings concept of vibe teaming argues that AI can help orchestrate better group flow — nudging participation, bringing forward neglected perspectives, and aligning collective attention so teams can think more coherently together. Early experimental work supports this: in controlled studies, teams augmented with generative AI outperform purely human teams. While not yet deployed at scale in every enterprise, these experiments illustrate how AI could shift from a tool to a subtle orchestrator of group intelligence.
These examples are early and experimental, but they reveal a trajectory: AI is quietly reshaping how groups generate insights, make decisions, and create knowledge. The direction of travel is not towards AI replacing us, but towards a hybrid intelligence where human and machine thinking are deeply interwoven.
Why is this hard to see?
First, because collective intelligence is subtle. It doesn’t produce dramatic headlines like ‘AI beats humans at Go’. Instead, it shows up in smoother meetings, better group decisions, or unexpected flashes of shared creativity.
Second, because results are mixed and context-dependent. MIT Sloan reviewed over 100 studies of human–AI collaboration: sometimes human–AI teams outperformed humans alone; sometimes they underperformed, especially when people deferred blindly to AI. Success depends on calibrating trust: knowing when to follow the AI, when to challenge it, and when to blend its logic with human intuition.
Third, because AI already shapes our thinking in ways we barely notice. Studies show AI can subtly steer the language of group conversations — what people pay attention to, which options are explored, and even the emotional tone of discussion. These invisible nudges make AI less like a tool in your hand and more like a hidden participant in the group mind.
In short, collective intelligence is already here, but we don’t see it because it doesn’t fit our mental model of ‘AI as competitor’.
The human challenge: thinking differently
If the future lies in collective intelligence, the human challenge is clear: we must learn to think differently and more creatively. We need to think more like modern-day polymaths.
The education system has long rewarded specialization — narrow expertise, linear logic, mastery of a single discipline. That made sense in an industrial world where knowledge was stable and jobs were predictable. But in the AI era, where knowledge changes faster than curricula can keep up, and where machines excel at narrow technical tasks, this model breaks down.
What we need instead is interdisciplinary, integrative thinking: the ability to connect dots across fields; to synthesize perspectives from science, art, philosophy, and lived experience; and to balance logic with creativity, and data with empathy.
This is the essence of modern-day polymathic thinking: the ability to connect knowledge across different fields to generate new insights using a range of power skills – sensemaking, critical thinking, mental time travel and more.
Importantly polymaths are not ‘jacks of all trades’; they are deep connectors of knowledge, able to see nuanced patterns others miss, to bridge silos, and to generate fresh insights. In the age of AI, this polymath skillset becomes not just useful but essential — because working with collective intelligence requires holding the meta-view: seeing the whole, not just the parts.
A Brave New World reframed
Aldous Huxley’s Brave New World imagined a rigid social hierarchy from Alphas down to Epsilons. It was a warning about control and conformity. But in today’s AI context, the hierarchy is being reshuffled in unexpected ways. We looked at this in an earlier article and identified certain emerging modern-day strata.
There are now the newly disadvantaged: those who invested heavily in narrow degrees or professional credentials that AI can now replicate or outpace. A law graduate trained to draft contracts may find their skill commoditized by AI legal tools. An accountant may see large parts of their role automated. The promise of education as a one-time passport to lifelong security no longer holds.
But there are also the newly advantaged: people once sidelined by traditional, left-brain-dominated education systems. Those who struggled with rigid memorization, who thought visually, creatively, or divergently. Neurodiverse thinkers, artists, systems-thinkers. Freed from the constraints of ‘rote learning’ and empowered by AI to handle mechanical tasks, these individuals can now bring their right-brain strengths — imagination, synthesis, empathy — to the fore.
In this reframed world, the winners are not the credentialed specialists, but the polymathic connectors — those able to work fluidly with AI, across domains, with curiosity and creativity.
The positive vision: collective polymathic intelligence
This notion of collective polymathic intelligence is where the strands come together. AI brings scale, memory, and pattern recognition. Humans bring values, creativity, empathy, and synthesis. Together, in collective frameworks, they form something greater: a collective polymath.
Imagine a climate task force where AI models crunch global data, while human teams integrate cultural wisdom, ethical concerns, and creative solutions. Or a healthcare collective where AI identifies molecular patterns, while doctors, patients, and ethicists collaboratively decide what interventions align with human dignity.
The point is not that AI will out-think us, or that humans must out-think AI. The point is that together, we can think in ways neither could achieve alone.
This is the true direction of travel: not the replacement of human intelligence, but its expansion into a new form of collective, polymathic intelligence.
Seeing the AI path ahead
The reason AI feels so confusing today is that we’re still trapped in the wrong story. We look for signs of victory or defeat: Will AI replace us? Will it save us? Will it destroy us? This way of thinking is why we are struggling to agree on a broadly accepted narrative about where AI is taking us.
The key is recognizing that the more important story is quieter, subtler, and more profound. AI is already reshaping how humans think together. The future will not belong to the narrow specialist or the lone genius, but to those who can embrace collective intelligence and embody the spirit of the modern polymath.
Yes, there will be disruption. The challenges facing the newly disadvantaged are real, and the potential of the newly advantaged still needs to be realized. But within that disruption lies an extraordinary opportunity: to reinvent education, work, and society around the skills that make us most human — empathy, creativity, synthesis, and wisdom.
The direction of travel is clear once you know where to look. Not AI versus humans, but AI with humans. Not a machine intelligence replacing us, but a collective polymathic intelligence enabling us to become more fully ourselves. The real test is whether we can grow into the kind of polymathic, collective intelligence that makes humans and machines together more than the sum of their parts.