Mind the gap!
The human operating model is now playing catch-up to AI. (We have data!)
This week we had some data to play with. We recently had an opportunity to ask business leaders in large organizations a couple of questions (see appendix) about the use of AI in their businesses. Here is what we found.
Artificial intelligence is not sitting on the sidelines of organizational life. AI is becoming part of the cognitive machinery through which organizations interpret information, weigh options and act. Forecasts, recommendations, prioritization, workflow design, customer targeting, procurement, staffing and operational choices are all increasingly touched by some form of AI support.
In our survey of senior and c-suite decision-makers, agreement across nine core statements about AI and decision-making was consistently high (circa 80% to 90%). This is not a story about organizational resistance, of executives rejecting AI or denying that it will shape organizational performance. The broad direction of travel is accepted, and that finding mirrors wider industry research. Large global studies show that AI adoption is accelerating across organizations, with a majority of companies now reporting some level of operational deployment.¹
But it would be a mistake to read this as uncomplicated confidence. The survey points to a kind of qualified assent. Leaders broadly accept that AI will matter. They broadly accept that organizations will need to work with it. They broadly accept that the future will depend on some form of human–AI collaboration. But when you move from that general direction of travel to the practical realities of judgement, oversight, governance and skill, the picture becomes less clear. Senior leaders agree on the destination, but what is less clear is whether the human operating model needed to get there is yet in place.
The survey shows strong alignment around AI’s growing role. 90% agreed that the organization’s future performance will depend on collective intelligence — how humans and AI think together. It suggests that the dominant frame among respondents is not simple automation, nor that of AI replacing people wholesale, but a more blended model in which value comes from the interaction between machine capability and human judgement.
This view is increasingly reflected in management research. Studies of knowledge work show that AI tends to augment rather than eliminate many forms of professional decision-making, often improving productivity while simultaneously creating new cognitive demands for supervision and interpretation.² Other strong signals in the study …
84% agreed that as AI becomes more capable, human skills need to evolve, not diminish, for the organization to perform well.
82% agreed that even where AI automates decisions or actions, human judgement remains clearly accountable for outcomes.
The same proportion (82%) agreed that their organizations are actively discussing which thinking tasks humans should retain as AI capabilities expand.
So the survey does not support the idea that leaders are surrendering judgement to machines. On paper at least, respondents still place humans firmly at the center of accountability and future performance. The interesting question is how firm these assertions are. When we look only at the strongest level of agreement, these figures tend to halve, which points to broad agreement without overwhelming conviction. Leaders mostly think AI is going to matter, but many appear to hold that view with a degree of caution.
The open comments are remarkably consistent on one point. Most respondents are not expressing fear of AI in the abstract. They are expressing concern about the human and organizational conditions surrounding it. Again and again, the anxieties are not about whether the tools are useful, but whether organizations are becoming capable of using them well.
One respondent worried that people may start “trusting AI outputs too quickly without fully questioning the data or assumptions behind them.” Another talked about “blind following” and “copying without evaluating core logic.” A third warned that “speed gains in decision making could come at the expense of deeper judgement and critical thinking.”
These concerns are not unique to this survey. Experimental research on human–AI decision-making shows that people frequently over-rely on algorithmic recommendations, particularly when those recommendations are presented with high confidence or appear technically sophisticated.³
In other words, the risk is not simply that AI might be wrong. It is that organizations may normalize decision-making processes in which people become too passive, too dependent or too underprepared to challenge systems that appear fluent and authoritative. That is a human capability problem, not a purely technological one.
The survey also points to an important organizational pressure. Nearly 80% agreed that leaders expect efficient use of AI to speed up decision-making. That should not surprise anyone. Speed is one of the most obvious and most saleable benefits of AI. But the comments show that speed is also where some respondents feel unease.
Several worry that decisions are being accelerated before organizations have built enough capability to challenge them properly. Others talk about pressure to adopt AI because competitors are moving quickly. Some fear that rapid deployment is outrunning governance. The quantitative data does not prove that speed is displacing judgement, but it does show that leaders are aware of the tension.
Many respondents also refer to the “black box” problem — the difficulty of understanding how complex AI systems reach particular conclusions. That challenge has been widely recognized in AI research. Lack of ‘explainability’ is one of the main barriers to trust and effective oversight in real-world deployments.⁴ If people cannot interrogate the reasoning behind a system’s output, their ability to challenge it becomes limited. And if they cannot challenge it, accountability becomes harder to maintain.
Respondents also worry about the quality of the data feeding AI systems. AI may be impressive, they suggest, but it is still only as good as the data, context and assumptions beneath it. Concerns about bias appear repeatedly in the responses. That concern is also well documented in policy research, which shows that algorithmic decision systems can unintentionally replicate or amplify historical biases if the training data contains them.⁵
These issues — transparency, data quality, bias, governance — do not invalidate AI. But they do mean that effective use requires organizational maturity. If all this sounds cautious, it should. But the survey is not pessimistic. The most constructive result in the dataset is the combination of three findings:
(to repeat) 82% say organizations are actively discussing which thinking tasks humans should retain
84% say human skills need to evolve, not diminish
90% say future performance depends on collective intelligence
The future is not a simple choice between human judgement and machine intelligence. Nor is it a story in which human skill becomes irrelevant as AI improves. As AI becomes more capable, the human contribution shifts. Less of the value may come from routine synthesis or first-pass analysis. More of it may come from interpretation, challenge, judgement, context, ethical reasoning and communication. That is a more demanding human role.
So to summarize …
Senior leaders are broadly on board with AI; they are not rejecting its role in decision-making.
They increasingly see the future in terms of collective intelligence rather than simple automation.
But they are not naïve. Their concerns reveal an awareness that the technology is moving into organizational life faster than the habits, capabilities and governance structures needed to use it well.
The organizations that succeed in the next phase of AI adoption will probably not be those that simply move fastest or automate most aggressively. They will be the ones that build a stronger human operating model around the technology: clearer judgement, better challenge, stronger governance and the kinds of cognitive skills that make human–AI collaboration productive rather than passive.
References
¹ McKinsey & Company (2024). The State of AI: How organizations are rewiring to capture value. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
² Dell’Acqua, F., McFowland, E., Mollick, E., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321
³ Bansal, G., et al. (2021). Does the Whole Exceed Its Parts? The Effect of AI Explanations on Complementary Team Performance. https://arxiv.org/abs/2102.09692
⁴ Barredo Arrieta, A., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges. https://arxiv.org/abs/1910.10045
⁵ Centre for Data Ethics and Innovation (2020). Review into Bias in Algorithmic Decision-Making. https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making
Appendix
Q.1: Thinking about how AI systems are used in your organisation today — including automation and AI agents — to what extent do you agree with the following statements? Please answer on a scale of 1 to 5, where ‘1’ means “Strongly Disagree” and ‘5’ means “Strongly Agree”.
Even where AI automates decisions or actions, human judgement remains clearly accountable for outcomes.
People are confident about when human judgement should challenge or override AI outputs, and do not just defer to them.
Teams routinely interpret, contextualise, and question AI-generated outputs before acting on them.
We actively discuss which thinking tasks humans should retain, as AI capabilities expand.
The use of AI has changed how people think about problems — they have become more reflexive and thoughtful.
Leaders value and reward informed human judgement in the application of AI.
Above all else, leaders expect efficient use of AI to speed up decision-making.
As AI becomes more capable, we believe human skills need to evolve, not diminish, in order for the organisation to perform well.
We view the organisation’s future performance as dependent on collective intelligence — how humans and AI think together.
Q.2: What concerns you most about how AI is currently shaping decision-making in your organisation? Please explain in as much detail as possible