Our Brave New World Revisited
Does the latest data vindicate the model or consign it to the dustbin of history?
Last year we published a two-part series in Research World magazine called “Winners and Losers in the Game of AI.” We borrowed from Aldous Huxley’s Brave New World to identify six social strata that may evolve in the AI-shaped knowledge workplace.
Our central thesis was blunt: as artificial intelligence matures, the labor market won’t adapt in a smooth, even way. Instead, fault lines will emerge. Some groups, individuals, and professions will gain outsized advantage, while others risk being stranded.
At the time, this was largely extrapolation. We argued that AI is a redistribution machine — amplifying some forms of work while eroding others. But we didn’t yet have many “in the wild” examples to point to.
Fast-forward and the news cycle is now supplying those examples almost daily. Last week Accenture made headlines such as “Accenture plans on ‘exiting’ staff who can’t be reskilled on AI” when it announced 11,000 layoffs, with CEO Julie Sweet telling staff to “use AI or leave.” The Financial Times just coined the phrase “jobpocalypse” to describe the sharp contraction in graduate hiring. The BBC has run stories interviewing graduates who cannot get jobs after 200, 300, even 500 applications.
And just when we think we are starting to get a handle on it, Yale and Brookings released a study showing that, at the economy-wide level, AI has not yet dramatically reshaped the US labor market.
Together, these stories show both the validity of our original winners-and-losers model and the nuance we need to add. The big picture is not yet an apocalypse, but the local picture — especially in knowledge-heavy work — is already shifting.
The Winners and Losers model in brief
Our original model rested on three simple points:
AI is asymmetric. It doesn’t lift all boats equally. Some roles and industries gain leverage, others are hollowed out.
Winners and losers sit side by side. Two analysts in the same firm may diverge: one becomes an AI-augmented super-producer, the other is quietly sidelined.
The dividing line is not fixed. People and organizations can cross it through deliberate adaptation.
We argued the real differentiator wouldn’t be narrow technical skills alone, but what we called the Power Skills: the durable, human-centric meta-skills that allow people to orchestrate AI, not be replaced by it. These are the sorts of skills that the consultants at Accenture need to get to grips with, post haste, if they are not to find themselves out of a job.
At the time, some readers asked: isn’t this a bit dramatic? Isn’t AI just another wave of automation that will wash through in familiar ways? Now, with fresh evidence on the table, the answer is clearer: there is truth in both views.
Accenture and the “AI or leave” doctrine
Accenture’s recent announcement was remarkable less for the number of jobs cut (although stark enough) than for the reasoning attached. This wasn’t framed as cyclical cost-cutting. CEO Julie Sweet said openly that those being let go were “not capable of being retrained in AI.” This is important because it makes explicit what many firms still only whisper: AI is now baseline in knowledge-intensive industries. It’s no longer optional or peripheral. Those who can’t or won’t adopt it are being selected out.
This validates the core of our model. The line between winners and losers is no longer speculative. It is being operationalized in corporate policy. At the same time, it raises a deeper question: who decides whether retraining is “viable”? That boundary is not purely objective. It reflects management judgement, investment appetite, and time horizons. One firm may decide it is worth retraining; another may not. That means “losing” is not just about personal capacity — it’s also about institutional will.
The graduate jobpocalypse: the vanishing rungs
The FT has described the collapse in graduate hiring as a “jobpocalypse.” Graduate-level job postings in the UK are down by more than half since 2022, a steeper decline than the overall labor market.
The reasons are multi-factorial: macroeconomic slowdown, higher borrowing costs, employer risk aversion. But AI is part of the picture. Entry-level work has traditionally been about grunt tasks — data prep, slide-deck drafting, first-cut analysis. Generative AI can now do much of that faster and cheaper.
The result: the bottom rung of the career ladder is being sawed off just as new graduates arrive. This is a classic loser dynamic. Not because graduates lack talent, but because the tasks that used to give them entry are disappearing. And this creates a systemic risk. If the rungs are missing, the entire talent pipeline becomes fragile. Even the “winners” may eventually suffer from a shortage of seasoned mid-career talent.
Yale/Brookings: the macro lens
Then came the Yale Budget Lab and Brookings study, “Evaluating the Impact of AI on the Labor Market: Current State of Affairs.”
Their conclusion: no evidence yet of economy-wide disruption. Occupational mixes look broadly similar to pre-ChatGPT trends. Employment and unemployment rates are not showing AI-specific shocks. That’s an important corrective to the more breathless predictions of 10–20 per cent unemployment within five years. But here’s the nuance: the largest shifts the researchers did find are exactly where we’d expect them — in information, finance, and professional/technical services. In other words: knowledge work.
The report is clear: macro averages may look calm, but pockets of turbulence exist. And those pockets align closely with the domains our model treats as the frontline of the winners-and-losers divide.
The FT covered the Yale/Brookings report under the sweeping headline: “AI is not killing jobs, finds new US study!” That headline is technically accurate — but misleading. It collapses the study’s nuance into a blanket reassurance.
The report is conditional: “not yet at the macro level.” But the headline is categorical: “AI is not killing jobs.” The report flags sectoral turbulence. The article buries that detail beneath economy-wide averages.
This matters because it shapes public perception. If you are a graduate trying to enter consulting, or a junior analyst in finance, the “AI is not killing jobs” reassurance feels hollow. Your rung of the ladder is already wobbling.
This is where we find Harvard’s “Jagged Technological Frontier” framing useful. The argument: new technologies don’t spread evenly. They create sharp edges — some tasks automated away quickly, others untouched for years. Looking only at macro indicators is like averaging the jagged peaks and valleys into a flat line. It hides the pain of those already caught on the sharp edge.
In practice, the average nurse sees little AI impact yet, while the average junior management consultant or lawyer sees a lot. The macro data blend those experiences together into a misleading calm. That’s why headlines like the FT’s are problematic. They reassure the majority, but they underplay the acute disruption already underway in smaller but strategically important niches.
Power Skills revisited
Against this backdrop, our Power Skills framework looks more relevant than ever. Technical retraining alone is never enough, because the frontier keeps moving.
The ‘Winners and Losers’ model we put forward was not static: people can move between strata, climbing up or dropping back down. The trigger for those movements is developing the skills one needs to stay relevant and, indeed, thrive in the AI workplace. These are the skills we argued are most important to dial up, because they are the ones that allow us to work most productively alongside generative AI, and they leave people best placed to succeed as AI winners rather than AI losers.
As the ‘Human-AI Interface’ you will pair with technology, knowing when to lean on AI and when not to. You will ask targeted questions, challenge deterministic answers, and combine computational outputs with human judgment. You will treat AI as both an amplifier and a challenger. You will be comfortable with what Wharton professor Ethan Mollick calls “working with the wizard”: grappling with an increasingly opaque but remarkably clever (cleverer than us) AI. The goal is co-judgment: you bring context, values, and risk appetite; the system brings recall, patterning, and speed.
As the ‘Sense-Maker’ and reframer you will interpret AI outputs, filling in what’s missing and extracting meaning from complexity. This works alongside your ‘Critical Thinker’ persona: challenging flawed models and assumptions, not deferring blindly but bringing a range of skills to bear in identifying nuance. The ability to reframe the game, ethically, often creates the solution. In AI-heavy workflows, the critical edge shifts from finding information to interrogating it. The more fluent your tools, the better your questions must be.
As ‘The Audacious Creative’ you will use AI to amplify, not homogenize, imagination. We talk a lot about the power of polymathic or diverse thinking. Creativity for polymaths is integrative novelty: new combinations that fit the system you’re in, but only barely. As ‘The Forward Thinker’ you will anticipate where the frontier cuts next, holding multiple futures lightly and acting to keep the best ones open. Strategy is option management. Polymaths preserve and expand the cone of viable futures while committing just enough to learn.
As ‘The Inspiring Communicator’ you turn machine outputs into human resonance, transmitting powerful psychological messages that tap into people’s emotions. You offer clarity. Polymathy without clarity of purpose and communication risks collapsing into ‘smart-aleck-ness’. The work isn’t done until your diverse audience can act on it. And finally, as ‘The Galvanizing Leader’ you build trust amid uncertainty, providing the empathic and social architecture: the norms that let specialists collide productively. Increasingly, you will be managing collective-intelligence working patterns and systems. A polymath is a force multiplier, not an encyclopedia. Build the conditions for other people’s depth to connect.
These skills are not washed out by macro averages. They are sharpened by the very turbulence the averages conceal.
Why winners should care about losers
A final point. Even if you count yourself among the winners, the fate of the losers matters. If too many graduates can’t get on the ladder, the future pipeline of skilled talent erodes. If too many workers are stranded, political and social instability rises. If too many sectors hollow out, demand in the wider economy weakens. The Yale/Brookings study shows no apocalypse today. But ignoring the early tremors in knowledge work would be reckless.
So where does this leave us? The Accenture case shows firms are already drawing hard lines: use AI or leave. The graduate “jobpocalypse” suggests the bottom rungs are crumbling (in important sectors of knowledge work). The Yale/Brookings study shows no economy-wide catastrophe — but flags turbulence exactly where our model predicts it. Headlines can wash out the nuance, obscuring the reality for those on the jagged frontier.
The story of AI and work is not a flat average. It is jagged, uneven, asymmetric. The winners and losers are already emerging. The task now is not to debate whether disruption is “real,” but to invest in the Power Skills that allow more people to cross the divide — and to do so before the jagged edge cuts too deep.