While much of the AI discourse is fixated on speculative extremes—full job collapse or utopian productivity—there is a shift happening quietly and incrementally ... especially at the bottom of the talent ladder. It’s not framed as a mass firing or a bold transformation. Often, it’s not framed as anything at all.
The critical question is no longer "can AI do the job?" but "when does it become more rational to buy the machine than hire the person?" What are the triggers that lead firms to substitute AI for junior human talent? And if we are substituting AI for a layer (or layers) of the workforce, what gets lost in the process, and what might this mean for the future of learning, leadership, and the human development pipeline?
Automation has always had a place in business. Spreadsheets replaced clerks. CRMs replaced Rolodexes. But AI is different. It doesn’t just do calculations—it mimics cognition. And it doesn’t just support humans—it holds out the possibility of directly replacing their contributions.
There is no single trigger or catalyst for switching from people to AI … there is, as always, a mix of reasons: economic signals, organizational behaviors, and technological maturity. Below are five potential triggers that might show up in firms making the switch.
Cost Pressures and Margin Protection: Let’s start with the obvious. AI is becoming cheaper, more scalable, and—crucially—good enough. If you’re a CFO reviewing the cost of a junior hire versus a GPT-powered assistant, the financial case quickly becomes compelling. Take Klarna, the fintech company whose OpenAI-based assistant now handles the customer service work of hundreds of human agents. It didn’t just reduce response time—it cut costs significantly. AI becomes most appealing when margins are thin, market volatility is high, or hiring is frozen. In those moments, cost-saving solutions that don’t require HR, training, or retention programs get fast-tracked.
Talent Bottlenecks and Burnout: Certain roles are historically difficult to fill and even harder to retain. High-churn, low-recognition positions—like paralegals, admin assistants, or entry-level consultants—are especially vulnerable. Law firms like Allen & Overy now use generative AI (via the OpenAI-backed Harvey platform) to do everything from summarizing legal documents to drafting contracts. That work once trained junior associates. Today, it's automated—and done faster. In professional services, where “time is money,” AI isn’t just efficient. It’s dependable. It doesn’t burn out, get poached, or ask for promotions.
Process Codification and Predictability: AI thrives in environments where workflows are structured and repeatable. When a task can be mapped, it can be mimicked. At Goldman Sachs, generative AI is being used to write and test code—often the domain of junior software engineers. These tasks are relatively contained and rule-bound. Once the system learns the pattern, it outpaces the novice human. Any company with mature processes—documented SOPs, standardized outputs, well-governed workflows—is already AI-ready. Many just don’t realize it yet.
Cultural Comfort with Machines: Substitution isn’t just about capability—it’s also about cultural readiness. Organizations with high digital fluency or leadership champions for AI are much more likely to trial and adopt AI in place of people. That’s what’s happening at WPP, the global advertising giant. Rather than scaling their creative teams, they’ve partnered with Nvidia to build generative AI pipelines that create ad assets, draft copy, and iterate campaign content—replacing the need for junior creatives. In firms where AI tools are already woven into the stack—Slack bots, Notion AI, GitHub Copilot—the idea of AI as “just another teammate” is already normalized.
We attended a talk a couple of years ago where an advertising agency executive highlighted AI’s ability to reduce “friction” and hence cost. The example he gave was the gestation of an advert. Moving from the initial creative to actually filming the ad, you have talent agencies to deal with, actors to book, locations to scout and arrange (and a back-up in case something goes wrong). You have travel, accommodation, and restaurants to book, and people to handle all these logistics. You have the process of filming and then post-production. Lots of people and lots of complication. But AI offers you the opportunity to create the same ad from your laptop: actors and locations are generated by AI. You need none of the expensive logistics. You have none of the friction, and hence none of the cost, of actually having to deal with people. People = friction = cost! YOU are friction! It is this pure substitution of labor by capital that will lead to tension.
Platformization of Talent: The more AI becomes plug-and-play, the more it resembles hiring—but without the friction. Why post a job, screen candidates, and schedule interviews when you can subscribe to a specialized GPT model that does 80% of the same work? Companies like DoNotPay are already offering AI-driven legal support at scale—competing directly with junior legal staff. And we’re only scratching the surface of domain-specific AI talent marketplaces. The frictionless nature of these platforms—combined with increasing capability—tilts the equation toward substitution. Because speed beats tradition. And as we have seen in the insight industry, speed and low cost are an alluring combination that in most cases trumps other (often important) considerations.
All of this might make perfect business sense. But it also raises a quieter, more existential question. What happens to the human development pipeline when the bottom rung disappears? Junior roles are not just low-cost labor—they are learning environments. They provide context, feedback, mentorship, and exposure. They are the proving ground where future leaders are shaped through proximity to experience. When AI absorbs that work, the pipeline narrows. And with it, so does an organization's ability to grow its own talent.
In his book The Skill Code, Matt Beane explores how complex skills—those traditionally learned through apprenticeship, observation, and repetition—are being disrupted by AI and robotics. His central argument is that new technologies are quietly dismantling the conditions that made skill acquisition possible, especially for those at the entry level of organizations.
Beane introduces the concept of “shadow learning”: the informal, often invisible process by which junior workers gain capability by watching, imitating, and gradually participating in skilled work. Whether it’s a surgical resident watching procedures, a warehouse worker learning logistics patterns, or a junior analyst reviewing reports, this hands-on exposure is essential for developing expertise. If we think back (way back) to our own early days in our industry (which was insight, then called market research), 90% of what we learnt was learnt by osmosis! Being in the room with others during discussions and meetings, sitting one desk away from somebody who knew more than we did.
However, as AI and automation increasingly take over tasks at the lower end of the hierarchy, the opportunity to engage in this learning disappears. Junior workers no longer "see" the work being done, because the system does it silently. They are left with either menial leftovers or are cut out altogether. Beane warns that this creates a dangerous paradox: technology makes organizations more efficient in the short term but starves the talent pipeline needed for future resilience.
Rather than rejecting technology, Beane advocates for deliberate design choices that preserve or even enhance skill-building—what he calls “reengineering the code” of learning. This includes creating hybrid environments where humans can learn alongside AI, with transparent systems, coaching loops, and staged participation.
So … how do we develop future managers if they never shadow their seniors? How do people build judgment if they never face low-stakes decisions? What replaces the corridor conversations, quiet observations, and "watch-and-learn" moments that form so much of tacit knowledge?
Without junior talent, we risk building organizations with brittle succession plans and shallow benches. Short-term efficiency might undermine long-term capability.
We can envisage a scenario whereby organizations trim the bottom and middle. Senior professionals work directly with AI tools to execute tasks. Over time, even mid-level roles are thinned. Firms become leaner, faster—but in the hollowing out they become more fragile.
In some organizations, there may be the development of a human-AI partnership pathway. Here new roles may emerge—“AI curators,” “insight editors,” “contextual analysts”—where human judgment partners with machine scale. These roles aren’t entry-level in the traditional sense, but they offer a new path into the workforce.
And there may be other organizations that, recognizing the loss of learning, double down on structured apprenticeships and shadowing programs. These are more intentional, less transactional. Human learning is re-valued—but only in select, forward-thinking companies. And to prevent their investment from walking out the door to other organizations that have not fostered or developed junior talent, maybe the apprenticeship revival comes with a commitment to work for a specified period (although this is akin to indentured service, with all the negative connotations that conjures up).
If we accept that AI will increasingly take over junior tasks, then we must ask: how do we design work differently to still grow human capability? This is not just a challenge for HR or L&D. It’s a strategic leadership question.
We need to:
Design hybrid workflows that leave room for human growth.
Invest in mentorship ecosystems, not just performance systems.
Shift from task-based hiring to potential-based development.
Build cultures that reward learning, not just output.
Because the real risk isn’t that AI replaces people. It’s that we stop developing people altogether.
If you had to train the next generation of leaders—without giving them the traditional “grunt work” to cut their teeth on—how would you do it?
That question is no longer hypothetical.
It’s the next great design challenge for the AI-shaped workplace. And those who solve it first may be the ones who build the most resilient, adaptive, and truly human organizations of the next decade.