A lot of the excitement around AI right now is being driven by senior leaders: people with years of experience, deep critical-thinking skills, and sharp instincts built over time. It’s no wonder tools like ChatGPT and Claude feel powerful in their hands. These people can spot when output is wrong, when context is missing, or when nuance has been flattened into generic advice.
But these are not tools of understanding; they’re sophisticated prediction engines, generating what they predict you want to see based on patterns in their training data. They work brilliantly when you already know what “right” looks like, and they become dangerous when you don’t. That creates a foundational problem when juniors, still learning the fundamentals, lean heavily on these tools. It’s like giving a calculator to someone who hasn’t yet learned basic arithmetic: it speeds you up, but only if you understand the logic behind the numbers.
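To see what “prediction engine” means in practice, here is a deliberately tiny, hypothetical Python sketch of my own. It “learns” nothing except which word tends to follow which, yet it still produces plausible continuations. Real models are incomparably more sophisticated, but the principle, pattern continuation rather than understanding, is the same.

    # A deliberately tiny, hypothetical sketch of next-word prediction.
    # It counts which word follows which in some "training data", then
    # continues text with the most frequent pattern it has seen.
    from collections import Counter, defaultdict

    corpus = "the client wants results the client wants speed".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word):
        # Most common continuation in the data: plausible, not understood
        return following[word].most_common(1)[0][0]

    print(predict("client"))  # 'wants', because that pattern dominates

Scale that idea up by billions of parameters and you get fluent, convincing output; what you don’t automatically get is any guarantee that the continuation is correct.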
We’re already seeing this play out across multiple industries. In software development, junior developers are using AI to generate code they can’t debug. In marketing, entry-level professionals are producing content they can’t evaluate for brand alignment. In consulting, analysts are creating presentations without understanding the underlying frameworks. We’re creating a generation of AI-dependent workers who never had the opportunity to build foundational skills. Knowing how to prompt a model is not the same as knowing when to trust it; AI literacy and AI dependence are fundamentally different things.
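To make the software-development example concrete, here is a hypothetical sketch, not output from any particular model, of the kind of plausible-looking Python an assistant might hand a junior developer: it runs, it passes a quick first test, and it hides a classic bug.

    # Hypothetical assistant-style output: looks right, runs, and hides a
    # classic Python pitfall a junior developer may not know to look for.
    def add_tag(tag, tags=[]):   # the default list is created once, at
        tags.append(tag)         # definition time, and shared by every
        return tags              # call that omits the second argument

    print(add_tag("urgent"))     # ['urgent'] -- the quick test passes
    print(add_tag("reviewed"))   # ['urgent', 'reviewed'] -- state has
                                 # leaked between unrelated calls

    # The fix an experienced reviewer would insist on:
    def add_tag_safe(tag, tags=None):
        if tags is None:
            tags = []            # a fresh list on every call
        tags.append(tag)
        return tags

Nothing here is exotic; a senior developer catches it on sight. That is precisely the asymmetry: the tool only speeds you up if you already know what to look for.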
But I am reassured that the market is already showing signs of correction. Duolingo’s CEO recently walked back comments suggesting AI would replace contract workers after facing significant backlash. Klarna, after aggressively hyping its AI-first customer service strategy, has quietly begun rehiring human agents after the AI approach led to “lower quality”. Even tech giants like Meta and Amazon have slowed their automation rollouts in certain divisions.
Shortsightedness
But many leaders remain focused solely on immediate efficiency gains. Recently, I spoke to a very senior board member at a global media agency who delivered a presentation on how agentic AI was transforming their industry. The talk was full of impressive metrics about delivering insights to clients faster and cheaper than ever before. When I asked him afterwards about the growing gap in junior talent development and whether they were addressing it, his response was telling: “It’s not down to us to sort out the outside world’s problems.”
This mindset, treating talent-pipeline disruption as someone else’s problem, is precisely why we’re heading towards a crisis. The reversals at Duolingo and Klarna aren’t just about public relations; they’re about operational reality. Organisations are discovering that tools alone aren’t enough when problems become complex or the stakes are high.
In law firms, partners are realising that while AI can draft basic contracts, it takes experienced lawyers to understand when those contracts might fail in court. In healthcare, AI can identify patterns in medical images, but it takes trained diagnosticians to understand what those patterns mean for individual patients.
The consequences extend beyond productivity metrics, too. When we automate away junior roles, we’re cutting off critical entry points into industries, and those entry points are often the most accessible for people from underrepresented backgrounds.
Traditional career ladders provided structured learning environments where professionals could build expertise gradually, make mistakes safely, and develop judgement through experience. Remove the bottom rungs, and you don’t just eliminate jobs, you eliminate the entire pathway to senior leadership.
This creates a diversity crisis waiting to happen. The same senior leaders who benefit most from AI tools are the ones making decisions about which roles to automate. Without deliberate intervention, we risk creating more homogeneous, less innovative organisations just when we need the opposite.
At this very moment, the dominant narrative around AI is focused squarely on efficiency: faster output, fewer people, higher margins. But nobody’s seriously discussing reinvestment. The self-checkout didn’t fund shorter working weeks for supermarket staff; it funded higher profits for shareholders and fewer people on payroll. I’m not sure AI will be any different.
There is light coming from some forward-thinking organisations, though. I’ve started to see new roles like “AI Trainers” and “Human-AI Interaction Designers” for junior professionals, and apprenticeship programmes that pair junior consultants with both senior mentors and AI tools. These companies understand that the question isn’t whether to use AI, but how to use it whilst still developing human capability.
We’ve seen this shortsightedness before. After lockdown, senior leaders praised remote work from their well-equipped home offices with dedicated spaces and established networks. Meanwhile, junior professionals were trying to learn and collaborate from noisy flatshares with poor internet connections. AI presents a similar asymmetry: tools that amplify existing capability can actively disadvantage those still building it.

Leadership evolution
It’s interesting to see how leadership itself is evolving alongside AI. I listened to a great talk from David Lindop at the Manchester Tech Festival Leadership Evening recently, where he spoke about how traditional organisational pyramids are becoming diamonds, reshaped by AI agents, copilots, and robotics. As these systems become more autonomous, simply putting “a human in the loop” won’t be sufficient.
Future leaders will need to be hybrid professionals who bring uniquely human capabilities:
Business Context: Understanding not just what the data says, but what it means for strategy, culture, and competitive positioning.
Emotional Intelligence: Reading between the lines of stakeholder concerns, managing change, and building trust in human-AI collaboration.
Ethics and Political Instincts: Navigating the complex moral and social implications of AI decisions, especially when they affect people’s lives and livelihoods.
Creative and Lateral Thinking: Solving problems that don’t fit established patterns and identifying opportunities that algorithms might miss.
AI and Data Literacy: Understanding model limitations, bias risks, and when human judgement should override automated recommendations.
These “generalist-specialists”, part strategist, part technologist, part diplomat, are rarely found in traditional data science or IT departments. They emerge through diverse experiences, cross-functional learning, and, crucially, junior opportunities that let them experiment and grow.
If we want organisations that are truly resilient and future-ready, we need to move beyond viewing AI purely as a cost-cutting tool. Instead, we should see it as an opportunity to reinvent how we develop talent:
Instead of eliminating junior positions, redesign entry-level roles around human-AI collaboration. Create roles like “AI Quality Assurance Analyst” or “Algorithm Ethics Coordinator” that require both technical understanding and human judgement; a rough sketch of what such a role’s review gate might look like follows this list.
Invest in training programmes that teach both AI tools and the foundational skills needed to use them effectively. That means building the domain knowledge, critical thinking, and ethical frameworks that make AI outputs valuable rather than dangerous.
Pair junior professionals with both senior mentors and AI tools, ensuring they learn when to rely on each. We should be developing judgement, not just technical proficiency.
Don’t just measure immediate productivity gains or cost savings; look at longer-term indicators like innovation rates, employee development, and organisational adaptability.
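As a thought experiment, and with every name and threshold here being my own illustrative assumption, a review gate for AI-generated work, the kind an entry-level “AI Quality Assurance Analyst” role could be built around, might look something like this:

    # A minimal, hypothetical sketch of a human-AI review gate. All names
    # and thresholds are illustrative assumptions, not a real system.
    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        confidence: float   # assumed to come from the generating system

    def route(draft: Draft) -> str:
        """Decide who looks at an AI-generated draft next."""
        if draft.confidence < 0.5:
            return "discard: regenerate with more context"
        if draft.confidence < 0.9:
            # The deliberate middle band: junior reviewers check the work,
            # building the judgement the senior escalation path backstops.
            return "junior review, with senior escalation if unsure"
        return "senior spot-check before release"

    print(route(Draft("Quarterly summary...", confidence=0.72)))
    # -> junior review, with senior escalation if unsure

The design choice is the middle band: rather than automating it away, it is reserved for juniors precisely because that is where judgement gets built.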
The companies that thrive in the AI era won’t be those that simply adopt the latest tools fastest. They’ll be the ones that use AI to amplify human potential rather than replace it, building the next generation of leaders while solving today’s problems.
Because in the end, AI may be able to predict what we want to hear, but it can’t replace the wisdom that comes from learning what we need to know. And that wisdom still requires a human journey that shouldn’t skip a generation.