Record levels of investment show that business leaders recognise AI’s game-changing potential — yet the gap between ambition and impact remains stark. Just 1% of organisations describe their AI deployments as “mature,” and 74% still struggle to translate AI into measurable value. In fact, 42% of firms abandoned most AI initiatives in 2025, a steep rise from 17% the year before.
By contrast, Deloitte’s year-end State of Generative AI pulse survey paints a very different picture at the leading edge: 74% of advanced Gen-AI initiatives are already meeting or exceeding ROI targets. Yet across the board, only one in five projects sees that level of success — evidence of a widening leader–laggard gap.
These insights were explored in a recent live event hosted by the creators of the LSE AI Leadership Accelerator, where LSE faculty experts and an industry leader from Deloitte examined what it really takes to translate AI strategy into business impact. The event also introduced the programme itself, designed for senior professionals and consultants who need to lead or advise on AI transformation. Developed in collaboration with FourthRev, experts from Deloitte’s Office of Gen AI, and Heads of AI at Inchcape Digital, Thredd and others, the programme focuses not on technical training, but on strategic leadership, organisational change, and responsible implementation.
Watch the full event recording here to hear directly from industry leaders on what’s really holding AI back — and what’s working. According to BCG’s 2024 AI Radar, 70% of adoption challenges stem from people and process issues, not technology. As Dr Dorottya Sallai of LSE’s Department of Management put it, “AI adoption is a cultural transition,” one that requires overcoming psychological and leadership barriers more than technical ones.
In many organisations, capable models are built — but the business case isn’t clearly communicated. Meanwhile, leaders cite employee resistance, but McKinsey's 2025 findings reveal a more sobering truth: “The biggest barrier to scaling AI is not employees — who are ready — but leaders, who are not steering fast enough.” Their data shows that leaders are twice as likely to blame employee resistance as to acknowledge their own strategic shortfalls.
Drawing on insights from the event, ongoing research across multiple industries, and feedback from organisations actively implementing AI today, this article identifies seven key leadership practices that consistently enable AI transformation. These practices address the three most common executive pain points: lack of strategic vision, organisational resistance, and difficulty delivering tangible results.
1. Foster trust and transparency
Trust is the foundation of successful AI adoption. When employees don't understand AI's purpose or implementation, resistance naturally follows. Harvard Business Review puts it succinctly: "Employees won't trust AI if they don't trust their leaders."
Effective leaders:
- Communicate openly about AI initiatives and their intended impact
- Acknowledge uncertainties and address concerns proactively
- Involve employees in AI experiments and decision-making
- Demonstrate how AI decisions are made and can be verified
- Establish formal ‘responsible AI’ processes
The impact is measurable: according to IBM research, organisations that implement formal responsible AI frameworks report significantly higher workforce adoption and engagement.
What’s more, employees back what they understand. Deloitte’s C-suite ethics survey shows 88% of companies now communicate openly about how they use AI, and 52% involve the board when drafting AI-ethics policy.
In short, when leaders communicate clearly and implement responsible AI principles, they create the trust and clarity teams need to adopt AI confidently.
2. Lead with a clear vision and business case
LSE researchers warn that “AI strategies without a compelling ‘why’ rarely survive first contact with reality.” The best leaders link every AI use case to a strategic outcome and publish an enterprise AI roadmap.
Deloitte finds that C-suite alignment is a top-three predictor of scaling success: AI initiatives succeed when they are connected to strategic business outcomes rather than implemented as technology experiments, and when leaders can articulate a compelling ‘why’ behind adoption.
Successful organisations:
- Connect AI directly to business strategy and priorities
- Develop compelling narratives about AI's purpose
- Establish clear ROI metrics and milestones
- Focus on solving real business problems rather than deploying technology for its own sake
IBM emphasises that the most successful AI adopters ground their initiatives in clearly defined business strategies, supported by structured roadmaps that align AI efforts with enterprise goals — a sharp contrast to the ad-hoc approaches seen in less mature organisations.
Strategic programme managers emphasise that developing validated narratives around AI adoption — supported by concrete business cases — is essential for earning stakeholder buy-in, particularly in organisations grappling with legacy systems or recovering from previous failed technology initiatives.
3. Establish strong governance and ethics
Ethics is now a revenue issue. Deloitte reports that 55% of C-level executives say robust AI guidelines are very important to growth, and 49% already have formal policies in place (another 37% are nearly ready). When the CEO or board takes direct oversight, McKinsey observes a 3.6× boost in bottom-line impact.
As AI becomes more embedded in critical business processes, governance becomes increasingly important. Leaders must establish appropriate guardrails while enabling innovation.
Effective governance includes:
- Clear "guardrails" for appropriate AI use cases
- Multi-disciplinary oversight spanning technical and business perspectives
- Responsible AI principles addressing bias, privacy and transparency
- Proactive compliance management
- Executive-level accountability
This level of executive oversight has become a consistent marker of maturity. Companies that embed governance at the highest levels tend to achieve stronger alignment, greater impact, and more sustainable results.
IKEA provides an instructive example: it has established a multidisciplinary AI governance team of technologists, legal experts, policy professionals, and designers that ensures AI initiatives align with business priorities and uphold responsible AI principles.
4. Invest in people and skills
AI fluency must extend across the enterprise. Nearly two-thirds of organisations prefer upskilling existing employees over hiring externally for new AI roles; almost half are already reskilling staff for Gen-AI. LSE’s “human-centred” guidance stresses training in critical thinking, change management, and prompt engineering to turn fear into curiosity.
AI fluency is rapidly becoming as important as digital literacy. Organisations must build capabilities across all levels, not just among technical specialists.
Leading organisations:
- Develop organisation-wide data literacy programmes
- Provide role-based AI capability training
- Establish AI academies and learning paths
- Build communities of practice to share knowledge
- Train employees in "prompt engineering" for generative AI
BCG research indicates that organisations with a strategic focus on AI, allocating substantial resources and upskilling their workforce, achieve significantly higher ROI on their AI investments than their peers.
Technical leaders increasingly highlight the importance of soft skills alongside technical fluency — including the ability to frame AI solutions in terms of business value, communicate outcomes to stakeholders, and align innovations with strategic goals. As AI becomes more integrated into decision-making, this blend of business and technical acumen is proving essential for driving adoption and delivering real impact.
5. Encourage experimentation and learn from failure
Innovation demands controlled testing environments. Deloitte’s Gen-AI survey shows 76% of leaders will give AI projects at least 12 months to resolve ROI or adoption challenges before shrinking budgets, signalling patience for iterative learning.
Innovation requires experimentation, and experiments sometimes fail. How organisations handle failure often determines their long-term success with AI.
Successful approaches include:
- Creating safe spaces for AI experimentation
- Destigmatising failure as an essential part of the learning process
- Applying agile methodologies to AI projects
- Systematically reviewing lessons learned across the organisation
- Celebrating both successes and valuable failures
BCG's research surfaces a counterintuitive insight: organisations that acknowledge and even celebrate failures in AI pilots tend to create more long-term value, likely because they learn faster and iterate more effectively.
The hidden cost of hesitation
While experimentation is essential, many organisations remain cautious — particularly those navigating economic pressure or still carrying the weight of past transformation failures.
George Johnston from Deloitte offered a candid reflection on this dilemma during the LSE webinar:
“You don't experiment for nothing… there is a cost to these things. You’ll ask, is now the time to be spending money on something that may not work? Or shall we hold on to that for the time being, understand what others are doing? We don't necessarily need to be the first mover…”
Pausing may feel prudent, but in fast-moving fields like AI, the bigger risk is falling behind while others build capability, confidence, and momentum.
6. Show visible leadership and align from the top
Culture follows example. High-achieving companies are three times more likely to trust AI insights over “gut feel,” yet they also invest heavily in change management and training to channel that confidence productively. Cross-functional steering committees and leaders who personally use AI tools signal that transformation is non-negotiable.
George noted in the event:
“When I think about some of the experimentation that's not worked so well... probably the most common factor that we've seen is that there has not been sufficient senior sponsorship, and/or there's a limited path to that scaling. That tends to be where things fail... not understanding the end-to-end process of how you are going to transform, and having the senior buy-in.”
When leaders demonstrate personal commitment to AI adoption, it signals importance to the entire organisation. Alignment at the top is particularly crucial given AI's cross-functional nature.
Effective leadership alignment includes:
- Ensuring C-suite consensus on AI strategy and priorities
- Establishing cross-functional steering committees
- Creating dedicated transformation offices when appropriate
- Leaders personally using and championing AI tools
- Regular board-level engagement on AI progress
IBM found that in the most successful AI organisations, the C-suite and IT leaders work in lockstep — a sharp contrast to companies where AI remains siloed in technical teams.
7. Prioritise ethical leadership & responsible AI
Beyond compliance, ethics is a brand differentiator. According to Deloitte, board-level involvement in AI-ethics policy is becoming standard practice, with 52% of boards always engaged. Organisations with mature ethical frameworks are also 2.5× more likely to earn customer trust. LSE experts add that open dialogue about risks creates the psychological safety needed for rapid, responsible experimentation. As Dr Dorottya Sallai put it:
“My advice would be — equip yourself with the knowledge. If you have the knowledge and you understand what's happening, you will be able to take leadership. I think the biggest challenge for leaders today is to understand what's going on.”
As AI systems become more powerful and influential, ethical leadership has emerged as a critical practice for sustainable success. Organisations leading in this area recognise that responsible AI isn't just about risk management — it's about competitive advantage and long-term viability.
Effective ethical leadership includes:
- Defining and enforcing clear ethical boundaries for AI use
- Ensuring diverse perspectives in AI development and governance
- Proactively addressing potential biases in data and algorithms
- Creating transparent processes for addressing ethical concerns
- Aligning AI initiatives with organisational values and societal expectations
Ethical leadership in AI isn’t just about doing the right thing — it’s increasingly essential for driving adoption, earning customer trust, and unlocking long-term value. Deloitte finds that organisations with clear AI governance structures are more likely to see real business impact, while the World Economic Forum warns that consumers are paying closer attention to how companies design and deploy AI.
As AI scales, so do the consequences of getting ethics wrong — making proactive, transparent leadership non-negotiable.
The business impact of effective AI leadership
Taken together, these practices tell a compelling story in the numbers: effective AI leadership isn’t a soft skill — it’s a measurable differentiator. Organisations that lead in this space grow revenue 1.5× faster, achieve higher ROI, and stay ahead of disruption.
The gap between leaders and laggards continues to widen, creating urgency for organisations to address leadership capabilities around AI implementation. As Dr Sallai highlighted, leadership today demands not just strategy, but the willingness to stay ahead of fast-moving change.
Breaking the glass ceiling: Developing AI leadership capabilities
Most AI leadership programmes focus only on coding or data science. The LSE AI Leadership Accelerator is different: it equips executives to apply these seven practices, turn strategic intent into business value, and join the elite 20% of transformations that succeed. Participants leave with:
- Practical tools to implement the seven leadership practices in their organisations
- A board-ready AI business case and implementation roadmap
- Toolkits for responsible AI governance and culture change
- Direct feedback from Deloitte and LSE faculty on live projects
- A peer network of leaders closing the AI value gap
Transforming organisational approaches to AI requires intentional development of leadership capabilities. Many professionals are now aiming to break through the next leadership ceiling — shifting from tactical roles into strategic influence by mastering AI implementation.
The LSE AI Leadership Accelerator helps leaders close the implementation gap by focusing on real-world business cases, responsible governance, and the human side of AI transformation — exactly where most initiatives fall short.
As the research clearly demonstrates, the gap between AI potential and business value is primarily a leadership challenge, not a technical one. Organisations that develop strong AI leadership capabilities position themselves to capture the substantial value that AI offers while avoiding the pitfalls that have derailed so many initiatives.
Ready to bridge the gap between AI potential and real business impact?
To learn more about the LSE AI Leadership Accelerator and how it can help your organisation bridge the AI implementation gap, explore the programme page.