Eighty-eight percent of organizations now use AI in at least one business function. That number is up ten percentage points from just a year earlier, according to McKinsey’s State of AI in 2025 report, which surveyed nearly 2,000 participants across 105 countries.
And yet, almost two-thirds of those organizations are still stuck in experimentation or pilot mode. Only about one-third have begun scaling AI across the enterprise. Just 6% of respondents—the “high performers”—report that AI contributes more than 5% to their EBIT.
McKinsey calls it a paradox: AI is everywhere, but its impact is still shallow.
So what separates the 6% from the other 94%? It’s not better algorithms. It’s not bigger budgets. According to the same research, high performers are nearly three times more likely to have fundamentally redesigned their workflows—and they are far more likely to have defined processes that determine how and when AI outputs require human validation.
In other words, the bottleneck isn’t technology. It’s leadership.
This is where the human-in-the-loop leadership model comes in—and why it may be the most important AI governance framework your organization hasn’t built yet.
If you’ve searched “human-in-the-loop” before, you likely found technical definitions about data labeling, model training, and algorithmic validation. In its original machine learning context, human-in-the-loop (HITL) describes a design pattern where human judgment is embedded at critical stages of an AI system’s lifecycle—training, validation, and real-time operation—so that machines don’t operate as autonomous black boxes.
But here’s what that technical concept reveals about leadership: the same principles that make AI systems trustworthy also make organizations trustworthy. In both cases, the key ingredient is deliberate human oversight at the moments that matter most.
Human-in-the-loop leadership applies this framework to how organizations actually make decisions with AI. It’s a model where leaders don’t just adopt AI tools or passively review machine-generated outputs—they actively govern the intersection between technology, people, and business outcomes. They define the boundaries within which AI operates, interpret its recommendations through contextual and ethical lenses, and make the final calls on decisions that shape culture, talent, and strategic direction.
Think of it as moving from “human in the loop” as a technical checkpoint to “human in the lead” as an organizational philosophy.
[Insert Leadership Loop Diagram Here]
The evidence is converging from multiple directions: organizations that treat AI adoption as purely a technology initiative are misdiagnosing the challenge. What’s really needed is a comprehensive approach to workforce transformation that puts leadership at the center.
McKinsey’s research makes this explicit. High-performing organizations don’t just deploy more AI—they lead differently. They are three times more likely to report that senior leaders demonstrate ownership of and commitment to AI initiatives. They don’t just bolt AI onto existing workflows; they redesign how work gets done through deliberate operating model and organizational design—which is an act of leadership, not engineering.
Meanwhile, the organizations trapped in “pilot purgatory” share a common profile: scattered experiments across functions, no shared platforms, no redesigned workflows, and—critically—no leadership model for governing AI-driven decisions at scale.
The World Economic Forum’s Future of Jobs Report 2025, based on input from over 1,000 employers representing more than 14 million workers, paints a striking picture. While AI and big data top the list of fastest-growing skills, leadership and social influence saw one of the largest increases in importance—rising 22 percentage points as a core skill compared to the previous report.
This isn’t a coincidence. As AI automates routine management tasks, the remaining work of leadership becomes almost entirely relational and judgment-based. The WEF describes the emerging paradigm as “human-led, AI-enabled teams” where productivity comes from orchestration, not substitution. Organizations that want to stay ahead must rethink their skills architecture to reflect this new reality.
When an algorithm automates hiring screening, performance reviews, or customer interactions, it doesn’t just execute faster. It amplifies whatever assumptions, biases, and values are embedded in its design. Without leaders who understand this dynamic, organizations risk scaling harmful patterns at machine speed.
Governance is rapidly becoming a leadership function, not just a compliance checkbox. Regulatory frameworks like the EU AI Act are demanding algorithmic transparency and human oversight in high-stakes decisions. Leaders who can’t explain what their AI systems are doing—and why—will face regulatory, reputational, and cultural exposure. (For a deeper look at how to prepare your workforce for AI without disrupting culture, read our related guide.)
Research consistently shows that employees want to feel seen, heard, and valued by people—not dashboards. When organizations over-automate people decisions, employee engagement drops. When leaders use AI to inform their judgment while maintaining the human relationship, trust strengthens.
The organizations in McKinsey’s high-performer category understand this: they don’t remove humans from the loop. They elevate humans to lead the loop.
If the model is clear, the question becomes: what does it actually look like in practice? Based on converging research from McKinsey, the World Economic Forum, and emerging patterns across industries, five capabilities define leaders who operate effectively in AI-enabled environments.
The first capability, strategic AI fluency, doesn’t mean coding or building machine learning models. It means understanding enough about how AI works—its strengths, limitations, and failure modes—to make informed decisions about where and how it should be deployed.
Leaders with strategic AI fluency can evaluate vendor claims critically, anticipate unintended consequences, and ask the questions that engineers and data scientists may not think to ask. They understand concepts like model drift, training data bias, and hallucination risk—not at a technical depth, but at a strategic one. This is one reason why HR technology strategy decisions can’t be delegated to IT alone—leaders must be equipped to ask the right questions about the systems their organizations adopt.
As McKinsey’s recent leadership research puts it: the leaders who thrive in the AI era will use AI to think with them, not for them.
The second capability is ethical judgment under time pressure. AI compresses decision-making timelines, creating pressure to act faster, often with less deliberation. Human-in-the-loop leaders develop the ability to make values-aligned decisions quickly without sacrificing depth.
This involves cultivating a strong ethical compass, understanding organizational values well enough to apply them instinctively, and knowing when to slow down even when the system says go. In a world where AI increasingly mediates decisions between people, ethical judgment becomes the leadership skill that protects organizational integrity.
The third capability is emotional intelligence. As AI handles more routine management tasks—scheduling, reporting, basic performance tracking—the remaining work of leadership becomes almost entirely relational: resolving conflict, navigating ambiguity, coaching through change, building psychological safety.
The WEF’s data reinforces this: skills like emotional intelligence, empathy, and active listening have among the lowest AI substitution rates of any professional capability. They remain fundamentally human. And because the pace of change is relentless, resilience becomes a non-negotiable companion skill.
The fourth capability is change navigation. Every AI implementation is, at its core, a change management challenge. Human-in-the-loop leaders understand that technology adoption fails not because of the technology, but because people weren’t brought along.
They lead culture transformation alongside digital transformation, ensuring that values, norms, and behaviors evolve in step with the tools. They recognize that AI is reshaping not just workflows but identity—how people understand their roles, their contributions, and their value.
The fifth capability, oversight design, is perhaps the most underappreciated: the ability to design appropriate levels of human oversight for different AI-driven workflows.
Not every process needs the same level of human intervention. A practical governance framework assigns every AI-enabled workflow to one of three tiers:
• Human-in-the-loop (high-risk): Direct human approval required before action. Think hiring decisions, safety-critical operations, financial commitments.
• Human-on-the-loop (medium-risk): AI operates autonomously within defined boundaries, with human monitoring and escalation triggers. Think customer service routing, internal reporting, scheduling.
• Human-out-of-the-loop (low-risk): AI operates with minimal oversight where errors are easily reversible. Think document formatting, data entry validation, routine notifications.
The leader’s job is to know the difference, build governance structures accordingly, and ensure accountability remains with people—not algorithms.
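For teams that operationalize this framework in software, the three tiers can be sketched as a simple policy check that every AI-enabled action passes through. This is an illustrative sketch only, not a production governance system; the workflow names and the `requires_human_approval` helper are hypothetical, and the tier assignments follow the examples given above.

```python
from enum import Enum

class OversightTier(Enum):
    IN_THE_LOOP = "human-in-the-loop"          # high-risk: human approval before action
    ON_THE_LOOP = "human-on-the-loop"          # medium-risk: autonomous, with monitoring
    OUT_OF_THE_LOOP = "human-out-of-the-loop"  # low-risk: minimal oversight, errors reversible

# Hypothetical workflow-to-tier assignments, mirroring the examples in the text.
WORKFLOW_TIERS = {
    "hiring_decision": OversightTier.IN_THE_LOOP,
    "customer_service_routing": OversightTier.ON_THE_LOOP,
    "document_formatting": OversightTier.OUT_OF_THE_LOOP,
}

def requires_human_approval(workflow: str) -> bool:
    """Return True if this workflow may not act without direct human sign-off.

    Unknown workflows default to the highest-oversight tier, so nothing
    operates ungoverned by accident.
    """
    tier = WORKFLOW_TIERS.get(workflow, OversightTier.IN_THE_LOOP)
    return tier == OversightTier.IN_THE_LOOP
```

The deliberate design choice here is the default: an unclassified workflow falls into the strictest tier, which keeps accountability with people until a leader has explicitly decided otherwise.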
Many companies are investing heavily in AI tools while underinvesting in the leaders who need to wield them responsibly. These are the most common missteps:
Treating AI literacy as a technical skill rather than a strategic fluency. Sending leaders to a half-day workshop on prompt engineering doesn’t prepare them to govern AI-driven workforce decisions. Real AI literacy means knowing when to override the algorithm—and having the judgment to explain why.
Assuming existing leadership competencies will transfer. The skills that made someone effective in a pre-AI environment—operational excellence, functional expertise, stakeholder management—are necessary but insufficient. The AI age demands new capabilities layered on top of traditional ones. (This is why leadership development must evolve beyond the traditional playbook.)
Neglecting the middle. Senior leaders get the strategy briefings. Frontline employees get the tool training. But middle managers—the leaders most directly responsible for day-to-day decisions about people, process, and performance—are often left without the frameworks or skills to lead in an AI-augmented environment. This is where the human-in-the-loop model either holds or breaks.
Delegating governance to IT. Oversight of AI in people decisions can’t live in the technology function alone. HR leaders, people managers, and business unit heads must all understand and participate in the governance model. For many organizations, this means rethinking the HR operating model itself. When accountability is unclear, no one intervenes—even when intervention is needed.
Understanding the model is one thing. Building it into your organization is another. Here’s a practical framework for getting started:
First, audit where AI already touches decisions about people. Before you can govern AI, you need to see it. Conduct an inventory of every workflow where AI influences or automates decisions about people: talent acquisition, performance management, workforce planning, learning recommendations, compensation modeling, succession planning.
For each workflow, answer three questions: What is the risk if this decision is wrong? Who currently bears accountability for the outcome? Is there a defined point where a human validates the AI’s recommendation?
If you can’t answer all three, you’ve found your first vulnerability.
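If the audit lives in a spreadsheet or a script, the three questions can be captured as a simple record, with a workflow flagged as a vulnerability whenever any answer is missing. A minimal sketch, assuming a hypothetical `WorkflowAudit` record and made-up field values:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkflowAudit:
    name: str
    risk_if_wrong: Optional[str]          # What is the risk if this decision is wrong?
    accountable_owner: Optional[str]      # Who bears accountability for the outcome?
    human_validation_point: Optional[str] # Where does a human validate the AI's output?

    def is_vulnerable(self) -> bool:
        # Any unanswered question marks this workflow as a governance gap.
        return None in (
            self.risk_if_wrong,
            self.accountable_owner,
            self.human_validation_point,
        )
```

A workflow with a named risk and owner but no defined human validation point would still come back as vulnerable, which is exactly the gap the audit is meant to surface.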
Second, invest in deliberate capability-building. Generic leadership development won’t close the gap. You need targeted training that builds the specific capabilities human-in-the-loop leadership demands: strategic AI fluency, ethical judgment, emotional intelligence, change navigation, and oversight design.
This training should be experiential, not theoretical. Leaders need to practice making judgment calls with AI-generated recommendations in simulated high-stakes scenarios. They need to build the muscle of overriding a model’s output when context demands it. And they need to do this in environments where it’s safe to get it wrong—before the stakes are real.
Third, codify the governance model. Using the three-tier framework (in-the-loop, on-the-loop, out-of-the-loop), assign every AI-enabled workflow to the appropriate oversight level. Define who is accountable at each tier. Build escalation paths. Establish review cadences.
Then make governance visible. Publish your AI oversight model internally. Train managers on their specific responsibilities within it. And measure compliance—not just adoption.
The organizations that do this well don’t treat governance as bureaucracy. They treat it as leadership infrastructure. If you’re a smaller organization navigating these challenges, our guide on how small business owners can build a future-ready workforce offers a practical starting point.
The gap between where most leadership teams are today and where they need to be isn’t going to close with a webinar or a self-paced course. It requires structured, facilitated development designed around the realities of leading in an AI-driven environment.
At Transforma, we work with organizations to develop leaders who don’t just survive the AI age—they define it. Our Leadership Training programs are built around the capabilities that matter most right now:
• Strategic thinking and decision-making — including how to evaluate and govern AI-driven recommendations
• Emotional intelligence and resilience — the human skills that AI elevates rather than replaces
• Coaching, feedback, and accountability — maintaining trust in teams experiencing rapid change
• Leading through change and ambiguity — navigating the cultural disruption that AI adoption creates
• Conflict navigation and performance conversations — the high-stakes human moments no algorithm can handle
• Inclusive leadership and culture-building behaviors — ensuring transformation strengthens rather than fractures your culture
Through interactive workshops, real-world simulations, and facilitated learning experiences led by certified facilitators, we help organizations close the gap between where their leaders are today and where they need to be. Every program is custom-aligned to your organization’s strategy, values, and specific challenges—because the 6% didn’t get there with off-the-shelf solutions.
For organizations that need ongoing strategic guidance rather than a one-time engagement, our Fractional HR Expert service provides C-suite advisory support to help leadership teams navigate AI adoption with confidence.
The question isn’t whether your organization will adopt AI. It’s whether your leaders are ready to govern it. Book a consultation with Transforma and let’s build the leadership capability that turns AI adoption into AI impact.