The cost of inconsistent managers across your org

Manager variance shows up in your engagement scores, exits, and L&D budget. Here's what it costs and how to compress it at scale.


Last updated: 2026-05-06

You can see it in the engagement scores. You can see it in the exit interviews, the skip-level feedback, the patterns of which teams ship on time and which don't. Manager quality is inconsistent across your org. You know which teams have a great one. You know which teams have the other kind. The employee experience is a lottery, and the lottery is a cost line you can measure.

Most People functions don't have a way to fix it scalably. L&D budgets cover a fraction of the org. Executive coaching reaches the people least likely to leave. Manager training helps the managers who were already going to be good at it. Gallup's 2024/25 data shows fewer than half of managers globally (44%) have received any training at all. The variance compounds because nothing in the system is built to compress it.

This is the case for AI coaching at the manager layer, made across the six places the cost actually lands. (What "AI coaching" actually means is worth defining first, since the term now covers a lot of products that are not coaching at all.)

Retention: the same exit themes, the same managers

You see the same patterns in exit interviews. Different employees, different teams, the same manager-related reasons. People stay when they're growing, when their manager invests in them, when feedback is honest and useful. They leave when the opposite is true. By the time someone hands in notice, the decision was made months ago, in a 1:1 that did not happen, a piece of feedback that did not land, a development conversation that was always next quarter.

The cost of that pattern is not abstract. Industry estimates of voluntary turnover replacement cost typically range from one-half to two times annual salary, depending on role and seniority (SHRM and Work Institute have published in this range). For a 500-person org with 12% voluntary turnover and an average salary of $90,000, the annual cost band sits between $2.7 million and $10.8 million. Manager-driven turnover is the variable inside that range you can actually move.
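The band above is straightforward to reproduce. A minimal sketch of the arithmetic, using the headcount, turnover rate, and salary from the worked example and the 0.5x–2x replacement-cost multipliers cited from SHRM and Work Institute:

```python
# Back-of-envelope annual cost band for voluntary turnover.
# Inputs match the worked example in the text; the 0.5x-2x
# multipliers are the industry replacement-cost range.

def turnover_cost_band(headcount, voluntary_rate, avg_salary,
                       low_mult=0.5, high_mult=2.0):
    """Return (low, high) annual voluntary-turnover cost in dollars."""
    departures = headcount * voluntary_rate
    salary_at_risk = departures * avg_salary
    return (salary_at_risk * low_mult, salary_at_risk * high_mult)

low, high = turnover_cost_band(500, 0.12, 90_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $2,700,000 to $10,800,000
```

Plugging in your own headcount, voluntary rate, and average salary gives the band for your org; the manager-driven share of departures is the part of that band coaching can move.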

AI coaching at the manager layer changes the inputs. Every manager has support before the moments that drive retention. Development conversations happen because they were prepped. Feedback lands because it was rehearsed. The exit interview themes start to converge on fewer issues, more specifically named.

Performance: variance you can already see, but cannot scale a fix to

Engagement scores and eNPS by manager. 360 results. You can already see manager performance variance on a dashboard. What you don't have is a way to pull the bottom-quartile managers up without pulling them out of role.

Gallup's 2026 State of the Global Workplace puts manager attribution at 70% of the variance in team-level engagement. The same report shows only 20% of employees worldwide are engaged at work, the lowest level since 2020, and global manager engagement has dropped to 27%, with the steepest declines among managers under 35. The cohort closest to your ICs is collapsing fastest. The variance is not noise. It's the largest single factor inside your control. Manager training was three months ago. The management book on their shelf does not coach them through Tuesday's 1:1. The variance keeps producing the same outcomes.

When every manager has access to in-the-moment coaching applied to their actual situations, the variance compresses. The bottom quartile gets practical support, not abstract concepts. The top quartile gets a development engine that does not stop at competence.

Development: most of your L&D budget never reaches the layer that matters

A typical mid-market L&D budget covers a handful of senior leaders with executive coaching, runs the manager training programs that reach a portion of the cohort, and provides a course library for the rest of the org. Training Magazine's 2024 industry report puts US per-employee training spend at $774 a year. For large enterprises, $398. Less than two hours of an executive coach. Coverage drops off sharply below the senior manager line, and LinkedIn's 2025 Workplace Learning Report shows a third of organizations have no formal career development program at all. The gap between what the top of the house has access to and what your manager-of-managers cohort has access to is the gap that shows up in your succession planning.

Human coaching is excellent and not the layer to disrupt. It is also, by the published pricing of platforms like BetterUp and CoachHub, prohibitive for whole-org coverage. The math has always forced a choice: deep coaching for a few, or shallow training for many. AI coaching changes the math without replacing the human layer. It is how you give the cohort you've been underinvesting in something that compounds.

Exposure: the conversations are happening anyway, and not where you'd want

Your managers are figuring out their hard conversations somewhere. If you've watched the data, you know they're doing it on ChatGPT, Claude, Gemini, or Copilot. Those conversations are visible to your IT admin via the standard enterprise compliance and audit paths these platforms expose by design. The "safe space" your training program promised does not exist where the conversations are actually happening. We covered the architectural risk in detail in Why generic AI assistants aren't safe for employee coaching.

This is also an exposure under the Americans with Disabilities Act (ADA) in the US and equivalent disability-disclosure frameworks in other jurisdictions, which employment counsel are increasingly evaluating. Sensitive disclosures made into a corporate AI tenant can become discoverable corporate records. Whether they carry the same weight as a disclosure to a manager or HR is a developing area, but the architectural exposure is real, and it is in your tenant today.

Architectural privacy at the coaching layer is what lets you offer your managers somewhere to be honest without exposing them, or your org, to that risk. No compliance API. No admin override. HR sees aggregate engagement metrics across the org, never individual transcripts and never team-level topic detail. The privacy promise stops depending on policy and starts depending on architecture.

Readiness: by the time it reaches HR, it's already a fire

The conflict that escalates, the grievance that lands, the resignation you didn't see coming. Your managers needed support before the conversation went wrong, not after. You can see the pattern in retrospect. You don't have a way to get there in time. HR cannot scale to be in every 1:1 before it happens, and the manager training program is a quarter behind the moment.

AI coaching is the layer that gets there in time. Available to managers before the conversation, not three days into the fallout. The fires that reach HR get smaller. The ones that would have escalated, often, do not. The HR team's escalation queue stops being driven by predictable manager skill gaps and starts being driven by the genuinely hard cases that warrant your time.

Coverage: the economics that have always blocked you are changing

Scaled coaching has always been a budget question. Human coaching reaches a small number of people in a typical mid-market budget, at the seniority where coaching delivers the lowest marginal lift. The other 95% of the org gets a course library they do not finish.

The pricing comparison is simple. Human executive coaching is published at hundreds of dollars per hour. Huckleberry is $20 per seat per month for teams. For roughly the cost of one human-coached executive, you can give every manager and IC in your org their own coach. The cost gap closes the access gap. The 95% becomes coverable.
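One way to make that comparison concrete. The $20 per seat per month price is from this article; the annual cost of a single executive coaching engagement is an assumed, illustrative figure (engagements at the hourly rates mentioned are commonly quoted in the tens of thousands of dollars per year), not a published price:

```python
# Illustrative seat math. SEAT_PRICE_MONTHLY is from the article;
# EXEC_ENGAGEMENT_ANNUAL is an assumption for illustration only.

EXEC_ENGAGEMENT_ANNUAL = 24_000   # assumed annual cost of one coached executive
SEAT_PRICE_MONTHLY = 20           # per seat per month, from the article

seat_annual = SEAT_PRICE_MONTHLY * 12              # $240 per seat per year
seats_covered = EXEC_ENGAGEMENT_ANNUAL // seat_annual

print(f"{seats_covered} seats for the cost of one coached executive")
# → 100 seats for the cost of one coached executive
```

Under these assumptions, one executive engagement funds AI coaching seats for a hundred managers and ICs; swap in your own engagement costs to size the trade for your org.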

The ROI proof for coaching at scale already exists. ICF/PwC's research shows 86% of organizations recoup their coaching investment, with an average return of 7x cost, and lifts to individual performance of 70% and team performance of 50%. Microsoft's coaching culture program is on record at 670% ROI. Intel's program contributes around $1B a year in operating margin. What never existed was an economic model that let you offer this to the whole org. AI coaching is that model.

What this means for the people on the receiving end

The variance has a name: people.

It is the high-potential IC who stopped getting development conversations because their manager didn't know how to run them. It is the team member who hasn't had honest feedback in two years because their manager hasn't been taught how to give it without breaking the relationship.

Inside an org of 1,000 people with the kind of manager variance most companies carry, hundreds of careers are running at the speed of the worst manager their owner happens to report to. The fairness implications are obvious. The retention implications follow. The high-potential ICs you most want to develop are the ones most likely to feel the variance, because they're paying the most attention.

When manager quality consistency improves, the experience of being an employee improves at the same time. Feedback gets honest. Development conversations actually happen, and the path to the next role becomes real instead of aspirational. People stay because they're growing, and they're growing because they finally have a manager who is being coached too.

There is a philosophical case here alongside the business one. Coaching has historically been reserved for the people who least needed it: the senior leaders who had already proved themselves. The 95% below that line have always been left to figure it out alone, with the manager they happened to draw as their proxy for development. Scaled AI coaching reaches the layer that needs it and, through them, the rest of the org. The lottery stops being a lottery because the floor under everyone moves up at once.

This is the case you can make to your CFO using the cost numbers above. It is also the case you can make to yourself when you remember why you took the People role in the first place.

What changes when manager variance compresses

The outcome is not "every manager becomes great." It is "the floor moves up." The bottom-quartile managers get practical support before the conversations they were going to mishandle. The mid-tier managers get a development engine they previously didn't have access to. The top-quartile managers stop carrying the variance for the org alone.

The downstream metrics are the ones you already measure. Engagement scores rise where they were lowest. Exit interview themes consolidate around fewer, more specific issues, and time-to-promotion accelerates because development is actually happening. The patterns you couldn't fix from HR start fixing themselves at the manager layer.

How to roll this out

The plan is shorter than most enterprise rollouts:

  1. Connect your context. HRIS, values, competency model, handbook, uploaded once from the admin dashboard.
  2. Activate the org. Managers and ICs activate themselves on any device from session one. No training cohorts, no rollout project.
  3. See it working. Aggregate engagement metrics and org-wide coaching themes, without individual transcript access or team-level topic detail. Privacy by architecture, signal by design.

You can book a demo to walk through what activation looks like for an org your size, or explore the HR leader use case for the buyer's view.

Frequently asked questions

Q: How does this fit alongside our existing manager training?

A: Training transfers concepts. AI coaching applies them in the moment. They work together. Most customers find that AI coaching is what makes their existing training programs finally translate into manager behavior, because the application happens at the moment of the actual situation rather than three months after the workshop.

Q: What does HR actually see in the analytics?

A: Engagement metrics and high-level aggregate themes across the org. That includes session volume and engagement trends, plus broad coaching themes that emerge at the org level. Never individual transcripts, and never anything specific enough to identify what a particular manager or team is working on. The architecture forecloses individual access by design, which is what makes the aggregate signal trustworthy.

Q: How does this satisfy our privacy and compliance review?

A: Huckleberry publishes a Data Protection Addendum covering personal data handling, processing roles, and security commitments. The architectural privacy posture (no compliance API, no admin override to session content) often closes objections that block other AI tools at legal review. We do not train models on customer conversation data, and audio is not stored.

Q: Can we measure manager improvement?

A: Yes. Engagement-by-team movement, eNPS shifts, exit theme consolidation, internal mobility rate, and time-to-promotion are the typical signals. Many customers also use 360 feedback (which Huckleberry runs as voice-based 5-minute conversations rather than text surveys) to track manager-level change directly.

Q: How does this compare to BetterUp or CoachHub?

A: BetterUp and CoachHub are human coaching platforms with AI-augmented features. They are excellent for the senior leadership tier where their pricing model fits. Huckleberry is purpose-built for whole-org coverage: voice-first AI coaching at $20 per seat per month for teams. Most organizations adopting AI coaching at scale also retain human coaching for the executive layer. They work in combination.

Q: What's the rollout time and effort?

A: Days, not months. The HRIS sync and context upload happen in the admin dashboard. Managers and ICs activate themselves. There is no training cohort, no facilitation requirement, no project plan. The activation pattern looks more like rolling out a productivity tool than a learning program.

When manager quality stops being a lottery

The metrics you're measured on stop being volatile when the manager layer stops being volatile. The variance is the variable. AI coaching at scale is how the variance compresses.

Book a demo to walk through how Huckleberry deploys for an org your size. Or read how Huckleberry handles privacy before legal raises it.

Sources and references

  • Gallup, State of the Global Workplace 2026. Manager attribution to engagement variance, manager engagement decline, and global engagement floor.
  • Gallup, State of the Global Workplace 2024/25. Manager training coverage data.
  • Training Magazine, 2024 Industry Report. Per-employee US training spend.
  • LinkedIn, 2025 Workplace Learning Report. Career development program coverage by organization.
  • ICF/PwC, Global Coaching Study. 86% recoup, 7x ROI, performance lift figures.
  • ICF, Microsoft and Intel coaching ROI Prism Award case studies.
  • SHRM, Work Institute. Voluntary turnover replacement cost estimates.
  • Huckleberry privacy architecture at Why generic AI assistants aren't safe for employee coaching.
  • AI coaching category definition at What is AI coaching? A working definition.
  • Coaching framework references: GROW (Whitmore, 1992); SBI (Center for Creative Leadership); Radical Candor (Scott, 2017); Situational Leadership (Hersey and Blanchard, 1969).