Training Can't Fix a Broken System
Most performance problems aren't skill gaps. Learn when training is the wrong solution and how to diagnose root causes before wasting budget.
There's a meeting that happens in every organization at least once a quarter. Someone identifies a performance problem. The room nods. And then someone says the words: "We need a training on this."
The request lands on an instructional designer's desk. They scope it. They build it. They launch it. People complete it. Completion rates look fine. Three months later, nothing has changed. The same errors. The same complaints. The same performance gap.
The instinct is to blame the training. Maybe it wasn't engaging enough. Maybe we need to add gamification. Maybe we should try microlearning instead. So the team iterates on the training. Better slides, shorter modules, more interactive elements. And three more months go by with the same results.
This cycle burns through millions of dollars every year. Not because the training is poorly designed, but because training was never the right intervention in the first place.
The 75/25 rule nobody talks about
Thomas Gilbert's Behavior Engineering Model, one of the foundational frameworks in Human Performance Technology, makes a claim that should fundamentally change how organizations approach performance problems: roughly 75% of performance issues are caused by environmental factors, not individual capability gaps.
That 75% includes broken processes, inadequate tools, unclear expectations, missing feedback systems, and misaligned incentives. Only the remaining 25% are actual knowledge or skill deficiencies that training can address.
Think about that ratio for a moment. Three out of four performance problems that land on an L&D team's desk cannot be solved by training. They require process redesign, better tools, clearer communication, or incentive restructuring. But because training is the most visible, most controllable intervention available, it becomes the default response to everything.
ATD's 2023 State of the Industry Report puts the scale of this mismatch in dollar terms. Organizations spend over $100 billion annually on training in the U.S. alone. Yet 45% of L&D leaders report that their training programs fail to deliver measurable business impact when systemic issues aren't addressed first. If that failure rate even roughly tracks spending, that's on the order of $45 billion a year spent on solutions that can't work because they're solving the wrong problem.
Five signs training won't fix it
After building over a hundred interactive learning experiences, I've developed a diagnostic instinct for when training is being asked to do something it can't do. Here are the patterns that show up most reliably.
1. The process itself is broken
A healthcare system asked for training on their new patient intake workflow. Nurses were making errors, and the assumption was a knowledge gap. When we looked at the actual workflow, the intake form required information from three different software systems that didn't communicate with each other. Nurses had to copy-paste data between screens, manually cross-reference patient IDs, and complete 23 fields, eight of which were redundant.
No amount of training makes a broken process work smoothly. The nurses knew what to do. The system made it unreasonably difficult to do it correctly under time pressure. The fix wasn't a course on "how to use the intake system." The fix was redesigning the intake system so it didn't require humans to compensate for bad software integration.
2. People lack resources or authority, not skills
Sales teams that "need negotiation training" but don't have authority to offer discounts. Customer service reps who "need empathy training" but are measured solely on call handle time. Teachers who "need differentiation training" but have 35 students and no aide.
When the constraint is structural, training creates a cruel double bind: people are taught the right approach but denied the resources to execute it. They come out of training more frustrated, not more capable.
3. Incentives reward the wrong behavior
This is the most insidious version of the problem. A company wanted training to reduce siloed decision-making across departments. The training was excellent: collaborative frameworks, cross-functional communication strategies, real scenarios. But the performance review system measured individual departmental metrics. Managers were rewarded for optimizing their silo, not for collaboration.
People are rational. When the incentive structure contradicts the training content, the incentive structure wins every time. Training people to collaborate while rewarding them for competing isn't a training problem. It's an organizational design problem.
4. People know what to do but the environment prevents it
This shows up in safety compliance constantly. Workers know the safety protocol. They can recite it. But they skip steps because the equipment layout makes proper procedure take three times as long, or because production quotas make full compliance mathematically incompatible with expected output.
Performance analyses conducted under ISPI (International Society for Performance Improvement) standards consistently surface this pattern: roughly 40% of performance issues trace to process and workflow problems, and another 25% to tool and technology barriers. Genuine skill gaps account for perhaps 10% of cases.
5. The "skill gap" only appears in one dysfunctional context
If an employee performs well in most situations but struggles in one specific context, the problem usually isn't their capability. It's something about that context: a difficult team dynamic, a confusing tool, unclear role boundaries, or conflicting directives from leadership.
Training the individual is treating the symptom. Fixing the context is treating the cause.
Why organizations default to training anyway
If 75% of performance problems can't be solved by training, why does training remain the default response? Three reasons.
Training is visible and controllable. When a VP says "we need to fix this," they want to see action. Commissioning a training program is concrete. It has a timeline, a deliverable, a launch date. Process redesign, incentive restructuring, or tool replacement are messier, slower, and harder to champion. Training is the path of least organizational resistance.
Training protects the status quo. Saying "our people need training" implies the system is fine and the people need to improve. Saying "our system is broken" implies leadership made poor decisions about processes, tools, or incentives. The first framing is comfortable. The second is threatening. Organizations reliably choose the comfortable framing.
L&D teams are positioned as order-takers. McKinsey research from 2021 found that 70% of organizational change programs fail, often because training is deployed without addressing underlying structural, process, or cultural barriers. Training alone has a 10-15% success rate when system problems exist. But L&D teams are rarely given the mandate (or the organizational power) to push back on training requests and recommend systemic fixes instead.
What to do instead: the diagnostic before the design
Before building anything, I run through what I call the root cause filter. It's not complicated, but it requires discipline to actually complete before jumping into content creation.
Step 1: Define the performance gap in behavioral terms
Not "improve customer service." Instead: "Reduce average resolution time from 12 minutes to 8 minutes while maintaining customer satisfaction scores above 4.2 out of 5." The specificity matters because it forces you to identify what observable behavior needs to change.
Step 2: Ask "why" five times
This is borrowed from Toyota's manufacturing methodology, and it works just as well for performance problems. Start with the observable gap and trace backward through causes.
"Why is resolution time averaging 12 minutes?" "Because reps spend 4 minutes searching for account information." "Why does the search take 4 minutes?" "Because the CRM search function doesn't index by phone number, and most customers identify themselves by phone number." "Why doesn't the CRM index by phone number?" "Because it was configured for email-first lookup when we launched, and nobody changed it."
You've gone from "reps need customer service training" to "we need to reconfigure the CRM search index." That's a 30-minute IT fix, not a $50,000 training program.
Step 3: Map causes to the right interventions
Gilbert's model gives you a clean framework here. For each root cause you identified, ask: is this a knowledge/skill issue (training can help), or an environmental issue (training can't help)?
Environmental fixes include: redesigning the process, providing better tools, clarifying expectations through documentation and communication, restructuring incentives, removing barriers to performance. These aren't the L&D team's job to implement, but they are the L&D team's job to identify and recommend.
Step 4: Isolate what training actually needs to address
After you strip away the environmental causes, you're left with the genuine skill gaps. These are usually narrower and more specific than the original training request. Instead of "customer service training" (vague, expensive, slow), you might need "training on the three new CRM features added in the Q2 update" (specific, cheap, fast).
This is where training shines: targeted skill development for specific, well-defined gaps where people genuinely lack knowledge or practice.
How to push back without getting fired
Identifying that training isn't the answer is one thing. Communicating that to the stakeholder who requested it is another. Here's the language that works.
Lead with the business outcome, not the diagnosis. Don't say "training won't fix this." Say "I want to make sure we hit the resolution time target. I've done some analysis, and I found that the biggest driver of slow resolution is the CRM search configuration, not rep knowledge. If we fix the search, we'll get 70% of the improvement without any training at all. Then we can add targeted training on the remaining gaps."
You've done three things: validated their goal, provided evidence, and offered a faster path to the outcome they care about. Most stakeholders care about results, not about whether the solution is called "training."
Propose a diagnostic phase. Instead of pushing back on the training request outright, propose spending two weeks on performance analysis before committing to a solution. Frame it as "I want to make sure we invest in the intervention that will actually move the needle." This buys you time to do the root cause work and builds stakeholder confidence in your analysis.
Use Cathy Moore's action mapping language. Cathy Moore, who has documented this pattern extensively across work with over 500 instructional designers, uses a powerful reframe: "What do people need to DO differently?" Not "what do they need to know." This shifts the conversation from content (what should the course cover?) to performance (what behavior change drives the business outcome?). When stakeholders are forced to articulate the target behavior, they often realize the barrier isn't knowledge.
Building systems that don't need heroic training
The deeper insight, the one that changes how you think about your role, is that the best training intervention is a system that doesn't need one.
When I build interactive educational applets, I'm obsessive about making the interaction itself teach the concept. A student learning place value doesn't watch a video about place value and then take a quiz. They drag base-ten blocks, see numbers update in real time, and build intuition through manipulation. The learning is embedded in the design.
The same principle applies to workplace systems. When a process is designed well, when the tools are intuitive, when the expectations are clear, when the feedback is immediate, people perform well without training. They learn by doing, because the system supports learning by doing.
This isn't idealistic. It's practical. Every hour you spend making a process more intuitive or a tool more usable saves hundreds of hours of training across the organization. And unlike training, which degrades as people forget, good system design persists.
The best organizations I've worked with treat training as a signal, not a solution. When a training request shows up, they treat it as an alert that something in the system might be broken. They investigate before they build. They fix environmental factors first. And then they deploy training for the genuine skill gaps that remain.
The cost of getting this wrong
When organizations repeatedly throw training at systemic problems, the damage compounds.
Employees lose trust in L&D. Workers know when training is pointless. They sit through a compliance module on a process they can't follow because the tools don't support it. They complete a customer service course and then return to the same broken CRM. Each wasted training erodes their belief that the organization takes their performance seriously. Eventually, "mandatory training" becomes synonymous with "corporate box-checking."
L&D loses credibility with leadership. If every training intervention fails to move business metrics, leadership eventually concludes that training doesn't work. This is devastating for L&D teams that actually have the skills to drive performance improvement, if only they were given the mandate to diagnose root causes instead of just taking orders.
Real skill gaps go unaddressed. When training budget and bandwidth are consumed by projects that can't succeed, there's nothing left for the genuine skill gaps where training would make a real difference. The team that needs specialized technical training doesn't get it because the budget was spent on a culture change program that was doomed from the start.
The organization learns helplessness. After enough failed interventions, the organization stops trying to fix performance problems at all. "We trained on this and it didn't help" becomes the excuse for accepting poor performance as normal. The learned helplessness locks in inefficiency permanently.
When training IS the answer
I want to be clear: training is powerful when it's the right tool. After building interactive learning experiences across K-8 math, corporate onboarding, and compliance, I know firsthand that well-designed training transforms performance when the conditions are right.
Training works when:
There's a genuine knowledge gap. New technology, new regulations, new processes that people genuinely haven't been exposed to. Training is the direct solution.
The skill requires practice to develop. Decision-making under pressure, diagnostic reasoning, complex problem-solving. These skills improve through deliberate practice with feedback, which is exactly what good interactive training provides.
The environment supports application. This is the critical qualifier. Training works when people leave the course and enter a workplace that enables them to use what they learned. The process supports the new behavior. The tools are available. The incentives align.
The gap is specific and measurable. "Reduce medication errors by training nurses on the new verification protocol" is a training problem. "Improve organizational culture" is not.
When these conditions are met, training delivers. The interactive simulations, the branching scenarios, the deliberate practice with feedback loops, all of the instructional design craft that L&D professionals have spent careers developing, all of it creates real, measurable performance improvement.
The discipline is knowing when those conditions are met, and having the courage to say so when they're not.
Moving from order-taker to performance consultant
The shift I'm describing isn't a small one. It changes the fundamental identity of L&D from "the team that builds courses" to "the team that improves performance." Those are very different mandates.
As a performance consultant, you start every engagement with diagnosis, not design. You spend more time analyzing the problem than building the solution. You sometimes recommend interventions that have nothing to do with learning. And you measure your success by business outcomes, not course completions.
This requires a different relationship with stakeholders. Instead of "tell me what training you need and I'll build it," you say "tell me what performance outcome you need and I'll figure out the best way to get there." The second framing is scarier because it makes you accountable for results, not just deliverables. But it's also where L&D's real value lives.
The organizations that get this right, the ones that treat L&D as a performance function rather than a content factory, are the ones where training actually works. Because when training is deployed, it's deployed against problems it can actually solve, in environments that support the application of new skills.
That's the difference between training as a budget line item and training as a strategic capability. And it starts with a simple question that most organizations skip: "Is training actually the right solution here?"
Most of the time, the honest answer is no. And that's where the real work begins.
Need interactive learning content built?
We design and ship interactive applets for K-12 math, science, and language learning. 100+ modules delivered. Let's talk about your project.
Book a Call