Did reading just the headline of this article get your heart racing? Did you wonder, “Will this be the one? Have I found the long-sought key to unlocking the secrets of Kirkpatrick’s Level 3 behavioral evaluation? Will I finally have the learning metrics to show a return on investment (ROI) for training?”
No pressure, though, right?
Consistent Level 3 evaluation has long been the holy grail for trainers. While other jobs and departments cruise from year to year, getting new and larger budgets in effortless style, it seems learning and development professionals are constantly trying to find new methods for ROI analysis to justify their existence. Why? Because many in upper management think we don’t do the “real work” of the company, whatever that work might be.
So, we point to our evaluations. “See?” we say, “They love the training! And they understand the concepts!”
Management shrugs. “But are you making us any money?” they ask. “What’s the ROI for training?”
And that’s where Level 3 evaluation comes in. If we can show that trainees are actually applying the concepts management wants us to teach, out in the real world of the job, we can say definitively, “Yes. I am making the company money.” But Level 3 has proven elusive. Why?
I believe it’s because most trainers start their Level 3 evaluations after the training has been completed. And by then, it’s too late.
Level 3 evaluation starts well before the training. This little blog may not be “The One,” but asking these four questions in the very first stages of developing the training will surely help you on your Arthurian quest for Level 3 gold.
Question #1: What can you measure before the training?
Perhaps the most important element in gathering Level 3 information is developing a baseline. How can you show results — a change in behavior that generates revenue or saves money — if you don’t have a solid set of information on employee behavior before the training is implemented?
It might be obvious, but gathering pre-training information is more difficult than it seems. Your clients already know they have a problem, right? Or they wouldn’t be coming to you! It’s hard to convince them to spend the time and money on an analysis that is going to tell them something they think they already know.
But you have to do it anyway. Call it “Advanced Behavioral Analysis Research” and give it an acronym (ABAR) that they’ll love. Do whatever you have to do, but get that baseline! Use surveys. Conduct interviews. Observe.
And quantify, quantify, quantify. One of the reasons trainers have trouble showing ROI is that too often we live in the world of qualitative analysis. Put some real, hard numbers to your pre-training ABAR (Hey! I kind of like this ABAR thing!) so you can show real, hard numbers six months from now when it will really count.
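To make “quantify” concrete, here is a minimal sketch in Python of turning pre-training observations into a hard-numbers baseline. The metric names and every figure are invented for illustration; your own ABAR would supply the real ones.

```python
# Hypothetical pre-training observations gathered via surveys,
# interviews, and direct observation (all numbers invented).
pre_training = {
    "tickets_resolved_per_day": [12, 9, 14, 11, 10],
    "escalations_per_week": [7, 8, 6, 9, 7],
}

def baseline(observations):
    """Average each observed metric into a single baseline number."""
    return {metric: sum(values) / len(values)
            for metric, values in observations.items()}

print(baseline(pre_training))
# {'tickets_resolved_per_day': 11.2, 'escalations_per_week': 7.4}
```

Six months later, you run the same observations again and compare against these numbers, not against anyone’s memory of how things used to be.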
Question #2: Have you truly defined the outcome?
Really? You’ve defined it? Quantified it?
Again, don’t let your outcomes be completely based on qualitative elements. Define your outcomes in quantitative terms early and often. Here are a few questions to ask:
- What exactly do you expect training graduates to do on the job as a result of this program?
- What would be considered “good performance”? To what degree is this level of performance occurring today?
- What support and accountability resources are available after training?
- How will you ensure that training graduates follow through on next steps after training? (You may also ask trusted line managers and supervisors these same questions.)
You might be saying, “I already ask those questions!” But the meat here is not in the asking, it’s in the actual questions. Instead of settling for “They’ll make more widgets,” be sure you know exactly how many widgets you’ll expect them to make. And be darn-tootin’ sure you know how many widgets they’re making now.
Do quantified outcomes put more pressure on you to deliver? Sure. But you’re up for the challenge, right? And you did want Level 3 evaluation, right?
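Sticking with the widget example, the arithmetic of a quantified outcome is simple; the discipline is in agreeing on the numbers up front. A sketch, with made-up figures:

```python
# All figures invented for illustration.
widgets_before = 40      # average widgets per worker per day, pre-training
widgets_target = 50      # the quantified outcome agreed on up front
widgets_after = 48       # observed six months post-training

# Percent improvement over the pre-training baseline
improvement_pct = (widgets_after - widgets_before) / widgets_before * 100
target_met = widgets_after >= widgets_target

print(f"Improvement: {improvement_pct:.0f}%, target met: {target_met}")
# Improvement: 20%, target met: False
```

Notice that with numbers like these you can report a real result (a 20% gain) even when the agreed target wasn’t fully hit, which is a far stronger conversation with management than “they seem to be doing better.”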
Question #3: What can you add to the process to help you measure results?
A few years back, I helped to develop a training program for a group of helpdesk workers. The majority of their requests were made by and answered via email. Many of the responses were disastrous, with questions not fully answered and research only half-done. In addition to the usual training (“Here’s how you find this. Here’s what to say when that happens.”), we added one more element: response templates.
Now, I’m not saying response templates are revolutionary. But by including the templates, we were able to tell immediately who was using the tools we gave them, and who was not. Instant Level 3 analysis.
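How does a template give you instant Level 3 data? One simple approach (a sketch, not what we actually built; the marker string, agent names, and responses below are all hypothetical) is to have every approved template carry a recognizable tag, then scan sent responses for it:

```python
# Assumption: every approved template embeds this marker line,
# so scanning outgoing mail tells us who is using the tools.
TEMPLATE_MARKER = "Ref:"

responses = [
    {"agent": "alice", "body": "Ref: PW-RESET\nHere are the steps..."},
    {"agent": "bob", "body": "try rebooting"},
    {"agent": "alice", "body": "Ref: VPN-SETUP\nFirst, open..."},
]

# Collect, per agent, whether each response used a template
usage = {}
for r in responses:
    usage.setdefault(r["agent"], []).append(TEMPLATE_MARKER in r["body"])

# Share of each agent's responses that used a template
rates = {agent: sum(flags) / len(flags) for agent, flags in usage.items()}
print(rates)
# {'alice': 1.0, 'bob': 0.0}
```

The point isn’t the code; it’s that the template itself is the measurement instrument. Behavior change shows up in the data without anyone filling out a survey.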
Ask yourself: What can I add to this process that will show me immediately if the training concepts are being used in practice? The answer is your golden key to Level 3.
Question #4: How can you measure without skewing the results?
Psychologists have long understood that simply conducting an experiment changes the outcome. (And quantum physics backs up this theory!) That’s why we have control groups. Unfortunately, management is probably not going to let you withhold training from a group of workers just so you can apply the scientific method to your Level 3 analysis. But a boy can dream, right?
As trainers, we have to find ways to quantify — there’s that word again — behavior before we can measure it. Because one thing is for certain: Just asking them won’t get it done. According to Work-Learning Research:
Having learners assess Level 3 is fraught with peril because of all the biases that are entailed. Learners may want to look good to others or to themselves. They may suffer from the Dunning-Kruger effect and rate their performance at a higher level than what is deserved.
Tests don’t work all that well, either. Many years ago, a client asked me to evaluate an early version of a computer-based training program they were rolling out at a big company. Students took a test before the training, experienced the training, then took the same test afterwards. And look how much their scores improved! Then we removed the most important variable: some subjects simply took the test twice with no training in between, and their scores also improved by the same margins.
No, you can’t ask them, and you can’t test them. Those results will get you Level 2 at best. The only way to measure for Level 3 is to observe quantifiable behavior before and after the training. This means getting a baseline, quantifying your outcomes, and having something in the process to help you measure. Now, question #4 has turned out to be a redux of questions 1-3, hasn’t it?
Did I mention, “Quantify”?
As you can see, much of Level 3 training evaluation comes down to your ability to generate quantifiable data. Don’t ask, “How well?”; ask, “How many?”
This kind of evaluation and research takes time, and much of it is spent on the front-end of a project when the clients are constantly checking their wristwatches. It’s hard to make yourself do it and hard to keep from yielding to pressure to “just get it done.” And that’s why we see so little Level 3 evaluation.
So, I’ll ask one more thing of you, dear reader: Let me know if you use any of this information. Don’t email me to tell me if you liked or didn’t like the article — I’m not insecure, and I don’t need your Level 1 praise (or condemnation). Remember this article, and when you get that baseline and present those Level 3 metrics, drop me an email at firstname.lastname@example.org and tell me your story. Can I get a little Level 3 on my Level 3?