Knowing whether your training program is having a positive effect on relevant KPIs, and whether it's helping move your company toward its business goals, is a good thing.
Trainers do this by performing what’s known as a level 4 evaluation (in the traditional four-level Kirkpatrick training evaluation model). There are other training evaluation models as well, and it’s worth exploring them too, but we’ll stick to Kirkpatrick and level 4 in this article.
By focusing on level 4, we’ll be paying attention to the real reason you’re creating training in the first place: to create desired behaviors, to improve performance, and ultimately to contribute to progress toward business goals like higher profits, lower costs, fewer accidents, etc.
Here we go.
Before you get to the point of evaluating your training program, you should:
Then you’ll be ready to get into evaluation.
For more on this, read Business Goals, KPIs, and Job Training.
The standard evaluation method is Kirkpatrick’s Four-Level Evaluation Model.
The Kirkpatrick evaluation model breaks evaluation down into four levels:
In level 1, we’re getting the opinion of the learners who attended/completed the training. These are the post-training evaluation sheets (sometimes dismissively referred to as “smiley sheets”) that are handed out after training.
See our extensive article on level 1 training evaluations and "smile sheets," and our interview with Dr. Will Thalheimer about evidence-based training and smile sheets.
In level 2, we’re trying to figure out if the learners “learned.” This is typically measured with some form of assessment during or immediately after training. This might mean a paper-based test, an online quiz, or some form of skill demonstration that’s evaluated by the instructor.
See our extensive article on testing best practices.
In level 3, we’re concerned with whether or not workers are actively applying what they learned when they return to the job. This is what we’re talking about when we talk about “transfer.” Remember that a worker who passes a test may not perform the desired behaviors on the job for a number of reasons, including simply forgetting the training material but also as a result of other workplace realities that should be investigated.
In level 4, we’re talking about the effect that the training program had in helping the company reach a business goal. Progress toward those goals is measured using KPIs. And that’s what this article is about.
Two questions that come up when thinking about level 4 training evaluations are:
We’ll give some helpful information about each question in the sections below.
You may wonder if every company performs all four levels of evaluation for every training program they create and deliver. The short answer is no. Nobody’s got time and money to do that all the time.
Below are some benchmarks to consider for evaluating at different levels. Check them out. (The information below includes a fifth level for "ROI," which is the business effect expressed in monetary value; we'll address this in a later blog post.)
From left to right, the table shows recommended targets for each level from the authors of a book on training ROI; targets established by the GAO; real-life figures in the public sector (federal, state, and local governments), as determined by a research study; and targets that Wachovia Bank has established for itself.
Source: "Return on Investment: ROI Basics" by Patricia Pulliam Phillips and Jack J. Phillips, ASTD/ATD Press, December 2005, page 30.
As explained above, once the training has been delivered, you can return and evaluate the change in the relevant KPI(s).
Which raises an interesting question. How long after training should you wait?
Let's return to the book Return on Investment: ROI Basics, already referenced above, for some expert opinion on that issue:
"Levels 3 and 4 data collection occurs sometime after the new performance has had a chance to occur–the time in which new behaviors are becoming routine. You do not want to wait until the new behavior becomes inherent and participants forget where they learned these new behaviors. Typically, Level 3 data collection occurs three to six months after the program, depending on the program. Some programs, in which skills should be applied immediately upon conclusion of the program, should be measured earlier–anywhere from 30 days to two months after the program. Level 4 data can be trickier, however.
While the ROI calculation is an annual benefit, do not wait a year to collect the Level 4 data. Senior executives won't wait; the problem will either go away, executives and senior managers will forget, or a decision will be made without the data. Collect the Level 4 measures either at the time of Level 3 data collection or soon after when impact has occurred."
Source: "Return on Investment: ROI Basics" by Patricia Pulliam Phillips and Jack J. Phillips, ASTD/ATD Press, December 2005, page 70.
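The timing guidance in the quote above can be sketched as a tiny scheduling helper. This is just an illustration: the function name and the 30-day-month approximation are my own assumptions, while the three-to-six-month window comes from the quoted recommendation.

```python
from datetime import date, timedelta

def followup_window(program_end, months_min=3, months_max=6):
    """Approximate the Level 3/4 data-collection window, using 30-day months."""
    return (program_end + timedelta(days=30 * months_min),
            program_end + timedelta(days=30 * months_max))

# Hypothetical program end date: plan to collect follow-up data in this window.
start, end = followup_window(date(2024, 1, 15))
print(start, end)  # 2024-04-14 2024-07-13
```

For programs whose skills should be applied immediately, you'd pass a shorter window (e.g., `months_min=1, months_max=2`), matching the quote's 30-days-to-two-months advice.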
Assuming your training program has been successful, it will help move your business toward a business goal. (Yes, your program can have no effect, or even a negative effect, but we’ll ignore that sad possibility for now.)
As a result, the KPI that measures progress toward the business goal will change.
In some cases, the KPI will go “up,” meaning progress has been made. In other cases, the KPI will go “down,” but this can still mean that progress has been made. It depends on the KPI you’re tracking.
Let’s look at two quick examples.
One example of a commonly used KPI is net profit. It's possible that this is the KPI, or one of the KPIs, you've chosen to track to determine if your training program had the desired influence.
In this case, you’d want to see the KPI go “up” after the training program was implemented, because when you’re talking about profits, going up is good.
And, as you can see above, in this case the net profits did go up after the training.
Another example, this time from safety or EHS, would be workplace injuries or illnesses (sometimes referred to as “incidents”). Because safety incidents are bad, and the goal would be to have fewer, in this case you’d want to see the KPI go “down” after the training program.
And, as you can see above, safety incidents did go down after the training.
In both examples above, the data have been simplified, but you get the idea. In each case, you've got some strong evidence that the training program created the desired effect for the business goal. In the first example, net profits went up after the training program was implemented. In the second example, safety incidents went down after the training program was implemented. Both were successes. Good job, training team!
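The "up is good" versus "down is good" logic above is easy to capture in code. Here's a minimal Python sketch; the function name and the figures for the two examples are purely hypothetical.

```python
def kpi_improved(before, after, higher_is_better):
    """Return True if the KPI moved in its desired direction after training."""
    return after > before if higher_is_better else after < before

# Hypothetical figures for the two examples above.
profit_ok = kpi_improved(before=1_200_000, after=1_350_000, higher_is_better=True)
safety_ok = kpi_improved(before=14, after=9, higher_is_better=False)
print(profit_ok, safety_ok)  # True True
```

The point of the `higher_is_better` flag is that "improvement" is defined per KPI: profit improves by rising, incident counts improve by falling.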
Savvy readers like yourself no doubt notice the sleight of hand above.
Yes, in both examples above the KPI moved in the desired direction (up or down). And yes, those moves came after the training program was implemented.
However, as we all know, correlation does not imply causation. Which is a fancy way of saying that even though the KPI went up in the first example and down in the second example, and the desired changes occurred after the training program was implemented, we don’t have enough evidence to prove that the training program is what caused the desired change.
That’s what you were thinking, right?
If so, congratulations. Because you’re right. You still need to do what’s called “isolating the effect of the training program.” This means controlling the effect that other variables might have had on the KPI so you can determine how much of the change is due to the training program. For example, in the first case, the rise of net profits after the training program might have been the result of a simple price change or a new advertising campaign. And, the decrease in safety incidents in the second example might have been the result of the installation of a bunch of new machine guards.
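One common way to isolate the training's effect (one of several techniques; not necessarily the one your organization would use) is a control-group comparison: subtract the KPI change seen in a comparable untrained group from the change seen in the trained group. A minimal sketch with hypothetical numbers:

```python
def isolated_effect(trained_change, control_change):
    """Estimate the portion of a KPI change attributable to training by
    removing the change a comparable untrained (control) group experienced."""
    return trained_change - control_change

# Hypothetical: trained sites cut incidents by 5; untrained sites, which
# also got the new machine guards, cut incidents by 3.
effect = isolated_effect(trained_change=-5, control_change=-3)
print(effect)  # -2 incidents attributable to the training
```

The untrained group absorbs the influence of the other variables (the price change, the ad campaign, the machine guards), so whatever difference remains is a more defensible estimate of the training's contribution.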
As a result of all this, we’ve also written an article about isolating the effects of your training program.
In addition to what we’ve said above, you may enjoy this related post that looks at providing graphic evidence that a training program has had a desired effect within a manufacturing training setting, providing examples based on safety, production efficiency, and quality.
Finally, we’ve seen some interesting articles on training evaluation going on at Dr. Will Thalheimer’s blog:
You've now read an overview of how to perform a level 4 evaluation of your training program, showing that the training helped your company make progress toward a business goal as measured by a relevant KPI.
What are your own experiences? How often do you do level 3 and level 4 evaluations? Which KPIs are most relevant at your workplace? How often do you go the further step of isolating the effects of the training program, quantifying the change in the KPI, and converting that to dollar figures?
We’d love to hear your thoughts below.