In some recent articles, we’ve been looking at issues related to determining if your training program is having the desired positive effect, determining how big that positive effect is, and communicating that information both within the training department and with others in your workplace.
For example, we discussed the importance of aligning training with business goals, gave an overview of the commonly used Kirkpatrick evaluation model, and in our last article that touched on these issues, we looked at a way to evaluate the movement of a key performance indicator (KPI) after a training program was held.
In that same article, we also noted that although it’s great if you initiate a training program and see a KPI (or several KPIs) that the program is intended to affect respond in a positive manner, that’s not the whole story, because other factors may have influenced that KPI at the same time. And if that’s the case, who’s to say that the newly implemented training program truly deserves all the credit? Or how much of the credit it does deserve? Or even if it deserves any credit for the improved KPI?
And that introduction leads us straight to the point of this article. Today, we’re going to explain a few methods of “isolating the effects of your training program.” What this means is determining how much of that desired increase in the KPI your training program was responsible for–if any.
OK, let’s set the scene.
Business leaders came to you with a business problem, and they were looking for a training solution.
You performed a training needs analysis and determined that yep, training could help.
You asked the business leaders what business goal the training would support and the key performance indicator (KPI) that tracks progress toward that business goal.
You designed and delivered the training. And after the training was held, there was a positive movement in the KPI that tracks progress on the goal you were trying to affect with the training program.
Sounds good so far, right? Well, we agree, it does.
So at this point, you’ve got a few options. Let’s take a quick look at each of them:
You can do nothing.
But if you do nothing, you and the other members of the training department won’t know if the work you’re doing is effective. And presumably, you want to know that, right?
In addition, the business leaders won’t know if the training department is carrying its weight. Maybe they’ll just assume you are and place their trust in your assurances and hard work. But then again, maybe not.
Maybe your department won’t get as much money next year if you don’t present some evidence that you’re having a positive effect on business goals. Maybe you won’t be able to hire that much-needed new trainer, or maybe you won’t get the funds for that new e-learning authoring tool you wanted.
Or maybe the head of another department will make a persuasive case that something he or she did at the same time really deserves all the credit for the positive change. Remember, training rarely happens in a vacuum, and there will be other things that might have influenced the positive change, and other people will be trying to take credit for it–rightly or wrongly.
So maybe doing nothing isn’t the greatest option. At least not every time (you may have to pick and choose when to do something and when not to).
Another option is to do no analysis but try to take credit for the entire change.
Let’s look at a simple example, illustrated with the graph below.
Let’s say you were tasked with creating a training program that rolled out in July, 2015.
The relevant KPI to track to see if the company made progress toward a desired business goal in this case was net profits. And as you can see in the graph, net profits began an uninterrupted upward trend after the July, 2015 implementation of your training program.
So you COULD do no further analysis, take the graph above to your business leaders, and say the positive trend in net profits is entirely due to your new training program.
And MAYBE they’d buy that. Hey–maybe it’s even true.
But there are some downsides to this approach.
First, you wouldn’t know if what you’re saying is really true, and so you wouldn’t know if your training program was effective or not. And we still really believe that deep down, you want to know, because you want to have a positive impact and you want to duplicate those techniques and do it again and again.
And second, there’s the risk that the business leaders WON’T buy what you’re selling. Maybe they won’t believe you. And since you’ve got no real data to back up your claims (only some circumstantial evidence–remember, coincidence doesn’t prove causality), you may lose some credibility here. And that’s not what you want to do.
So if your first option was to do nothing, and your second option was to do no analysis and try to take all the credit, then your third option is to do some analysis, try to determine how much of the positive movement in the KPI was due to your training program, and then present that information to your business leads.
There are a few benefits of this.
First, it will help you and your training team really know what’s going on. How well are you doing? When are your efforts really helping? (And when you know this, you’ve got a better chance to study why they helped and then copy those techniques to create future successes.) And when are your efforts not helping, or not helping as much as desired? This is good information–we all benefit from reflection, self-evaluation, and continuous improvement.
And second, it will give you credible information that you can take to the business leads to show the positive effect your training program is having. You won’t have to present half-baked data, you won’t have to risk your credibility, and you just may earn their respect, appreciation, and continued or expanded funding for future projects.
We’ll show you a few ways of doing this in the next section.
What you’re trying to do–separate the positive effects that your training program had on a KPI from the effects of other factors that may have contributed to the positive movement in the same KPI–is sometimes known as isolating the effects of your training program.
The book Return on Investment (ROI) Basics, written by Patricia Pulliam Phillips and Jack J. Phillips and published by The Association for Talent Development (ATD), provides a pretty comprehensive overview of training ROI analysis.
It dedicates an entire chapter to the issue of isolating the effects of a training program, and suggests three possible techniques: setting up a control group, performing a trend-line analysis, and gathering expert estimates. Let’s look at each in turn.
Remember high school and all the talk about the scientific method and creating a control group before running an experiment?
Well, you can do that with your training, too, and set up an experiment to measure the benefits of the training.
The idea, as shown below, is to set up two groups of employees for comparison. The first group of employees won’t get your training (at least not right away). The second group will get your training. When the training program is over, you’ll measure the performance of each group (and/or the relevant KPI) and see how effective your training was.
Here’s the group of employees you’ll use as a control group (they won’t get the new training):
And here’s the group of employees who’ll get the new training:
That’s the basic idea. You probably run this kind of experiment all the time in your everyday life, so it’s probably familiar to you and we won’t belabor the point.
However, here are a few things to keep in mind.
In a sense, creating experimental and control groups like this is the “gold standard” for isolating your training program and seeing if it’s helping your company reach business goals. However, there are many cases where it’s unethical (such as withholding critical safety training), unfair or unwise (such as when it sets some employees at an unfair disadvantage or places them in a confusing situation), or simply impractical to do this.
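If you do run this kind of experiment, the core comparison is simple: measure the same KPI for both groups after the training and look at the difference. Here’s a minimal sketch in Python, using hypothetical post-training performance scores (the numbers and group sizes are made up for illustration):

```python
# Control-group comparison sketch (hypothetical performance scores).
# The control group did NOT receive the new training; the trained group did.
from statistics import mean

control = [88, 91, 85, 90, 87, 89]   # post-training KPI, control group
trained = [95, 98, 92, 97, 94, 96]   # post-training KPI, trained group

# The difference in group averages is a first estimate of the training effect.
difference = mean(trained) - mean(control)
print(f"Estimated training effect: {difference:.1f} points")
```

In practice you’d also want the groups to be comparable before the training (similar experience, similar baseline performance), and for anything you plan to present to leadership, a statistical significance test on the difference would strengthen the case.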
Another option is to perform a trend-line analysis.
Let’s assume you’ve been tasked with creating a sales training program. The business goal it supports is to increase sales. The KPI will be the average number of widgets sold by each salesperson per month.
Here are the steps for performing that trend-line analysis.
First, get the monthly sales figures for the six months leading up to the training program (shown in the graph below in the solid blue line).
Next, create a trend analysis that shows what the sales totals might have looked like if the trend had continued and there were no training (shown in the graph below in the dotted blue line).
Then, mark a point on the graph that represents when the training program was held (shown in the graph below with the solid vertical red line–July, 2015).
Next, show the average monthly sales that could have been expected if the pre-training trend had continued and no training had occurred (shown in the graph below with the solid green horizontal line).
And finally, plot the actual sales figures for the months following the implementation of the training program (shown in the graph below with the red dots).
You can now make a convincing case that the sales training program is responsible for the increase in sales represented by the space between the solid green horizontal line (the monthly average of the pre-training trend) and the red dots (actual sales), or, breaking that down into a single figure, an increase from 97 widgets sold by each salesperson per month to an average of 111.3 widgets.
And here’s what it looks like:
If you’re going to use this method, remember that it’s based on two assumptions: first, that the pre-training trend would have continued unchanged if no training had occurred; and second, that no other significant factors influenced the KPI during the post-training period.
These assumptions may not always be true, but if nothing else, this gives a quick and dirty estimate that’s better than doing nothing.
In addition, you may want to use forecasting methods that predict changes instead of relying on a consistent trend. Of course, that assumes your company has forecasting skills, which some do and some don’t, but it’s worth considering.
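The trend-line steps above can be sketched in a few lines of code. This is a minimal illustration, not a full forecasting model: the monthly figures are hypothetical, and it simply fits a straight line to the six pre-training months, projects it forward, and credits the training with the gap between actual and projected sales:

```python
# Trend-line analysis sketch (hypothetical monthly sales figures):
# fit a linear trend to the pre-training months, project it forward,
# and attribute the gap between actual and projected sales to the training.
import numpy as np

# Hypothetical data: average widgets sold per salesperson per month.
pre_training = [92, 94, 95, 97, 99, 100]        # Jan-Jun (before training)
post_training = [106, 109, 111, 112, 114, 116]  # Jul-Dec (after training)

# Fit a straight line (degree-1 polynomial) to the pre-training months.
months_pre = np.arange(len(pre_training))        # months 0..5
slope, intercept = np.polyfit(months_pre, pre_training, 1)

# Project that trend over the post-training months.
months_post = np.arange(len(pre_training), len(pre_training) + len(post_training))
projected = slope * months_post + intercept

# The training's estimated contribution is actual minus projected.
gains = np.array(post_training) - projected
print(f"Average monthly gain attributable to training: {gains.mean():.1f} widgets")
```

With these made-up numbers, the sketch credits the training with roughly five extra widgets per salesperson per month–the portion of the increase above what the pre-training trend alone would have predicted.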
Finally, a third technique is to use experts to make an estimate of how much influence the training program had.
Those experts may include the employees who went through the training program (they are often the best source of information for this), and/or their supervisors, managers, and business leaders (who sometimes are aware of other factors the employees are not aware of).
The basic idea is this: ask each expert to estimate what percentage of the KPI improvement was due to the training program, ask how confident he or she is in that estimate, and then multiply the estimate by the confidence level to arrive at a conservative, adjusted figure.
Here’s what that might look like in a very simple version (the table below shows the estimates by one person):
You now have some data you can present to business leads.
Sure, it’s an estimate. No, it’s not going to be 100% accurate.
But the information is based on input from experts, not just the training department. And people are used to working with estimates in business; it’s better than stabbing in the dark. Plus, you’ve built in a margin of error by adjusting for the confidence of the estimates. Doing this, and coming in with a lower estimate, will gain you and your figures credibility.
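Here’s what that estimate-times-confidence adjustment might look like in code. All the numbers are hypothetical, except the 14.3-widget gain, which reuses the figure from the trend-line example above (97 rising to 111.3 widgets per salesperson per month):

```python
# Expert-estimate sketch: each expert estimates what share of the KPI gain
# came from the training, plus how confident they are in that estimate.
# Multiplying share by confidence yields a deliberately conservative figure.

kpi_gain = 14.3  # hypothetical increase in widgets sold per salesperson/month

estimates = [
    # (estimated share of gain due to training, confidence in that estimate)
    (0.60, 0.80),  # a trained employee
    (0.50, 0.70),  # their supervisor
    (0.40, 0.90),  # a business leader
]

# Adjust each estimate downward by its confidence, then average the results.
adjusted = [share * confidence for share, confidence in estimates]
avg_adjusted = sum(adjusted) / len(adjusted)
credited_gain = kpi_gain * avg_adjusted
print(f"Training credited with {avg_adjusted:.0%} of the gain "
      f"({credited_gain:.1f} widgets per salesperson per month)")
```

Notice that the confidence adjustment always pulls the estimate down, which is exactly the point: a conservative number is easier to defend in front of business leaders than an optimistic one.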
We hope that gets you headed in the right direction on this phase of the “ROI of Training” quest.
If you’ve done some of this before, or if you have some tips to add, please add them in the comments section below.
If you’d like to learn about this in more detail, we can recommend the book we already referenced and another by the same authors.
In addition, the ATD has other books on Training ROI as well. Just click that link and search their bookstore for “ROI” and you’ll find at least a few more.
You may also be interested in some of our blog posts listed below:
Good luck! And don’t forget to download our free guide to writing learning objectives, below.
All the basics about writing learning objectives for training materials.