Continuously Improving Your Workplace Training Program

Many organizations work hard to design and develop a training program and then implement it. And once the training program is implemented, they deserve a pat on the back for a job well done.

But creating and implementing a training program isn’t the end of the workforce improvement effort. Instead, you could argue it’s a new beginning. That’s because the organization should begin evaluating the effectiveness of the training program to see if it’s helping the organization attain the desired outcomes, and then use that evaluation information to continuously improve the training program over time. And that’s not even to mention the fact that things at work may change (a process is revised, a new product is introduced, a new regulation is imposed), again calling for modifications to the training program.

We’ll give you a quick overview of how to do just this in the article below.

Vector Solutions & Convergence Training are performance improvement experts. Click the links below to learn more about how we can help you.

Download our Learning Objectives Guide 

With the exception of the most “check-the-box” compliance training (something we’re not in favor of; even compliance training can and should be designed to help your organization attain outcomes beyond mere compliance), all training programs should align with an organizational goal. So that’s where your continuous improvement efforts begin: before you design and develop training, when you’re analyzing a workplace problem or opportunity, determining what causal factors are contributing to it, and determining whether training is an appropriate part of your response. For more on this “front-end analysis,” please see our recorded, on-demand Introduction to HPI webinar.

At that point, you’ve got to identify the business goal(s) the training will support and the metrics or KPIs your organization uses to track progress toward reaching that goal. Imprint that business goal in your brain (and better yet, into your computer), get a measurement of that KPI you can use as a benchmark, and keep monitoring the KPI over time.
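
If it helps to make that concrete, here’s a minimal sketch (in Python) of what benchmarking a KPI and monitoring it over time might look like. Everything in it, from the KPI name to the goal value and the sample measurements, is hypothetical; substitute whatever metric your organization actually tracks.

```python
# A minimal sketch of benchmarking a KPI and monitoring it over time.
# The KPI name, goal value, and measurements below are hypothetical;
# substitute whatever metric your organization actually tracks.
from dataclasses import dataclass, field

@dataclass
class KpiTracker:
    name: str
    baseline: float   # measurement taken before the training launches
    goal: float       # target value the business wants to reach
    history: list = field(default_factory=list)

    def record(self, period: str, value: float) -> None:
        """Log one measurement (e.g., monthly) so the trend stays visible."""
        self.history.append((period, value))

    def gap_closed(self) -> float:
        """Fraction of the baseline-to-goal gap closed by the latest reading."""
        if not self.history:
            return 0.0
        latest = self.history[-1][1]
        return (latest - self.baseline) / (self.goal - self.baseline)

# Example: first-pass quality yield, benchmarked at 82% with a 95% goal.
kpi = KpiTracker(name="first-pass yield (%)", baseline=82.0, goal=95.0)
kpi.record("2024-01", 84.5)
kpi.record("2024-02", 87.0)
print(f"{kpi.name}: {kpi.gap_closed():.0%} of the gap to goal closed so far")
```

However you implement it, the point is the same: capture the benchmark before the training launches, and keep recording afterward.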

Next, of course, you’ll move into the process of designing and developing your training materials. There are several models for doing this. ADDIE (in which you Analyze, Design, Develop, Implement, and Evaluate) is the most common, but you can pick the model or models that work best for your own purposes. Others include SAM, LLAMA, and Design Thinking.

The important thing while you’re designing and developing your training program is to keep those business goals in mind. Figure out what job tasks workers need to learn to perform to help move the organization toward that business goal. Figure out what the learning objectives (or performance objectives) for your training should be, so that workers can acquire and develop the knowledge and skills needed to perform those tasks on the job. Create assessment items (tests, etc.) you can use to determine whether workers can perform those job tasks after training is over. And then create your training materials, including content but also demonstrations and practice opportunities with helpful, constructive feedback, to help workers satisfy those learning objectives. (Notice: we started with the end, the desired business outcomes, and worked our way backwards to the creation of training materials. This is similar to what’s known as backward design, and it’s also reflected in the ADDIE model.)

At different points during the training design and development process, you should conduct what’s known as formative evaluation. Formative evaluation is evaluation of your training materials as you’re “forming” them, before you’ve implemented them and conducted training with large numbers of workers. Formative training evaluation is an ongoing, iterative process, and it includes document reviews, hand-offs, and any number of other checkpoints where you and/or your team can identify problems with the training early, while they’re still cheap and easy to fix, rather than after you’ve implemented the training. (Quick note: formative evaluation is often discussed along with summative evaluation. Summative evaluation is evaluation after the training program is launched, and it’s what we’ll continue to talk about below.)

With the training designed and developed in a way that aligns with those all-important business goals, you’ll then deliver it. Maybe it’s classroom-style, maybe it’s elearning, maybe it’s virtual instructor-led training (VILT), or maybe you’ve developed a well-designed blended learning strategy for this particular training need (keep in mind the need for spaced practice over time as well). Now you’re going to turn to training evaluation models.

The most well-known and commonly used training evaluation model is the Kirkpatrick Four-Level Evaluation Model. We’ll explain it in a little more detail below, but before we do, it’s worth noting that Kirkpatrick’s isn’t the only training evaluation model. There are also the Brinkerhoff, Phillips, and Kaufman models, Thalheimer’s LTEM, and more. If you’re curious, check out our recorded video discussion with Dr. Will Thalheimer on the Kirkpatrick, Kaufman, Brinkerhoff, and Phillips training evaluation models and our second recorded discussion on his own LTEM evaluation model.

The Kirkpatrick four-level model, as you might have guessed, is set up to evaluate training at four different levels, at four different times, and from four different perspectives. These four levels are known as (1) reaction, (2) learning, (3) behaviors, and (4) outcomes. We’ll look at each below.
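
Before we walk through each level, here’s a rough illustration of how you might organize the evaluation data you collect by level. The class and field names below are our own invention for the example, not part of the Kirkpatrick model or any standard tool.

```python
# An illustrative way to tag the evaluation data you collect with the
# Kirkpatrick level it belongs to. The class and field names are our
# own invention, not part of the model or of any standard tool.
from dataclasses import dataclass
from enum import Enum

class KirkpatrickLevel(Enum):
    REACTION = 1   # learners' immediate response (e.g., post-course survey)
    LEARNING = 2   # can learners satisfy the objectives right after training?
    BEHAVIORS = 3  # do workers perform the tasks on the job later on?
    OUTCOMES = 4   # did the business KPI move toward the goal?

@dataclass
class EvaluationRecord:
    level: KirkpatrickLevel
    source: str    # survey, test, observation, KPI report, etc.
    summary: str

records = [
    EvaluationRecord(KirkpatrickLevel.REACTION, "smile sheet", "4.2/5 average"),
    EvaluationRecord(KirkpatrickLevel.LEARNING, "skill demo", "18 of 20 passed"),
]
for r in records:
    print(f"Level {r.level.value} ({r.level.name.lower()}): {r.summary} via {r.source}")
```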

Level 1, reaction, is designed to get the learner’s immediate reactions to the training. This often happens in the form of a post-training survey, and these surveys are often called “smile sheets.” These can be helpful, but know that you’re not going to get the best information about the effectiveness of your training materials from level 1 reactions. For more on this, read our interview with Dr. Will Thalheimer on smile sheets.

Level 2, learning, is designed to determine if the learners can satisfy the learning objectives immediately after the training. Basically, this is a test, whether it’s a written test, a multiple-choice quiz, a role-playing scenario, or a skill demonstration of a job task. Level 2 evaluation is important: if learners can’t perform the job task right after training, they won’t be able to do it on the job, either. For more on this, see our recorded discussions with Dr. Patti Shank on learning objectives and on performance-based learning assessments.

Level 3, on-the-job behaviors, comes next. In level 2, you tested to see if workers could satisfy the learning objective (and therefore perform the necessary job task) right after training. But having a worker pass a level 2 assessment doesn’t mean that same worker will later be able to perform that same task on the job. There may be a number of reasons for this: real conditions on the job may interfere with the worker’s performance in one way or another, or maybe the training was good enough to help the worker satisfy the learning objective immediately afterward but not good enough to really solidify the newly learned knowledge and skills, so the worker simply lost or forgot the ability. Level 3 training evaluation is a great reminder that the trainer’s job doesn’t end when they walk out of the training room, log out of the VILT platform, or close the elearning course.

Level 4, outcomes (also sometimes written as results or business results), is the final and most important of the four levels of evaluation. Remember, when you were conducting the front-end analysis (much earlier in this article), you identified the business goal the training was intended to support. That’s the whole purpose of everything you’ve done between then and now: create a training program to move your organization closer to that goal. So now it’s time to analyze the real results and outcomes of the training program and see if the training DID make the desired positive contribution. Obviously, one place to look is the KPI you identified earlier that tracks progress toward that goal. But you may also look at some qualitative (as opposed to quantitative) sources of information as well.
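
As a rough, hypothetical illustration of that level 4 comparison, the sketch below checks the post-training KPI trend against the benchmark captured during front-end analysis. The numbers and the 50% “gap closed” threshold are made up for the example, and a real evaluation would also need to rule out non-training causes for any change.

```python
# A rough level 4 check: compare the post-training KPI trend to the
# pre-training benchmark. The numbers and the 50% "gap closed" threshold
# are made up for illustration; a real evaluation would also rule out
# non-training causes for any change in the KPI.
def level4_check(baseline: float, post_training: list,
                 goal: float, min_gap_closed: float = 0.5) -> str:
    """Classify results by how much of the baseline-to-goal gap has closed."""
    recent = post_training[-3:]          # average the most recent readings
    latest = sum(recent) / len(recent)   # to smooth out month-to-month noise
    gap_closed = (latest - baseline) / (goal - baseline)
    if gap_closed >= 1.0:
        return "goal reached; keep monitoring for drift"
    if gap_closed >= min_gap_closed:
        return "on track; continue and re-evaluate next quarter"
    return "little movement; revisit the training (and non-training factors)"

# With a benchmark of 82% and a 95% goal, readings of 88/91/93 average
# to about 90.7, which closes roughly two-thirds of the gap.
print(level4_check(baseline=82.0, post_training=[88.0, 91.0, 93.0], goal=95.0))
```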

For the purposes of this article, although positive results at evaluation levels 1-3 may well set your organization up for good results at level 4, it’s really the level 4 results you want to focus on. These tell you whether your training program was a success and the degree to which you may have to improve or otherwise reconsider it.

Even if your training program smashes expectations and drives your organization past your original goal, you’ll want to monitor things over time to make sure results don’t slip and underlying conditions don’t change (like the job process change we mentioned earlier that might call for a change in the training program). And likewise, if your training program doesn’t seem to have contributed to reaching the business goal, it’s time to get under the hood and see how you can improve it.

When you’re considering how to improve your training program, cast a wide net and consider a diverse spectrum of information sources. Don’t be content to simply go with your own hunch. Don’t stop at simply asking a manager or supervisor what’s wrong. Be sure to visit the job site and observe what’s going on. Ask the workers for their opinions (this is often overlooked but critical). Consider larger or unexpected systems solutions. And look into all the other sources of information your workplace captures about job performance and the associated metrics that might give you insight.

So always remember: training isn’t a one-time thing. It’s about more than analysis, design, development, and delivery, and that “E” at the end of ADDIE stands for evaluation, which is intended to set you up for continuous improvement efforts over time.

 

Jeff Dalto, Senior Learning & Performance Improvement Manager
Jeff is a learning designer and performance improvement specialist with more than 20 years in learning and development, 15+ of which have been spent working in manufacturing, industrial, and architecture, engineering & construction training. Jeff has worked side-by-side with more than 50 companies as they implemented online training. Jeff is an advocate for using evidence-based training practices and is currently completing a master’s degree in Organizational Performance and Workplace Learning at Boise State University. He writes the Vector Solutions | Convergence Training blog and invites you to connect with him on LinkedIn.

Contact us for more information