In the next article in our Manufacturing Training Insights series, we'll discuss the fifth and final phase of the ADDIE instructional design model--evaluating the training to see if it was effective and met its goals.
It's our hope that this series will be helpful to people who are new to manufacturing training, as well as those who have been involved in it for a while but never had the chance to ground themselves in the fundamentals of instructional design and training.
With a massive skills gap in manufacturing, and manufacturing processes becoming increasingly sophisticated just as experienced Baby Boomer workers retire, training's more important than ever, right?
So let's see what we can learn about training evaluation and the different training evaluation models.
ADDIE and our Manufacturing Training Insights Series: A Quick Refresher
Our Manufacturing Training Insights series of articles is presented with the ADDIE training design method at its core. If you're new to ADDIE, it's an acronym: the letters stand for analysis, design, development, implementation, and evaluation (yep, THAT evaluation, the one we're writing this article about).
ADDIE isn't the only training design and development method. There are alternatives, and ADDIE has its proponents and its critics. But it's easily the most well-known and most commonly used method, and it has guided the creation of many effective training programs that led to knowledge acquisition, skill development, behavior change, desired workplace performance, and beneficial organizational outcomes. So whether or not you wind up using ADDIE in the future, it will certainly help to know about it, and it gives a convenient structure to our series about manufacturing training.
In earlier articles in this series that dealt with ADDIE, we discussed analysis (our article on the HPI front-end analysis), training design (including separate articles on learning objectives and learning assessments), training development, and training implementation.
Which means we're ready to discuss the E in ADDIE--training evaluation.
Feel free to download our What Is ADDIE? infographic before you read on.
You design, develop, and deliver training for a reason, right? In some cases, compliance might factor in, but in most cases (and even when compliance matters), you want to help workers acquire knowledge, develop skills, and change their behavior at work, and you want to help your organization reach or get closer to aligned organizational goals. Maybe societal goals even play a role.
But you can't just deliver training, assume everything worked as planned, and leave it at that. At least you shouldn't, particularly with your biggest, most critical, or most expensive training programs. Instead, you want to do your "L&D due diligence" and see if it worked.
And, if it didn't, you want to get under the hood, revise some things, and hope to improve it so it does work.
At the 30,000-foot level, that's what training evaluation is all about: making sure you hit the target with your training.
Trainers often talk about training evaluation in two categories: formative training evaluation and summative training evaluation.
Formative training evaluation is the process of evaluating training while you're actually developing, or FORMING, the training. So it's something you should be doing as you progress through the ADDIE model, for example. The existence of formative training evaluation is one reason I don't fully believe the criticism that ADDIE is a purely linear, "waterfall" model for training design, but that's a topic for a different blog post.
Summative training evaluation is evaluating training after you've implemented it (so, as we noted, the final "E" in ADDIE). It's summative training evaluation that we're going to focus on in this article.
Stay tuned for later articles that look at formative training evaluation and the "ADDIE is/is not a waterfall method" controversy. Until then, you may want to read this short overview of formative and summative training evaluation if you're curious.
The four most commonly used training evaluation methods--at least here in the US--are the Kirkpatrick Four-Level Model, the Brinkerhoff Success Case Method, the Phillips ROI Model, and the Kaufman Model.
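Since the Phillips model is distinguished by extending evaluation all the way to a return-on-investment calculation, here's the basic arithmetic behind it. This is a minimal sketch in Python with made-up numbers for illustration; in practice, the hard part is isolating the training's effects and converting them into monetary benefits, not the division itself.

# Phillips-style ROI calculation: a minimal sketch with illustrative numbers.
# The real work is isolating and monetizing the program's benefits; the
# formula itself is simple.

def training_roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI (%) = ((benefits - costs) / costs) * 100."""
    return (program_benefits - program_costs) / program_costs * 100

# Hypothetical example: the program cost $50,000 and produced $120,000
# in monetized benefits (e.g., reduced scrap and rework).
print(training_roi_percent(120_000, 50_000))  # 140.0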
In an earlier blog post at Vector Solutions, we published a recorded Zoom discussion with L&D researcher and training evaluation guru Dr. Will Thalheimer in which the good learning doctor talked us through all four of those common learning evaluation models. We're not going to pretend we can explain this better than Will can, so here's that interview again.
Of the four commonly used training evaluation methods, by far the most common is the Kirkpatrick four-level method.
We discuss the Kirkpatrick four-level model in more detail here, but here's a quick overview of the four "levels" at which it calls for training evaluation:
Level 1, Reaction: did learners find the training engaging, relevant, and worthwhile?
Level 2, Learning: did learners acquire the intended knowledge and skills?
Level 3, Behavior: are learners applying what they learned back on the job?
Level 4, Results: did the training contribute to the desired organizational outcomes?
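If you collect evaluation data at each of those levels, it helps to keep the results structured so you can compare programs over time. Here's a minimal sketch of one way to do that in Python; the field names, rating scales, and sample values are illustrative assumptions on our part, not part of the Kirkpatrick model itself.

# A minimal sketch of recording evaluation results at each Kirkpatrick level.
# Field names, scales, and sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class KirkpatrickEvaluation:
    program: str
    reaction_avg: float        # Level 1: average learner rating (1-5 survey scale)
    learning_pass_rate: float  # Level 2: share of learners passing the assessment
    behavior_observed: float   # Level 3: share applying the skill on the job
    results_note: str          # Level 4: observed organizational outcome

evaluation = KirkpatrickEvaluation(
    program="Lockout/Tagout Refresher",
    reaction_avg=4.3,
    learning_pass_rate=0.92,
    behavior_observed=0.78,
    results_note="Recordable incidents down quarter over quarter",
)
print(evaluation)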
A newer learning evaluation model that learning professionals have gotten excited about is Dr. Thalheimer's Learning Transfer Evaluation Model, also known as LTEM.
We discussed the LTEM model with Dr. Thalheimer in an earlier article, and we're once again going to leave it to the expert to explain it here.
A quick thanks and hat-tip to Dr. Thalheimer for both of those recorded discussions.
Whichever training evaluation method(s) you use, the ultimate goal is to determine whether your training is having the desired effects, revise it if it isn't, and continue a virtuous cycle of continuous improvement.
We don't always get it perfectly right the first time, and that's alright--we're human. But we should build in measures to evaluate, revise, and improve if necessary, instead of just having employees complete training and assuming we did a great job.
We hope you enjoyed this overview of training evaluation in the manufacturing industry and have enjoyed our Manufacturing Training Insights series as well.
Let us know if you've got any questions, and have a great day!
A Quick Note on Running a Beta/Pilot Test of Your Training
Before you rush into delivering the training you've designed and developed, and before we rush into writing about that, let's take a quick breather and discuss running a small pilot test (this idea is similar to the "fail fast, fail light, and learn" idea you see in design thinking, the PDCA cycle commonly used in lean and quality, and elsewhere).
Once you've developed your training materials and activities, it's a good idea to run a limited test with a small group of learners before you release them "into the wild" with a much larger population of learners. Sometimes this beta testing is considered one of the last things you'd do in the training development phase of ADDIE; you might also think of it as the first thing you'd do in the implementation phase. It doesn't much matter which phase you file it under, as long as you do it.
Conducting a small pilot of your new training program will let you find any errors (big or small) and correct them before you launch the training to the larger population and risk having it go over like the proverbial lead zeppelin.
For your pilot, try to get workers who are very similar to the larger population that will ultimately take the training. Watch them during the training and take note of their actions and reactions. Are they progressing steadily, or struggling and confused? Do they have problems with the user interface and/or the actual subject matter? How did they do on the practice exercises and, of course, the test? Be sure to ask for their top-of-mind thoughts afterward, and then ask them to complete a survey about the beta test as well.
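To make that pilot feedback actionable, it helps to tally the survey responses and flag the items that scored poorly. Here's a minimal sketch, assuming a simple 1-5 rating survey; the questions, scores, and cutoff are hypothetical.

# A minimal sketch of summarizing pilot-test survey results.
# Assumes a 1-5 rating scale; questions, scores, and cutoff are hypothetical.
from statistics import mean

responses = {
    "Instructions were clear": [5, 4, 4, 5, 3],
    "Pace was appropriate": [2, 3, 2, 3, 2],
    "Practice exercises helped": [4, 5, 4, 4, 5],
}

FLAG_THRESHOLD = 3.5  # arbitrary cutoff for "needs revision before full rollout"

for question, scores in responses.items():
    avg = mean(scores)
    flag = "  <-- revise before rollout" if avg < FLAG_THRESHOLD else ""
    print(f"{question}: {avg:.1f}{flag}")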
And, of course, after you've conducted your beta test, make any revisions necessary to improve the training before you roll it out to the larger employee group (this opportunity to evaluate and iterate mid-stream is one reason some people reject the criticism that ADDIE is a rigid, "linear, waterfall" method).
Now It’s Time to Implement Your Training
Once you’ve run that beta test, it’s time for the rubber to hit the road and for you to implement your training activities.
Of course, exactly what you'll do during the implementation phase will depend in part on the information you gathered in analysis and the decisions you made in design, most notably the training delivery methods and media you chose for this particular training need (hot tip: consider a blended learning solution if appropriate, as evidence shows blended learning tends to lead to better learning outcomes).
In general, training implementation will involve tasks like the ones discussed below, which vary depending on your delivery method.
Of course, you may have designed a continuing learning experience instead of a one-and-done training session, which will allow workers to keep building their knowledge over time and gain the learning benefits of spaced practice. And keep in mind that in addition to spaced practice, there's a LOT of benefit to be gained from getting managers to support the training objective once the session is over and workers are back on the job.
If you’re implementing elearning, your job may be as simple as assigning the training, assuming workers already have access to things like your LMS, freedom within their schedule to complete the training, and so on.
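For teams whose LMS exposes an API, that assignment step can even be scripted. Here's a minimal sketch of what that might look like; the endpoint URL, payload fields, token, and course/learner IDs are all hypothetical placeholders--consult your own LMS's API documentation for the real thing.

# A minimal sketch of assigning an elearning course via a hypothetical LMS
# REST API. Endpoint, payload fields, and token are illustrative only.
import requests

LMS_URL = "https://lms.example.com/api/v1/assignments"  # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

def assign_course(course_id: str, learner_ids: list[str], due_date: str) -> None:
    """Assign a course to a group of learners with a completion due date."""
    payload = {"course_id": course_id, "learner_ids": learner_ids, "due_date": due_date}
    response = requests.post(
        LMS_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly if the assignment didn't go through

assign_course("forklift-safety-101", ["emp-1001", "emp-1002"], "2025-01-31")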
If you're implementing classroom-style, instructor-led training, well, then your job is a bigger one. Describing it in detail is beyond the scope of this article, but you're going to have to get in front of the class, greet the learners, know how to work with the training materials, lead the discussion, answer questions, read employees' body language and facial expressions to see if they're paying attention or confused, correct misconceptions, provide feedback, and much more. Being an effective classroom instructor is truly a great skill. If this isn't you (yet), consider taking a course to sharpen your skills through an organization such as the Association for Talent Development (ATD).
If you're going to be conducting virtual instructor-led training, then your job is similar to, but different from, classroom instruction. It's similar because you're going to juggle a lot of balls just as you would in a classroom: engaging the learners, judging their understanding, aiding comprehension, correcting misconceptions, providing demonstrations, giving feedback on their performance, and more. But it's different because the move from classroom training to virtual instructor-led training isn't as simple as directly "porting" your training materials from one medium to the other. Read our interview with VILT expert Shannon Tipton on live online learning and virtual instructor-led training for more on this.
And of course, you might have created training in many other media beyond classroom-style instructor-led training, virtual instructor-led training, and elearning. Maybe you created a PowerPoint or a PDF. Maybe you created a video. Maybe it's a virtual reality learning experience, something that uses augmented reality, or a chatbot. Whatever it is, each of these comes with its own training implementation issues as well (read our article on disruptive technologies in L&D with Dr. Stella Lee for more on some of these newer technologies).
A Quick Word about Job Aids & Performance Support
A quick backtrack to analysis and design for a moment: if you recall what we've learned from human performance improvement (HPI), sometimes training isn't the answer. Sometimes performance support, even something as simple as a checklist, is a better solution.
So don’t forget that if your analysis and design told you that performance support is the solution, then it’s time to implement that performance support out in the workplace as well. Post your checklists, deploy those videos so they’re available on people’s mobile devices, and so on.
Need some more information about performance support? Try these related articles:
Conclusion: Implementation Depends on Good Work in Analysis, Design, and Development
We hope you enjoyed this article on implementing training at a manufacturing site within the framework of the ADDIE training development model (remember, we're not married to ADDIE and you don't have to be either; it's just a convenient and easy way to sequence our Manufacturing Training Insights article series).
Stay tuned for an upcoming article--or maybe a few--about evaluating manufacturing training.
Let us know if you have questions and please share your own thoughts and experiences.