An important part of designing, creating, and delivering job training materials is creating learning assessments--the test at the end of the training activity to determine if workers can perform the skill or skills required by the learning objective.
That test can come in many different forms, including performance demonstrations and, often in the world of online learning, multiple-choice questions.
Whatever type of test it is, you may sometimes find yourself wondering about best practices for creating the assessment employees must complete after training and before they perform the tasks for real on the job.
In this article, we're going to give you tips about something related to test creation that learning experts call fidelity (no, not THAT fidelity--this is not a juicy blog post). In training talk, fidelity is the extent to which your test or test question mirrors the real task your workers will have to perform on the job.
In describing fidelity and tests, we'll cover a few other best practices, too. Hope this helps you with your question writin'.
Before we get down to fidelity, let's take a moment to consider some basics.
All training activities should include a set of learning objectives. The learning objectives are the reason that you created the training--they are what you want employees to be able to do after the training is over.
If you begin to design or develop training without first crafting a good set of learning objectives, you've lost the battle before the fight begins. And, to the point of this article, you really can't create learning assessments without first creating good learning objectives, because these should map to and complement one another--they're a bit like the yin and yang of job training.
When you write learning objectives at the beginning of your training creation process, focus on observable behaviors that employees should be able to do after the training is over. And remember that those observable behaviors should be the same things that you want employees to be able to do on the job, as well.
If this idea of learning objectives is new to you, take a moment and check out our Guide to Writing Learning Objectives or check out our interview about Learning Objectives with learning researcher Dr. Patti Shank and then come back and finish up here.
Once you've created your learning objectives, you should then:

1. Write your learning assessments (tests).
2. Create your learning content and learning activities.
I know that order may seem strange, but if you write your learning assessments (tests) right after you write your learning objectives, you have less chance of going astray. This is important because you want your learning objectives to directly inform your learning assessments.
Don't worry--once you're done with your learning assessments, there will be plenty of time to begin creating your learning content and learning activities.
Once you're ready to begin, the first step of writing an assessment item is to go back and look at the learning objective(s) for the training.
Ask yourself: what does the worker have to do to satisfy the learning objective? Once you know that, you can create a proper assessment. The assessment, obviously, must determine if the employee can satisfy the learning objective.
So for example, if the learning objective states that learners should be able to complete a specific performance, your learning assessment should ideally require the learner to complete that very same performance. That would be a direct 1:1 match between your learning objective and your learning assessment (the "gold standard" here) and your learning assessment would therefore have high fidelity.
And as mentioned earlier, you can then move on to designing and creating the learning content and activities to help learners pass your learning assessment.
Because when you're all done, there must be a direct relationship between the learning objective, the training material, and the learning assessments.
Let's consider this example: the learning objective says a worker has to "perform a machine changeover." You think about it, and you decide to create a question that presents the worker with each step of the process and asks the worker to "drag" the steps into the correct order, from first to last.
Now, here's a question for you: is this a proper assessment of the learning objective? If the worker does drag the various steps into the correct order, does that mean the worker can perform a machine changeover?
We'll give you a little time to think about that....
OK, are you back? What's your answer? Did you say "this isn't a proper assessment" and "putting these items in the correct order doesn't mean the employee can perform a machine changeover"?
If you DID say just that, you're right. If you didn't, let's explain why that answer is correct and what you may have overlooked. The learning objective states that the worker has to be able to perform a machine changeover. That means the worker has to actually DO IT in the real world--perform the machine changeover procedure.
Putting the steps in the correct order is a nice start, but that doesn't mean the worker knows how to perform each of those steps. And so that wouldn't be a proper test for that learning objective.
Learning experts would tell you this question has a fidelity problem. When learning experts talk about fidelity in the context of assessments, they're talking about how well the assessment matches the objective. A high-fidelity assessment closely matches the performance the objective calls for; a low-fidelity assessment doesn't.
So, that's your takeaway here. Before you create an assessment, take a look back at the learning objective, ask yourself what the worker really has to do in order to satisfy the objective, and then figure out what kind of assessment would evaluate if the worker can do it or not.
In the example we just gave, you'd probably be better off creating an assessment in which the employee actually demonstrates physically how to perform a machine changeover. That assessment would have high--even perfect--fidelity. Or you could create an interactive, simulation-based assessment that would not take place in the real world but would still have high, if not perfect, fidelity.
All of which means that when you're writing questions for online tests, you'll have to ask yourself if you CAN accurately assess the worker's ability to satisfy the learning objective using the kind of questions you can make for a standard online test (things like true/false, multiple choice, matching, drag and drop, sequencing, etc.).
If so, great, go for it. You'll find you can often improve your multiple-choice questions (and improve their fidelity) by focusing them on having the learner solve a problem or make a decision in the same way they would have to on the job.
If you feel like a standard online question type can't be used with enough fidelity, consider making a more sophisticated scenario-based eLearning test or using a performance assessment that occurs in the "real world."
If you go with the real-world performance assessment option above, you can even create digital versions to help with administering these.
You can read our related article Testing Employees After Training: Best Practices for Workforce Training Evaluation for more information about creating test questions and "real world" performance assessments.
So the takeaway here is to make your test match the real on-the-job performance as closely as possible. If it's realistic to get a perfect match, great. If not, consider near-perfect matches with scenarios or with multiple-choice questions that require workers to solve problems and/or make decisions in the same way they would on the job.
If you've created a learning assessment with high fidelity, it should also have high validity (validity means you're measuring what you intend to measure). But you CAN create a learning assessment that's not super-high in terms of fidelity (for example, having employees answer questions that require them to make the same kind of decisions they'd make on the job) even though it still has good validity.
Creating proper learning assessments is an essential part of training design because (1) you want to know if workers can perform the job skill or task correctly before you send them back on the job to do it in "real life" and (2) you want to know how workers are doing after training as a way to evaluate the effectiveness of your training. As an added bonus, there's research that shows testing actually helps learners remember and later retrieve knowledge and skills from training (the so-called "testing effect").
For more on workforce training and testing, check out our related articles.
Let us know your thoughts on workplace testing in the Comments below, and please feel free to download our free guide to writing learning objectives too.