The LTEM Training & Learning Evaluation Model


In a recent discussion with learning researcher Dr. Will Thalheimer, we covered four common learning evaluation models and mentioned that, in addition, Dr. Thalheimer had recently created his own, called LTEM (which he “workshopped” with other leaders in the field and which he’s now iterated 12 times).

In the discussion below, Dr. Thalheimer explains his LTEM learning evaluation model.

We’d like to thank Dr. Thalheimer for taking time to talk with us about this and for all of his contributions to workplace learning, including his work on smile sheets, spaced practice, conference presentations, the effectiveness of elearning, evidence-based training & learning myths, and lots more.

If you’d like to watch the recorded video, we’ve got that for you immediately below. If you’d prefer to read the transcript, that’s below the video. Enjoy and share your thoughts in the comments section.

Dr. Will Thalheimer Tells Us about His LTEM Learning Evaluation Model

Convergence Training: Hi, everybody, and welcome. Jeff Dalto, back once again for the second of two discussions with Dr. Will Thalheimer.

A quick reminder, most of you probably know who Dr. Will Thalheimer is, he’s kind of a learning research legend. He owns Work-Learning Research, does a lot of great stuff for free, has a lot of great courses, has a great book on Performance-Based Smile Sheets, and I recommend you check all that out.

And in the first discussion, we were talking about some commonly used training evaluation models. In particular, we talked about Kirkpatrick, Phillips, Brinkerhoff, and Kaufman. And we’re back today to learn about a model that Will made himself recently, which is called LTEM. So with that, let me say hi to Will, and thank you.

Dr. Will Thalheimer: Hey, Jeff, delighted to be here. We had fun last time; it should be good this time.

Convergence Training: Cool. I’m looking forward to it.

So as I mentioned, last time we went through the pros and cons of four different common training evaluation methods, and I just explained that you have created your own model as an alternative, and that’s called LTEM.

Can you tell everybody what you were thinking and what you were trying to do when you created LTEM? And why we need yet another learning evaluation model?

Dr. Will Thalheimer: Sure.

So as I mentioned in our last session, there’s a lot of frustration that we practitioners have in regard to learning evaluation.

And not only that, but our learning evaluation gurus, Kirkpatrick, Brinkerhoff, etc., over the years, have been talking about how we’re not really making progress, we’re not making enough progress. So there’s a lot of that.

And I think we’re going to use the picture paints 1000 words kind of thing to sort of get us into this because I think sort of the background is really important.

So as I mentioned before, we talked about this in the last session: the Kirkpatrick four-level model has drawn a lot of criticism, including from folks on the research side, and you can see some evidence of that here.

I didn’t show you this last week, but I love this quote. It’s from Towards Maturity, a UK-based organization that did a global survey of L&D leaders, and 97% of those leaders felt that they wanted to improve the way they gathered and analyzed data on learning impact. So a lot of issues there.

97% want to improve the way they gather and analyze data on learning impact.

Dixon, G. & Overton, L. (2017). Making an Impact: How L&D Leaders Can Demonstrate Value. Towards Maturity Report.

I mentioned last time that I’ve compiled 54 common evaluation mistakes. So clearly, we’re not doing as well as we might be doing.

Now there’s one other thing to sort of motivate my interest in creating a new evaluation form. And that’s that we really ought to align it with how learning works. So I’m going to do a little deep dive into the learning and forgetting curves to put an evaluation in perspective.

Alright. So this is during learning over here, on the left, and we’ve got after learning over here. Toward the top of this, we are representing more remembering, and toward the bottom we’re representing less remembering.

So when our learners learn, they’re going to go up a learning curve, and then they’re going to slide down, most likely, they’re going to slide down a forgetting curve. And so from a learning design standpoint, we have to pay attention to this.

But let’s think about what this means in terms of learning evaluation. But first the big picture. So we do our learning curve, and I’ve drawn one here, but it could be more steep, it could be less steep. And then our forgetting curves could be really steep, people could forget really rapidly, or they could forget less rapidly. But also what we ideally hope is that when people go back to their jobs after their training, after their learning intervention, after their e-learning, that they’ll continue to learn and hopefully we’ve set them up in that way.

So this sort of future after learning really depends on two things, and I’m oversimplifying here, but basically on the design of the learning and on the after-learning follow-up. People who are interested in learning more and thinking more about the learning and forgetting curves can go to this YouTube video.

And let’s think about this in terms of learning evaluation now. So we do an assessment at the end of a learning intervention, at the end of a training, what does that tell us?

Well, we’ve got to be a little bit careful, because when we measure there, it’s in some ways a biased metric. And we know that probably a lot of people are going to be forgetting. So it sort of makes us look better than we are, gives us feedback that we’re doing really well when maybe we could be doing better.

But even more importantly than that, what we’re doing is taking a snapshot at that point in time of the level of understanding of our learners. What it’s not telling us is whether they’re going to be able to remember or whether they’re going to be able to apply what they’ve learned.

So evaluating right at the end of learning or during learning or right after learning really sets us up to be missing out on some things. If we want to get a sense of remembering and application, ideally, we’re going to use delayed tests. Okay? That’s really the only way to be sure. Now we can get some hints during or right after training, but we can’t know for sure. So this kind of thinking about evaluation in terms of learning is really important so that when we go forward and design our learning evaluation models, they are becoming most useful to us.

Convergence Training: Gotcha.

Dr. Will Thalheimer: So. Let me just summarize what I’m saying here, why I created this new learning evaluation model.

We have a history of frustration in learning evaluation. There are fundamental biases in the way we measure learning. For example, what I just showed you, we tend to measure at the end of learning. And that’s really biased. There are lots of common mistakes we make. And the dominant evaluation model that we’re all using is flawed. So lots of reasons there.

And again, I mentioned this last time, but we’re really trying to create an evaluation approach that’s more muscular, that really gives us more benefit, particularly in terms of helping us make our most important decisions, and helping us get the resources and support we need. So that’s sort of the background on why I decided that maybe we need to think about a new learning evaluation model.

The Eight Tiers of the LTEM Learning Evaluation Model

Convergence Training: All right, great.

I had forgotten from the first discussion how much work you put into the visuals here. So thanks again for that, that was fantastic. And for those listening out there, Will mentioned Towards Maturity, and I just want to give a hat tip to them, they do a lot of great work.

And with that, Will, your LTEM model has eight tiers, represented sequentially in your graphic. Can you walk us through the eight tiers of LTEM, please, and maybe tell us what LTEM stands for?

Dr. Will Thalheimer: Sure. I’ll do both. And again, since a picture paints 1,000 words, let me go back to the presentation.

So LTEM stands for the Learning-Transfer Evaluation Model. And it is a one-page model with eight tiers in it. And I’m going to go through those in a minute. But it’s also got a 34-page report that goes along with it to give people a sense of what it means, why it’s developed this way, and why it works this way. So that’s the big picture.

Let me show you now…and let me also emphasize that this was an iterative process, it’s gone through 12 iterations. It’s sort of a stable iteration now. But I didn’t do this alone either, lots of people, some of the smartest people in the learning and development field, you can see their names listed here, gave me a lot of good feedback and helped me make this better over time. So this is not me alone.

Okay, one of the things I like to emphasize when I talk about the LTEM model is this subtitle: Sending Messages to Enable Learning Effectiveness. And that is based on my thinking that when we have a model, we ought to make sure that that model is useful to us, that it pushes us, nudges us in the right directions to do good things, to think about things in the right way versus leaving things out.

So you can see in the one-page model, notice at the bottom, our tiers, one, two and a little bit of three, these are in red. And I did this with the idea that red is bad. And I know globally that that’s not always the case, in some cultures red is better, but that was the idea there. So these things at the bottom aren’t as good. And then the yellows are the mediocre, and then the green are better. Okay, so that’s the big picture. Now, I’ll go through each one of the steps.

Convergence Training: Can I ask a quick question about the color coding?

Dr. Will Thalheimer: Sure.

Convergence Training: In a Western, American, traffic signal way, if I’m told that red is bad, I’m thinking I shouldn’t do it. But I assume that’s not your point here. I think your point is maybe don’t stop here or don’t rely fully on this. Is that correct?

Dr. Will Thalheimer: That is correct. Well, it’s a little complicated because yes, you can continue to do these things, but you don’t want to do these things as a way to validate your success.

Let’s look at this attendance, tier one, as an example. So, a learner is going to sign up, start, attend, and complete a learning experience. And look at the fine print here in the blue: “This metric is inadequate to validate learning success because learners may attend but not learn.” Now, that doesn’t mean that we shouldn’t check attendance or monitor our completion rate. But what it does mean is we should not use that as a measure of our success. You know, oftentimes you see this in some of the awards that organizations are given: “Oh, well, we had 20,000 learners go through our learning this year.” Well, that’s really not good enough, right? Because people can attend or complete but not learn.

Convergence Training: Fair enough.

Dr. Will Thalheimer: It’s the same with tier two, activity. So attention, interest, and participation, we can measure these things, and sometimes that’s really good to do from a formative evaluation standpoint. But what we don’t want to do is use these as measures of success, because, for example, our participants could participate fully, be really, really engaged, but they still might not learn, or they might learn the wrong things, etc. So, again, tier two is not good enough to validate our success.

Tier three, you’ll notice, is learner perceptions, and it’s divided into two parts. The first part of tier three, 3A, is when we query the learner or survey them in a way that reveals insights related to learning effectiveness. So we target measures of learner comprehension, whether they’ve had enough realistic practice, motivation to apply, after-learning support, etc., and know that these measures hint at learning outcomes but should be augmented with objective outcome measures.

And 3B, this is when we query people in a way that does not reveal insights on learning effectiveness. So this is things like learner satisfaction, course reputation, etc. And we know from the scientific research that when we ask these types of questions, the results of our smile sheets are not correlated with our learning results.

So basically, tier 3A can be represented by performance-focused smile sheets, and tier 3B by traditional smile sheets. Both of these are still tier three, and they’re still not good enough. So they’re either in the red danger range or the yellow mediocre range.

Convergence Training: Will, if I can interrupt for a second, just to plug your book. Even if this is the first level of yellow, Will does have a great book on these Performance-Focused Smile Sheets that I recommend people pick up, and then go from red to yellow there in tier three.

Dr. Will Thalheimer: Thanks. And notice that even the guy who wrote the book on smile sheets says that they’re not enough. Okay? I want to make that clear.

Okay, so tier four, knowledge. Notice that learners are answering questions about facts and terminology. So knowledge recitation is during or right after the learning event, and knowledge retention is after several days or more. You’ll note that what the LTEM model adds here is really the distinction between measuring understanding and measuring remembering. And note here again, even if you measure retention, this is usually inadequate because remembering terminology does not fully enable performance. Okay, so sometimes knowledge is necessary, but it’s hardly ever sufficient.

Okay, so now let’s go on to the green. So tier five, decision-making competence. Learner makes decisions given realistic scenarios. So again, we’ve got the distinction between understanding and remembering.

This is where we’re using things like scenario questions, case studies, it could be realistic role plays, simulations, etc.

Convergence Training: In level four and level five, you’re making a distinction between during or right after and after several days. Do you have any input on the length of that gap?

Dr. Will Thalheimer: Well, yeah, and I get into this in more depth in the report that goes along with this. You know, ideally, we hope people, we train them, and we hope that they can remember what they’ve learned for at least a week or two, right? So that they can begin to implement what they’ve learned.

You can see in the model I say several days or more. I usually recommend three days or more to be able to measure remembering. The thing is, we have a little logistical problem: we might not want to wait a month, because we’ve probably lost our learners by then; they’ve gone back to work. So there’s a little bit of logistical balancing that goes on here.

Convergence Training: Alright, fair enough. Thank you.

Dr. Will Thalheimer: So, tier six is task competence. The learner performs relevant, realistic actions and decision-making. This really ramps it up. So tier five is decision-making, and tier six is decision-making and actually doing. Okay, so this is even harder. Again, it has the distinction between understanding and remembering.

Let me give you an example of the difference between five and six.

So I used to be a leadership trainer, and I would teach my trainees, my managers, that they should bring their direct reports into decision-making. And I would put them, well, it wasn’t just me but our organization, we ran computer-based simulations that had people make decisions. So they would make decisions about what to do in a given situation. When we did that, that was sort of tier five. But what we weren’t measuring was: did they use the right tone of voice? Did they use the right words? Did they use the right body language? If we were able to measure that, that would bump it up to tier six.

Convergence Training: All right, good. Thank you.

Dr. Will Thalheimer: Okay, so tier seven is transfer, when the learner uses what was learned to perform a work task successfully, as clearly demonstrated through objective measures. I’m not going to get into the difference between assisted and full transfer; it’s not that important here.

And then tier eight is the effects of transfer. So we take what we learned, we transfer it, and then hopefully we get some benefits. These can be outcomes that affect learners, coworkers, family, friends, the organization, the community, society, and the environment.

Okay, so you can see at tier five, we’re going to be able to certify that people have decision-making competence, tier six certifies task competence, tier seven, we’re going to certify transfer. And tier eight, we’re going to certify the effects of transfer.

Check out Will’s research report on learning transfer.

Convergence Training: Will, if I can ask you a question on tier eight. I think a lot of times, when people aren’t doing evaluation at all, or if they’re just evaluating course completion or attendance, they’re told by other learning professionals, “Hey, you’ve got to align with business goals.” And that’s where that conversation stops. And obviously, your tier eight is going well beyond business goals here.

I wonder if you could just talk to us about why you thought it was important to go beyond having your training relate to the business goals?

Dr. Will Thalheimer: Well, when I look at how people mostly do evaluations, particularly based on the four-level model, you know, the fourth level is results. But everybody translates that into business results. And it’s like, “Whoa, we are forgetting the learners! Don’t we want to do something for them, to help their careers, etc.?” And then there’s other effects that can happen as well. I’ve been particularly struck by the work of Roger Kaufman, who talks about sort of the societal impacts.

So I wanted to make sure that the LTEM model would help remind us of some of these other results that we might have.

Convergence Training: Okay, great. Thank you. Anything more on a high level overview of LTEM?

Dr. Will Thalheimer: Well, yeah, just to clarify. So, tier seven and eight, this is actual work performance. And tiers six, five and down, these are performance in learning. Sometimes these things overlap, but it’s helpful to sort of separate them conceptually.

Models, Messages, and Nudges

Dr. Will Thalheimer: So I talked about how a model should send good messages. Well, in the report that goes along with it, I’ve got like 25 messages that the LTEM model is intended to send. And here are just a few of those. I’m not going to speak through these, but I’ll let people look at these.

Here are the Key LTEM Messages Dr. Thalheimer listed on his screen in the video:

  1. Goal of learning is to create transfer and positive learning benefits.

  2. Measuring learner perception is inadequate to validate success. Focusing on effectiveness is better than focusing on satisfaction or reputation.

  3. We should consider the many effects of learning transfer, not just organizational results.

  4. Measuring attendance or learner activity is inadequate to validate success.

  5. We should evaluate our success in supporting remembering, not just comprehension.

  6. Measuring knowledge is generally inadequate to validate success.

  7. Measuring decision-making or task competence during learning is better than measuring knowledge.

And one and two are sort of the same as the four-level model. But some of the other ones really go beyond what the four-level model does and the messages that get sent from it.

Convergence Training: Right. Do you feel, when people are talking about learning organizations and learning organization theory, that your LTEM message, especially tier eight and your expanded vision of that, is more in line with what people mean by a learning organization, instead of just focusing on business goals? Is that part of what you’re saying?

Dr. Will Thalheimer: Well, there’s a lot to the learning organization idea. And it’s been a while since I read in that space.

But maybe you should say some more on that…

Convergence Training: I think an obvious one is, if you’re training and all you’re measuring is progress toward your business goal and attainment of that, and you’re ignoring, like you were talking about, what the learner gets out of it, for example, then it doesn’t seem like your primary focus is having a learning organization, helping workers, sharing knowledge, and all of that. It’s a much more simplistic, narrow, and I would say organizationally selfish view of learning, which ultimately might mean your organization isn’t going to be agile, won’t be flexible, won’t be prepared for the future.

Dr. Will Thalheimer: Yeah, well, you know, aligning incentives is really important. And if all we’re focused on are our revenues and costs, for example, there are going to be some big holes in what we’re doing. So yeah, I agree with you. LTEM was designed, at least those tier eight recommendations, to give a more holistic view of all the stakeholders that we have and all the interests that we have.

Convergence Training: Great, thank you.

Dr. Will Thalheimer: Okay, so I think that’s it for answering that question.

Comparing LTEM and Kirkpatrick

Convergence Training: Alright, cool. So to continue to hammer on that business goals thing a little bit. You mentioned that that’s usually where people stop with what most people call the Kirkpatrick four-level training evaluation model. And I know you’ve done some pretty interesting investigative journalism, so you don’t call it the Kirkpatrick model, you call it the Kirkpatrick-Katzell Model, and I’ll create a link to the backstory on that. But could you maybe just tell us the high points of how you think LTEM differs from the Kirkpatrick four-level model?

Dr. Will Thalheimer: Sure. In fact, people ask me all the time, and I have a little diagram to help make sense of that.

Convergence Training: Alright, cool.

Dr. Will Thalheimer: So we know the four levels–reaction, learning, behavior, results–there’s Raymond Katzell, a picture of him. And you can see LTEM on the right there.

So level one, reaction, is similar to tier three in LTEM. And obviously, LTEM makes the distinction between different ways of querying people.

But also notice the big gap, or blind spot, that the four-level model has. It doesn’t send the message that just measuring attendance and activity is inadequate. And it’s my belief that a good model should tell us what to do and what not to do. Attendance is like the number one thing we measure in evaluation, so it’s pretty clear that we ought to have a model that tells us measuring that is inadequate.

Level two of Kirkpatrick is similar to knowledge, decision-making, and task competence. So as I mentioned in our last session, one of the problems with the Kirkpatrick-Katzell model is that level two is all smashed up into one bucket. And what we tend to do when we need an evaluation, we think, “Oh, we just need a knowledge check.” Well, the LTEM model tells us it’s not good enough, right? We can measure other things as well that are more potent, more powerful.

Level three of Kirkpatrick is similar to tier seven, transfer.

And level four is similar to tier eight. Except, obviously the LTEM model has a few more specifics.

That’s the comparison in a nutshell.

4-7 Ways to Use LTEM

Convergence Training: Okay, good. Thanks.

So people might be thinking, “I’d like to try this LTEM model.” Do you have any tips for them about how to actually start using it?

Dr. Will Thalheimer: Absolutely, absolutely.

So I’m going to share four ways to use LTEM. But I’m also going to recommend that people look at an article written by Matt Richter, who’s come up with many other ways (see Matt Richter, Seven Ways to Use LTEM).

So: assessing your evaluations; learning design & development, working backwards from your goals; credentialing; and spurring improved learning. I’m going to talk about all four of these things.

Using LTEM to Assess Your Evaluations

So the first one is assessing your evaluations. What an organization can do is use LTEM as a sort of map and do a gap analysis.

“So where are we now? Ah, this year, we’re using traditional smile sheets, and we’re measuring attendance; those are our things. Well, maybe next year, in 2020, we’ll ramp it up: we’ll use performance-focused smile sheets, and we’ll begin using scenario questions,” for example. “And then maybe the following year, on our strategically important courses, we’ll measure transfer and the effects of transfer.” So very simple. Where are we now? What could we be doing better? A really powerful method, I think.

Convergence Training: Yeah, and I like that you’re suggesting doing it in an iterative fashion, so that it’s not too intimidating, like you have to go out and do all this tomorrow.

Dr. Will Thalheimer: Well, and that’s one thing to emphasize. And I’m glad you brought that up, because a lot of people look at this and say, “Well, do we have to do all these things?” No, absolutely not. This is just, you know, let’s do better than we were doing before, right?

One of the things, when we talk about evaluation, we have to always balance the benefits we get from evaluation with the costs, and the time, and the effort. And so we don’t want to be doing everything around evaluation. We want to be doing evaluation that’s targeted, that’s high priority, that doesn’t waste a lot of resources, but that gets the information we need to be effective.

Convergence Training: Great, and that’s obviously why you mentioned the top level things doing it for your strategically important initiatives, I think.

Using LTEM to Work Backwards from Goals in Learning Design & Development

Dr. Will Thalheimer: Yep, yep. Okay, the second thing we can do is use LTEM in our learning design and development process, working backwards from our goals.

So we know that we want to increase sales by 5%. And we’ve done a good needs analysis, and we’ve figured out…

Jeff Dalto, Senior Learning & Performance Improvement Manager
Jeff is a learning designer and performance improvement specialist with more than 20 years in learning and development, 15+ of which have been spent working in manufacturing, industrial, and architecture, engineering & construction training. Jeff has worked side-by-side with more than 50 companies as they implemented online training. Jeff is an advocate for using evidence-based training practices and is currently completing a Masters degree in Organizational Performance and Workplace Learning from Boise State University. He writes the Vector Solutions | Convergence Training blog and invites you to connect with him on LinkedIn.

Contact us for more information