Commonly Used Training Evaluation Models: A Discussion with Dr. Will Thalheimer


Dr. Will Thalheimer is one of the most respected learning researchers out there. And that's especially true when it comes to issues regarding learning evaluation.

We were excited to be able to talk with Dr. Thalheimer about four common learning evaluation models, and we've got the recorded video for you below. If you prefer your learning evaluation information in written form, just scroll down for the transcript of our discussion. And if you'd like to read other discussions we've had with Will, click these links to learn more about spaced practice, the effectiveness of elearning, smile sheets, and learning myths v. learning maximizers.

Many thanks to Will for participating in this discussion on learning evaluation and for everything he does. Please be sure to check out his other materials and offerings at his website. And when you finish this discussion, know that we had a follow-up in which Dr. Thalheimer explained his new LTEM learning evaluation model as well.



Below is the transcription of our discussion with Dr. Will Thalheimer about learning evaluation. Enjoy.

Training Evaluation Methods--An Introduction

Convergence Training: Hi there, everybody, and welcome. This is Jeff Dalto of Convergence Training and Vector Solutions, and we have a really special guest today. I'm very excited, we have Dr. Will Thalheimer. Will is the owner of Work-Learning Research and writes at the Will at Work blog and also has some great books out.

You probably already know Will on your own—Will’s a big name. Hi, Will. If you follow the Convergence Training blog, you know that we've interviewed Will multiple times, and we are constantly referring to him and to his work on any number of topics. So obviously, we're excited to have Will.

Will, how are you today?

Dr. Will Thalheimer: I'm pretty good. It's Friday here.

Convergence Training: It’s Friday here too. Imagine that. We’re on the same side of the international dateline. Well, thanks again for coming on.

And today we're going to talk about training or learning evaluation. We're going to have two interviews, actually. This will be the first one. And we're looking at some common training evaluation models. And then we'll come back and have a second one on a model you just created called LTEM (listen to that LTEM discussion here).

But I wonder, just to kind of set the scene, Will, if you could let people know what we're talking about when we're talking about training evaluation.

Dr. Will Thalheimer: Sure.

And, you know, Jeff, you sent me a whole bunch of questions that you might ask me and so what I did, I took the liberty of creating some visuals, some slides, because sometimes a picture paints 1,000 words. So if you allow me to jump in there…

Convergence Training: Yeah, please do, please do get it started.

Dr. Will Thalheimer: And I'm going to start looking at some big-picture ideas because learning is really complicated, right? Much more complicated than the hard sciences, like rocket science, because human beings are so complicated. And then you put the layer of learning evaluation on that, and it becomes even more complex. So I want to go over some big picture things first. Just so that we are all on the same page.

Convergence Training: Yeah, please.

Dr. Will Thalheimer: Do you see my whole screen or just…

Convergence Training: The full screen, not just one PowerPoint slide. There we go—bingo.

Dr. Will Thalheimer: Hopefully none of my secrets are there.

Convergence Training: I'm sure people will be like sponging that up.

Dr. Will Thalheimer: Okay. So look, one of the things that people have told me, or that I've seen out there, is a lot of our learning evaluation experts are telling us that learning evaluation is really hard and we're not doing it very well. But also we as learning practitioners sort of believe the same thing.

Here's some research I did with the eLearning Guild just last year. And when we asked people in general, are you able to do the learning measurement you want to do, you can see 52% of them, over half, said no, I wish we could substantially change what we're doing now. So we as practitioners have this unease about learning evaluation.

By the way, I'm going to have a bunch of links for you. And Jeff's going to post those nearby. So don't worry about capturing all the links or stopping the video or anything like that. We're going to give them to you all later. But this research study is available at the eLearning Guild. If you're a member, you can get it, and you can get the executive summary if you're not.

One of the things that I focused on is some of the common mistakes that we make. And last year, I thought I would capture maybe 15 to 20 of these, but I came up with 54. You can see there are some problems out there. And again, I'll share those with you, you can take a look.

One of the things we ought to realize is that when we're talking about training, that's just part of the performance ecosystem. There are all kinds of other factors that get involved there. And sometimes we want to evaluate those. Sometimes we don't. But it's good to keep that in mind.

Because there's been so much confusion about learning measurement, I created this sort of three-part…I don't really want to call it a model…but three ways to measure learning. And people find it really helpful. So I'll share it with you. We can measure learning to demonstrate the value of the learning, we can do it to support our learners in learning, and we can also do it to improve the learning.

Now, I've done some research on this with the eLearning Guild a long time ago, but also when I continue to ask and work with clients, when I ask learning practitioners, "what does your organization want from you, let's put these in order," it almost always goes like this: the first thing our organization wants us to do is demonstrate the value, then support learners, and only down at the bottom is focus on improving the learning.

Now my thing is that this is the most important, this is where we can leverage things. Because if we're not improving the learning, we're not going to maximize the way we can support learners, we're not going to maximize their learning, and we're certainly not going to maximize the value that we can create. So improving the learning is the linchpin of all of this.

Convergence Training: Yeah, sounds like a little parable of a cart and a horse there.

Dr. Will Thalheimer: Exactly. I like that.

So there we are, good-looking learning professionals. Some of us are not quite as well coiffed as these folks.

But anyway, when we think about data and analysis, you know, we need to collect data that's accurate, valid, relevant, highly predictive of what we care about, what's important, that's also cost-effective. This is something we sometimes forget about, but we don't want our evaluations to cost so much that it hurts the overall cost benefit.

The most important thing though, is when we do our learning evaluation we want to help. We want that data to help us make our most important decisions. And I'm sure you all have your own thoughts on what our most important decisions are. But here's some of the things that I consider some of our most important decisions.

  • Is the learning method, or methods, we're using working or should we use another one?
  • Is this skill content or teaching useful enough to teach?
  • Are we doing enough to give learners support in applying learning?
  • Are we sufficiently motivating our learners to inspire them to act, to take the learning and actually implement it on the job, overcome the obstacles to implementing learning, etc.?
  • And then is training useful?
  • Or should we provide other or additional supports? We know that training doesn't work in a vacuum, so are there other supports we can provide?

So there are some things, like this, that are crucial to our performance as learning professionals. If we can use our learning evaluation to get feedback on these, we ought to do that.

A couple of other things. One of the things to think about in learning evaluation is the inputs and the outputs. So we've got learning interventions, right? Whether it's classroom training, whether it's people learning in the workflow, hands-on, or whether it's elearning--doesn't matter, we can evaluate it.

When we evaluate the outputs, what are the results? We can also evaluate the inputs. So the outputs look like things we've seen before, right? Smile sheets, the learner perceptions, learner knowledge--what have they understood? Are they able to make decisions? Are they able to remember? Can they perform on the job, and are they sharing what they've learned as well? There are going to be many types of outputs. These are the effects of learning interventions. But we can also evaluate the inputs.

So one of the things we can do is use research benchmarks to look at our learning designs. And I do this a lot in my consulting work. So people come to me and say, “Will, we're not sure about our learning designs, we want to make them better. Can you do a learning audit for us?” So I compare it to some research benchmarks, some best practices. I use the decisive dozen, but there are other things people can use. So we can also look at our designs, our analysis, our assumptions, and get information about our learning by looking at the inputs as well as the outputs.

Now most people, when we think about learning evaluation, are thinking about the output side. And that's perfectly legitimate. I just want to emphasize here that sometimes we can look at the inputs as well.

Four Common Learning Evaluation Models--Kirkpatrick, Kaufman, Phillips & Brinkerhoff

Convergence Training: So I think that’s a great intro.

Our next question that we talked about was just discussing some of the more commonly used training evaluation methods. And I wonder if you could tell us a couple of the most common ones and then we'll, once we have them kind of on the map, we’ll drill down and talk about each one.

Dr. Will Thalheimer: Sure. Well, of course, the most common, the most well-known, is the Kirkpatrick four-level model. And it's been the dominant model in our field for a long time.

There are also other models. People talk about the Phillips model (aka ROI), the Kaufman model, Roger Kaufman's work, and Rob Brinkerhoff as well through the success case methodology. Those are the big four.

Of course, I'm a little biased, I would add LTEM, the new model that I worked on, as well.

Convergence Training: All right, great. And again, for everybody listening, we are going to talk briefly about LTEM near the end of this one discussion, and then we'll have an entire detailed discussion on LTEM in a second recorded discussion. So hold on for that. (Note: Here's that second conversation on LTEM.)

And okay, if we can just walk through each of the four models you talked about--Kirkpatrick, Phillips, Kaufman, and Brinkerhoff--and maybe you can explain to people, especially people who may not have heard of any of these, what they are and what are some pros and cons of each.

And I'll interrupt you after we discuss Kirkpatrick just to get a little interesting background, because I know you've done some research on that as well.

The Kirkpatrick Four-Level Training Evaluation Model

Dr. Will Thalheimer: Okay, so we’ll go back to the picture that paints 1,000 words. So this is a picture of Donald Kirkpatrick.

And most of you know the Kirkpatrick model. It's level one, reaction…these are sort of the learner feelings about it. Level two is learning. Level three is the behavior of learners when they get back to the workplace and start implementing and applying what they've learned. And level four is the results that they get from that behavior.

Okay, so fairly straightforward. Now, I think I'm going to anticipate one of your questions.

Convergence Training: OK, fair enough.

Dr. Will Thalheimer: So I have now started calling this the Kirkpatrick-Katzell four-level model because as it turns out, Raymond Katzell is really the originator of the four-level idea. And we're going to give you this link later. And if you're interested in that, you can read an article that I wrote, sort of a little piece of investigative journalism, if you will. And it talks about Katzell’s role in creating the model.

Here's Will's article on the origins of the "Kirkpatrick-Katzell Four-Level Training Evaluation Model."

So that’s the four-level model. Now, one of the things I talk about in my work is how to evaluate a model, right? Models are good if they help us. So I talk about the messaging that the model sends, really. You can think of this in terms of sort of the behavioral economics notion of nudging. What does the model push us to do? And models push us in good ways and bad ways.

Well, the Kirkpatrick model is the same as all models: it has some beneficial messages. So the most important thing that it does is it tells us that we shouldn't just focus on learning--that we should focus on results too. And our whole field, starting 30-40 years ago, began to move from a focus on the classroom to a focus on performance, and the Kirkpatrick model was aligned with that shift.

The other really beneficial message that it sends is that learner opinions, learner surveys, are not that important. So they put those down at level one. That's important to know because too often we default to those.

By the way, in the report that goes along with the LTEM model, I go into a lot more beneficial messages and harmful messages of the Kirkpatrick model, but here I'm just highlighting a few of the top ones.

Here's Will's report on LTEM.

Convergence Training: Will, before you get started on harmful messages…and just to give a plug. While level one, the evaluation surveys and smile sheets, is the least important level, I do want to mention that Will has written a great book to help you get more value out of it. So keep your eyes open for that, and apologies for the interruption.

Learn more about Will's book on smile sheets here.

Dr. Will Thalheimer: No, no worries.

Well, and actually let me point out that, even though Will has written a book on smile sheets, notice he's saying that these are not the most important things.

Okay, so one of the things that's harmful: the four-level model does not warn us against ineffective evaluation practices.

Now if I asked you what you think the most common way that we evaluate learning is, you're going to tell me it’s smile sheets. But that's not actually correct. The most common way is that we measure attendance or completion. So the Kirkpatrick model doesn't warn us against those. And I think a good model should warn us against things that we should not do, not just things we should do.

The Kirkpatrick four-level model also ignores the role of remembering. So we as designers of learning, we want to support people in being able to learn, but we also want to support them in being able to remember. And from a learning design perspective, those are two different things. So from a learning evaluation perspective, those things should be taken into account separately as well.

And probably the biggest problem with the Kirkpatrick model is that it puts all types of learning measurement into the level two bucket. They're all mashed together. So we can measure things like trivia, the regurgitation of trivia. That could be in level two. It could be the recall of meaningless information, or the recall of meaningful information, or it could be the ability to make decisions, or the ability to actually perform a task. All these things are related to learning. But because we put those all in one bucket, sometimes that means that we just sort of default: “Oh, you know what, we need a level two!” “Oh, okay, let's do a knowledge check.” And that really creates a lot of problems. It gives us bad information when we evaluate, but it also pushes us as learning designers into creating learning that's not really that effective.

So some beneficial messages, some harmful messages.

Convergence Training: Could I ask you a couple questions on the right hand column there?

Dr. Will Thalheimer: Sure, absolutely.

Convergence Training: Can you tease out what you're talking about, about ignoring the role of remembering?

Dr. Will Thalheimer: Okay. So we can teach somebody something. But if they forget it within a couple days, then we haven't really done our job. We haven't done our job well enough, because most of the time, people need to remember what they've learned over at least a little bit of time. And even if you teach 20 things on a Monday, how many people are going to use all those 20 things on Tuesday? Well, not many, probably. Maybe they'll use five things on Tuesday. Five more things on Wednesday, and then you know, the rest of the week, maybe a few things. But you can see some of those 20 things are not going to be needed for a week or more, two weeks or three weeks. So remembering is really critical.

Convergence Training: So what would be your response to somebody who said, but that's implicit in the level three evaluation, the on-the-job observations of behavior, which presumably would capture some of that remembering, right?

Dr. Will Thalheimer: So we have to be careful here, though. Let's say you try to measure behavior on the job. And let's say they fail. Well, why did they fail?

One of the most important things is they might have failed because they forgot. They could have failed for other reasons as well. But if we're not measuring remembering, then that causal chain is broken. We don't know what went wrong.

Convergence Training: Okay. Yeah. Cool. Thanks.

And then level two learning is mashed up. How much of that problem could be solved by simply having better level two assessments instead of restating trivia, like you said?

Dr. Will Thalheimer: Well, obviously it could. But it doesn’t.

What I'm saying is because we've had this model that has level two as one big thing, we have not been pushed, we have not been nudged, we have not been sent the message, that there's better learning measurements to do.

Convergence Training: Right, good.

Dr. Will Thalheimer: And we're going to see this again in LTEM because I basically designed it to be a response to the weaknesses of these other models.

Convergence Training: Cool. Thank you very much.

Dr. Will Thalheimer: So just so you don't think that I'm the only one being critical of the Kirkpatrick model or the four-level model. This is a scientific review in a top-tier scientific journal, and they evaluated the Kirkpatrick framework and they said: “It has a number of theoretical and practical shortcomings. It is antithetical to nearly 40 years of research on human learning…leads to a checklist approach to evaluation. And by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders.”

So that's pretty damning. Now, that's not the only one. This was just published this year: “Kirkpatrick’s framework is not grounded in theory. And the assumptions of the model had been repeatedly disproven over the past 25 years.” And they showed a number of research reviews of that. And that was published by Tracy Sitzmann and Weinhardt.

Dr. Will Thalheimer: So let me sort of summarize.

So the Kirkpatrick framework has been our dominant model for a long time. It's got some good benefits. It's got some things that are not as useful as they could be.

In all fairness, it was designed back in the 1950s. This is before the cognitive revolution in psychology, before we really compiled a lot of the most important research on learning, so you wouldn't expect that it would be integrated with that science-of-learning stuff.

So now we have an opportunity to go beyond the Kirkpatrick model.

The Phillips/ROI Learning Evaluation Model

Convergence Training: All right, great, and you're right, you did anticipate my question about the interesting history of the Kirkpatrick model. I'll include some additional links for people who want to go down that rabbit hole, and maybe now can you turn your attention to the second model? I think we decided we're going to talk about the Phillips ROI model now.

Dr. Will Thalheimer: Sure. Okay, so the Phillips model basically takes the Kirkpatrick four-level model and adds ROI onto the end of it. And I'm going to go into ROI, but it means return on investment, just like you would learn in business school. We'll get into that.

But I just want to emphasize that even though Jack Phillips is known for the ROI model, he does a lot of evaluations, most often with his wife Patti. They do great work, they're out there evaluating all the time, they're not just teaching workshops. They do a lot of great stuff.

So this is what the ROI methodology looks like. And I'm just giving you the high points. But here's how you do it. So: based on the training, what have you done differently, if anything? (These are the questions you ask of learners who have gone through a training.) Based on the training, in what ways has the organization benefited? Based on your accomplishments enabled by the training, what is the monetary value to the organization--and make sure you explain the basis, so they're really pushing them there. What percentage of the improvement was due to the training, from zero to 100? What confidence do you have in your estimate? Again, zero to 100. And then, was the investment in training an appropriate investment?

And then the calculation is very simple, right? You take the benefits and compare them to the costs. But what Jack really adds to this is that, as you can see in four and five, you're really getting a way to be very conservative about your estimate. So, you might estimate that you saved the organization $100,000. Well, what percentage of the improvement is from the training? Maybe 50%. So you can see that's going to cut it down to $50,000. And then what confidence do you have, and that's again going to cut it down. So the benefit side of the equation, the top part of the equation, is a very conservative estimate.
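To make that arithmetic concrete, here's a minimal sketch in Python of the conservative calculation Will is describing. The general ROI formula (net benefits divided by program costs, expressed as a percentage) is the standard one from the Phillips methodology, but the function name is made up for illustration, and the 80% confidence figure and $20,000 program cost below are hypothetical numbers added only to complete the example.

    def conservative_roi(estimated_benefit, pct_from_training, confidence, program_cost):
        """Phillips-style ROI with the learner's conservative adjustments applied.

        estimated_benefit: learner's dollar estimate of the value created
        pct_from_training: 0-1, share of the improvement attributed to the training
        confidence:        0-1, learner's confidence in their own estimate
        program_cost:      fully loaded cost of the training program
        """
        adjusted_benefit = estimated_benefit * pct_from_training * confidence
        net_benefit = adjusted_benefit - program_cost
        return 100.0 * net_benefit / program_cost  # ROI expressed as a percentage

    # Using the numbers from the discussion ($100,000 estimate, 50% attributed to the
    # training), plus the hypothetical 80% confidence and $20,000 program cost:
    print(conservative_roi(100_000, 0.50, 0.80, 20_000))  # -> 100.0, i.e. a 100% ROI

Notice how each adjustment only shrinks the benefit figure, which is the point of the "conservative estimate" Will mentions.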

Convergence Training: Will, just to clarify something and maybe anticipate something you might say soon--these are questions asked of the learner. Is that correct?

Dr. Will Thalheimer: That is.

So the strengths of the model are that it's framed in terms that some of our organizational stakeholders, some of our business stakeholders, really care about: ROI, return on investment. Also, one of the strengths of it is it takes a very conservative estimate.

Now, on the weakness side, you can see that this is the subjective input of the learners, and learners may not be very good at estimating how much benefit there is. Sometimes they can, but it depends on the type of learning.

The way I see ROI is that there are times when we have particular types of stakeholders for whom this is going to be very important. But, you know, we probably don't want to do this as the only thing we're doing.

Convergence Training: I anticipated and you made the point, but just to underline it, how often do you really think a learner could put some kind of ROI estimate on a training and do that accurately? What are your personal feelings about that?

Dr. Will Thalheimer: Well, there are some areas where that's easier than others.

So if you're a salesperson, right, and you take a sales training on a new product, or you know, on an old product, and you see your sales increase, well, you probably have a pretty clear idea about that.

Now, something like leadership training, I used to be a leadership trainer, right? You ask how much does this improve your people's productivity and how does that relate to how much money they're making for your organization. That's really not so easy to do.

Now, even then, you could argue that there are times when you have some stakeholders for whom this kind of thing is very important. We don't work in a vacuum. We're trying to do two things at once. We're trying to create excellence in learning and performance. But at the same time, we need to maintain our budget and not get fired, things like that. You know, even Jack says you probably don't want to do this on that high of a percentage of your programs. I think he estimates 5% or something like that. So there are some real times when it's valuable, and there are other times when it's less viable or really not needed.

The Kaufman Learning Evaluation Model

Convergence Training: All right, great. Kaufman is our third model, are you ready to talk about that?

Dr. Will Thalheimer: Sure.

Convergence Training: Cool. Thank you.

Dr. Will Thalheimer: Alright, so there's Roger Kaufman. He has a five-level model too, and he's sort of based it on the Kirkpatrick model a little bit, but not too much.

And by the way, he's working with some other folks. I'm not sure I know all the folks, but John Keller’s one of them, Ryan Watkins, I know he's done work with Ingrid Guerra.

But basically, he talks about level one, but it's not just reaction. He wants to add stuff about the inputs, the learning supports, all the kind of organizational things. Roger's real focus is…well, he’s really got two focuses.

One is that this should not just be about learning. Remember, I showed you the performance ecosystem before? And so when he talks level two, it's not just about learning. It's about acquisition of learning, and resources, and everything you need to get your results. You can see that in application, too, so it's not behavior based on learning, it’s application. And that can be not just learning application, but all the other resources you're applying as well. Level four is results because results are results. And then level five, and this is one of Roger's great contributions to our field, he calls it “mega,” but it's really the societal impact, the impact that goes beyond just our focus on our organizations.

Convergence Training: A sidetrack here…you’ve written a kind of interesting article about elearning and global warming, haven't you, that maybe partly relates to that?

Dr. Will Thalheimer: Well, yeah, I tried to look at the big picture.

So one of the big pictures is…my wife just sent me this article yesterday to calculate how damaging flying is. These planes, they go up into the, I don't know, stratosphere or somewhere up there, and they're spewing a lot of pollutants into the air and causing global warming and things like that. So I just asked the question, should we look at our travel budgets and our traveling as part of a big-picture ethical kind of concern?

So yeah, it's very interesting. I published that and some people really liked it. But there was a lot of silence. Because I think it's hard for us, this is what we've been doing.

But it does suggest to me that if we can create really good elearning, then we should try to do that, not just because it can be well designed--in fact, some research I did a couple years ago showed that in the wild, if you just compare classroom training versus elearning, elearning tends to be better designed. Because we just follow the old methods in the classroom; we lecture, basically. It doesn't have to be like that, both can be better designed. But yeah, so there are some opportunities there for learning that could help with the mega stuff that Roger talks about.

So I think there are two really nice things about the Kaufman model. Number one, it’s not just focused on learning, but thinking about the whole performance landscape. And number two, getting us to think beyond a really narrow focus that we've had over the years.

And if there are any weaknesses in there, sometimes I get a little bit confused about, you know, what’s in each bucket? But, you know, that could be just me.

Convergence Training: You know, it’s interesting that the mega societal thing came up in this and, I guess, the idea of purpose. Are you familiar with Daniel Pink's book Drive and the theory that autonomy, mastery, and purpose are key to our sense of motivation?

Dr. Will Thalheimer: Hmm, yeah.

Convergence Training: Yeah. So I think that fits in nicely with that point.

Dr. Will Thalheimer: Well, absolutely. One of the things that stunned me over the years is that none of our models really focus on the benefits and costs to the learner. It's kind of like we treated them like automatons, like assets, as opposed to, you know, human beings.

The Brinkerhoff Success Case Method of Learning Evaluation

Convergence Training: Right. All right. Okay, cool.

That was Kaufman. Now we’ve brought in societal issues. And then the last but not least of our common methods is going to be Brinkerhoff.

Convergence Training: Okay.

Dr. Will Thalheimer: Okay, so, I'm going to take you through a bunch of visuals here to describe the Brinkerhoff case method.

Now, he calls it the success case method. And he and I have wrangled through this a little bit, and you’ll see he’s doing many case studies. And you're taking success cases, but you're also taking failure cases, and then you're sort of evaluating. So that's why I like to call it the case method. But his argument is that “Well, you know, we really need to be positive so we can get people to pay attention to us,” which is a good argument.

And anyway, I think I've left a link with you, Jeff, so that if people want to go flesh that out, they can do that.

Here's a link to Will's discussion with Brinkerhoff about the name of Brinkerhoff's learning evaluation model. 

Convergence Training: Yeah, I'll definitely include that. I read that discussion today, actually, and I like both points. I will admit that originally, the name success case method confused me in the way that I think you're referring to: it makes it sound like it's all about studying successes. And I think his counter is basically that, among other things, it's about creating more successes.

Dr. Will Thalheimer: Right. Okay, so yeah, you can see sort of the skeptical researcher in me and my wanting to be fair.

Alright, so the case method starts with the sort of over-arching intent: is the training being used? Then, when the training is used, what good does it do? And then what would it take to get more value from the training? Okay, that's a starting point.

And then there's sort of a two-step process. And I'm oversimplifying here for time. But what we do first is we identify trainees that have been the most and least successful. And then we interview some of these, we don't want to interview all of them, because that'd be too costly. And going back to one of the goals of evaluation, we don’t want to make it too costly. We're interviewing some of these folks to understand, analyze, and document their stories, their cases. So basically, it's a two-part process, we survey first, and then we interview.

And one of the nice things that Brinkerhoff talks about is the impact model. And you can see it's five points here, knowledge and skills, critical actions, key results, business unit goals, and company goals. And what I really like about this is that this is a really good way to communicate with our stakeholders, they get it. It's simple enough, it doesn't overwhelm. Very straightforward.

And so then, you know, you might think about, when you're developing the survey, how do you do it? Here's an example. So across the top row, we have our choices. And then we have our actions here. So you could have used the training to have more organized staff meetings or coaching sessions, or to ask for more input from my direct reports. This looks like it's for leadership training, but it could be anything along there. And then you can decide: I tried this and achieved a worthwhile result; I tried this, but have not noticed any results; I tried this, but it did not work; or I have not tried this yet. And then you sort of have a map, if you will, of what people have done, and their overall success. And then what you do from the surveys is you look at some of the people that were most successful, and some of the people that were least successful.
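To picture the survey layout being described, here's a rough mock-up assembled only from the actions and response options Will lists above. It's illustrative only; a real instrument would list your own program's critical actions, drawn from your impact model.

    Action taken since the training               Worthwhile   No results   Did not   Not tried
                                                  result       noticed      work      yet
    --------------------------------------------  -----------  -----------  --------  ---------
    Held more organized staff meetings                ( )          ( )         ( )       ( )
    Held coaching sessions                            ( )          ( )         ( )       ( )
    Asked for more input from my direct reports       ( )          ( )         ( )       ( )

From the completed grids, you would then pick a handful of the most and least successful respondents to interview, which is the second step Will describes.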

So fairly straight-forward there.

Other Learning Evaluation Models

Convergence Training: Great. So in just a little bit, I'm going to ask you for a brief intro to LTEM. We just talked about four different training evaluation models. They are four of the most commonly used, biggest, most well-known. And before we go into LTEM, are there others? And can you just maybe spit off some names if there are some?

Dr. Will Thalheimer: Ah, yeah, there's a bunch of them. And I know there's one that people in Europe told me they really liked, called Hamblin. Although when I looked at it, it looked exactly like the Kirkpatrick model. Well, not exactly, but similar.

You know, let me make a recommendation. Dani Johnson at RedThread Research did a really nice summary of the evaluation models that are out there, and we can leave a link to that research. You know, her overall assessment was “Wow, really, except for LTEM, there really hasn't been that much change in the evaluation models for decades and decades." But it's a really good summary. There are other models in there that we didn't go into. There are also models that come from academia that are probably too complicated for most of us to use. And so, there are other things going on as well.

An Intro to LTEM

Convergence Training: Okay, so, we're going to get back together and do a second discussion. We're going to talk about a model you created I think about a year ago called LTEM. I hear a lot about LTEM these days.

I'm intrigued by it myself and wonder if you could just kind of set the scene in this discussion for the future discussion, explain why you created LTEM and kind of give us a general idea of what it's all about?

Dr. Will Thalheimer: Sure, well, I created LTEM because for years I've heard complaints about the dominant model, the four-level model, right? And also I've seen that we don't really evaluate like we should. Look at that list of 54 common mistakes that we make, so clearly we're not doing what we could be doing.

And it's not just me saying this. We, the learning professionals in the field, all feel this sort of sense of depression and, you know, can't get stuff done. And we really are not doing what we want to be able to do. We know we should be doing more. So many people, every time I do a conference talk, people come up to me and say, “Yeah, we'd like to do more, but you know, the organization won't let us” or whatever.

So I set out to create a better model. And it’s gone through like 12 iterations; I got some of the smartest people in the field to look at it and help me make improvements. It's now in sort of a steady state, after those 12 iterations. And it’s really based on the idea that our models ought to…well, they ought to push us to do the right things, and push us away from doing the wrong things. Simple as that.

Convergence Training: Yeah. That’s related to the behavioral economics nudges that you mentioned earlier.

Dr. Will Thalheimer: Yeah, exactly. So if a model doesn't work for us, we should find a better model. Now when you evaluate a model, you can't expect perfection; you’ve got to compare it to the other things that are available to us. Because models are simplifications of really complex stuff. So we need to create a model that's simple enough, but precise or accurate or valuable enough, that it does us good, or does us more good than other models are doing. So, simple as that.

So I'm just going to show people the model here and maybe say a few things about it.

So the model on the left here, you can see it's a one-page model. It's got eight tiers. You can see there's red at the bottom. And I know this color coding isn't universal worldwide, but I made red mean bad. So this is just measuring attendance--not that these things are bad to measure, but they can't validate our success in learning. They can't tell us that.

And then there's sort of some middle levels here that are mediocre, then there's some things that are better to do. And I'm going to go into all that later. But basically, it's eight tiers.

And there's a 34-page report that goes along with that. We will put a link to the report if you want to read it before the next interview.

And yeah, it's had a lot of success. Unfortunately, I'm not a very good businessperson, so I give it away for free, and I don't even collect people's email addresses. So I don't have a very good sense of what people are doing with it. I mean, people have told me what they're doing with it. But, you know, that's only sort of anecdotal evidence. So there's a whole bunch of people doing stuff with it now.

I'm thinking about it, thinking about ways to use it. You know, when I wrote the 34-page report, I anticipated ways to use it. But now people are coming back to say, “Oh, well, we're using it this way, this way, this way.” In fact, Matt Richter, my podcast partner (we just started a new podcast called Truth in Learning), just wrote an article on seven ways to use LTEM. He loves it. I think he loves it more than he likes me.

Here's Matthew Richter's article on using LTEM.

But anyway, that's it. That's LTEM in a nutshell. I don't want to go in too much in depth because, this is a teaser for our next interview.


Parting Thoughts (for now) on Learning Evaluation

Convergence Training: I'm glad you mentioned Matt Richter's article about using LTEM. I was going to call that out as well. I think two quick additional questions for you before we sign off. We're talking about training evaluation, do you have insight into how often people should be doing this?

Dr. Will Thalheimer: How often? Once every five years I think would cover it.

Convergence Training: Okay, fair enough.

Dr. Will Thalheimer: No, no, no, I'm joking. Well, we should be evaluating all the time.

You're giving me license to go off here.

Convergence Training: Go ahead.

Dr. Will Thalheimer: Well, okay, so number one, we can think of evaluating a particular training course or a learning effort. Right? And we can also think about getting better at that one.

But we can also think about our whole system or our whole curriculum, we can think about evaluating those things as well.

And then we can think about, you know, for some of our really strategically important learning programs…now, we probably ought to be piloting those before we roll them out…and so we should evaluate those.

And we want to evaluate…we know that we tend, in our organizations, to have similar designs, right? So we don't need to drill down in every one of our learning programs and evaluate it to the nth degree. We may just need to pick out a few versions, evaluate those like crazy, and we're going to learn lessons from that.

So yeah, there's a lot in that answer.

Convergence Training: That's a good start, if nothing else.

And then my second question for follow-up is, when we talk about learning evaluation and training evaluation, I hear people talking about it and asking questions about it all the time, but at the same time I also hear people talking a lot about learning analytics. I wonder if you could explain—are they the same thing, different things? What's the relationship here?

Dr. Will Thalheimer: Well, so the connotation of learning analytics is that it's about data and data science and all that kind of stuff. xAPI is something people are using to gather more information. We still haven't quite nailed that yet, because it's really hard.

You know, one of the things I mention in my measurement workshops is that sometimes we tend to measure what's easy to measure, as opposed to what's important to measure. Right? That's why we do smile sheets. That's why we measure attendance, because those are easy to measure.

Measuring things like, are we giving our learners enough realistic practice? You know, that's much harder to measure. Or, you know, do our people feel inspired or motivated? That's much more difficult to measure.

So, you know, we have to watch out for those kind of things in general.

Convergence Training: All right, great. I like that you mentioned measuring things that are easy. The last point I was going to make is, you know, early in the discussion, you talked about the most common learning evaluation method being measuring course completion or attendance, and you've got that at the bottom of LTEM. And since we are talking about behavioral economics a lot, I think that's kind of classic Daniel Kahneman, Thinking, Fast and Slow: I'll go for the easy measurement that doesn't actually answer the question I'm trying to answer.

Dr. Will Thalheimer: Right.

So I mean, you know, I've actually seen a lot of problems with this focus on data analytics. I'll give you some examples. I was sitting in a room--this was at a major global consulting firm, you know, a big consulting firm--and I had just given my presentation on how to make better smile sheets, and then I was sitting in the back while their team was showing the new way that they were going to report on their data.

And so they showed this new, very fancy dashboard with this great interface. I think it was developed in Tableau or whatever. And the leader of this group stood up in the back of the room and said, “This is great. This is exactly what we need. This is the new and this is….”

And I, because I had talked with the team before, knew that the way they were getting their data was completely crappy, that it was bad data, but it looked really good. So you know, there's that kind of issue.

There's also, you know, we can measure a lot of things with like xAPI now, but we tend to measure clicks. Which is fine, we need to know what people are doing so we can make some things better. But we also need to measure the things that are important as well.

Convergence Training: So, an open discussion there.

I know that I saw you passing along a link to a free textbook on statistics, which I think is a great place for people in L&D to start if they want to learn about crunching learning data. So I'm going to try to track down that link and include it at the end of this article.

Here's a link to those free textbooks on statistics. If you're like most L&D people, and in fact like most people, you're probably not very good with statistics. Here's a great chance to up your game for free.

Dr. Will Thalheimer: Oh, that's great. Yeah.

Convergence Training: Alright, cool. So, Will, thank you a million. The slides are great, by the way. Thanks for putting so much work into this. You put more work into prepping for this than I did.

And for people out there, could you just let them know like how they can connect with you and follow you and about any things you’ve got going on now?  Any online courses, any conference presentations, any podcasts we need to know about?

Dr. Will Thalheimer: Sure. I put together a couple slides just in case you asked this to me.

Convergence Training: Thank you, cool.

Will's Summary on Learning Evaluation Models

Dr. Will Thalheimer: Yeah. Oh, I didn't really summarize what I said. Let me do this summary and then I'll jump into that.

I really think we need a more muscular approach to evaluation. And I mean, one that's more effective, that really gets us better results, particularly to help us make those most important decisions and get the resources and support we need. I certainly don't have all the answers to this. But you know, together, we need to work on this.

I've studied this learning evaluation stuff, but I still feel like there's so much more for me to learn. So I think as a field, we need to do this, because what we've been doing over the past decades is not working very well. And now, we're doing a little better, but there's still so much to get better at.

So one of the things I want to emphasize is that I've just created this new course. It's an online, self-study course. It's called Presentation Science: How to Help Your Audience to Engage, Learn, Remember, and Act. I’m really thrilled about this. It’s going great; the first people who have gone through it…one person said, “This is the best online course I've ever taken, ever.” So I’m kind of thrilled with that. You can take a look at that at presentationscience.net.

And here’s a bunch of links to articles that I'm going to give you, Jeff, that you can post for people so that they can take a look at these things. My updated smile sheet questions, there's this nice debate about the Kirkpatrick model that Clark Quinn and I had, a bunch of stuff there that people may be interested in. And then just to summarize, there's my contact information if people want to get in touch with me.

Here are a bunch of links from Will: 

Learning Evaluation Background:
https://is.gd/evaluation54mistakes
https://is.gd/Guild18
https://is.gd/originator

Major Learning Evaluation Models:
Roger Kaufman: https://megaplanning.com/
Jack & Patti Phillips: https://roiinstitute.net/
Rob Brinkerhoff: http://www.brinkerhoffevaluationinstitute.com/
Jim & Wendy Kirkpatrick: https://www.kirkpatrickpartners.com/
LTEM Report and Model - https://www.worklearning.com/ltem/

Why Not Brinkerhoff Success Case Method:
https://www.worklearning.com/2018/06/27/brinkerhoff-case-method-a-better-name-for-a-great-learning-evaluation-innovation/

RedThread Research on Evaluation Models:
https://redthreadresearch.com/2019/03/08/learning-impact-literature-review-2-2/

Other Evaluation Articles by Will Thalheimer:
Katzell's Contribution (Kirkpatrick NOT Originator) - https://is.gd/Katzell
Updated Smile-Sheet Questions 2019 - https://bit.ly/SSquestions2019
A Better Net Promoter Question - https://is.gd/replaceNPS
Be Careful When Benchmarking
Debate About Kirkpatrick Model 
Better Responses on Smile Sheets - https://is.gd/betterresponses
Levels of Evidence for the Learning Profession - https://www.worklearning.com/2019/11/20/levels-of-evidence-for-the-learning-profession/

Will's Smile Sheet (Learner Survey) Book:
https://smilesheets.com/

Will's Gold-Certification Course on Performance-Focused Smile Sheets:
https://www.worklearning.com/academy/

Will's New Presentation Science Course:
https://www.presentationscience.net/

Convergence Training: And what's the name of the smile sheet book for people?

Dr. Will Thalheimer: Oh, it's Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

Convergence Training: Cool. And a lot of the really super-great white papers and research stuff, that's at the Will at Work blog. Is that correct?

Dr. Will Thalheimer: The blog has now been incorporated into Work-Learning Research. So yeah, you can just go to WorkLearning.com.

Convergence Training: And do I go there as well for the upcoming podcasts?

Dr. Will Thalheimer: Thanks for asking. For the podcast, we have our own website, it's called Truth in Learning.

But anyway yeah, it's out there. If you want to follow my stuff, if people are intrigued with what I'm doing, the best thing to do is to sign up for my newsletter because I really publish things there and I can remind people of what's going on.

Convergence Training: And I want to call out that Will's active on Twitter and LinkedIn as well, and you can follow him there.

Dr. Will Thalheimer: Yeah, yeah. I'm liking LinkedIn more and more.

Convergence Training: Cool. Well, everybody out there, I'm sure you enjoyed that. Stay tuned. We'll be coming back to talk in lots of detail about LTEM shortly. And Will, thanks a lot. We appreciate your time and look forward to talking again.

Dr. Will Thalheimer: My pleasure, Jeff. Thanks so much.

 

Want to Know More?

Reach out and a Vector Solutions representative will respond to help answer any questions you might have.