In a recent discussion with learning researcher Dr. Will Thalheimer, we discussed four common learning evaluation models and mentioned that, in addition, Dr. Thalheimer had recently created his own, called LTEM (which he "workshopped" with other leaders in the field and which he's now iterated 12 times).
In the discussion below, Dr. Thalheimer explains his LTEM learning evaluation model.
We'd like to thank Dr. Thalheimer for taking time to talk with us about this and for all of his contributions to workplace learning, including his work on smile sheets, spaced practice, conference presentations, the effectiveness of elearning, evidence-based training & learning myths, and lots more.
If you'd like to watch the recorded video, we've got that for you immediately below. If you'd prefer to read the transcript, that's below the video. Enjoy and share your thoughts in the comments section.
Convergence Training: Hi, everybody, and welcome. Jeff Dalto, back once again for the second of two discussions with Dr. Will Thalheimer.
A quick reminder, most of you probably know who Dr. Will Thalheimer is, he's kind of a learning research legend. He owns Work-Learning Research, does a lot of great stuff for free, has a lot of great courses, has a great book on Performance-Based Smile Sheets, and I recommend you check all that out.
And in the first discussion, we were talking about some commonly used training evaluation models. In particular, we talked about Kirkpatrick, Phillips, Brinkerhoff, and Kaufman. And we're back today to learn about a model that Will made himself recently, which is called LTEM. So with that, let me say hi to Will, and thank you.
Dr. Will Thalheimer: Hey, Jeff, delighted to be here. We had fun last time; it should be good this time.
Convergence Training: Cool. I'm looking forward to it.
So as I mentioned, last time we went through the pros and cons of four different common training evaluation methods, and I just explained that you have created your own model as an alternative, and that's called LTEM.
Can you tell everybody what you were thinking and what you were trying to do when you created LTEM? And why do we need yet another learning evaluation model?
Dr. Will Thalheimer: Sure.
So as I mentioned in our last session, there's a lot of frustration that we practitioners have in regard to learning evaluation.
And not only that, but our learning evaluation gurus, Kirkpatrick, Brinkerhoff, etc., over the years, have been talking about how we're not really making progress, we're not making enough progress. So there's a lot of that.
And I think we're going to use the "a picture paints a thousand words" kind of thing to get us into this, because I think the background is really important.
So as I mentioned before, and we talked about this in the last session, the Kirkpatrick four-level model has drawn a lot of criticism, including from folks on the research side; you can see some evidence of that here.
I didn't show you this last week, but I love this quote. It's from Towards Maturity, a UK-based organization, and they did a global survey of L&D leaders; 97% of those leaders said they wanted to improve the way they gather and analyze data on learning impact. So a lot of issues there.
97% want to improve the way they gather and analyze data on learning impact.
Towards Maturity Report, Making an Impact: How L&D Leaders Can Demonstrate Value. Dixon, G., & Overton, L. (2017)
I mentioned last time that I've compiled 54 common evaluation mistakes. So clearly, we're not doing as well as we might be doing.
Now there's one other thing that motivated my interest in creating a new evaluation model: we really ought to align it with how learning works. So I'm going to do a little deep dive into the learning and forgetting curves to put evaluation in perspective.
Alright. So this is during learning over here, on the left, and we've got after learning over here. Toward the top of this, we are representing more remembering, and toward the bottom we're representing less remembering.
So when our learners learn, they're going to go up a learning curve, and then they're going to slide down, most likely, they're going to slide down a forgetting curve. And so from a learning design standpoint, we have to pay attention to this.
But let's think about what this means in terms of learning evaluation. First, though, the big picture. So we do our learning curve, and I've drawn one here, but it could be steeper or less steep. And then our forgetting curves could be really steep, people could forget really rapidly, or they could forget less rapidly. But also, what we ideally hope is that when people go back to their jobs after their training, after their learning intervention, after their e-learning, they'll continue to learn, and hopefully we've set them up in that way.
So this future after learning really depends on two things, and I'm oversimplifying here, but basically on the design of the learning and on the after-learning follow-up. People who are interested in learning and thinking more about the learning and forgetting curves can go to this YouTube video.
And let's think about this in terms of learning evaluation now. So we do an assessment at the end of a learning intervention, at the end of a training, what does that tell us?
Well, we've got to be a little bit careful, because when we measure there, it's in some ways a biased metric. And we know that probably a lot of people are going to be forgetting. So it sort of makes us look better than we are; it gives us feedback that we're doing really well when maybe we could be doing better.
But even more importantly than that, what we're doing is taking a snapshot, at that point in time, of the level of understanding of our learners. What it's not telling us is whether they're going to be able to remember, or whether they're going to be able to apply, what they've learned.
So evaluating right at the end of learning, or during learning, or right after learning really sets us up to miss out on some things. If we want to get a sense of remembering and application, ideally we're going to use delayed tests. Okay? That's really the only way to be sure. Now, we can get some hints during or right after training, but we can't know for sure. So thinking about evaluation in terms of how learning works is really important, so that when we go forward and design our learning evaluation models, they become most useful to us.
Convergence Training: Gotcha.
Dr. Will Thalheimer: So. Let me just summarize what I'm saying here, why I created this new learning evaluation model.
We have a history of frustration in learning evaluation. There are fundamental biases in the way we measure learning. For example, what I just showed you, we tend to measure at the end of learning. And that's really biased. There are lots of common mistakes we make. And the dominant evaluation model that we're all using is flawed. So lots of reasons there.
And again, I mentioned this last time, but we're really trying to create an evaluation approach that's more muscular, that really gives us more benefit, particularly in terms of helping us make our most important decisions, and helping us get the resources and support we need. So that's sort of the background on why I decided that maybe we need to think about a new learning evaluation model.
Convergence Training: All right, great.
I had forgotten from the first discussion how much work you put into the visuals here. So thanks again for that, that was fantastic. And for those listening out there, Will mentioned Towards Maturity, and I just want to give a hat tip to them, they do a lot of great work.
And with that, Will, your LTEM model has eight tiers, represented sequentially in your graphic. Can you walk us through the eight tiers of LTEM, please, and maybe tell us what LTEM stands for?
Dr. Will Thalheimer: Sure. I'll do both. And again, since a picture paints 1,000 words, let me go back to the presentation.
So LTEM stands for the Learning-Transfer Evaluation Model. And it is a one-page model with eight tiers in it, and I'm going to go through those in a minute. But it's also got a 34-page report that goes along with it to give people a sense of what it means, why it was developed this way, and why it works this way. So that's the big picture.
Let me show you now...and let me also emphasize that this was an iterative process, it's gone through 12 iterations. It's sort of a stable iteration now. But I didn't do this alone either, lots of people, some of the smartest people in the learning and development field, you can see their names listed here, gave me a lot of good feedback and helped me make this better over time. So this is not me alone.
Okay, one of the things I like to emphasize when I talk about the LTEM model is this subtitle: Sending Messages to Enable Learning Effectiveness. And that is based on my thinking that when we have a model, we ought to make sure that that model is useful to us, that it pushes us, nudges us in the right directions to do good things, to think about things in the right way versus leaving things out.
So you can see in the one-page model, notice at the bottom, our tiers one, two, and a little bit of three are in red. And I did this with the idea that red is bad. And I know globally that that's not always the case, in some cultures red is better, but that was the idea there. So these things at the bottom aren't as good. And then the yellows are mediocre, and the greens are better. Okay, so that's the big picture. Now, I'll go through each one of the steps.
Convergence Training: Can I ask a quick question about the color coding?
Dr. Will Thalheimer: Sure.
Convergence Training: In a Western, American, traffic signal way, if I'm told that red is bad, I'm thinking I shouldn't do it. But I assume that's not your point here. I think your point is maybe don't stop here or don't rely fully on this. Is that correct?
Dr. Will Thalheimer: That is correct. Well, it's a little complicated because yes, you can continue to do these things, but you don't want to do these things as a way to validate your success.
Let's look at attendance, tier one, as an example. So, a learner is going to sign up, start, attend, and complete a learning experience. And look at the fine print here in the blue: "This metric is inadequate to validate learning success because learners may attend but not learn." Now, that doesn't mean that we shouldn't check attendance or monitor our completion rate. But what it does mean is we should not use that as a measure of our success. You know, oftentimes you see this in some of the awards that organizations are given: "Oh, well, we had 20,000 learners go through our learning this year." Well, that's really not good enough, right? Because people can attend or complete but not learn.
Convergence Training: Fair enough.
Dr. Will Thalheimer: It's the same with tier two, activity. So attention, interest, and participation: we can measure these things, and sometimes that's really good to do from a formative evaluation standpoint. But what we don't want to do is use these as measures of success, because, for example, our participants could participate fully, be really, really engaged, but they still might not learn, or they might learn the wrong things, etc. So, again, tier two is not good enough to validate our success.
Tier three, you'll notice, is learner perceptions, and it's divided into two parts. The first part of tier three, 3A, is when we query the learner or survey them in a way that reveals insights related to learning effectiveness. So we target measures of learner comprehension, whether they've had enough realistic practice, motivation to apply, after-learning support, etc., and know that these measures hint at learning outcomes but should be augmented with objective outcome measures.
And 3B is when we query people in a way that does not reveal insights on learning effectiveness. So this is things like learner satisfaction, course reputation, etc. And we know from the scientific research that when we ask these types of questions, the results of our smile sheets are not correlated with our learning results.
So basically, tier 3A can be represented by performance-focused smile sheets, and tier 3B by traditional smile sheets. Both of these are still tier three; they're still not good enough. So they're either in the red danger range or the yellow mediocre range.
Convergence Training: Will, if I can interrupt for a second, just to plug your book. Even if this is the first level of yellow, Will does have a great book on these performance-focused smile sheets that I recommend people pick up, and then go from red to yellow there in tier three.
Dr. Will Thalheimer: Thanks. And notice that even the guy who wrote the book on smile sheets says they're not enough. Okay? I want to make that clear.
Okay, so tier four is knowledge. And notice that learners are answering questions about facts and terminology. So knowledge recitation is during or right after the learning event, and knowledge retention is after several days or more. So you'll note that what the LTEM model adds here is really the distinction between measuring understanding and measuring remembering. And note here again, even if you measure retention, this is usually inadequate, because remembering terminology does not fully enable performance. Okay, so sometimes knowledge is necessary, but it's hardly ever sufficient.
Okay, so now let's go on to the green. So tier five, decision-making competence. Learner makes decisions given realistic scenarios. So again, we've got the distinction between understanding and remembering.
This is where we're using things like scenario questions, case studies, it could be realistic role plays, simulations, etc.
Convergence Training: In tiers four and five, you're making a distinction between "during or right after" and "after several days." Do you have any input on the length of that gap?
Dr. Will Thalheimer: Well, yeah, and I get into this in more depth in the report that goes along with this. You know, ideally, we hope people, we train them, and we hope that they can remember what they've learned for at least a week or two, right? So that they can begin to implement what they've learned.
You can see in the model I say several days or more. I usually recommend three days or more to be able to measure remembering. The thing is, we have a little logistical problem: we might not want to wait a month, because we've probably lost our learners; they've gone back to work. So there's a little bit of logistical balancing that goes on here.
Convergence Training: Alright, fair enough. Thank you.
Dr. Will Thalheimer: So, tier six is task competence: the learner performs relevant, realistic actions and decision-making. This really ramps it up. So tier five is decision-making, and tier six is decision-making and actually doing. Okay, so this is even harder. Again, it has the distinction between understanding and remembering.
Let me give you an example of the difference between five and six.
So I used to be a leadership trainer, and I would teach my trainees, my managers, that they should bring their direct reports into decision-making. And we, it wasn't just me but our organization, ran computer-based simulations that had people make decisions. So they would make decisions about what to do in a given situation. When we did that, that was sort of tier five. But what we weren't measuring was: did they use the right tone of voice? Did they use the right words? Did they use the right body language? If we were able to measure that, that would bump it up to tier six.
Convergence Training: All right, good. Thank you.
Dr. Will Thalheimer: Okay, so tier seven is transfer: the learner uses what was learned to perform a work task successfully, as clearly demonstrated through objective measures. I'm not going to get into the difference between assisted and full transfer; it's not that important here.
And then tier eight is the effects of transfer. So we take what we learned, we transfer it, and then we get some benefits, hopefully. And these can be outcomes that affect learners, coworkers, family, friends, the organization, the community, society, and the environment.
Okay, so you can see at tier five, we're going to be able to certify that people have decision-making competence, tier six certifies task competence, tier seven, we're going to certify transfer. And tier eight, we're going to certify the effects of transfer.
Check out Will's research report on learning transfer.
Convergence Training: Will, if I can ask you a question on tier eight. I think a lot of times, when people aren't doing evaluation at all, or if they're just evaluating course completion or attendance, they're told by other learning professionals, "Hey, you've got to align with business goals." And that's where that conversation stops. And obviously, your tier eight is going well beyond business goals here.
I wonder if you could just talk to us about why you thought it was important to go beyond having your training related to the business goals?
Dr. Will Thalheimer: Well, when I look at how people mostly do evaluations, particularly based on the four-level model, you know, the fourth level is results. But everybody translates that into business results. And it's like, "Whoa, we are forgetting the learners! Don't we want to do something for them, to help their careers, etc.?" And then there's other effects that can happen as well. I've been particularly struck by the work of Roger Kaufman, who talks about sort of the societal impacts.
So I wanted to make sure that the LTEM model would help remind us of some of these other results that we might have.
Convergence Training: Okay, great. Thank you. Anything more on a high level overview of LTEM?
Dr. Will Thalheimer: Well, yeah, just to clarify. So, tier seven and eight, this is actual work performance. And tiers six, five and down, these are performance in learning. Sometimes these things overlap, but it's helpful to sort of separate them conceptually.
So I talked about how a model should send good messages. Well, in the report that goes along with it, I've got like 25 messages that the LTEM model is intended to send. And here are just a few of those. I'm not going to speak through these, but I'll let people look at them.
Here are the Key LTEM Messages Dr. Thalheimer listed on his screen in the video:
Goal of learning is to create transfer and positive learning benefits.
Measuring learner perception is inadequate to validate success. Focusing on effectiveness is better than focusing on satisfaction or reputation.
We should consider the many effects of learning transfer, not just organizational results.
Measuring attendance or learner activity is inadequate to validate success.
We should evaluate our success in supporting remembering, not just comprehension.
Measuring knowledge is generally inadequate to validate success.
Measuring decision-making or task competence during learning is better than measuring knowledge.
And one and two are sort of the same as the four-level model. But some of the other ones really go beyond what the four-level model does and the messages that get sent from it.
Convergence Training: Right. Do you feel, when people are talking about learning organizations and learning organization theory, that your LTEM message, especially tier eight and your expanded vision of it, is more in line with what people mean by a learning organization, instead of just focusing on business goals? Is that part of what you're saying?
Dr. Will Thalheimer: Well, there's a lot to the learning organization idea. And it's been a while since I read in that space.
But maybe you should say some more on that...
Convergence Training: I think an obvious one is, if you're training and all you're measuring is progress toward your business goal and attainment of that, and you're ignoring, like you're talking about, what the learner gets out of it, for example, that doesn't seem like your primary focus is having a learning organization, and helping workers, and sharing knowledge, and everything. It's a much more simplistic, narrow, and I would say organizationally selfish view of learning, which ultimately might mean your organization isn't going to be agile, won't be flexible, won't be prepared for the future.
Dr. Will Thalheimer: Yeah, well, you know, aligning incentives is really important. And if all we're focused on are our revenues and costs, for example, there are going to be some big holes in what we're doing. So yeah, I agree with you. LTEM was designed, at least those tier-eight recommendations, to give a more holistic view of all the stakeholders that we have and all the interests that we have.
Convergence Training: Great, thank you.
Dr. Will Thalheimer: Okay, so I think that's it for answering that question.
Convergence Training: Alright, cool. So, to continue to hammer on that business goals thing a little bit. You mentioned that that's usually where people end with what most people call the Kirkpatrick four-level training evaluation model. And I know you've done some pretty interesting investigative journalism, so you don't call it Kirkpatrick, you call it the Kirkpatrick-Katzell Model, and I'll create a link to the backstory on that. But could you maybe just tell us the high points of how you think LTEM is different in comparison to the Kirkpatrick four-level model?
Dr. Will Thalheimer: Sure. In fact, people ask me all the time, and I have a little diagram to help make sense of that.
Convergence Training: Alright, cool.
Dr. Will Thalheimer: So we know the four levels--reaction, learning, behavior, results--there's Raymond Katzell, a picture of him. And you can see LTEM on the right there.
So level one, reaction, is similar to tier three in LTEM. And obviously, LTEM makes the distinction between different ways of querying people.
But also notice the big gap or the blind spot that the four-level model has. It doesn't send the message that just measuring attendance and measuring activity is inadequate. And it's my belief that a good model should tell us what to do and what not to do. And you can see attendance is like the number one thing we do when we measure evaluation. So it's pretty clear that we ought to have a model that tells us measuring that is inadequate.
Level two of Kirkpatrick is similar to knowledge, decision-making, and task competence. So as I mentioned in our last session, one of the problems with the Kirkpatrick-Katzell model is that level two is all smashed up into one bucket. And what we tend to do when we need an evaluation is think, "Oh, we just need a knowledge check." Well, the LTEM model tells us that's not good enough, right? We can measure other things as well that are more potent, more powerful.
Level three of Kirkpatrick is similar to tier seven, transfer.
And level four is similar to tier eight. Except, obviously the LTEM model has a few more specifics.
That's the comparison in a nutshell.
Convergence Training: Okay, good. Thanks.
So people might be thinking, "I'd like to try this LTEM model." Do you have any tips for them about how to actually start using it?
Dr. Will Thalheimer: Absolutely, absolutely.
So I'm going to share four ways to use LTEM. But I'm also going to recommend that people go look at an article written by Matt Richter, who's come up with many other ways (see Matt Richter, Seven Ways to Use LTEM).
Those four ways are: assessing your evaluations; learning design & development, working backwards from your goals; credentialing; and spurring improved learning. I'm going to talk about all four of these things.
So the first one is assessing our evaluations. What an organization can do is use LTEM as sort of a map and do a gap analysis.
"So where are we now? Ah, this year, we're using traditional smile sheets. And we're measuring attendance, those are our things. Well, maybe next year, in 2020, we're going to ramp it up, we use performance-focused smile sheets, and we'll begin using scenario questions," for example. "And then maybe the following year on our strategically important courses we'll measure transfer and the effects of transfer." So very simple. Where are we now? What could we do be doing better? A really powerful method, I think.
Convergence Training: Yeah, I like, too, that you're suggesting doing it in an iterative fashion, so it's not too intimidating; you don't feel like, "I have to go out and do all this tomorrow."
Dr. Will Thalheimer: Well, that's one thing to emphasize, and I'm glad you brought it up, because a lot of people look at this and say, "Well, do we have to do all these things?" No, absolutely not. This is just, you know, let's do better than we were doing before, right?
One of the things, when we talk about evaluation, we have to always balance the benefits we get from evaluation with the costs, and the time, and the effort. And so we don't want to be doing everything around evaluation. We want to be doing evaluation that's targeted, that's high priority, that doesn't waste a lot of resources, but that gets the information we need to be effective.
Convergence Training: Great, and that's obviously why you mentioned doing the top-tier things for your strategically important initiatives, I think.
Dr. Will Thalheimer: Yep, yep. Okay, the second thing we can do is think about using LTEM in our learning design and development process, working backwards from our goals.
So say we know that we want to increase sales by 5%. And we've done a good needs analysis, and we've figured out that if we could get our sales managers to coach better, that would be a really good way to do that. So this is a strategically important program, we want to make more money, and now we can think about how to evaluate backwards.
So we're going to measure transfer by measuring how well our managers are coaching. During the learning, we're going to measure simulated coaching: we'll have an exercise that gives people some scenario questions so that they can make decisions and show us how competent they are. We'll have some if-then decisions to check their coaching knowledge, and we'll use a performance-focused smile sheet.
So again, just working backwards from our top-tier evaluations.
Convergence Training: That's great. Cool.
Dr. Will Thalheimer: Another way to use LTEM is to think about credentialing. So we've got a whole bunch of people that provide courses, right? And we want to make sure, you know, some courses are better than others. So it would be nice to have a way to credential how strong those courses are.
You also have a bunch of learners going through our courses. It'd be nice to know: did they go through really strong courses, or did they go through sort of weak courses where they just got some awareness?
So we can actually use LTEM as sort of a guide to credentialing. I'm going to use one of my workshops as an example. So, decision-making competence, this is a tier five. So if I can provide a course that has a good test of decision-making competence, then I can basically let people know that this course is at LTEM tier five.
So the course I'm thinking of is my workshop. It's a gold-certification course on performance-focused smile sheets; it's taught online as self-study, but it has two assessments toward the end of the course where people are given scenario-based questions. And they have to answer those questions to a certain criterion. And when people do that, we can say something like this: "Those who met the requirements performed successfully on an LTEM tier-five assessment, showing competence making decisions in challenging, realistic scenarios."
Now, if you think about a course like that, we could compare it to a course that only provides, for example, tier-four knowledge. And so we can use LTEM as a way to credential both our courses and perhaps our learners as well.
Convergence Training: Yeah, kind of a more sophisticated badging.
Dr. Will Thalheimer: Yeah, a way to badge, but a way where we know the badges actually mean something.
Convergence Training: Right, right.
Dr. Will Thalheimer: If people want to read about this, there's this link here as well.
And the final one I'm going to share with you is using LTEM to actually improve learning designs. And this really comes out of the work of someone who is doing a dissertation using LTEM. What they're doing is introducing LTEM to a hospital learning and development group. And the idea is that when you introduce LTEM, it gives the organization a sense, or an ability, to actually set evaluation objectives before they set their learning objectives. So they look at LTEM and go, "Oh, you know, we'd really like to have some task competence here; we'd like to measure transfer." And by doing that, the idea is that the design of the learning is going to be better.
And then the learning that's actually deployed to the learners is going to be more effective. So that's the fourth one in a nutshell.
Convergence Training: All right. Anything else on that, Will?
Dr. Will Thalheimer: Well, yeah. Just last month, a few weeks ago, Matt Richter, he's my podcast partner, but he's really come to love LTEM, and he wrote an article on it. And you can see that he uses it not just for learning evaluation, but for instructional design, for training game design, for coaching, for performance consulting, for keynoting and presenting (to ensure the focus is on meaningful outcomes, not just infotainment), for sales and business development, etc.
One of the stories he told me was, "I use LTEM, and I talk with the folks that want me to build training for them and I say, 'Okay, where do you want to end up? Which outcomes do you want?' And then we talk through that. So it's a really good way to get some conversations going with our stakeholders as well."
There's a link to that article as well. I think that's on LinkedIn.
Convergence Training: Yeah, it is, I read it. And I found it interesting.
And just to call out number six: this is something I deal with a lot, as I do a lot of speaking at conferences, and I think Will now has a course on how to give a more effective conference presentation, which I recommend you check out. But I love the idea of trying to ensure meaningful outcomes in your conference presentations as well.
Dr. Will Thalheimer: Right.
Convergence Training: Okay. So this was kind of a unique experience for me. By the time I became aware of you, Will, you had already done certain things, and they were out there. But I feel like I watched LTEM bubble up. If I recall correctly, I saw some introductory articles on it, and suddenly it was out there: there was a white paper and there was a model. I can't promise I stayed up with each of the 12 iterations.
Dr. Will Thalheimer: Well, some of those were private. Let me tell you.
Convergence Training: Fair enough. Fair enough.
But it was interesting to watch it kind of go out in the wild and watch people start to talk about it. And I thought it was really cool, and I noticed other people did as well.
But I wondered if you could tell us a little bit about how it's been received in the learning profession?
Dr. Will Thalheimer: Well, it seems to be going very well. I don't really have a way to track it, you know; the LTEM model and the report are available for free, and I don't even take an email address from people. So, you know, it's out there.
And I keep hearing about it, though. In fact, just today, someone wrote something about LTEM. And then somebody else said, "Yeah, I'm going to do my dissertation on this." So, you know, it seems like there are people doing dissertations, there are a lot of organizations talking about it, there are even vendors who have now incorporated it, saying, "We compared what we're doing to Josh Bersin's model and Will Thalheimer's LTEM," you know.
So it seems to be resonating. You know, I'm sure the dominant four-level model is still bigger out there. But I think, I'm hoping that over time, that LTEM model really provides value for people.
Convergence Training: And yeah, I think it does both at the gut level of just asking people to reconsider what they're doing and rethink some of the things they're doing, and then also giving them either a useful model or a potentially useful one. I think it's useful.
And I've seen it bubble up too like...I might have something to do with this part of it...but there's an industrial hygienist who teaches safety and health management at Central Washington University, who's teaching LTEM to her students now.
Dr. Will Thalheimer: Wow,
Convergence Training: Which is a little far afield. And I admit I might have played a role in that but...
Dr. Will Thalheimer: Thank you.
Convergence Training: Well, my pleasure, but my point is, I think there are probably a lot of those things going on, a lot of LTEM tentacles out there.
Dr. Will Thalheimer: Well, you know, the eLearning Guild's DevLearn Conference was last week, and I didn't go. But several people told me, "Oh my gosh, Will, your LTEM model was in all these sessions." So it's good to hear.
Convergence Training: So cool. All right, good to hear. So for people out there who want to start improving their training evaluation game, any final tips or words of wisdom or encouragement for them?
Dr. Will Thalheimer: Sure.
Well, number one, take a look at LTEM, see what you like about it, and begin using it and trying it out. But, you know, even more important than that is to sort of step back and think about what you're trying to do. What decisions, what actions are you trying to do better? And how will evaluation, the data you get from evaluation, help you do that?
And see, that's the big picture that sometimes we miss. There are a lot of people out there who gather all this data, and then they tell me, "Well, we've actually got all this data, but we never even look at it." What's the point of that? We do evaluation for a reason. Evaluation costs us in terms of time and resources, so we're trying to get some benefit from it. And, you know, we have to be strategic about that.
We ought to be doing that as sort of a part of our professional responsibility, if you will. You know, you put something out there, we want to make it better. We want to be as effective as possible, we want to develop sort of cycles of continuous improvement, going back to your learning organization.
Convergence Training: Well, I'm glad you made that final point about those virtuous cycles of continuous improvement. And I think that's something for everyone to keep in mind, going back to what you said earlier about how an organization can choose, over the next three years, to go a little bit higher up the LTEM model. Like, you don't have to have it perfect today. But if we're, you know, fighting the good fight and trending in the right direction, we're doing well.
Dr. Will Thalheimer: Yeah, I think that's very important.
Convergence Training: Cool. Okay, well, as always, thank you.
We'll have this out there in the wild soon enough. I've already mentioned that I'm a big fan of the stuff you do, I recommend folks check out your blog and your book on smile sheets and the course on conference presentations.
But in general, what's a good way for people to find you and connect with you and follow you?
Dr. Will Thalheimer: Well, first, before I share some links with people, let me thank you, Jeff, because you're out there all the time, not just emphasizing my stuff, but a lot of research informed stuff. So, you know, I really appreciate it. And know other people do as well. So keep up the good work on that.
Convergence Training: Oh, it's my pleasure. Thanks for doing all the good work so I can share it.
Dr. Will Thalheimer: Okay, so here's some links and I think you're going to share these with people so I won't belabor this.
These are the links Will shared on his video display:
LTEM Report & Model - https://www.worklearning.com/ltem/
Katzell's Contribution (Kirkpatrick NOT Originator) - https://is.gd/Katzell
Updated Smile-Sheet Questions 2019 -- https://bit.ly/SSquestions2019
A Better Net Promoter Question -- https://is.gd/replaceNPS
Be Careful When Benchmarking -- https://is.gd/DoNotBenchmark
Debate About Kirkpatrick Model -- https://is.gd/epicbattle
Better Responses on Smile Sheets -- https://is.gd/betterresponses
Newsletter Subscription -- https://www.worklearning.com/sign-up/
Here's a couple (see above).
There's some contact information (www.worklearning.com)
So those are some great ways to be in touch. And again, there's my newsletter here that keeps you informed about stuff.
Convergence Training: Alright, cool. Thanks. Well, for everybody out there again, this was Dr. Will Thalheimer of Work-Learning Research. Go out there and get better at training evaluation. And don't beat yourself up if you're not perfect. You know, none of us probably ever will be.
Dr. Will Thalheimer: And I'll do one other call out. Please let me know how it's going. I would love to hear your stories about your successes, your lessons learned, your obstacles, whatever. Be great to learn what you're up to. Now, there you go.
Convergence Training: Good idea. Well, okay. Hey, thanks a lot, Will. I'll hit pause and talk in a second.
Dr. Will Thalheimer: Alright, thanks.
Hope you enjoyed this second of two interviews with Dr. Will Thalheimer. And many thanks to Will.