Safety Metrics & Indicators Reconsidered

There's a lot of talk in the safety world lately about safety measurement and safety metrics. There's a widespread belief that relying on lagging indicators for safety measurement (most especially incident rates) isn't beneficial. And there's also a widespread belief that we should be using more leading indicators, even if it's not always clear which leading indicators to use.

Plus, there are interesting discussions about quantitative vs. qualitative indicators, as well as controversies about things that can't be measured at all.

We sat down with Pam Walaski, who's recently been studying up on and revising her own beliefs about safety measurement, to get a nicely nuanced introduction and some guidance on moving forward with safety measurement (notice in particular her suggestion to use both lagging and leading indicators, but also her different spin on what lagging and leading indicators are, which ones to use, how they should relate to one another, and how they should tie in to business goals).

Feel free to watch the video below to begin soaking it in. If you're the type who'd rather read, we've included the transcript of the discussion below the video.

If you like this discussion with Pam on safety metrics, you might also enjoy our discussion on the same topic with Carsten Busch.

Plus, you might find this guide to the "new view" of safety, including HOP, HPI, Safety Differently, Safety-II, and Resilience Engineering, very insightful. And yep, Pam's in it, and so are a lot of other great safety professionals.

Pam Walaski on Safety Measurement, Safety Metrics, and Safety Indicators

Let's dive right into this discussion with Pam on safety metrics.

Convergence Training: Hi, everybody and welcome.

This is Jeff Dalto of Convergence Training and Vector Solutions back once again with another webcast-audio-podcast. And today we're talking occupational safety and health issues revolving around safety metrics and the use of indicators.

And we have a repeat guest today, Pam Walaski is with us. A lot of you probably know Pam from ASSP and elsewhere. Pam is a Certified Safety Professional, and she’s also the Senior Program Director with a company called Specialty Technical Consultants. And in addition, and I might argue, most cool-ey, she is a faculty member at the Indiana University of Pennsylvania's Department of Safety Sciences.

So with that, I'd like to say hi to Pam and welcome. How are you doing?

Pam Walaski: Hi, Jeff. Good. I'm always glad to talk with you and it’s good to be back.

Convergence Training: Yeah, I always enjoy talking with you and chatting with you offline as well. I'm excited about today's discussion, so thanks in advance. I know you’re doing a lot of work on this indicators and metrics issue. And obviously it's of a lot of interest to everybody out there. So we appreciate your time.

Pam Walaski: And one quick point, though I am technically a temporary faculty member at IUP right now, I'm only just teaching one class. But it is nice to be working with the future of the safety profession. I teach a freshman class. So it's fun to sort of have a role to play with their start in the field, as you said.

Convergence Training: Well, thanks for correcting me, and yeah, I agree. That's a great opportunity to work with young people and help enable their careers and move the safety profession forward. So hats off to you and to all the people out there doing the same thing.

So before we jump in and start talking about safety metrics and safety indicators, for the people listening out there, can you tell them a little bit about who Pam is and what you have done in your professional background?

Pam Walaski: Yeah, I think you caught some of it. I'm currently a Senior Program Director with Specialty Technical Consultants. We're a niche consultancy; we focus on management systems, auditing, and program development. It's a small group of folks, about 15 of us, all over the country. And I've been with them since May, so it's kind of a new gig for me.

Prior to that I was with another consulting firm, but it was engineering consulting, and I was their safety director. And I've also done a lot of consulting over the years.

Right now, my focus is mostly on, as I said, management systems. Risk assessment is an area that I'm extremely interested in and do a lot of work in as well.

And this particular topic, interestingly enough, came up because one of my clients approached me and said that they were trying to change their perspective. They had heard that they needed to be doing things with leading indicators and wondered if I could help them. And I said, “Sure, everybody knows that lagging indicators are bad and leading indicators are good. And so this should be an easy project.”

But I wanted to do a little bit of research first before I sort of sat down with them. And the more I read, the deeper the rabbit hole got. And I found that I really didn't know as much as I thought I did about this topic. I've learned a lot and gained some new perspectives.

And so that's kind of what we were hoping to talk a little bit about today. I climbed out of that rabbit hole, I think. So hopefully I can be helpful.

Convergence Training: Oh, no doubt. Well, we look forward to talking about what you've learned. And Pam did mention she does a lot of work with risk and occupational safety. I know Pam teaches for the ASSP on similar topics. And we have a number of previously recorded webinars with Pam on those topics. We'll link to them in the transcription here, and we encourage people to check those out.

Here are those previously recorded webinars with Pam:

Definitions: Safety Measurements, Safety Metrics, and Lagging and Leading Indicators

But as Pam said, today we're going to be talking about safety measurement and safety metrics, and we're going to be talking about lagging and leading indicators. For people who are maybe new to this kind of language, can you start off by telling us what we're talking about, what those terms safety measurements, safety metrics, lagging indicators, and leading indicators mean?

Pam Walaski: Sure. One of the things that I found is that there really isn't a good definition out there that I think is commonly accepted. But we do often in the profession use the terms leading and lagging indicators to describe how we measure occupational safety and health performance.

And I think pretty consistently, most people would think of a lagging indicator as an after-the-fact measure--something that's already happened, where we're looking at the outcome of something that we've done within our organization. The most common ones that people are familiar with, of course, are the incident rates, the total recordable incident rate, the DART rate, the lost time rate, and those kinds of things.

Experience modification ratings are another commonly used lagging indicator, because they tell you what's happened. And they're used to measure your performance year-over-year against yourself as well as against other organizations. And there's a lot of benchmarking that goes on out there.

Leading indicators have gotten a little bit more popular lately in terms of discussions and articles. And they were thought to be better, and I use that term loosely, because they measure proactive or preventative work that we do as occupational safety and health professionals. The idea is that's really where we should be looking, because driving continuous improvement means looking forward, or upstream if you will, at what we do. And so developing those kinds of indicators is thought to be better than looking at things after the fact.

But interestingly enough, when you read the literature, you don't really get a common definition for either of them, just a common understanding of what they are. And I see a lot of other terms used to describe them: trailing, prospective, upstream, downstream, preventative, proactive, reactive; just a lot of different terms are out there. And so there's some lack of consistency about how we use them. But I think in general, most occupational safety and health professionals would be most familiar with the terms leading and lagging in terms of how we measure.

What We Talk about When We Talk about Safety Measurement

Convergence Training: Cool, and then I guess, if I could just drag you over to safety measurement. What would people talk about when they talk about safety measurement?

Pam Walaski: I think we're looking at the types of things that we do as occupational safety professionals, the kinds of activities that we engage in, whether or not they are providing value to our organization, and whether we're measuring the kinds of things that provide value to our organization. So they typically are associated with occupational safety and health activities: incident investigations, training programs, policies and procedures that we develop, compliance, near-miss reporting, those kinds of things. Which is, from my perspective, one of the problems with the typical use of indicators in occupational safety and health, and we'll talk about that a little bit later.

But the idea to me is that those indicators are set aside from business indicators, the other KPIs that your organization might be tracking. And so a leading indicator or lagging indicator is traditionally focused on occupational safety and health, but they don't have a tie to the business. They don't have a tie to the business's strategy. And so we find that, in many respects, we are isolating ourselves or setting ourselves aside from the business. And then we kind of wonder why we're not engaged with the C-suite and we're not showing how the work that we do is engaged with what the business is doing.

And so I tend to think about performance measurement from an occupational safety and health perspective as business performance measurement. It's nothing different; we're measuring the success of the business in terms of what it thinks is important. And obviously businesses think that performing well from an occupational safety and health perspective is important, but a lagging indicator really just kind of sits out there all by itself. And I think one of the things that we need to start doing is finding ways to integrate them more into the business's expectations.

Leading Indicators

Convergence Training: Great. That was a great answer on safety measurement and soon we'll talk about this more, about how you want to move to performance measurement, aligning with business goals, and we'll talk more immediately about lagging and leading indicators.

But when you were giving the definitions, they are slippery and confusing, and you see them used in different ways. What occurred to me is that a lot of times people talk about lagging indicators as something that's already happened. But obviously that's true of a leading indicator as well, because you're counting something you've already done; it's already happened. And the thought is, as you pointed out, that the leading indicator has a future preventive value or future predictive value. Is that correct?

Pam Walaski: Right. And so two things about that.

One of the things that I discovered in terms of the predictive value is that there really are no good studies out there, scientifically validated studies, that tell us that those indicators we're measuring have any tie to what we do. And that's really one of the bigger issues. And there are a number of folks out there who are traditional researchers who will argue that these are all well and good, but they don't really relate in any statistically valid way to what you're doing. And that really is a problem.

There are also some folks out there, including somebody whose work I'm a big fan of, Fred Manuele, and another gentleman by the name of Andrew Hopkins, who make some compelling arguments that leading indicators can be lagging indicators and lagging indicators can be leading indicators. And so using those terms kind of confuses and confounds the issue. Fred Manuele called the whole discussion about what was leading and lagging "gibberish" in one of his articles that I read. And Andrew Hopkins said very much the same: it doesn't really matter what you call them, what matters is what they're measuring and how you're using them to drive continuous improvement.

So that's part of my learning as I did more research, it doesn't really matter what we call them. And so I've started to get away from the terms leading and lagging and focus more on performance measurement and what that means.

Convergence Training: Alright, we look forward to hearing more about that. If you'll bear with me, I'll ask you a question about lagging and leading indicators, but then we'll shift our focus. And for people listening, we'll definitely include links to the articles Pam just talked about.

Lagging Indicators

Convergence Training: Can you talk to us about what some people call lagging indicators as we just defined them and talk about their historical use and our current use of them in safety management?

Pam Walaski: Yep. So lagging indicators most traditionally are, as we said earlier, the sort of incident rates that we use, or experience modification ratings, or the cost of claims that our workers' comp carriers might provide to us. And so they are traditionally used in that way, in that after-the-fact way.

There is a lot of value to lagging indicators. And so my argument now, and as we continue to talk, is that lagging indicators aren't bad, and we shouldn't be throwing them away. One of the values of lagging indicators is that they're very well understood in the industry. For example, incident rates have a formula that's used to calculate them. So my incident rate is calculated in the same way somebody else's incident rate is calculated, which in theory makes them comparable. And so that's a very important part about lagging indicators, traditionally, from an incident rate perspective. We can use them in a consistent way, and people understand what I mean when I say total recordable incident rate. It's a common term, we all calculate it pretty much the same way. And so that's one of the real values.
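Side note: for readers who haven't worked with these rates, here's a minimal sketch of the standard TRIR calculation Pam is referring to (the 200,000 constant represents 100 full-time employees working 40 hours a week for 50 weeks a year); the figures in the example are made up purely for illustration:

```python
def total_recordable_incident_rate(recordables: int, hours_worked: float) -> float:
    """OSHA TRIR: recordable incidents normalized to 100 full-time workers per year.

    The 200,000 constant is 100 employees x 40 hours/week x 50 weeks/year.
    """
    return recordables * 200_000 / hours_worked

# Hypothetical example: 3 recordables across 450,000 hours worked
print(round(total_recordable_incident_rate(3, 450_000), 2))  # 1.33
```

Because every organization plugs its own counts into the same formula, the resulting rates are, at least in theory, comparable across companies and years.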

Business also understands lagging indicators, because we've been using them for so long. And so if we want to evolve how we measure occupational safety and health performance, the solution isn't to take something that we've used for the past 40 or 50 years and say “Well, you know what, never mind. Let's try something else.” Right? We don't want to necessarily abandon them. Lagging indicators are good ways for organizations to measure their performance year-over-year. They may not be the best to benchmark against other organizations, but they do provide some value to the organization and they are easy to understand.

The problems with them, in addition to being after-the-fact measures, are that in some respects incident rates and injury information are not necessarily consistently applied in terms of how they're calculated. So for example, there are a lot of arguments out there about what is an OSHA recordable (and I'm speaking to the US-based audience here). You'll see threads on LinkedIn or Facebook where somebody will post, "This just happened to me. Here are the details of the incident. Is this recordable or not?" And I guarantee you that there will not be agreement among the people who chime in. Half of them will say no, and then there'll be all kinds of variations in between. So the point is that if we don't all agree on what a recordable is, then we can't really compare recordable rates, because the information may not be the same. Now, the aggregate data that we get from BLS is probably large enough to kind of smooth out those rough edges, and that's helpful, but we have to be very careful about that.

There's also the randomness of injuries occurring. And I just finished a book, and we'll talk some more about it, by Carsten Busch. I know you know him very well. It's called If You Can’t Measure It…Maybe You Shouldn’t.

Check out our earlier interview with Carsten Busch, Safety Mythologist: 10 Safety Myths.

And he published it just recently, and I'm almost done with it. But he argues, and he makes a very good case for this, that injuries in and of themselves are random events. They don't always have the same expectation of occurring under the same circumstances. And we know that to be true: the same situation can occur 100 times, and 99 times nobody gets hurt, and one time somebody does. And so that randomness means there isn't really a good way to compare what happens. He also talks about using a very small sample within your own measurements: using the injury rate from last year doesn't really mean anything on its own; you really have to go much further back or look at rolling averages, again, to kind of smooth out the problems associated with the lagging indicators.
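Side note: to illustrate the rolling-average idea Pam mentions here, below is a small hypothetical sketch (the function and the annual rates are ours, for illustration only, not from Carsten's book). A multi-year average smooths out the year-to-year swings that a single year's rate can't:

```python
def rolling_average(rates: list[float], window: int = 3) -> list[float]:
    """Average each year's incident rate with up to (window - 1) preceding years."""
    averages = []
    for i in range(len(rates)):
        chunk = rates[max(0, i - window + 1): i + 1]
        averages.append(round(sum(chunk) / len(chunk), 2))
    return averages

# Hypothetical annual TRIRs: note how the single-year swings flatten out
annual_trir = [2.1, 0.8, 3.0, 1.2, 0.9]
print(rolling_average(annual_trir))  # [2.1, 1.45, 1.97, 1.67, 1.7]
```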

The other problem, and this to me is really the biggest issue, is that when we use lagging indicators, we teach our senior leadership to pay attention to them, which means that every time we have an injury that maybe affects our incident rate, everybody throws their hands up in the air and says, "Oh, my goodness, somebody got hurt, we've got to do something about this!" And so we focus most of our time and energy on fixing that one injury or that one incident, which may not have any relevance. But we've taught our business leaders that this is important, and that we should pay attention to this because somebody just got hurt. Which isn't to say that we don't care if somebody gets hurt, or that we should ignore it. But it places a huge emphasis on one individual incident. And I can speak from experience in my last position: when we would have a recordable, it was a very, very traumatic event for everybody, even if it was something as simple as somebody getting bitten by a tick and going to the doctor and getting a prophylactic antibiotic just in case the tick was infected with Lyme disease. But we spent hours and hours and hours looking into that particular incident and how we could have kept it from happening.

So to me, that's one of the problems. Business doesn't measure itself, typically, by its failures. And an incident rate is a failure. Business measures itself by its successes. And so the theory then goes that leading indicators are successes or better ways to measure success.

How New Safety Fits In

Convergence Training: Right. So that brings up the whole HOP and Safety Differently issue about how you define safety. So we encourage people to check that out, and we'll include some links.

Also check out the SafetyDifferently.com website for more on HOP, Safety Differently, and other flavors of New Safety.

You know, the point you just made, about how always reporting on lagging indicators like incident rates gets everybody focused on them, is very similar to a point I heard during a recent discussion with Todd Conklin. He was talking about how, if you're doing something high-risk, I can't remember exactly what it was, but maybe someone's up on a scaffold and they're building something, and if the goal is not to kill people, then businesses have to give safety professionals the freedom to break some arms sometimes. So that's pretty similar to what you were saying.

What Makes for a GOOD Leading Indicator?

Pam Walaski: Yeah, absolutely. Absolutely.

But on the other side of the coin, leading indicators aren't the solution to all of it either. Leading indicators are intended to measure preventative or proactive activities. But the problem with leading indicators that I've found in my reading is that we really, as a profession, haven't squared away in our heads what a good leading indicator is. And so what I see in some of the things that I've read and heard is something called a leading indicator that is really no more than a tally of an activity that is proactive.

So for example, a leading indicator that is the number of hours we spend in training in 2019, or the number of JSAs that we update, or the number of safety suggestions that we get. All of those are preventative and proactive activities. But the indicator is nothing more than a number. It's a tally; it doesn't have any quality component to it. Just because we met our indicator of training hours for 2019 doesn't mean the training was good, and it doesn't mean that anybody learned anything from it. And just because we met our indicator for near-miss reports doesn't mean that those reports were any good or that we can do anything with them.

And so when we switch over to thinking about leading indicators, we have to be careful that a number, a tally, is not an indicator. It's just a number. And so we have to incorporate a quality component into that indicator before it has any relevance to what we're trying to do. So for example, finding a way to measure the success of training in terms of people's takeaways, not just did they score and pass the quiz that we give at the end of the training session, but two weeks later, can we observe them performing in ways that tell us that they learned something? Or in terms of JSAs, you know, have they been written in a way that is a high-quality JSA and meets certain parameters, not just that somebody pencil-whipped a new JSA and signed off on it and said, “Here you go.”

Because the one thing that we know about leading indicators is that if we set up a measurement of X number of something, chances are pretty good we're going to come close to hitting it, because people are going to see that number and say, "Okay, how do I get there? What do I have to do to get there?" And getting there becomes the most important thing.

So, one of the things that Wells Fargo learned, in a very bad way, was that having incentives tied to rewards creates a tremendous problem. The Harvard Business Review had a really great article in September called Don't Let Metrics Undermine Your Business. They use Wells Fargo as an example of something they call surrogation, which is simply that the metric becomes the Holy Grail, and achieving it is the most important thing. Wells Fargo used a metric that their CEO put out called "Eight is Great." And "Eight is Great" meant that if I went to Wells Fargo for a mortgage, for example, and I successfully applied for and got a mortgage, their customer service people's and product sales people's job was to convince me that there were seven other Wells Fargo products that I must have.

And so people signed up for things they really didn't want, and ultimately, because that metric was incentivized with rewards to the salespeople, people got signed up for things without their permission. That's what brought the whole thing down, cost millions of dollars, and damaged reputations. Wells Fargo learned a terrible lesson.

But it applies to us as well. If we establish a leading indicator, even a lagging indicator, and we incentivize that in some way, people will do whatever they have to do to achieve it.

The other problem that Wells Fargo found, and it is related, is that the indicator they picked was not developed in conjunction with the people who were tasked with achieving it. And again, you see that a lot in indicator selection, particularly with leading indicators. You know, the Safety Department decides what the leading indicators are going to be, as opposed to sitting down and talking to the people who are going to be responsible for them, to find out what they think would be a good way to measure the good things that we're doing, the preventative things that we're doing.

So there are lots of ways leading indicators can get twisted around to not be what we would hope they would be.

Convergence Training: All right, good thoughts. I've scribbled some notes here.

First, you were talking about how leading indicators are often currently used just as tallies, or as what I would call basically just a measure of busyness, as opposed to having some quality component, like you said. And the first example you gave was about training hours. So there I would encourage people to check out some good models, including some new models, of training evaluation. We just had a discussion about the roots of this with learning researcher Dr. Will Thalheimer.

Side note: Here are two recorded discussions with Dr. Will Thalheimer on learning evaluation:

And then the second thing: even if you have a quality measurement to some extent, there's still the issue you talked about earlier, which is that there's an assumed value to these leading indicators, even if there's no data or evidence to show that they actually have predictive or preventive value, right?

It's important to think about that, because every field has those assumptions that we believe are true and don't think critically about, right?

Side note: for example, check these related articles:

Pam Walaski: Right. And so, you know, as you think about ways forward, that brings up the point that I think is really important to address, which I mentioned at the very beginning: our indicators currently sit out there devoid of any connection to a business driver, right? And leading indicators are no better, because they still sit out there all by themselves, even if they are well-designed and well-crafted and have a quality component. There's still a lack of integration into the business, into the strategic plan of the organization, which is really where, in the long run, I think we need to be in terms of how we use both leading and lagging indicators.

And even more so, leading indicators and lagging indicators should be talking to each other, they should be a continuum as opposed to separate things. So what I would envision is not that we have a little dashboard that has our leading indicators over on this side, and our lagging indicators over on that side, but that we have a business driver or a strategic goal, and our leading indicators and lagging indicators are tied directly to that.

And there are lots of great ways to do that. There are a lot of people who are talking about that. Carsten Busch, in the book we just mentioned, talks a lot about that. Peter Susca, who's

Jeff Dalto, Senior Learning & Performance Improvement Manager
Jeff is a learning designer and performance improvement specialist with more than 20 years in learning and development, 15+ of which have been spent working in manufacturing, industrial, and architecture, engineering & construction training. Jeff has worked side-by-side with more than 50 companies as they implemented online training. Jeff is an advocate for using evidence-based training practices and is currently completing a master's degree in Organizational Performance and Workplace Learning from Boise State University. He writes the Vector Solutions | Convergence Training blog and invites you to connect with him on LinkedIn.

Contact us for more information