Every field has its set of established truths. But if you examine some of these so-called "truths" a little critically, you sometimes find out they're not true at all. It's like the professional version of an urban myth. They're in the air around us; we read about them and we've heard smart people say they're true; we never stop to question whether they really ARE true; and ultimately, we end up believing them ourselves.
In the learning and development field, a classic myth is that you can get better training results by designing training to match your learners' so-called learning styles. But I digress--we'll get back to that in another article.
In occupational safety, there may be some myths out there too. And that's why we had the conversation below with "safety mythologist" Carsten Busch. Put on your critical-thinking cap and your skeptical socks and give it a listen (or a read). And many thanks to Carsten for sharing his time and knowledge with us.
If you liked this video, we think you'll also like the guide to the "new view" of safety, below--HOP, HPI, Safety Differently, Safety-II, Resilience Engineering, and more. And yep, Carsten is one of the many global safety experts who contributed their thoughts to the guide.
Let's dive right into our interview with Carsten Busch...
Hi, everybody, and welcome. This is Jeff Dalto of Convergence Training back with another webcast from our webcast series. Today we're going to be in the world of Occupational Safety and Health. And we have a pretty cool guest, I'm excited about this.
This is Carsten Busch. He's a self-proclaimed safety mythologist, and he's the author of a book called Safety Myth 101. And we'll be discussing a number of the safety myths that he brought up in his book. And so before we go much further, let me just start by welcoming Carsten and saying hi.
Carsten, how you doing today?
Thank you, Jeff, for inviting me to do this. And I'm extremely fine.
I'm looking forward to doing this hour with you.
As am I. Well, cool. Thank you so much for coming on. And before we start getting into the prepared questions, could you tell us a little bit about yourself, how you see your role in safety or your views on safety, and show everybody that book Safety Myth 101 that we will be drawing some questions from?
Sure, here it is (shows the book). And there's a Dutch version, if you dare; it's even more beautiful.
I'm sure it is.
Well, I'm not going to take that one out.
A bit about myself. I've worked in safety for over 25 years now, starting in occupational safety, adding some quality along the way, then moving into traffic safety. I've worked a lot in railroads and a bit in process safety, but mostly in safety management.
And for the past four years, I've done something completely different: occupational safety again, back to the roots, but within the Norwegian police force, where I'm an advisor on occupational safety. It's an extremely interesting world to work in, totally different from industry and railways and the like.
So you asked me to say something about my role in safety. Well, I proclaimed myself a safety mythologist. That was mostly for fun, because nobody has a title like that, and everybody wants to be unique. So I became unique. But there's also something more to it. I try to be critical with regard to established truths, but also with regard to the newer stuff, and try to see what's in it and what's useful. Mainly, I ask questions; I don't give answers that much, maybe.
Great, great. For those listening, I think you'll agree, as I do, that Carsten is a provocative and interesting guy. He mentioned that he's critical of traditional safety dogma but also of some aspects of new safety, and Carsten, I know you've written an article on that. I'll be sure to include a link to it, as it shows how you position yourself, I think, to some degree in the middle of those two camps. People will also find that Carsten is pretty funny, something I enjoy. And with that, we've got a series of 10 questions prepared for Carsten, and we'll jump right into the first one. Each of these relates to a safety myth that Carsten has identified in his book.
Side note: Check out Carsten's article Brave New World, which situates him between old and new safety.
Carsten, first one: tell us about risks and hazards. Are risks and hazards bad?
Well, if you take our everyday speech as a point of departure, you would probably think they're bad, because when people talk about hazards and risk, when you run a risk of something, most of the time it's something bad, and a hazard is often the origin of risk. And therefore we also see hazard as a bad thing. A hazard is something that can do damage to something; that's, well, the loose definition of a hazard found in some standards.
But if you think about it... really? Because look at this one (shows a coffee cup). You know what it is? A coffee cup. I often use these when I do workshops and risk assessment courses, because it says here in small print (so you know there's a lawyer involved, probably): "Caution: contents may be hot."
And then think about it. If you went to 7-Eleven or wherever you get your coffee, and you got a cup of nice, cold coffee, which is very safe, you're not going to walk to the counter and say thank you for the safest coffee ever. You're going to complain, and you're going to complain loudly. Because coffee has to be hot, has to be strong. I nearly want a heart attack and I want my tongue burned. Not really, but I want the possibility.
So I really want the hazard, I want the risk. And I've thought about this: do you know the movie The Matrix? Somewhere in that movie comes the question, "Why did the machines build the virtual world the way they did?" And then they explain: they had a version of the world that was perfect, with no harm whatsoever, but people died.
And then they created a new version of their Matrix where people have problems and stress and pollution and so on. And that's where they thrived. I think that's a really nice image for us, because systems and organisms need risk; we need stress. We need signals to react to. If we don't get them, we die; we wither away.
So I think you're making two arguments. One, with the hot coffee, and I'm drinking one myself right now, and I've stolen that example from your book and use it at conferences myself, so credit where credit's due: risk brings some form of pleasure or reward or something good. So part of what you're saying, I think, is that you can't have the success you want in business without sometimes taking some measure of risk or confronting some level of hazard.
And then secondarily, I think you're saying they act as an indicator or a signal that something bad is about to happen. Am I getting both of those right?
Yeah. I should say we need risk in order to develop, to learn, to…whatever. But maybe not too much, and certain risks we don't want.
So next question. And this has to do ultimately with the definition of safety, which is something you hear discussed quite a bit lately.
A lot of people will say that safety is the absence of incidents; I believe some organization or some standard, and maybe you know this off the top of your head, has defined safety as such, or as something similar. But what is your take on that? Is safety the absence of incidents?
I don't think so. But if you go out on the street and ask people "What is safety to you?", most people will probably answer something along the lines of not having accidents, or they'll say, well, it's a feeling, feeling good or something. I think those are the two most common general answers, and the connection to accidents is very natural.
And if you go back in the history of safety, all the early writers, or most of them, write about accident prevention, like Heinrich, who of course is the best-known guy; his book was Industrial Accident Prevention. And there were several books with that title, not only by him but also by others.
Safety and risk only entered the language later on, not as prominently as accident prevention. So I think it's quite natural that people identify the two, safety and the absence of incidents, with each other. The thing, of course, is to test whether the definition works. You would have to reverse it and ask, "If I don't have an accident, am I safe?"
One of the examples I use: I get into a bathtub, and I think, let me be a bit efficient and blow-dry my hair, which is very hypothetical, of course. And I get out of the tub, and nothing has happened. Was that safe? Of course not. I was doing something very, very dangerous, sitting with an electrical appliance in the bathtub. But when I get out, one could say, well, this was safe because no accident happened. No, it's not. It's lethally dangerous, but I was lucky.
So, defining safety as something other than the lack of incidents would lead you to practice and measure safety differently. Is that correct?
I think viewing safety as an absence of accidents can give you some degree of information. It can be a measure. But it's a very weak measure, and it's only a very gross, rough approximation. You will have to look at other things to get a better picture.
Like, how do we deal with people and hazards and risks? How are the barriers, and things like that?
Great, great. So some measure of information in your incident rate, but that's not the full story.
It's not the full story. And then you can even expand on that and say that, depending on your definition of safety, accidents might even be part of safety. Because if you talk about acceptable risk, and engineers have that kind of definition of safety, an acceptable level of risk, that means you will accept some kind of accident. And then accidents are actually part of your definition of safety, which is kind of upside-down thinking for most people, I guess.
Yeah, yeah. Could you talk a little more about that before we move on to the next question? That upside-down world of thinking is pretty intriguing.
Yeah. Well, say you use something like ALARP, where you work to get risk as low as reasonably practicable, which means you can make a decision. You're going to accept some level of risk, and then you have to show that in some way: you have barriers and so on to prevent accidents as much as possible, but not at any cost. You're not going for zero; you're going for something above zero, without maybe spelling it out.
The thing is, you make this assessment where you say, "We are going to accept some measure of risk, because we cannot afford, or don't want to afford, absolute safety, and it's impossible anyway." But when an accident happens, funnily enough, people suddenly expect the absolute anyway. Or they want something even higher, because they believe that the fact that an accident happened proves you were not safe.
Well, that's not necessarily so. Probably you were within the margin, or maybe you weren't, because those assessments may be faulty. So you don't know.
And that's probably, and now I'm really thinking out loud, another reason why defining safety as an absence of accidents is not a good measure of safety: it doesn't necessarily say anything about the processes that went on before. Maybe you did very good assessments; maybe you put in place every measure that was possible or necessary or reasonable. But then something totally unpredictable happens, or something happens that everybody thought would not happen. You always hear about the one-in-a-million chance. Maybe this was the one in a million, and one in a million is maybe acceptable.
So then now, yeah, accidents do happen.
I'm not sure if I made any sense now, because I was mostly thinking out loud, and this was not rehearsed in any way.
No, I think you did. It seems like you're ultimately coming down to the fact that in risk thinking we talk about getting risk as low as reasonably practicable, which suggests we're willing to live with a certain amount of risk, and that suggests we're willing to live with a certain amount of injuries. Maybe they're minor injuries, or maybe they're such major outliers, so unlikely to occur, that we're willing to live with the risk and the potential, because most of the time it's not going to happen. Is that about right?
Yeah, that sums it up pretty nicely.
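Side note: The ALARP idea Jeff and Carsten just summarized can be expressed as a toy decision rule. All the numbers, the measure names, and the "gross disproportion" factor below are invented purely for illustration (real ALARP judgments are much richer): you keep adopting risk-reduction measures until the cost of the next one is grossly disproportionate to the risk it removes, which means some residual risk above zero is knowingly accepted.

```python
# Hypothetical candidate safety measures: (name, annual risk reduction
# expressed as expected losses avoided, annualized cost). Units invented.
measures = [
    ("guard rail", 50_000, 5_000),
    ("interlock system", 20_000, 15_000),
    ("full automation", 1_000, 400_000),
]

# An ALARP-style test: reject a measure only when its cost is "grossly
# disproportionate" to its benefit (the factor of 10 is an assumption).
GROSS_DISPROPORTION = 10

adopted = [name for name, benefit, cost in measures
           if cost <= GROSS_DISPROPORTION * benefit]
print(adopted)  # ['guard rail', 'interlock system']
```

Note that "full automation" is rejected even though it would remove some risk: the remaining risk is accepted, which is exactly Carsten's point that an accident afterward doesn't automatically prove you were "not safe."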
All right. Well, I guess this leads nicely to our next question: are all accidents preventable? And I'm guessing you're going to say no.
Yeah, we've discussed that a bit already, and I have a couple of problems with that statement. It's a very well-known slogan, of course, and some major players in safety promote it. I think the thought behind it is probably good, but the wording is totally wrong.
Firstly, it's usually applied in hindsight. In hindsight, you can say, "Oh, well, we could have prevented this, because if we had done such and so, something different, it would not have happened." The problem, of course, is that prevention is something proactive, not something done in hindsight. So there's a little mismatch there.
But then you can also ask: what is actually necessary to make this true, to prevent all accidents? And then you need quite a lot of stuff. You need unlimited information about what you're doing and what you're going to do, you need 100% prediction, and you need unlimited resources. I don't think any organization has any of these three things: 100% knowledge, prediction, and resources. And you also can't have any surprises. That's totally not the world we are living in.
So it's just not possible. If what the people who use the slogan "All accidents are preventable" are trying to say is that every accident is an opportunity to learn and improve, then that I can fully agree with.
Let's put it that way then: all accidents are an opportunity to learn. That, I think, is a much better way of looking at things than saying they're preventable, because otherwise you're quickly in the role of saying, "Okay, we didn't prevent it," so you're more in a negative mood. And I think it's one of the points that Todd Conklin makes in... not his latest book, I haven't read that one. What's it called? The serious accident prevention book from about two years ago now.
Oh, the one about preventing fatalities?
Side note: Check out Todd Conklin’s book Workplace Fatalities: Failure to Predict and our own article about Dr. Conklin’s book on preventing fatalities at work. Also, Pam Walaski makes this same point in our recent interview Using Risk Management to Reduce SIFs at Work.
Yeah, that one. He says quite a lot about getting away from prevention thinking. And I think that's the same point I'm making in arguing against the idea that all accidents are preventable. If you think that way, you'll take a much more negative view than if you say, well, there's an opportunity to learn from accidents.
All right, so we'll take that up again and go a little further into it in the next question. It's kind of interesting; this is the second time you've argued against simplistic safety slogans. We talked about zero harm not being an attainable goal, and also about the slogan that all accidents are preventable being oversimplified and maybe not great. In particular, with "all accidents are preventable," you mentioned that it's said retroactively, not in a forward-thinking manner. And that leads nicely to our next question, which ultimately, I think, has to do with root cause analysis and similar efforts.
So our next question is: can safety pros identify THE cause? Can you talk to us a little about causality and the difficulty of identifying the cause of a workplace incident?
Sure. It's one of the things I must say I've learned over the two and a half decades I've been in safety. When I started, and I don't know how it was for you, I was taught in many accident investigation courses that your job is to find causes. So it sticks in your mind: an accident happened, we have to find the cause, discover the cause. A few years ago, and I don't know what set it off, I started thinking about that and reading a bit, and then I entered a whole discussion of constructivism. Causes are not things. A cause is something we construct in our heads. Constructs are things that don't really exist; you don't find causes lying around in the natural world. Rather, we attribute cause to something that happens.
Take, for example, bumping into a glass door: oh yeah, I wasn't paying attention. I construct that cause. But I could also construct a lot of other causes. Like, I was distracted by my colleague who called me, and I turned my head and, bang, walked into the door. Or the guy who designed the door didn't do his job, because he should have put those markings on the door to make the glass more visible to people walking toward it.
So you see, I have a couple of choices to make and maybe all of them apply at the same time.
So THE cause--I don't really believe in it. And it also has to do with your stop rule: where are you going to stop your analysis? Are you going to stop at the first convenient point where you think, "Hey, I can solve this problem"? Then I could stop with myself: I walked into the door and I have to pay attention. So I am the cause, and I should shape up and not be distracted and not look at my iPhone and whatever. Or are we going to analyze further and look at the bigger system, with distractions, the design of the door, and whether my workload is too high, so that I need to check my phone to see where my next appointment is? Et cetera, et cetera.
So if you want to find the cause, it's probably just you choosing something and it's not the truth then. It's a version of a truth, I think.
And so I'd imagine there are a couple of issues with selecting the cause. One is that it's often a multifactorial thing. Two, as you're saying, among those multiple factors or causes, you wind up pretty arbitrarily selecting one, usually the simplest. And that's going to leave you, among other things, vulnerable to making decisions based on well-known cognitive biases, of the type Daniel Kahneman wrote about in Thinking, Fast and Slow. Does that sound about right?
Yeah, definitely. And mostly, if I walk into the door, I'm not going to blame myself. That's the fundamental attribution error: if I make a mistake, it's everybody else's fault. But if you walk into the door, it's stupid Jeff again.
That's a good example. So the exact same event, two different causes, depending on whether or not I was involved.
So that leads us nicely to our next question. We have the stop point, the simple answer, the cognitive biases. What would you say if I told you, as many people will, that people are the problem, that people and human error often end up being identified as the root cause of incidents? What's your response to that?
Well, sometimes they are, probably, because people make mistakes. And of course, you can choose to stop there. For organizations, it's often a solution to a problem, and it's a very cynical solution, but it's a reality we should be aware of: for an organization, it's a solution to, figuratively speaking, throw somebody under the bus and then move on.
But if you want to really solve a problem... I think James Reason was one of the first to say that the people at the sharp end just inherit problems from higher up in the system or elsewhere in the system. And that is how it often is: people are placed into a situation where they are almost required to make a wrong choice, because there's time pressure and you have to be safe and you have to deliver quality within budget. Within all these constraints: faster, better, cheaper. The old NASA slogan; you've probably seen the memes on the Internet: pick two, because you can't get all three. You can have fast and cheap, but then it's probably not good.
So, yeah, sure, people will screw up in one way or another, but is that because of them? Sometimes, maybe. But often it's a function of the situation they're placed into. One of the really, really important things I've learned in recent years is that your reflex shouldn't be to ask, "Why did he do something stupid like that?" Instead, try to make a reflex of asking, "Why did it make sense to him or her in that situation?" Asking that question will take you past the idea that people are the problem to seeing that people are in a problem. Then you have the people, or the error, as the starting point, not as the answer.
Great. So this next question gives us an opportunity to pull in at least three threads we've already talked about. You've mentioned Heinrich, we've discussed whether or not people are the problem, and that's going to lead to a discussion of the 1-29-300 ratio, and we've already talked about causation.
Heinrich's Safety Pyramid is a much-discussed, hotly debated issue in safety these days, along with the concept that there's a 1-29-300 ratio within that pyramid and some kind of causal relationship between the less severe incidents at the bottom and the more severe incidents at the top. I wonder if you can ground us in a discussion of Heinrich's Pyramid and give us your views on it: what Heinrich actually meant, what lessons we can take from it today, and so on.
I think we need to expand this webinar now from an hour to four hours.
OK….the abbreviated version.
And I really need some more preparation, but the pyramid, I think, is probably the most misunderstood and misquoted thing in safety, and... well, it often comes out totally wrong. There are many misunderstandings, and they're just copied and repeated and amplified in many ways.
If you boil down the message of Heinrich's Pyramid, for me it's about opportunity. If something little happens, you do not have to wait till it becomes something bigger; you can act now. It was an argument for proactive safety management, or accident prevention as he called it. He says in several places in his texts that a lot of attention is given when something major happens, when something is really bad, when someone is badly hurt or killed. If a factory blows up, we're going to investigate, put a lot of effort into it, and then try to prevent it from happening again.
But why don't we look at the precursors of the things that happen? I think that's a good thought. I don't have to drive my car on worn tires until a tire blows out and I end up in the ditch; that's not the moment to change my tires. I do regular checks, I see that the profile is really low, and I get new tires. And then, well, I've possibly prevented a car crash. That's the thing: you never know if you actually prevented anything. You can't say, "This didn't happen, so that's a good score; I'm going to count everything I've prevented." That's rain dancing. But I've possibly prevented a car crash, and I didn't have to wait until the big thing happened.
The problem is that people have misunderstood the whole message and mash everything together: all the slips, trips, and falls, the bumped heads, the cut fingers, etc., and think that if you do something about the small accidents, you will prevent the Deepwater Horizon from blowing up. It doesn't work that way, of course.
Because you have to stick to similar events. If I change my tires, it doesn't prevent me from experiencing brake failure; that's a different scenario. And the thing Heinrich did with his 1:29:300 ratio is average a lot of different accidents. In his examples, he discusses similar events, like somebody crossing a rail track at a point where he shouldn't, doing this several thousand times, and then getting hit by a train. I think that was one of his examples. So there is some thousand-to-one ratio, and then he has another case with a 200-to-one ratio, and he averages them and comes up with this total number. The problem is that many organizations have just worked with the total pyramid and think, "Well, we'll do something about the frequent stuff, and then the infrequent stuff goes away." But it doesn't work that way. It works only if you are working within your sliver of the pyramid, the one that is similar to the top, which is kind of what the people in safety who discuss serious injuries and fatalities (SIFs) are doing. They're moving that way, but I think they're still mashing stuff together, which doesn't work. It's better than looking at the whole pyramid, but it's not entirely there.
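Side note: Carsten's point about averaging can be shown with a tiny, hypothetical calculation (the scenario names and counts below are invented for illustration; they are not Heinrich's actual data). Two scenario types with very different near-miss-to-injury ratios produce a pooled ratio that describes neither of them.

```python
# Hypothetical near-miss and serious-injury counts
# for two dissimilar accident scenarios.
scenarios = {
    "crossing tracks": {"near_misses": 3000, "serious": 1},
    "machine guarding": {"near_misses": 200, "serious": 1},
}

# Per-scenario ratios differ by more than an order of magnitude.
for name, c in scenarios.items():
    print(name, c["near_misses"] / c["serious"])  # 3000.0, then 200.0

# A pooled ("averaged") ratio applies to neither scenario, so acting on
# frequent events in one sliver of the pyramid tells you little about
# the rare, severe events in another.
total_near = sum(c["near_misses"] for c in scenarios.values())
total_serious = sum(c["serious"] for c in scenarios.values())
print(total_near / total_serious)  # 1600.0
```

The pooled 1600:1 figure is a mathematical artifact of mixing scenarios, which is why working "within your sliver of the pyramid" matters.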
But the main message, I think, is forget the ratio, because it doesn’t work. Instead think in terms of opportunities.
The problem, of course, is knowing which opportunities to use, because you can't react to them all. For example, I think a lot of maintenance works on a basis similar to Heinrich's Pyramid. You see small stuff, like low profile on my tires; when do you react to it? When is it critical enough to do something before it becomes something bigger? Or maybe it's just a vibration in my steering wheel. It can be some totally random thing, or it can mean my tires aren't balanced anymore. Those are the two big problems, I think, in applying the thought. The thought itself is very good: it's about opportunity, being proactive. Don't wait till somebody dies or gets hurt; you can act much earlier, at lower cost, and you have more of these opportunities. The problem is you can't take them all. So which ones do you pick, and how do you separate them from the everyday stuff?
Can I ask you a couple follow ups on that one?
The first one being: if I don't have the resources to react to everything at the bottom of the pyramid, and I have to separate the signal from the noise, as you said, and ideally identify something at the bottom that is truly a precursor of something that could lead to the top, as opposed to something that's going to stay at the bottom, do you have any tips on how to find that signal in the noise, how to recognize a precursor?
One thing that's also recommended in the literature I've read on serious injuries and fatalities is criticality, or potential in this case. If I cut my finger in the kitchen, in a worst, one-in-a-billion case, it could of course become a blood infection and I could die. But it's not very likely.
But me using the same kitchen knife to repair electrical appliances in my home? Yeah, there's a higher potential there. It's a silly example, but it's the one I've got.
So that's one way. For another: I think it was Andrew Hale who wrote, in my opinion, the best paper so far on the triangle, in 2000 or 2002. It's freely available online. He talks about looking at the amount of energy involved in any given case, I think.
Side note: You can download Andrew Hale’s Conditions of Occurrence of Major and Minor Accidents: Urban Myths, Deviations, and Accident Scenarios here.
And that's probably quite a useful and simple way of looking at the matter. Think again of Deepwater Horizon, or Texas City, where the investigation showed they had misapplied and misunderstood the triangle; Andrew Hopkins has written about this, among others. Slips, trips, and falls don't have the same amount of energy in them as a high tower filled with explosive liquid.
Side note: See Andrew Hopkins, Failure to Learn.
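Side note: The potential-based triage Carsten describes can be sketched as a simple rule (the event list, the 1-to-5 severity scale, and the threshold below are all invented for illustration): rank precursor events by their plausible worst-case outcome rather than by how often they occur.

```python
# Hypothetical precursor events: (name, observed frequency per year,
# worst-case potential on an invented 1-5 scale, 5 = potential fatality).
events = [
    ("paper cut in office", 40, 1),
    ("slip on wet floor", 12, 2),
    ("gas detector alarm in process unit", 3, 5),
    ("overfilled storage tank", 1, 5),
]

# Frequency-first triage would chase the paper cuts; potential-first
# triage surfaces the rare, high-energy precursors instead.
by_potential = sorted(events, key=lambda e: e[2], reverse=True)
high_potential = [name for name, freq, sev in by_potential if sev >= 4]
print(high_potential)
# ['gas detector alarm in process unit', 'overfilled storage tank']
```

This is the same logic as Hale's energy criterion: the slips and paper cuts stay at the bottom of the pyramid, while the low-frequency, high-energy events are the ones worth treating as signals.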
So that should be a signal that okay, maybe you should do a bit less about the slips, trips and falls and