I recently finished reading the book Pre-Accident Investigations: An Introduction to Organizational Safety by Dr. Todd Conklin. It's a great introduction to Human and Organizational Performance, also known as HOP.
HOP is a systems-based approach that originated with safety thought leaders like Conklin, Sidney Dekker, and James Reason. It has been adopted by General Electric and other companies, and was the focus of the exciting and somewhat-controversial plenary session at the ASSE Safety 2017 Conference (I've included a video recording of that HOP/BBS discussion near the bottom of this article). HOP has much in common with safety differently, new safety, safety 2, etc.
If you've wanted an introduction to Human and Organizational Performance (HOP) and/or to Conklin's thoughts, this book is a good starting point. In our article below, we provide some key points from the book. If you've read the book yourself, or if you're using HOP at work for safety, please leave additional comments at the bottom of the article.
Conklin's book is a wealth of information. It offers case studies based on true safety incidents, various methods and procedures, and more. So be sure to read the book to get the whole picture; it's not our goal to give you the whole picture here.
Instead, we're going to present some of the larger, over-arching themes in the book, let you chew on those, and then encourage you to take your studies further. We've included our own thoughts on each theme plus a quote from the book so you can see it in Conklin's own words.
We invite you to use the comments section if you're familiar with the book and want to call out other large themes or specific parts of the book you found very useful.
Many times in the book, Conklin points out that much of what safety professionals discover during an incident investigation could have been discovered before instead of after the incident. So why not put some brain power into preventing the incident before it happens?
In the beginning of the book, Conklin retells a story that a safety manager friend had told him, a story that sets the stage for the rest of the book and makes this point. Here's what he says:
Having a premortem meeting, a meeting where you ask smart, experienced people what could go wrong before it does go wrong, provides a new set of data about a failure that has yet to happen...
A pre-accident investigation is exactly the same idea.
You and your organization can learn from this story. This is your guide to leveraging great, untapped knowledge that already exists in your organization. Your job is to prevent the frequency and severity of events in your company. I am convinced that the only way we can prevent events and failure is by learning. There is data to learn before an incident if you ask the right questions and are willing to look.
See p. xii
Many times, safety tends to focus on isolated machines or the actions of single individuals without putting those into a larger context. Human and organizational performance takes more of a systems approach (you can see this movement to "systems" thinking in safety management systems such as ANSI Z10 and ISO 45001 as well).
Here's how this is introduced in Pre-Accident Investigations:
In a classic root cause analysis (RCA), our job is to deconstruct the event down to its most minute parts, analyze those parts, and fix whatever is broken. In Human Performance, we do almost the opposite. Instead of deconstructing the event, we construct the event context and look not at the individual pieces of the event, but at the relationships between those pieces.
See p. xiii
This Conklin podcast titled Context Drives Behavior will help you consider this issue more fully.
Traditional safety tends to focus too much on a top-down approach in which management attempts to control workers--even if the intention, to reduce injuries and illnesses, is a good one. The same is true of much of management beyond the safety realm as well, actually. As writer Daniel Pink notes about management theory, "there's a mismatch between what science knows and what business does."
Conklin sets up HOP as an alternative approach. According to Conklin, safety isn't manna from management that drips down onto workers. Instead, the role of the safety professional is to help workers learn from one another and to help management learn from workers.
Here's how Conklin describes his own work at the Los Alamos National Laboratory and their journey toward human and organizational performance:
...we have learned about our operations and systems. We have learned because we took the time to help our workers understand the importance of learning from each other, and in turn our management learning from the workers.
See p. xiii
This more extended article on using learning teams for incident investigations goes into this in more detail. We also invite you to read this article on the importance of lifelong learning and learning how to learn.
Change is often hard and it's often resisted. That's often at least partly because we have mental models, also known as schemas, that can become so ingrained that we can't recognize when they are wrong, how they limit us, and how they prevent us from seeing better alternatives.
And so moving from your current safety management philosophy to one that embraces the principles of HOP won't be easy. But it will be worth it.
Here's how Conklin puts it:
Changing your organization's safety management philosophy is a significant change. Change can be challenging, and sometimes hard. The type of change we are discussing in this book is especially difficult because you are asking your managers to demonstrate a new and different kind of trust in their workers. You are also asking workers to trust and communicate differently with their managers. The new philosophy will be hard at first.
See p. xiv
Traditional safety management tends to focus on "fixing" the worker or an error made by the worker after an incident occurs. By contrast, HOP focuses on fixing the system in which the error occurred.
Here's how Conklin puts it:
There are two choices.
Fix the worker (training, discipline, or termination) who did something he did not mean to do, in the hope that he won't again do something he didn't mean to do in the first place.
[Or] Fix the system...
Fixing the worker gives the impression of an immediate solution to the problem, but it probably fixes the wrong thing. Punishing the worker is a fast and easy way to "solve" the problem, with the only issue being that it unquestionably fixes nothing at all, not even the worker in question. Because the whole failure will inevitably happen again with a different worker....
Fixing the system...will fix the right problem, and will ensure that the facility [doesn't experience the same problem again].
See p. 4
This Todd Conklin podcast titled Blame Fixes Nothing does a nice job of introducing this issue.
It's reasonable that a book about human and organizational performance would offer a definition of "performance" and explain what happens when performance isn't as expected.
Conklin defines performance as:
...the degree to which you get what you expect from a person, a machine, or a process.
See p. 6
And he explains that a "deviation" is when you get something other than what you'd expect. So deviations can include cases of underperformance and overperformance. Of course, safety professionals tend to focus on deviations involving underperformance, and rightly so, but it may prove fruitful to draw lessons from cases of overperformance as well.
In discussing performance and deviation, Conklin notes that he doesn't care for terms such as "event," "accident," and "failure," and that his true preference is "deviation from expected outcome (DFOE)," although he admits that's not very catchy and has in fact never caught on at his workplace.
Quick note: see our interactive poll on this event/accident/failure/incident issue and place your vote.
You're a safety professional, right? I assume everyone reading this article other than my mom is a safety professional (Hi Mom!).
Well then, Ms. or Mr. Safety Professional, let me ask you a question: what is safety? How do you define it?
It's not such an easy question, is it?
Many safety professionals define safety by saying it's the process of reducing injuries and illnesses; by counting incident rates and trying to decrease them; and/or by otherwise focusing on safety-related KPIs and striving for zero harm.
Conklin and others in HOP think this is upside down. Here's how he puts it:
You don't have to be a genius to know that something seems oddly wrong about the way we measure safety success. We count the number of people we hurt, and totally discount all the people we are keeping safe.
See p. 7
Given that problem, he poses a new definition of safety:
Safety is not the absence of events; safety is the presence of defenses.
See p. 8
For more on this, you might want to check out our discussion with Ron Gantt about Safety Differently.
It may come as no surprise to you that people are not perfect. They are fallible. They make mistakes.
In fact, if you stop and think about it, people make "mistakes" (especially mistakes in the sense of a "deviation from expected outcome") almost all the time.
Here's how Conklin puts that:
Humans make errors. People are fallible, and even the best of us make mistakes...
Everybody makes errors, everybody. The very worst performers make errors. The very best workers make errors. Error is a predictable and natural part of being a human being.
See p. 8
Safety professionals can take some consolation in the fact that although people are inherently fallible, in most cases those errors don't lead to safety disasters. But on the flip side, and we all know this, sometimes they do.
Again, here's how Conklin puts it:
People make a lot of errors. Not all errors have a consequence. In fact, not all errors are actually errors. We only really notice an error if it has some type of outcome or consequence that is large enough to be noticed by either you or other people around you. Error only becomes apparent if you notice an error.
Contemporary wisdom says that the average skilled worker, workers who work with their hands, makes 5 to 7 errors per hour. That same wisdom says that a knowledge worker, workers who work with ideas and concepts, makes between 15 and 20 errors per hour....Errors are how we are wired....a natural part of being human. Human error is inevitable--all workers are error-making machines. What all this means is pretty simple: error is everywhere, and there is nothing you can do to avoid the errors. You can't punish error away. You can't reward error away. Error is an unintentional, unpredictable event. You know it, and I know it.
See pp. 8-9
Yet a common problem with many occupational safety efforts is that they seem to be developed with the goal of making people perfect: removing the possibility of human error from the equation, removing the humanity from humans.
This is a textbook example of swimming upstream; of plugging a dike with your thumb; and of hoping against hope.
It's Conklin's point that if we attempt to "perfect" people and make them infallible, we'll by definition fail in our own effort. (Funny observation: the attempt to "safety proof" fallible people so that they don't fail is itself destined to be a failure. Ecce homo.)
Instead, Conklin believes we should begin with the assumption that people will make mistakes, and build systems strong and/or resilient enough to absorb those mistakes without leading to an accident. This is where the HOP focus on systems and human behavior within a context at work comes into play.
Here's how Conklin ties together the issues of pre-accident investigations, systems thinking, and the inevitability of human error:
You're not ever going to be able to stop an accident. You can directly change the way the accident affects your organization, your workers, and yourself. A pre-accident investigation helps you make your organization better prepared for a failure...
Engage your learning systems to make your organization smarter and more prepared for the potential failures you uncover.
You are gathering information to prevent adverse outcomes for your workers and your organization. You will never be able to measure what doesn't happen. You will never be able to predict every event. Yet, it is clear that if you can gather enough information about a system to identify the places where failure is most likely to happen, or places where if a failure were to happen it would have some type of serious consequence, you can actually intervene in your organization's processes and systems, and prevent events.
See p. 50
This short, 3-minute podcast at Dr. Conklin's Pre-Accident Investigation podcast site, titled Mistakes are Normal, does a great job of making this point in a simple manner.
Safety is directly connected to learning.
Organizations need to learn from employees, as already noted. They need to learn from all sorts of deviations, including those that don't cause notable consequences and those that do.
If an organization isn't a learning organization, they are not a safe organization. Incidents, including property damage, injuries, illnesses, and fatalities, are more likely to occur.
Because learning is so tied up in safety, human and organizational performance is dedicated to helping organizations become learning organizations.
Here's how Conklin puts it at one point of his book, when he's talking about the difference between work "as planned" and work "as performed":
The space between planned work and performed work is the operational gap. In that operational gap lives a vast amount of information. This is where you learn about safety, as safety exists in your operations. This is very different from observing worker behavior or auditing procedural use and adherence. This is real post-job information about what happened when work was being done. Finding and understanding the difference between "work as imagined" and "work as actually done" is like finding the place where all your safety data "hangs out." This is a treasure trove of information. Seek understanding here in order to know how to better plan work the next time you perform this task, or tasks like this task.
See p. 71
See our Safety and the Learning Organization article for more on this connection between safety and learning.
We hope you found some value in this brief introduction to human and organizational performance, as presented by Dr. Todd Conklin in his book Pre-Accident Investigations: An Introduction to Organizational Safety.
If you want to learn more about HOP, the first thing we recommend you do is buy the book and read it. There's a lot more that Conklin covers, including some great tips for putting HOP into work at your organization now.
Beyond that, you'll be happy to know there's an entire Pre-Accident Investigation podcast series. You can't go wrong there.
In fact, there's a particular series of podcasts at Dr. Conklin's Pre-Accident Investigation podcast site we'd like to refer you to because they're foundational for making the switch from "traditional" safety to HOP/Safety Differently. Here they are:
(this is an ongoing series...we'll keep updating over time)
There are also any number of other influential thinkers and sources of information related to HOP/New Safety/Safety II/Safety Differently. We've listed and linked you to just a few below:
Speaking of Safety Differently, we've done a number of interviews with Ron Gantt about this (it's closely related to HOP and/or HOP by a different name). Check these out if you're interested:
In addition, Conklin's book includes a helpful "Basic Reading List for Human Performance" (see p. 135). That's a great resource as well, and we encourage you to check it out. In fact, several of the book recommendations immediately above came from that list. A couple of interesting resources we were surprised but impressed to see in Conklin's list include The Checklist Manifesto by Atul Gawande and Freakonomics by Levitt and Dubner (for more about the field of behavioral economics, see our article about Dan Ariely's book The Upside of Irrationality).
Finally, we mentioned at the beginning of this article that Dr. Conklin (along with a representative of General Electric, which has taken up HOP for safety efforts at their organization) spoke at a much-discussed plenary session during ASSE Safety 2017. We've included a recorded video of that discussion below and very much encourage you to check it out.
And if you're interested in HOP and Dr. Conklin, you'll no doubt enjoy the guide to "new views" of safety below, including HOP, HPI, Safety Differently, Safety-II, Resilience Engineering, and more. It's got contributions from many of the world's leading safety professionals, including Dr. Conklin himself.