Thinking about how we think about Safety

Thinking about how we think about ‘Safety II’ and the ‘New View’ in Safety before we have an incident:

A good starting point for any of these sorts of conversations is to think about our mindset with respect to safety generally. The definition of ‘Safety’ described by Sidney Dekker in Just Culture – Balancing Safety and Accountability is a really good one:

“‘Safety’ is the presence of positive capabilities, capacities and competencies that make things go right, and not the absence of things that go wrong”. Hollnagel has an almost identical definition.

This definition drove me to think deeply about safety and incident investigations in a different way:

  • Looking at what normally goes right to create safety instead of looking just at things which did not go quite right or are currently not going right.
  • Focusing on the differences between the way the work was done on the day (Work-As-Done) and the way the work was intended to be done by the procedure, work instruction, THA et cetera (Work-As-Intended).
  • Considering the question “What is responsible for this incident?” rather than “Who is responsible for this incident?”
  • Getting as many things right as possible, rather than minimising the number of things that go wrong.
  • Considering people as a solution to harness and develop, rather than a problem to control.
  • Challenging ourselves and our thinking about whether we are truly helping people get the positive capabilities, capacities and competencies that they need to create safety.

What a powerful definition! Think about it.

If people are creating safety in the workplace, then where do our procedures fit in? Do we expect people to adapt to create safety, applying their capabilities, capacities and competencies? Or do we expect them to follow the procedures? Or do we expect them to do both? And what are they doing at the moment anyway? This brings us to the topic of Work-As-Done.

Work-as-Done and Work-as-Intended

Work-As-Done (WAD) is the way work is actually done in the field, on the day. It is very often different from Work-As-Intended (WAI), which is the way the procedure or work instruction says to do the job. We only need to get out and about and watch people work to see this.

Remember that our people consistently create safety as they do their work. It is from their capabilities, competencies and capacities that safe work is produced. They adapt to suit the conditions on the day. There are always explanations for why the way work is done in the field, day to day, does not match the procedure; in fact, it rarely matches the procedure exactly. Yet, the vast majority of the time it results in ‘safe work’.

When something goes wrong and we have an incident, or when something has not yet gone wrong (every day…), we need to get our heads around how the work is actually done and understand what is driving any differences between Work-As-Done and Work-As-Intended.

Task / Procedural / Situational Complexity can play a big part in driving these observed differences:

Humans are capable of doing more than one thing at a time, but only if those things are simple and understandable. We cannot handle too much complexity. Sometimes we try to put everything into a procedure, and what results is a procedure, or set of procedures, that can be neither clearly understood nor followed. What is going on around us in our workplaces also affects how we function. The more complexity involved, the higher the level of risk. Understanding the level of complexity, and managing it, is vital to the safe completion of tasks.

There is another topic that could not possibly be left out of a discussion about Safety II or the New View, and that is Resilience.

Resilience* is often described as the ability to bounce back: to accommodate ‘unexpected’ change and to absorb uncertainties without falling apart.

Incidents, according to Resilience Engineering, do not represent a breakdown or malfunctioning of normal system functions, but rather breakdowns in the adaptations necessary to cope with real-world complexity:

  • Knowing what to do, i.e., how to respond to disruptions and disturbances by making adjustments to normal work.
  • Knowing what to look for, i.e., how to monitor that which is or could become a threat in the near term.
  • Knowing what to expect, i.e., how to anticipate developments and threats further into the future, such as potential disruptions, pressures, and their consequences.
  • Knowing what has happened, i.e., how to learn from experience, in particular to learn the right lessons from the right experience.

Having conversations about, and setting up our systems, procedures and our expectations around resilience will make a huge difference to our safety outcomes. It is a very powerful conversation starter when we are exploring what is contributing to any observed differences between Work-As-Done and Work-As-Intended.

*Much of this is taken from “Resilience Engineering: New directions for measuring and maintaining safety in complex systems, Final Report, December 2008” by Sidney Dekker, Erik Hollnagel, David Woods and Richard Cook.

Helping people get good at judging the likelihood of outcomes, and at assessing the risks associated with following, or not following, procedures and guidance, requires Risk Intelligence.

Risk Intelligence is the ability to estimate probabilities accurately, whether the probability of various events occurring in our lives, such as a car accident or a workplace incident, or the likelihood that some piece of information we’ve just come across is actually true. We often have to make educated guesses about such things, but fifty years of research in the psychology of judgment and decision making show that most people are not very good at doing so. Many people, for example, tend to overestimate their chances of winning the lottery, while they underestimate the probability that they will suffer from cancer at some point in their life.

At the heart of risk intelligence lies the ability to gauge the limits of your own knowledge: to be cautious when you don’t know much, and to be confident when, by contrast, you know a lot.
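This idea of "the right amount of certainty" can be checked with a simple calibration exercise: ask people to answer questions and state how confident they are, then compare stated confidence with actual accuracy. The sketch below is illustrative only; it is in the spirit of the calibration tests Dylan Evans discusses, not his actual RQ scoring formula, and the function name is my own.

```python
# Minimal calibration check: someone who says "90% sure" should be
# right about 9 times out of 10. (Illustrative sketch only — not
# Dylan Evans' actual RQ test scoring.)
from collections import defaultdict

def calibration_report(answers):
    """answers: list of (stated_confidence_percent, was_correct) pairs.
    Returns {stated confidence: actual percent correct}."""
    buckets = defaultdict(lambda: [0, 0])  # confidence -> [right, total]
    for confidence, correct in answers:
        buckets[confidence][1] += 1
        if correct:
            buckets[confidence][0] += 1
    # Well calibrated when actual accuracy ≈ stated confidence
    return {c: 100.0 * right / total
            for c, (right, total) in sorted(buckets.items())}

# Ten answers given at "90% sure", nine of them correct:
answers = [(90, True)] * 9 + [(90, False)]
print(calibration_report(answers))  # {90: 90.0} — well calibrated
```

A big gap between the columns, say being right only 60% of the time on answers given at 90% confidence, is overconfidence; the reverse gap is underconfidence.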

Being able to understand the risk associated with an activity is vital to controlling that risk and getting the task completed in a safe and effective manner. It is just as important to understand the likelihood of an event turning out as planned as it is to understand the risk of an event not going to plan or having an unexpected outcome.

In essence, risk intelligence is all about having the right amount of certainty. The more we force silly procedures on our people, the more we put up signs and mandate ways of being and doing things, the less Risk Intelligent they become. We need to help them learn how to think again.

So, how do we make sure our people know that they are creating safe work through their choices? One way is by being there as they do their normal work, exploring the differences between Work-As-Done and Work-As-Intended and most importantly, by having conversations.

Having powerful conversations

As I have just mentioned, go and have conversations with your people. This is not rocket science and has been known for many years. What I am sharing with you are a few simple questions that you can include in your conversations that will trigger some great and value-adding thinking:

  • What can happen unexpectedly, or go wrong, and how do you prepare for it?
  • How badly could it go wrong?
  • How easy or hard is the task?
  • Looking back over the years or months, has the way we have done this task changed?
  • Is the way you are doing this task the same for all crews and shifts?
  • Is the way this task is ‘normally’ done the same as in the procedure, standard or work instruction?
  • Is there a safer way, or a ‘more safe’ way of doing this?
  • How comfortable are you doing the task this way?
  • Is there anything stopping you from doing the job the way you want to do it?
  • How many procedures or standards or work instructions need to be followed to get the task completed?
  • Is the Task Hazard Analysis (THA) or Procedure useful, and is it able to be followed?
  • What do you understand by the risks that are described in the THA?

After a job has been completed:

  • What worked as we thought it would?
  • What didn’t work as we thought it would?
  • What surprised us?
  • What hazards did we catch? And miss?

I have been asked many times to suggest ways of doing simple incident investigations. It became clear to me that the conversations we have before an incident should not be any different to the conversations we have after an incident. It is for this reason that I thought I would describe my view of what makes a simple, yet very modern and effective, incident investigation tool:

“Outcome Analysis” – A simple investigation process that aligns with the above:

An event occurs. It could be an injury, a near miss or an observed difference between Work-As-Done and Work-As-Intended from a field observation and conversation.

The supervisor, superintendent or other leader has a series of conversations with those involved, with other operators in the area, and with those who know how the work is really done.

They head back to the office with the person involved, a safety rep and/or a peer, and draw a simple set of Work-As-Done and Work-As-Intended timelines. The idea is to keep this simple rather than building a detailed timeline. Once this is done, the team explores the reasons behind the differences between Work-As-Done and Work-As-Intended, using some of the topics described earlier in this blog, and comes up with a few actions that will fill the gaps.

The leader then completes a one-page report covering:

  • Overview of the event
  • Work-As-Done vs Work-As-Intended gaps identified (not the entire timeline)
  • Summary of explanations for the gaps (not a full five-whys, just the outcome)
  • Actions

(I wrote this blog after reading, thinking about and having lots of conversations about the stuff contained in the books, blogs, podcasts, papers and presentations of Sidney Dekker (Safety Differently, Behind Human Error, The Field Guide to Understanding Human Error, Just Culture et cetera), Erik Hollnagel (Safety I and II, and his excellent work on Resilience), Dylan Evans (Risk Intelligence), Daniel Kahneman (Thinking, Fast and Slow) and Todd Conklin (Pre-accident investigations), amongst others.)