Raeda Consulting May 31, 2018

It’s all about the story

Story

We all love stories. We learn from stories. We share with stories. Our investigation reports should always include the various narratives and perspectives that help us understand and learn.

 

 

Whether you use TapRooT®, ICAM (Incident Cause Analysis Method), timeline and 5 whys or one of the myriad of other options as your weapon of choice in the hunt for the drivers of workplace safety incidents, the narrative remains the single most important element in your investigation report. What do I mean by this? The main tools I use for serious incident and fatality investigations these days are timeline and 5 whys, and ICAM – both modified to include consideration of Work-As-Done as it compares to how others do the work normally (Work-As-Normal) and how managers think the work is being undertaken as per the work instruction, procedure, checklist or guideline (Work-As-Intended, or Work-As-Imagined).

 

I believe that simply creating a chart, identifying a couple of ‘root causes’ or a list of contributing factors does not cut the mustard in terms of maximizing learning and understanding of an incident. We need to also tell the story. After all, the investigation report is nothing other than the story of the investigation team’s version of the incident – how they see it through their eyes. Their story, in other words. This is not ‘the truth’ any more than an eyewitness’s version of the incident is ‘the truth’. It is simply another view of what happened and why. If we look at an event report, it is not just one story or narrative; it is comprised of many narratives, often from different perspectives. These narratives come together in a well-written report to tell the story of the event.

I was re-reading the ‘Columbia Accident Investigation Board (CAIB) Report’ along with the ‘Columbia Crew Survival Investigation Report’ over the last few weeks and they are a classic example of the many facets of an incident. Most of us do not have the resources or interest to create over 600 pages of report (and that only includes Volume 1 of the CAIB report and the crew survival report), but we do have the resources to pull together the various stories of an event in such a way as to make them accessible to others. By the way, a great book telling the various stories of the Space Shuttle Columbia tragedy is Comm Check… The Final Flight of Shuttle Columbia by Michael Cabbage and William Harwood, published by Free Press in 2004.

 

Let’s look at an example of an Organizational Factor from an investigation that is a bit closer to most of us than a Space Shuttle. Firstly, from the ICAM chart in the report: “No link between training and fatal risk database”. Secondly, from a possible narrative associated with it: “There is no formal link between the ‘Training and Competency Development System’ and the site’s ‘Fatal Risk Database’. The ‘Training and Competency Development System’ drives the creation of the ‘Training Needs Analysis’, and the ‘Training Needs Analysis’ dictates, amongst other things, what goes into the site’s induction. So when the induction was recently changed and the working at height section was removed, nobody recognized that awareness around work at height was essential for all employees and contractors who may be exposed to the risk, and that it had gone from the induction. Apart from any Management of Change issues, which are discussed elsewhere in the report, it was the underlying lack of a formal link between the ‘Training and Competency Development System’ and the site’s ‘Fatal Risk Database’ that ultimately drove this omission.”

 

One tells the story; the other forces us to think about what it is saying and build our own story and perspective before we can make sense of it, understand it and learn from it.

 

To sum up, we all like to hear stories. They apply colour, perspective and life to mere facts. We learn from stories. We remember stories and we use them to share and engage with others. Use stories in your investigation reports and you will go a long way towards helping understanding and learning.

 

 

Raeda Consulting June 21, 2019

Telling it like it is

Hand over machine

‘Telegraphing deliberate action’ is all about getting into the habit of not only stopping and thinking about what you are about to do, but also physically pausing just before you do the action and, at the same time, verbalizing your intent to yourself and to those around you at the time.

Telling it like it is

This seems at first glance to be an odd title for a blog about safety, leadership or coaching – the stuff of my usual blogs – but please bear with me.

I have spent a lot of time thinking about, and talking with leaders about, how to set up workplaces that are ‘error tolerant’ and where it is possible to ‘fail safely’. These ideas / ideals are about the workplace and are super-important goals. I also got to thinking: “In addition to setting up the workplace for success, what about the human bit – the person doing the work, or the work crew around the person doing the work – what can they do to help minimize mistakes, slips, lapses et cetera?” Then I read about ‘take deliberate action’ and ‘telegraphing’ actions and felt that here was something that was REALLY worth sharing. So that is what I am trying to do in this blog.

The ideas of ‘take deliberate action’ and ‘telegraphing’ actions are so similar that, for the purposes of today, I combine them and talk about ‘telegraphing deliberate action’.

‘Telegraphing deliberate action’ is all about getting into the habit of not only stopping and thinking about what you are about to do, but also physically pausing just before you do the action and, at the same time, verbalizing your intent to yourself and to those around you at the time.

So, what does ‘telegraphing deliberate action’ look like in anger?

Scenario: An Elevated Work Platform (EWP) operator is manoeuvring an EWP so that the basket, with a person in it, moves away from a hot furnace that the crew are working on. Moving the EWP in the wrong direction may result in the person in the basket of the EWP receiving burns. Adopting ‘telegraphing deliberate action’, the EWP operator holds his hand over the “B” lever and announces to no-one in particular, “I am moving lever “A” in order to lower the basket”. The EWP spotter, who is standing next to the EWP operator, sees the hand movement and hears the intention of the operator. He immediately alerts the EWP operator to the discrepancy. Alternatively, the EWP operator himself, upon saying lever A and seeing his hand over lever B, stops the action and corrects it himself. Either way, the potential burn is avoided and safe work is achieved. I can see the potential for using ‘telegraphing deliberate action’ in many other industries. I am thinking hospital wards, operating theatres, maintenance workshops, offices.

When I talk with operators and leaders about this idea, they tend to say that it makes sense and they can see how it could help, BUT generally feel that it would not work so well (for them) as it would be weird to go on chattering as you do work. They say they would feel uncomfortable doing it. This leads to what I feel is the biggest problem with getting to an effective use of ‘telegraphing deliberate action’ in our workplaces – we do not generally think out loud as we work. We do not share our ideas as we get things done in a normal work situation, and in fact the concept of ‘safe to speak up’ that we often see in industries may run up against the cultural need to be seen to know what you are doing and to just get on with it. The benefits of ‘telegraphing deliberate action’ can only manifest when people are actually speaking up as they are doing their work. This may present some difficulties during implementation.

One way around this is for leaders to practice ‘telegraphing deliberate action’ in their normal daily activities, in their meetings, in their workplace interactions – to get into the habit of talking through what they are doing and encouraging their teams to listen and help poke holes in their logic. Of course, imperative in establishing ‘telegraphing deliberate action’ is helping others understand the ‘why?’ of the activity. If they get it, if they deeply understand why ‘telegraphing deliberate action’ may help them minimise mistakes and help them undertake safe work, they will do it. If they do not get it, they won’t. “Starting with Why” has a big role to play here.

What I love about the idea is that even if other operators are not present, the application of ‘telegraphing deliberate action’ benefits the operator themselves, as the act of pausing and vocalizing intent forces the mind to be mindful of the present situation and what they are trying to achieve. It gives us another opportunity to get it right. And getting it right in the first place is much more satisfying than doing an investigation afterwards…

‘Telegraphing deliberate action’ adds an element of mindfulness to the task and seeks to eliminate those ‘automatic’ mistakes, especially when the button about to be pressed looks just like the one next to it (or the label on the medication you are about to give to a patient is almost identical to the one you absolutely don’t want to give to the patient).

‘Telegraphing deliberate action’ is not for the benefit of the observer or the leader doing some in-the-field leadership observation activity, or the boss. ‘Telegraphing deliberate action’ is purely for the benefit of the person doing the work.

What Jim Wetherbee says about telegraphing action:
“When the practice of telegraphing actions becomes automatic between crew members, the operating effectiveness of the team improves dramatically. When executed properly, this practice contributes to error-free operations, allowing the team to achieve better performance, with higher-quality results.”

Whatever you do, whether you are a manager, an operator, a nurse, a doctor or an engineer, ‘telegraphing deliberate action’ can make a huge difference to your level of mindfulness and situation awareness in the workplace and help you to get it right the first time. So, it is not only about reducing mistakes but is very much about operational excellence. It also helps keep your workmates up to speed with what it is you are about to do – or at least up to date with what you think you are about to do. By sharing your intention, you are sharing your mental model of what the work situation is and what you are about to do within it.

I have a request of you. Do a ‘micro-experiment’ (to borrow a Dekker idea). Go away and have a play with the practice of ‘telegraphing deliberate action’. Talk to people about it. Get them to have a play with it. Do it yourself for a while and see how it feels – there will be benefits.

Inspiration and sources of interest for this blog include the words and works of: Jim Wetherbee, David Marquet, Sidney Dekker and James Reason.

Cheers

longy

Raeda Consulting March 25, 2019

Controlling risk – My Top 9 techniques

Top 9

If you have not read Controlling Risk in a Dangerous World: 30 Techniques for Operating Excellence, written by Cpt. Jim Wetherbee (Morgan James Publishing, 2016) – then you need to. In this blog, I have tried to distil from it what I feel are the most important techniques for controlling risk that would absolutely help those at the sharp end of work. I hope it piques your interest and encourages some conversation.

Controlling risk – My Top 9 techniques

I was re-reading one of my favourite books the other day – Controlling Risk in a Dangerous World: 30 Techniques for Operating Excellence, written by Cpt. Jim Wetherbee (Morgan James Publishing, 2016) – and was trying to distil from it what I felt are the most important themes that would absolutely help those at the sharp end of work. Of course, it is always dangerous whenever we try to summarize – or pick the best bits out of – anything, as a reduction always loses context and detail, but in this case I feel it is worth it. If for no other reason than that it may pique your interest to go and buy Jim’s book and read all of it, a number of times.

I have chosen nine of the thirty techniques for operating excellence as I feel they offer a central role for those of us who are exposed to the hazards in our workplaces. The intention is not, in any way, to belittle any of the other techniques but rather to focus our conversation somewhat.

I have grouped the ‘Top 9’ into the categories of ‘Thinking’, ‘Planning’, and ‘Doing’ – somewhat inspired by the PDCA (Plan, Do, Check, Act) cycle that we have all seen before. There are overlaps as you can see from the diagram below.

[Venn diagram: the ‘Top 9’ techniques grouped into ‘Thinking’, ‘Planning’ and ‘Doing’]

Note: The numbers in brackets represent the number of the technique in Jim’s book

I will go through each of the ‘Top 9’ in an order that seems to make sense – to me at least.

1. Develop and Maintain Risk Awareness

Developing risk awareness requires work. Over time and with practice, operators can achieve a high level of awareness of risk, a sense of what is going on around them and whether anything dangerous is developing. I am reminded of James Reason’s ideas of error awareness and chronic unease, and also Dylan Evans’ concept of risk intelligence, here.

Wetherbee talks about developing and maintaining risk awareness by using three steps:

  1. Learning from past activities – searching for vulnerabilities;
  2. Sensing present operations – maintaining situational awareness; and
  3. Predicting future activities – anticipating the changing shape of risk.

 

The three steps can be achieved by operators spending enough time understanding their equipment and systems – looking at their vulnerabilities – understanding the risk profile before starting the task, and then monitoring for change and/or drift (see Dekker and Snook) during the task itself, always being mindful during the work.

2. Share and Challenge Mental Models

Once you have thought about the work you are planning on doing and what can bite you, you need to make sure you, and the rest of the team, have an aligned understanding of what is going on. This is called a mental model.

It is important that each member of the team knows what their role is and also what the others in the team are doing. Sharing mental models greatly assists with this. Also, during the task, keeping others up to date with what’s going on helps to keep the team aligned. This reminds me of the work of David Marquet around take deliberate action and the concept of telegraphing action that Wetherbee talks about. Wetherbee describes it well: “Before moving any switch or making command inputs to a control device during operations, we indicated our intentions by pausing briefly over the switch or control to allow a second crew member to verify whether the intended action was correct.” Marquet goes a step further and suggests that the intended action is also verbalized, even when on your own. Doing this keeps the shared mental model alive. Both take deliberate action and telegraphing action work just as well when you are on your own as they do in a team environment.

3. Control Risk

Given this is very close to the title of the book, the whole book is really about this topic. Getting it right means that we have created some safe work. Of course, it always sounds simple: identify hazards, assess risks, and implement hazard controls. Easy peasy. BUT it is often hard. It follows on nicely from the techniques already discussed inasmuch as, having developed risk awareness and established a shared mental model, the individual and team need to get down to the exercise of understanding what can bite them and then what they can do to make sure they are not bitten.

Identifying hazards is a skill that can be learned and I strongly advise you to establish within your business a process to do just that. There is some excellent work being done using virtual reality and simulations that can really help here.

 

Assessing risk is really about asking yourself a few questions such as:

  • Will I, or anyone on my team be injured? If so, how badly?
  • What if…?
  • What usually goes right here, but may not today?
  • What will happen to me and the team if we do not control the risks?

Implementing hazard controls is simple. ‘NIKE – Just Do It’ sums it up pretty well. By this time, you have become aware of what risks there are, how relevant they are to you and the team, and what could go wrong, and now you simply have to make sure that the things you are about to do in the task are done in such a way that the hazard does not manifest itself. Always remember the hierarchy of controls: eliminating the hazard is always a better option than talking about it and keeping an eye on it.

4. Develop and Execute A Plan (For All Critical Phases of Operations)

You must have an overall understanding of the mission, capabilities, purpose of the activity, and the equipment and systems to be used before you can explore the critical phases and build a plan. Wetherbee goes on to say: “Success in complicated operations can only be achieved when operators execute a well-developed plan for all critical phases of the operations.” Once again, we see the previous elements building into this one. You know the hazards, have assessed the risks and worked out how to control them; now you need to make sure you know which bits of the task are critical and develop a plan for how they will be achieved. A task-based risk assessment (TBRA) can be useful here. A TBRA should simply set out the steps of a job, what the hazards are and how to control them. It should also highlight the critical phases of the job, with added emphasis on the work needed to execute those phases. It needs also to identify the trigger steps (see the next technique). As an aside, I do not believe a TBRA process should require the team to quantitatively, or semi-quantitatively, assess the risk (this is often required via a risk matrix). It should instead focus on what work is to be done, what the hazards are, how to control them, and what the critical phases and trigger steps are, and basically assist in building a common mental model of the work for the whole team.

5. Identify Trigger Steps (execution steps with immediate consequences)

This step is really done alongside the previous one. A trigger step is one that has immediate consequences. There is no time to stop and go back. Whether you get it right or whether you get it wrong, there is no going back. Once you have cracked the egg into the soup, you cannot get it back into its shell.

The trigger steps are identified in the “Develop and execute a plan” step, and now extra vigilance is needed prior to actioning the step. (I’m about to do something that will have consequences. Have I checked everything one last time? Have I forgotten anything?) Telegraphing action really helps here as well.

6. Expect Failures (System and Human)

Expect failure in yourself, in what you are using and in everyone else. (I assume everyone else on the road is trying to kill me.) Chronic unease fits in here nicely. Always have that level of alertness about what could go wrong – everything is going along swimmingly; this cannot last. Try to keep an eye on things and have a back-up plan if things go south. Anticipate a failure and already know what you are going to do.

If there is one guarantee in the world of safety and work, it is that people will fail. If someone goes up onto a scaffold with tools and equipment, they will drop them at some point in time. This is the very reason we put drop zones around scaffold that is being worked on. As you drive a car, you need to be very aware of what others are doing. Always look in the mirrors, down the side roads you pass, at the traffic lights, at what pedestrians and cyclists are doing, planning to yourself all the time what you will do if…

7. Follow Procedures (and Rules) Thoughtfully

I love this idea. Procedures seem to be here to stay. They are, to me at least, both a curse and a saviour. Corinne Bieder and Mathilde Bourrier, in Trapping Safety into Rules, sum it up well when they say “it is not always clear what procedures are really meant to achieve. Are they guidance to operate complex system? Ensure safe operation? Or maybe to provide management or regulators with an easy and explicit reference that allows them to easily identify indicators to monitor performance?” They argue that even the definition of what a procedure is remains vague – “a single word for a variety of objects”.

Having smart procedures – ones that are accurate – is a good thing. Wetherbee suggests we need to ask each front-line leader in the organization two questions:

  • Do you think all your operating procedures are accurate? (‘Accurate’ includes being effective and representative of the organization’s collective wisdom on the best way to accomplish a task or activity.)

  • Does each of your operators think all of your procedures are accurate – and will help him or her be successful?

If the answer to the first question is ‘No’, then fix the procedures and do not expect or demand compliance. The same applies for the second question.

Wetherbee suggests that there are only two ways to get into trouble with procedures: not following them, and following them blindly. So regardless of whether your procedures are accurate as yet, the best advice to be given is to follow them thoughtfully. Ask yourself: if you follow the procedure, will it deliver what the boss wants, and will you avoid getting hurt? Is it easy to follow correctly or difficult? Does it all make sense for you, the end-user?

8. Be Mindful During Operations

I am reminded here of a couple of authors (and their books) worthy of exploring in relation to this technique: Carol Dweck with Mindset, and Karl Weick and Kathleen Sutcliffe with Managing the Unexpected. They are worth a read. Wetherbee talks about mindfulness in slightly different terms than these authors. He talks about five aspects that enhance mindfulness, which the operator can learn, develop and then maintain:

  • Technical knowledge – the need for operators to know all they can about the systems they operate;
  • Teamwork – knowing their own and their teammates’ strengths and weaknesses and how the team members can best be used for the success of the activity;
  • T-0 vigilance (T minus zero) – based on what astronauts use to remain present before and after take-off. He explains that operators must remain present and focus on everything necessary to be successful. Of course, they cannot focus on everything, so knowing which bits are important is a precursor to this work;
  • Cognition (controlling and automatic) – through practice it is possible to move work from the controlling mode (manual, mentally draining) to the automatic (like the complex activities required to drive a car after years of doing it – not so mentally draining). He recognizes that multitasking is only possible in the automatic mode. Think about chatting to a friend whilst driving a car;
  • Fields of vision – during critical or complicated tasks, narrow your vision, and then periodically scan the rest of the world to make sure it is still as your mental model expects it to be.

9. Preserve Options During Operations

Somewhat related to many of the other Top 9, preserving options is like having a plan that is just there for when things go wrong. Always keep at least one option on the table. Keep the options updated as things progress. Wetherbee has some great illustrations of this technique. These include: when driving, he is always looking for options in case the car coming the other way does something unexpected; when cycling, he rides out wide of parked cars, just in case someone opens a door. He advises us not to get trapped without an escape route, to be always thinking ‘what if…?’ and to have an answer ready about what to do if it happens.

In summary

As I said at the top, it is always dangerous whenever we try to summarize – or pick the best bits out of – anything, as a reduction always loses context and detail. I hope that I have not done Jim Wetherbee a disservice by focusing on a few of his excellent techniques. I strongly advise you to buy Jim’s book, as I really think it is an excellent set of tools to think about and apply as you strive to create safe work.

Controlling Risk in a Dangerous World: 30 Techniques for Operating Excellence. Cpt. Jim Wetherbee, Morgan James Publishing. 2016

Raeda Consulting September 27, 2018

My Top-20 Safety, Leadership, and Coaching books.

Books

I often get asked in my workshops to suggest some books to read in order to help workshop participants better understand the topics I talk about. This is an attempt to point them in what I consider the right direction.

This mega-blog attempts to give you my views and thoughts on my current top-20 technical safety / coaching / leadership books. They are not in any particular order, as I believe that you should read all twenty of them. They will hopefully whet your appetite for the topics that keep me enthralled and also very busy in my business (Raeda).

 

I am calling it a ‘mega-blog’ as it is way too large to deserve the name ‘blog’. I hope you enjoy it and the books it talks about.

 

The Field Guide to Understanding ‘Human Error’ (3rd Ed.). Sidney Dekker, Ashgate. 2014

 

Aimed at helping us understand and distinguish between what Dekker calls the “old view” and the “new view” of safety and ‘human error’, the Field Guide moves us from seeing human error as a cause of something to seeing it as a consequence. He reminds us to avoid words like “failure”, to embrace words like “experience”, and to step into the shoes of those involved in events in order to try to understand their perspective – their story. Although I have not said much about this book here, I believe it is an absolute no-brainer – Just Read It…

 

Comm Check… The Final Flight of Shuttle Columbia. Michael Cabbage and William Harwood, Free Press. 2004

 

“It is all about the story” is the title of one of my recent blogs. This book is a fantastic view of the Columbia Shuttle re-entry breakup. It tells the stories from various perspectives. It tells the stories that led to the event. It tells the stories of the participants, and it tells the stories of the management and decision-making teams and processes.

It does not tell us how to do an investigation, but I believe it is an important book for those whose job it is to pull together investigation reports. It reminds us to speak to the various perspectives and narratives that need to be told. Of course, most of us do not have the time, resources and commitment to write a book about each of the safety investigations we do, but we can learn a lot from Cabbage and Harwood about incident story-telling.

 

Risk Intelligence – How to Live with Uncertainty. Dylan Evans, Atlantic Books. 2012

 

This interesting (in a good way) book is all about having the right degree of certainty to make sound decisions. It prompted me to think about, and to ask others, how good we are at knowing what we know and knowing what we don’t know. How risk intelligent are we? This book and its concepts are great thought-provokers when doing safety investigations and when thinking about safety and risk management more generally.

 

Safety Differently – Human Factors for a New Era. Sidney Dekker, CRC Press. 2015

 

In Dekker’s words, “This book attempts to show where our current thinking is limited; where our vocabulary, our models, and our ideas are constraining progress.” This book brilliantly puts our current approach to safety in the perspective of the past – how we got to where we are – and then explains how and why we need to think differently, to “do” safety differently. As Dekker can do so well, he pulls no punches in his messaging and his approach. He is compelling, and Safety Differently is an essential read for all leaders and ‘safety’ people, whatever that may mean.

 

Disastrous Decisions – The Human and Organizational Causes of the Gulf of Mexico Blowout. Andrew Hopkins, CCH Press. 2012

 

I have included this book in my top 20 so as to introduce you to Andrew Hopkins, if you do not already know his work. Hopkins dedicates a complete book to each catastrophic incident he examines. All Hopkins’ books are excellent, providing a nice balance between the technical discussion of the mechanics of the event in question and a sound analysis of its contributory elements from the many angles and viewpoints that exist.

 

Pre-accident Investigations – Better Questions.Todd Conklin, CRC Press. 2016

 

A mixture of quoting and paraphrasing the preface: A basic premise of this book is that we should not care if a worker made a mistake or violated a process – both errors and mistakes are so normal and predictable that they are not even interesting… and never causal.

 

Todd’s book is all about, as the title unsubtly suggests, asking better questions – questions that will help us understand how our organization’s processes have let failure manifest into an incident. He very much focuses on learning first, not doing first, and on the idea that the only purpose of investigations is to learn and improve.

 

One quote that I really like: “The prize is not in writing the perfect corrective action; the prize is in asking the perfect question.”

 

The nuts and bolts of the book are about how to make learning teams work. This is done in seven steps or phases:

 

“Phase 1: Determine need for Learning Team

Phase 2: First session: Learning mode only

Phase 3: Provide “Soak Time”

Phase 4: Second Session: Start in learning mode

Phase 5: Define current defenses / build new ones

Phase 6: Tracking actions and criteria for closure

Phase 7: Communicate to other applicable areas”

 

To me, the power of this approach lies in the involvement of those who actually know what their day-to-day work looks like in the (investigation) team, talking through and explaining what I would call Work-As-Normal.

 

 

Organizational Accidents Revisited. James Reason, CRC Press. 2016

 

Reason does a brilliant job of recapping his earlier – somewhat similar-looking – book of 1997 and then goes on to share some ideas about how to help our people get better at being ERROR-AWARE through foresight training. Chapter eight is an excellent overview of those thinkers who have a slightly, or significantly, different view than Reason. He touches on Turner, Leveson and of course Charles Perrow and Jens Rasmussen – all of whom should also be read. But I cannot include every book in my top 20…

 

James Reason touches on FRAM (Functional Resonance Analysis Method) – the baby of Erik Hollnagel. FRAM is a tool that, after many re-reads of the book, I do not quite get, but I do acknowledge there is something significant in there.

 

Organizational Accidents Revisited is a very easy and worthwhile read, especially for those of you using ICAM (Incident Cause Analysis Method).

 

Start with Why – How Great Leaders Inspire Everyone to Take Action. Simon Sinek, Penguin. 2011

 

From an on-line review: “According to Sinek, most leaders talk about WHAT they do – the products or services that make them money. Some leaders talk about HOW – the process they use that sets them apart. Very few leaders talk about (or even know) their WHY – the reason the business exists in the first place.”

 

This is an awesome book, and I mean that in the traditional sense. I was struck by the realization of why starting with why is such an imperative in all we do. It is, with the wonderful power of hindsight, so obvious. If somebody knows WHY doing something is going to help them be better, their life be simpler or healthier, or their business be more impactful – and they know their why to the depth of their bones, really get it – they will be unstoppable.

 

I use this concept in all my coaching workshops and in many coaching activities, and I very much use it when helping leaders at all levels get better at workplace incident investigations by helping them think differently.

 

In summary, read this book! It is not only about leadership but is also about how we need to think about all we do.

 

The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA. Diane Vaughan, University of Chicago Press. 1996

 

The best overview I can give you in order to encourage you to read this prodigious book is from the preface to the book itself:

 

“The cause of the disaster was a mistake embedded in the banality of organizational life and facilitated by an environment of scarcity and competition, elite bargaining, uncertain technology, incrementalism, patterns of information, routinization, organizational and interorganizational structures, and a complex culture.”

 

This book covers issues such as ‘learning culture’, the normalization of deviance, the culture of production and structural secrecy – topics that are well worth understanding and that absolutely do not apply only to space shuttle accident investigations.

 

Drift Into Failure: From Hunting Broken Parts to Understanding Complex Systems. Sidney Dekker, Ashgate. 2011

 

As usual, Dekker writes to an educated audience and that makes most of his books a tad tough to read for some. Having said that, I thoroughly recommend that you read them all – twice.

 

What I love about this book is that it offers examples (stories) that help paint the picture Dekker wants us to see. He gets us to think beyond Newtonian cause-and-effect thinking to that of complexity and relationship. He also gets us beyond the black and white of Descartes to the many shades of grey.

 

In many ways, this book is about the complexity of systems and finishes up talking about the complexity of drift – the final sentence sums it up: “Complexity allows us to invite more voices into the conversation, and to celebrate the diversity of their contributions. Truth, if there is such a concept, lies in diversity, not Singularity.”

 

A great companion to Drift Into Failure is Scott Snook’s Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq. Princeton University Press. 2000

 

Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq. Scott Snook, Princeton University Press. 2000

 

This book is the ultimate example of why ‘root cause’ is a pretty meaningless concept in complex socio-technical workplace safety incident investigations.

 

It also explores, both as a concept and through an excellent narrative, the idea of practical drift. Snook gives us the narrative from the various perspectives of those intimately involved: the individual-level account; the group-level account; the organizational-level account; and what Snook calls a cross-level account – how the various stories and perspectives all fit together. This is a good book to read alongside Dekker’s Drift Into Failure.

 

Verbalisation: The Power of Words to Drive Change. Sven Hughes, Verbalisation Limited. 2017

 

Verbalisation, at least in the context of safety and safety-related incidents, teaches us all about learning (as compared with sharing, which we tend to do ad nauseam after an incident). To me and to many others, including Karl Weick and Kathleen Sutcliffe in Managing the Unexpected, learning is all about cognitive changes that manifest in changes in behaviour. This is what Verbalisation is all about. To me, the book also appears to be something of an advertising exercise for the company (Verbalisation Limited), but I hope you too can see past that and get the huge amount of help I got in relation to tailoring messaging to specific audiences in order to maximize learning. And that is something we all need to get better at.

 

Turn the Ship Around:  A True Story of Turning Followers into Leaders. L. David Marquet, Portfolio Penguin. 2013

 

This book should be essential reading for anyone who purports to lead others. The simplest way to get your head around Marquet’s ideas is to watch the YouTube clip called “Inno-versity Presents: “Greatness” by David Marquet”.

 

I am confident it will pique your interest enough for you to rush out and buy the book. Turn the Ship Around is a story that develops the ideas that shaped Marquet’s views on, and practices of, leadership. Its ideas remind me somewhat of coaching, wherein the answers to a problem lie not with the coach, but with the player – the person being coached.

 

Marquet eloquently describes how he moved from directive to intent-based leadership and the massive impact it had on his life as a submarine commander in the US Navy. One aspect that I love (and used in my book) is that at the end of each chapter he adds a handful of “Questions to consider” that make the topic discussed in the chapter relevant and real for the reader, their leadership and their organization. To sum up, I will upgrade my opening sentence: This book must be essential reading for anyone who purports to lead others.

 

Investigative Interviewing: Psychology and Practice. Rebecca Milne and Ray Bull, John Wiley and Sons. 1999

 

A brilliant and insightful blend of the theory (why) and the practice (how) of interviewing. Aimed at lawyers, social workers, judges and psychologists, I believe it is just as powerful for safety professionals and coal-face leaders who want to understand what people did or saw during a workplace safety incident. Milne and Bull talk about memory creation, memory duration, question types and styles during interviews, and – most importantly, and I feel most interestingly – the details of how to undertake an enhanced cognitive interview.

 

This book is well worth at least two reads if part of your world includes interviewing after incidents or helping others improve their interviewing skills.

 

Safety-I and Safety-II: The Past and Future of Safety Management. Erik Hollnagel, Ashgate. 2014

 

Hollnagel starts this insightful book with an exploration of the etymology of the word ‘safety’. It is a very old word…

 

The main idea of Safety-I and Safety-II is that Safety-I is defined as a condition where as little as possible goes wrong, and Safety-II as a condition where as much as possible goes right. It is mainly from this great book that I built (the search for gaps between) Work-as-Done, Work-as-Normal and Work-as-Intended as the basis for constructing ‘timelines’ and focusing safety investigations, as discussed in my book Simplicity in Safety Investigations.

 

Safety-II in Practice: Developing the Resilience Potentials. Erik Hollnagel, Routledge. 2018

 

I must confess that the FRAM instance on the cover initially scared me off a bit. I persevered, however, and am very glad that I did. Providing an overview and some new perspectives on Safety-II beyond his previous work, Hollnagel moves into an explanation of resilience and Resilience Potentials. This is such an important concept. I feel that if you are in any way connected to the creation of ‘safe work’ – as a doer, a leader or a functional safety expert – you need to know this stuff.

 

I must also confess that I got a bit lost when Hollnagel started explaining how to measure and plot the Resilience Potentials and how to describe their relationships using a FRAM. But that is my failing and you may really get that bit. This book is well worth a couple of reads.

 

Human Factors & Ergonomics in Practice: Improving System Performance and Well-Being in the Real World. Edited by Steven Shorrock and Claire Williams, CRC Press. 2017.

 

From my review on Amazon: “Every now and again, I read a book that makes a difference in my life. This is one of those books. Before reading Human Factors & Ergonomics in Practice, I had a bit of a sense of what Human Factors and Ergonomics (HF/E) is. I have read all of Dekker, Hollnagel, Reason and many others who all talk about HF/E, and I have been exposed to investigation methods that subscribe to the ideas of HF/E. I even talk about it a little in my book, but I never really got it. Steven Shorrock and Claire Williams, along with a band of merry practitioners, clearly describe what HF/E is, what HF/E is not, and provide a plethora of domain-specific examples and practitioner-centric thought that all comes together to set the ideas of HF/E in the mind of the reader. In creating Human Factors & Ergonomics in Practice, Shorrock and Williams have given us an accessible balance between text and narrative that will be a benchmark for a long time to come.”

 

Some of the chapter contributors I instantly recognized are Erik Hollnagel, Sidney Dekker, Ron Gantt, Paul Salmon, Daniel Hummerdal, Martin Bromiley and Don Harris. Across the book’s four parts, Claire and Steven have masterfully brought together expert theorists and expert practitioners in the field of HF/E to tell stories that explain so much, especially to a layman like me.

 

Coaching for Performance: Growing Human Potential and Purpose (4th ed). John Whitmore, Nicholas Brealey Publishing. 2009

 

The first edition was published in 1992 and I wish I had read it back then. This essential book covers all the basics, such as what coaching is and what it is not, and the GROW model – that stalwart of the art of coaching – along with some great questions that you can use with it. All this is done in excellent detail, with practical advice and expert insight into topics such as barriers to coaching, coaching teams, leadership coaching and transpersonal coaching.

 

Although it is not specifically focused on investigations, I found Whitmore’s book invaluable as I tried to become more effective in investigating workplace safety incidents and in helping other people learn how to do their own investigations.

 

Controlling Risk in a Dangerous World: 30 Techniques for Operating Excellence. Capt. Jim Wetherbee, Morgan James Publishing. 2016

 

This is a brilliant book – “Operators don’t manage risk; they control risk.”

 

Wetherbee uses personal stories (and those of others he knows and has worked with over many years) very effectively. We learn through stories and he is a master story-teller.

 

Rather than give a higher-level review, I am going to home in on each of his “30 Techniques for Operating Excellence”, quoting each one and then adding my thoughts and explorations of questions and topics of interest related to the technique that may be useful in our day-to-day work as a leader, or as an investigator after something has gone wrong. I know this review will end up being way too long, but I really want you to get enough of a sense of where Wetherbee is going that you will simply go out and buy the book.

 1. Develop and Maintain Risk Awareness

Operators cannot identify hazards reliably or control risk effectively without mastering ‘Develop and Maintain Risk Awareness’.

To start with, purposefully develop an awareness of risk. Look for it. Think about it. Chronic unease plays a part here. The more this is done, the more the awareness moves away from directed attention and conscious effort to becoming automatic over time. Ability becomes skill when we no longer have to think consciously about performance.

Leadership / Investigation Thoughts:

  • How does the equipment being used work? Do those using the equipment know how it works?
  • What are the vulnerabilities of the equipment, especially those that could injure?
  • Is the Original Equipment Manufacturer’s manual around, and is it read?
  • Are drawings and technical details understood and used to help teams understand the risks associated with what they are working on?
  • Just as a task is started, is the risk profile understood? And did it change through the task?
    • If so, how was the change identified and managed?
  • What is done to maintain mindfulness of what is going on during a task?

 

2. Control Risk

  • Identify hazards – This simply must be learned and practiced
  • Assess risk
    • Will I, or anyone on my team be injured? If so, how badly? How can I prevent an incident?
  • Implement hazard controls
    • Apply the hierarchy of controls – top to bottom

Leadership / Investigation Thoughts:

  • What process is used by those doing the work, to identify the hazards?
  • How did they learn how to identify and perceive hazards?
  • Do they use vision to identify hazards?
  • What about lines of fire, anomalies, changes, subtle energies and subtle changes?
  • Do the practitioners / operators / maintainers ask: “Will I, or anyone on my team be injured? If so, how badly? How can I prevent an incident?”
  • What are the risk intelligence and risk attitudes of the individual team members?
  • People with a better risk attitude have a higher perception of risk and a lower propensity to accept risk. The opposite tend to be over-confident.
  • Risk intelligence is more about whether they know the limits of their understanding.
  • Do practitioners / operators / maintainers try to improve their risk attitude by enhancing their perception of risk and reducing their personal propensity to accept risk, commensurate with the importance of the mission and their ability to control the hazards?

 

3. Follow Procedures (and Rules) Thoughtfully

There are only two ways to get into trouble with procedures: not following them, and following them blindly.

Managers should ask each front-line leader in the organization these questions:

  • Do you think all your operating procedures are accurate? (‘Accurate’ includes being effective and representative of the organization’s collective wisdom on the best way to accomplish a task or activity.)
  • Does each of your operators think all of your procedures are accurate – and will help him or her be successful?

If the answer is ‘NO’ to question one, then fix the procedures and do not expect or demand compliance. Same for question two.

One of the best ways to reduce errors is to use procedures when conducting complicated operations. It is important to remember, however, that violating a rule does not always result in an accident or injury. Rare events happen rarely.

This is how operators should be thoughtful when using a procedure:

  • Use the procedure – Every step
  • When any step shouldn’t be used as specified, follow an established process

Leadership / Investigation Thoughts:

  • What procedures need to be known to get the task completed?
  • How do practitioners / operators / maintainers know that they know the procedures well enough?
  • How critical is it for the procedures of interest to be followed precisely?
  • Are all the procedures accurate? (Able to be followed and will result in safe work every time)
  • What happens when the procedures of interest cannot be followed?
  • Are adaptive ‘principles-based techniques’ used when procedures do not exist?
  • Do the practitioners / operators / maintainers understand:
    • The worst consequence if they do not control the hazards,
    • The criticality, or importance, of the mission they are involved in,
    • Their ability to control the hazards?

 

4. Employ Two-Person Rule

  • I intend to…
  • Hold hand over controls briefly before activating (telegraphing the action)
  • Verbalise intended action, to self and/or others around: “I am going to open valve B21”

Leadership / Investigation Thoughts:

  • When two or more people are on a task, what level of involvement do they have in the thinking and process of a task?
  • Do practitioners / operators / maintainers use a form of:
    • I intend to…
    • Hold hand over controls briefly before activating (telegraphing the action)
    • Verbalise intended action, to self and/or others around: “I am going to open valve B21”

 

5. Identify Trigger Steps (execution steps with immediate consequences)

  • A trigger step is one that has immediate consequences. There is no time to stop and go back.
  • Identify these and apply extra vigilance prior to actioning the step. (I’m about to do something that will have consequences. Have I checked everything one last time? Have I forgotten anything?)
  • A simple example is leaving a hotel room. Stop before you close the door and double check you have the room key.

Leadership / Investigation Thoughts:

  • Do the practitioners / operators / maintainers know what trigger steps exist for their task?
  • How do they prepare and manage for them?

 

6. Perform Verification

  • Simple verification
    • Just check it. Review the procedure after the action and check that all steps were carried out as per the procedure.
  • Redundant verification
    • Two different types of gauges that monitor the same variables. Or two different people cross-checking that a task has been completed. For example, “Cabin crew, set doors to automatic and cross-check”
  • Dissimilar verification
    •  Is there an independent system we can check? Or cross-check? Can we look at the system from a different perspective? For example, after setting four switches two down and two up, look at the pattern of switches and check it is as planned.

Leadership / Investigation Thoughts:

  • After each step or part process of a procedure in a task, what do the practitioners / operators / maintainers do to verify that the step was carried out as per the procedure or ‘principle-based technique’?

 

7. Protect Equipment and Systems

Taking care of your gear will extend the life of the equipment and minimise the likelihood of failures.

 Leadership / Investigation Thoughts:

  • What are the vulnerabilities of the equipment and systems associated with the work?
  • Which bits are pushed beyond their capability during work?
  • Does / did the work involve working outside the spec / guidelines of the equipment?

 

8. Expect Failures (System and Human)

Expect failure in yourself, what you are using and everyone else. (I assume everyone else on the road is trying to kill me). Chronic unease fits in here. Always have a back-up plan. Anticipate a failure and already know what you are going to do. Always be ready for failure.

 Leadership / Investigation Thoughts:

  • What failures are normally expected when doing (specific) work?
  • What redundancy do the practitioners / operators / maintainers build into the way they work?
  • What contingencies / recovery plans do operator / maintainers build into their work?

 

9. Develop Error Wisdom (Individual and Collective)

The aim is to understand how errors are made, and to learn how to develop and use techniques that avoid adverse consequences from the errors we make so easily.

  • We learn from making mistakes
  • Own up to your mistakes and errors (to yourself at least)
  • Look for them in all you do. We need to deliberately and skilfully learn from mistakes
  • Create a personal list of errors against procedures for a shift / week / month and explore it.
  • Become error wise

Leadership / Investigation Thoughts:

  • What mistakes usually occur during the (specific) work being done?
  • Which bits of a procedure do the practitioners / operators / maintainers usually make mistakes in?

 

10. Use Error-Mitigation Techniques

  • Reduce the likelihood of errors (before the operation)
  • Capture incipient errors before they occur (during the operation)
  • Mitigate the consequences  of errors (after the operation)

Leadership / Investigation Thoughts:

  • What Human Factors / Ergonomics factors / design issues exist around the work being undertaken?
  • Exactly how are practitioners / operators / maintainers trained in the task / procedure?
  • Exactly how do practitioners / operators / maintainers use checklists and procedures?
  • Are mistakes and errors discussed in the team quickly?

 

11. Develop and Execute A Plan (For All Critical Phases of Operations)

You must have an overall understanding of the mission, capabilities, purpose of the activity, and equipment and systems to be used before you can explore the critical phases and build a plan.

Leadership / Investigation Thoughts:

  • How do the practitioners / operators / maintainers:
    • Build an outward focus and sense emergent hazards immediately
    • Predict trigger steps, consequences and ‘What if?’ scenarios (chronic unease)
    • Expect failure
    • Conduct task post-mortems
    • Be ready to change the plans if needed.
    • Maintain a suitable level of situational awareness
    • Anticipate increasing risk and always be ready to invoke a contingency plan (See technique 12)?

 

12. Have a Continuation Plan

As you prepare for the task, and after you have identified your trigger steps and critical phases, develop in your mind what you will do if any of these steps does not go as planned. How will you recover from the situation before it develops into a catastrophe or injury?

Leadership / Investigation Thoughts:

  • What are the trigger steps and critical phases of the task?
  • For each trigger step, what is the plan if it goes south?

 

13. Preserve Options During Operations

  • Always keep at least one option on the table
  • Do not rely on hope
  • Do not get trapped without an escape route

Leadership / Investigation Thoughts:

  • What is the practitioner / operator / maintainer’s final get-out-of-gaol option or escape route (the one that would normally, or was designed to, prevent the incident)?

 

14. Reduce Exposure to Hazards

This is the most powerful technique. The further you operate from danger, the better your chances of completing the mission.

Leadership / Investigation Thoughts:

  • In what ways does the team or the practitioners / operators / maintainers attempt to, or succeed in, minimizing exposure to hazards during a task?

 

15. Maintain Positive Control (When Moving Objects)

  • Understand control
  • Maintain connection
  • Prevent unintended collisions
    • Maintain accurate knowledge of the local environment
    • Anticipate potential conflicts or collisions
    • Assume the worst

Leadership / Investigation Thoughts:

  • If the task is related to a moving object, does the team understand the control approach, how to maintain connection with the object, or anticipate failure and attempt to control it?

 

16. Balance Confidence with Humility (Individual and Organizational)

Supreme confidence must be tempered with healthy self-doubt. Without confidence, operators will make mistakes. Without humility, operators will not realize they are making mistakes. Always ask yourself “What have we missed?” “What mistakes have we made which can be corrected before it is too late?”

Leadership / Investigation Thoughts:

  • How do the practitioners / operators / maintainers, at some point or points, ask themselves “What have we missed?” and “What mistakes have we made which can be corrected before it is too late?”

 

17. Communicate Effectively and Verify Communications

  • Communicate earlier rather than later
  • Communicate during operation (effectively and efficiently)
  • Remember that communication is a two-way process so verify understanding of the communication

Leadership / Investigation Thoughts:

  • How are intentions communicated before and during the task?
  • How does the team verify that communications before and during the task were effective?

 

18. Be Prepared Mentally

This is all about maintaining the right attitude before and during an operation. This can be different for different people in different situations. It is all about the attitude that works best for a person to resist the psychological pressure and avoid cognitive incapacitation. A simple example is stage fright. A simple mindset in this case is to remember that the only important people in the room are the audience, not you. You are irrelevant. The audience receiving a good experience is all that matters.

Leadership / Investigation Thoughts:

  • How do the practitioners / operators / maintainers prepare mentally for the task?
  • What is the attitude to the job amongst the individuals in the team?

 

19. Be Mindful During Operations

  • Technical knowledge
  • Teamwork
  • T-0 vigilance (T minus zero)
  • Cognition (controlling and automatic)
  • Fields of vision

Leadership / Investigation Thoughts:

  • What is:
    • The technical knowledge of the practitioner / operator / maintainer and the team,
    • The team members’ (and the individual practitioner / operator / maintainer’s own) strengths and weaknesses related to the task,
    • Their level of T-zero vigilance,
    • The balance between controlling and automatic cognition,
    • The range and variation of their focus (narrow – world view) during the task?

 

20. Think Fast and Act Deliberately

The best operators must think fast. Yet not act too fast. They must assess the situation and process information quickly but must not rush and make mistakes. Operating excellence comes from being deliberate, not being fast.

Leadership / Investigation Thoughts:

  • What is the balance between thinking and ‘deciding quickly and acting’? (jump in or slow contemplation)

 

21. Recognize Divergence

The best operators are highly skilled in identifying the early signals of an impending accident and taking corrective action to prevent the accident and improve operating performance. They identify “weak signals”.

Leadership / Investigation Thoughts:

  • How does the practitioner / operator / maintainer keep an eye out for weak signals, and then act to prevent an incident?
    • Change in scope of work
    • Unexpected event or situation
    • Abnormal event or situation
    • Minor failure
    • Hurrying
    • Distraction
    • Fatigue
    • Psychological or physical stress
  • Below are from “The Multitasking Myth” (Loukopoulos, Dismukes and Barshi – Ashgate, 2009), to be added to the list above:
    • Interruptions and distractions
    • Tasks that cannot be executed in their normal practiced sequence
    • Unanticipated new task demands arise
    • Multiple tasks that must be performed concurrently.

 

22. Share and Challenge Mental Models

Collectively, through sharing, conversation and challenge, teams build a common mental model of the work and each person’s role within it. The same is true for each member: sharing their mental models helps them gain clarity. This reminds me of David Marquet’s intent model.

Leadership / Investigation Thoughts:

  • How do individuals share the mental models about the work amongst the team prior to and during the work?
  • How do they signal their collective / individual intent/s? (For example, do they telegraph or speak the intent prior to the task?)

 

23. Challenge “Go” Deliberations

The idea here is to challenge not only the ‘no-go’ decisions, but also the ‘go’ decisions. Spend sufficient time thinking about whether it is actually right to proceed, rather than just doing it.

Leadership / Investigation Thoughts:

  • Does the team stop or pause at any point to consider whether going ahead is the correct path (as compared to stopping)?

 

24. Be Assertive (To Authority) When Necessary

Speak up. You are the one at risk, not the manager. If you are the manager, encourage and reward speaking up.

Leadership / Investigation Thoughts:

  • How often does the practitioner / operator / maintainer ‘speak up’?

 

25. Be Cognizant of Limitations (In The Sociotechnical System)

You need to be very aware of your own limitations, the limitations of the other team members, the limitations of the tools, equipment, software and the limitations of the procedures you are using. Make your decisions and actions based on these limitations.

Leadership / Investigation Thoughts:

  • What are the limitations of: the practitioner / operator / maintainer, the team members, tools, equipment, software, the procedures?

 

26. Assess Competence (In Team Members)

Do not just rely on the fact that people have been trained.

Leadership / Investigation Thoughts:

  • What is the competence balance amongst the team with respect to the task at hand?

 

27. Acknowledge (Personal) Weaknesses

Groups perform better than individuals because the group can overcome the weaknesses of some members with the strengths of others. The group benefits greatly when team members share their weaknesses. Look for, and value, people who understand and acknowledge their weaknesses, know the limits of their capability, and are eager to learn and develop ways to mitigate their weaknesses.

Leadership / Investigation Thoughts:

  • What are the individual practitioner / operator / maintainer’s strengths and weaknesses and do their team members know them?

 

28. Admit Errors

When an error is admitted quickly and candidly to teammates as soon as the error is recognized, corrective actions can be taken to mitigate the consequences more effectively and completely.

Leadership / Investigation Thoughts:

  • As errors occur, how and when does the operator / maintainer admit them to their teammates?

 

29. Use Methods to Aid Weak Prospective Memory

Humans are poor at remembering to do things into the future. Put your house keys next to your car keys so you don’t forget them. Set alarms so you don’t forget things at certain times.

Leadership / Investigation Thoughts:

  • When faced with interruptions / divergence from the task (see No.21), what does the practitioner / operator / maintainer do to aid prospective memory?

 

30. Demand Operating Excellence From Myself First (Then Inspire Others)

Do as I do, as well as what I say. You cannot demand operating excellence from others if you don’t believe in it and do it yourself.

Leadership / Investigation Thoughts:

  • How does the practitioner / operator / maintainer describe their level of operating excellence as compared with the other team members?

 

Simplicity in Safety Investigations: A Practitioner’s Guide to Applying Safety Science. Ian Long, Routledge. 2018

 

Hey, of course I am going to put this one in :)

I must confess that it is not easy, or even possible, to critically review your own book, so I will instead copy and paste a review from Stephen Marriot at IOSH Magazine.

“This is not a big book, but it packs a lot of ideas into 142 pages. The author, now a consultant but formerly in a senior OSH post at Australian miner and nickel refiner BHP Billiton, has a lot of experience to draw on but he is also clearly well read. One of the strengths of this book is how he harnesses theories from writers such as Todd Conklin and Daniel Kahneman to the service of accident analysis.

Rating: 4/5

Long makes a virtue of this “recombinant innovation”, making new techniques by combining existing ones.

His strongest message, channelling Erik Hollnagel’s Safety II and Sidney Dekker’s safety differently approaches (see above), is that investigations do not have to be restricted to unpicking things that have gone wrong. Long’s recommended “outcome analysis” technique can be applied equally to a period with no recordable incidents as to a safety failure.

The basic investigation approach he advocates is a gap analysis between “work as done” – what was happening at the time of the accident, “work as normal” and “work as intended” – what the procedures or method statements prescribe. This can be applied to a small local investigation by the people involved in a task or a larger manager-led exercise after a serious accident or near-miss.

Data gathering should use a PEEPO structure, he argues, dividing information into the categories of people, environment, equipment, procedures/documentation and organisation.

He has sound advice about scene preservation, team formation and the attitude investigators should adopt: open-minded and curious, cultivating what he calls “generous listening” and using a coaching approach to draw information out of interviewees rather than closing down an inquiry with leading questions.

The jacket blurb suggests the book could be used by supervisors and managers as well as safety professionals. Long helpfully splits out the more detailed explanations of theoretical underpinnings such as “shared space” theory or the various heuristics that can bias investigators, into a section headed “The technical and scientific stuff”, which leaves a manageable 56 pages that could be passed on to a non-practitioner as a primer.

This is a well-written and well-edited book; for many readers used to a more functional approach it may not bring simplicity to their investigations, but it will surely add rigour.”

Stephen Marriot, ioshmagazine.com

 

Summary of mega-blog

 

Although I have covered only 20 books here, I hope that in reading them you become interested in the topics they cover and think and talk about them with others. I also hope you read more books on the topics that interest you and continue to help make your workplace and the world a safer place.

 

Ian

Raeda Consulting January 02, 2018

Does labeling concepts, theories and ideas really help us much in safety?

Labels

Does labeling concepts and ideas like ‘Resilience Engineering’, ‘Standardized Work’, ‘Procedural and Practical Drift’, and ‘Human Factors / Ergonomics’ (HF/E) help people understand 'safety'? Here I chat about what they look like together, and without the labels.

The trouble with labels – combining ‘Resilience Engineering’, ‘Standardized Work’, ‘Procedural and Practical Drift’, and ‘Human Factors / Ergonomics’ (HF/E).

There are so many buzzwords in the world of ‘safety’ and the provision of safe work that it is sometimes hard to differentiate them or make sense of them – either together or as separate concepts and ideas. And yet, are they not all the same thing? I have picked four that are getting a fair bit of airplay in the worlds I am working in, and thought it was worth chatting about them and seeing how they all fit together – if they actually do! I am not saying there is a GUT (Grand Unified Theory) of safety, but I am tempted to dream of a description of how safe work is prepared and undertaken that we can all get our heads around.

 

Let’s look at each one and then see if there are common ways of describing them as one concept. Well, we will see how we go anyway…

 

Resilience Engineering is a concept well covered by Erik Hollnagel in many books, papers and conversations. In a recent book of his (Safety-II in Practice: Developing the Resilience Potentials – Routledge 2018) he proposes that the following four potentials are necessary for resilient performance:

 

“The potential to respond. Knowing what to do or being able to respond to regular and irregular changes, disturbances and opportunities by activating prepared actions, by adjusting the current mode of functioning, or by inventing or creating new ways of doing things.

 

The potential to monitor. Knowing what to look for or being able to monitor that which affects or could affect an organisation’s performance in the near term – positively or negatively. (In practice, this means within the time frame of ongoing operations, such as the duration of a flight or the current segment of a procedure.) The monitoring must cover an organisation’s own performance as well as what happens in the operating environment.

 

The potential to learn. Knowing what has happened or being able to learn from experience, in particular to learn the right lessons from the right experiences. This includes both single-loop learning from specific experiences and the double-loop learning that is used to modify the goals or objectives. It also includes changing the values or criteria used to tailor work to a situation.

 

The potential to anticipate. Knowing what to expect or being able to anticipate developments further into the future, such as potential disruptions, novel demands or constraints, new opportunities or changing operating conditions.”

 

To me, this means that a work team (as an example) has an understanding of how they will cope with changes, upsets and other interruptions during work and are able to adjust their performance and bounce back before something actually goes wrong.

Procedural and Practical Drift comes from a few sources. Scott Snook, in Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq (Princeton University Press, 2000), talks about the slow and often inevitable changes in the way work is done over time and how significantly they can impact outcomes. Another simple example relates to the way cracked and broken foam on the Space Shuttle was repaired over time (see the Columbia Accident Investigation Board Report, Volume 1, August 2003). Sidney Dekker, in Drift Into Failure: From Hunting Broken Components to Understanding Complex Systems (Ashgate, 2011), talks extensively about changes in the way work is done that occur not only in practice but are then proceduralised. This is really brought home as he describes in detail the loss of Alaska Airlines flight 261, in which the jackscrew lubrication interval moved, with complete approval, from 300 hours to up to 2,550 hours, resulting in the loss of the aircraft along with the 88 souls on board.

Standardized Work comes from the Toyota Production System and was recently described to me as “a highly defined, documented method which describes how a task should be executed every time. It empowers teams to own a safer, more productive way of working centred on human movement.” In another description, which I like better, Janet Dozier, in a 2013 blog post entitled Does Standard Work Destroy Creativity?, writes: “Standardized work establishes the best method to perform a task with the least amount of waste while providing the best patient care. It is an agreed-upon method and procedure for the best sequence and timing to perform a task.” Although this is specific to health care, the analogies to other domains seem obvious, and it captures the intent better for me.

Last but not least, Human Factors / Ergonomics (HF/E) is all about understanding the interactions among humans and other elements of a system. It is not simple human movement but the interrelationships between the human and the system that matter here. A book that had a profound impact on my understanding of HF/E was the recent one edited by Steven Shorrock and Claire Williams entitled Human Factors & Ergonomics in Practice: Improving System Performance and Human Well-Being in the Real World, published by CRC Press in 2017. I thoroughly recommend it.

I would like to explore the common elements of Resilience Engineering, Drift, HF/E and Standardized Work in terms of Work-As-Done and Work-As-Intended. These differ only in name from Erik Hollnagel’s ideas of Work-As-Done and Work-As-Imagined and come from my book Simplicity in Safety Investigations: A Practitioner’s Guide to Applying Safety Science, published by Routledge in 2017. The name change is based purely on what worked in the field as we developed the investigation approach, not on any intent to suggest Hollnagel’s labels are not suited to the uses he puts them to – they absolutely are.

HF/E, Standardized Work and Resilience Engineering are all about setting up the Work-As-Intended. Drift is all about the recognition that whilst Work-As-Intended is all well and good, the real world dictates that Work-As-Done is the real driver of safety and drifts over time.

If we hold that Standardized Work is the part that lays down how the work is supposed to be undertaken (Work-As-Intended), then HF/E is the scientific discipline that helps those creating this Work-As-Intended. And if the elements of Resilience Engineering have gone into the thinking, and those actually at risk during the task – those doing the work – are integrally involved in creating the Work-As-Intended and in its ongoing monitoring, then Drift can also be managed.

 

In a way that does not use the buzzwords above we might see the following:

A team is about to do a task and they are working out how to do the job so that they get it done productively and also safely. They talk as a team about how they have done the task in the past, what worked, what didn’t work, what could go wrong, what to monitor or look out for as they do the task, and what they could do to adapt the task if it did start to go wrong. They also look back and chat about whether the way they are doing it now has changed over time – have they always done it this way? During the conversation they also talk about the specific actions they are doing, what they are going to interact with – what systems, equipment or processes they are involved in – and whether there are bits of that they need some help understanding more. If so, they might get some help from an HF/E practitioner (bugger, I used one of the terms – unavoidably, I think) to make sure they are getting the science right.

Once they have thought through all of this, they lock it all down in a Job Safety Analysis or work procedure, and then each time they do the task, they check to make sure things have not changed or slipped from the method they reckon is the best for the task at hand. If things have changed, they stop and work out what to do now. If nothing has changed, they get on with the job and have a bit of a post-mortem afterwards to see how it all went and whether the way they did the task matched how they thought they were going to do it.

Overall, I think we don’t always help ourselves when we try to attach labels to things, or use language that is not well or easily understood, especially as we all have our own perceptions of what things mean. Labels, apart from tending to take detail away from the concept, can mean different things to different people.

By helping translate technically correct but inaccessible buzzwords and labels into simple stories we can often help people understand what it is we are trying to say or achieve in safety and this can result in a better understanding of what creates safe work.

 
