Organising for safety: How structure creates culture
Episode 8 - Featuring Professor Andrew Hopkins
Welcome to another episode of Embracing Differences with me, Nippin Anand, where I am joined by the world-leading safety scientist, Professor Andrew Hopkins, who takes the bull by the horns and helps us understand why we should be spending more time and effort to understand the structure of an organisation, in an intelligent but very accessible manner.
*Link to Andrew Hopkins’s latest book – Organising for Safety: How structure creates culture. https://www.amazon.co.uk/Organising-Safety-structure-creates-culture/dp/1925894150
[00:00:00] Nippin Anand: Consider this – You have been on a Safety Differently, Safety II, New View or a HOP course and you come out absolutely transformed. You have found answers to so many questions that have bothered you for years as an HSE professional. After a reflective weekend, you get back to the office. Your boss walks up to you and asks, “Hey man! How was the course last week?” You tell him everything you have learned, how you spent the entire weekend thinking, and how many great ideas you have in mind for your next site visit, and you make a strong business case, not just a safety case, for what you say. Your boss looks at you, incurious and disinterested, and says, “Sounds good! Can you please get that investigation closed out today? The CEO is expecting to present it to the board tomorrow, and please keep it simple,” as he walks away. It has been more than six months since you went on the course, and you have read so many books, but you still haven’t had a chance to bring about the change you aim for.
[00:01:04]: If you’re wondering what you might be missing to bring contemporary safety thinking into your organisation, this podcast will make absolute sense to you. In an organisation where senior leadership is rewarded with bonuses worth five times their salaries for bringing in new business, but penalised with a mere 3% cut in annual salary when it comes to a fatality, convincing your bosses to switch from the old view to the new view, from Safety I to Safety II, or from human error to human performance will only take you so far before you realise that you have hit the ceiling. It’s a problem that most contemporary thinkers and safety scientists don’t contemplate, much less discuss openly, in their work.
[00:01:47] Welcome to another episode of Embracing Differences with me, Nippin Anand, where I’m joined by the world-leading safety scientist, Prof. Andrew Hopkins, who takes the bull by the horns and helps us understand why we should be spending more time and effort to understand the structure of an organisation, in an intelligent but very accessible manner. What I learned from Andrew Hopkins is that organisations are less about building relationships, trust or a common language to communicate concerns, and more about understanding power relationships.
Of course, Hopkins avoids using abstract concepts like power, greed or opacity. Instead, he builds upon a far more comprehensible and intuitively recognisable term – structure. Meaning, what incentivises or deters people in positions of power to do what they do. Talking to another safety scientist and a great friend, Ivan Pupulidy, just last week, I brought up my discussion with Hopkins and said, “Ivan, this is what Hopkins has to say!” Ivan responded, “Come to think of it, Nippin, we talk about local rationality. Why does it make sense for people to do what they do? How often do we attempt to make sense of boardroom decisions and the dynamics between the CEO and board members using the principles of local rationality?” Hmm. Brilliant point! Thank you, Ivan! So why are all our efforts to understand local rationality limited to workers and managers only? Hasn’t this discourse about local rationality limited our framework of thinking? Listen to Andrew Hopkins; I bet it will make you think. And as I always say, you have no reason to believe Andrew, or for that matter any so-called expert. Keep an open mind.
[00:03:32] Andrew Hopkins: Greetings to you all! My name is Andrew Hopkins. As you probably know, I’m fascinated by the world of work and by industry, and I bring a sociological perspective to that. I am a sociologist; I’ve spent most of my life at the Australian National University in Canberra. Some of you might know of my books. It’s a pleasure to be here today!
[00:03:53] Nippin Anand: Great! We are also very excited to have you. Would you like to give us an introduction to what it is that you want to talk about today?
[00:04:00] Andrew Hopkins: Yes, certainly. Thanks, Nippin. It’s quite an opportunity you give me, because I’d like to speak about my latest book*, “Organising for safety: How structure creates culture”. In some respects it encapsulates an argument that I’ve been making in many of my books for years, but I’m making it in a more concentrated way here than I have previously.
[00:04:22] So, let me start by saying, suppose you want to create a certain culture in your organisation – say a culture that emphasises safety, operational excellence, or whatever it might be that you are seeking to create. The argument in my book, in a nutshell, is that to do this you need to set up the appropriate organisational structure. It’s that structure which will give you the culture that you want. Now, before I develop that, I just want to acknowledge that this is a controversial argument and that there are other points of view around. The leading one, which I want to mention, is that the way you create a culture is by running educational campaigns, the aim of which is to change the hearts and minds of people, to change the way they think, to change their values. That’s a very frequent strategy which organisations adopt when they are seeking to change the culture of the organisation.
[00:05:18] To give you an example that comes from the petroleum company Shell – a number of years ago now, they set out to change their culture and increase their focus on operational excellence, and they ran what they called a ‘hearts and minds campaign’. It was based upon the work of Patrick Hudson. Some of you will know his work, in particular his notion of the organisational maturity ladder, or the safety culture ladder, which divides the cultures of organisations into five different categories: the lowest one is pathological and the highest one is generative, which simply means a high-functioning safety culture.
[00:05:52] So, their aim was to move their organisation up that ladder. They did this by running their campaign: they put all 250,000 of their employees through the educational campaign, and it was successful in some respects. People learned the language of the maturity scale, learned about pathological cultures and generative cultures, and could use that language, but nothing changed significantly in the organisation.
[00:06:18] Patrick Hudson, the author of this approach, later wrote an analysis of it, and what he said was that nothing changed because the organisational structure hadn’t changed. What they needed to do was to set in place systems of reward and recognition which would encourage the behaviours they were seeking to create, but they hadn’t done that, so nothing changed. There are many other stories, such as BP, which had a similar experience, which I won’t talk about now, but that’s the background which leads me to think that structural change is vital.
[00:07:00] So, I want to give you an example of what I’m talking about. The first one comes from NASA, the National Aeronautics and Space Administration in the US. It concerns the space shuttle Columbia, which in 2003 was destroyed when it attempted to return to earth, and seven astronauts died. What happened was that pieces of foam cladding fell off the external fuel tank at the time of launch, hit the shuttle and dented its surface. Now, this was happening routinely at launch, so there were a lot of these dents on the shuttle. It certainly wasn’t intended by the designers, but because it happened routinely without serious damage, it had been normalised. This deviation, or what I’d call an anomaly, had been normalised and was now seen as an acceptable risk. On the occasion in question, with this particular launch in 2003, a piece of foam fell off the external fuel tank, hit the leading edge of the wing of the shuttle and made a hole in that leading edge. So when the shuttle re-entered the earth’s atmosphere, the air rushed into that hole, heated up the shuttle and destroyed it.
[00:08:17] Now, there was a major inquiry into this, the Columbia Accident Investigation. There was, I guess, a recognition that the culture of the organisation was the problem. That was their primary focus; they described it as a broken culture. The culture of NASA was – faster, better, cheaper – with no mention of safety there. So from a safety point of view, this was a broken culture. Now, it’s interesting to me that their analysis was that the root cause of the problem was the broken culture. So what’s the solution? How do you change a culture? The way you change that culture is by making an organisational change.
[00:08:59] What they recommended was the creation of a particular organisational entity within NASA, a technical engineering authority, which would sit outside of the shuttle launch organisation. So it was not constrained by questions of cost or schedule, which dominated the operation of the shuttle launch organisation. It would sit outside that and have the authority to intervene within the shuttle organisation on technical questions. It would determine what was anomalous and what was not, and what needed to be done about anomalies, and that would ensure that the kind of normalisation which occurred in the Columbia case would not occur again.
[00:09:38] So that’s a very good example, I think, of what I mean by the organisational change which is needed to create the culture you want. It’s interesting that the board drew on the US submarine Navy as its primary example. This is a much-celebrated case which many theorists go back to, because in 1963, which is more than 50 years ago, not long after the first nuclear submarines were introduced into the Navy, they lost a nuclear submarine in a peacetime accident, and they couldn’t afford to do this. It turns out they had been losing submarines in peacetime accidents at a remarkable rate, about one every three years, over the previous 50-odd years. So, a colossal rate of submarine loss had been put up with, and they realised they couldn’t continue. So they introduced this programme called ‘SubSafe’, which was again an external organisation standing outside of submarine operations, still within the Navy of course, but outside of submarine operations and with the authority to intervene in the submarine programme’s normal operations.
[00:10:53] And since that time there have been no submarine losses, with one possible exception, but basically no submarine losses in peacetime since. So that is, if you like, a piece of empirical evidence for how effective that kind of intervention is, or can be. So that’s really what lies behind my view that a culture of operational excellence does depend on having that kind of organisational structure, where there is a technical organisation separated from the routine day-to-day operations, but that technical organisation has the authority to intervene in the way I’ve described.
[00:11:32] Now, the main case study in my book is BP. I’ve studied the oil company BP after Macondo, which was the name of the oil well which blew out in 2010 in the Gulf of Mexico. What’s really interesting about this case study is the before and after, because they entirely changed their organisational structure as a result of the accident. The accident killed 11 people, but it also cost more than 60 billion US dollars and almost destroyed the company, and it’s quite interesting that it’s when companies have a near-death experience like this that they make these kinds of organisational changes.
[00:12:13] So, let me just talk about this before-and-after comparison. Before the accident, BP was a highly decentralised organisation, probably the most decentralised of all the oil and gas companies. It’s a worldwide company, of course, but it had operations in the Gulf of Mexico, and that operation functioned as pretty much an autonomous company. It was responsible to head office for making money, of course, but other than that, the central corporate structure did not exercise control over what was going on in the Gulf of Mexico. In particular, it did not exercise any control over the quality of engineering. So the engineers there were under the control of an independent business unit. These engineers were under constant pressure to save money and cut corners. That’s really one of the root causes of what happened, because when they’re under those pressures they do cut corners, and they stop asking what is good engineering practice and start asking a subtly different question, which is: what is good enough?
[00:13:30] I hope you can see that this is a very different question, one which leads to the erosion of quality over time. So, I would describe the culture of BP in the Gulf of Mexico, the culture of its engineers, as a cavalier culture or a careless culture. Digging into that a little more deeply, to understand why that happened we have to understand the reporting structures within the organisation.
[00:13:58] So those engineers were reporting directly to line managers, and the result of that was that their performance agreements were with the line managers, whose primary concern was profitable production. So their bonuses were determined by cost-cutting and production. If that’s the way you reward your engineers, then over time, that becomes their mindset, and that is what happened in the Gulf of Mexico.
[00:14:25] That said, the decentralised structure, in the way that I’ve just described, was really a very fundamental cause. Clearly BP thought so too, and that’s why they changed their structure following that accident. The new structure was one in which the engineers, wherever they were in the world, were under centralised control from head office in London. No longer were they answerable to local commercial managers. They were providing services to them, but they were answerable up an engineering functional line to a chief engineer, in effect in London, and their career prospects would be determined by how well they performed in that context, not by commercial success in the local operation.
[00:15:13] So, that was one factor, and the other was to do with the safety and operational risk function which they had created. They centralised operational risk, which means major accident risk. So, they created a function called safety and operational risk, which was run out of London and had enormous control over what was going on in BP’s far-flung empire. There were several hundred employees in this function who were embedded in business units around the world, sat on management teams at various levels and could influence decision making at various levels, but who reported up their own separate functional line to head office. It operated very much like the technical engineering authority recommended by the Columbia Accident Investigation. To me that is a very important model, and the most recent example would be Boeing. Boeing lost two 737 Max aircraft just a couple of years ago, and again a fundamental cause of that was the decentralised operation: business units within Boeing were operating more or less autonomously.
[00:16:41] However, Boeing has now changed that structure. Its engineers no longer report to local business units; they report up a separate line to a chief engineer who is independent of any of those businesses. So they’ve learned and implemented the same lesson. There are some qualifications that need to be made to this argument now. A structure like this, like the safety and operational risk function in BP, cannot guarantee safety; it can only improve your chances of operating safely. BP has had some significant near misses since introducing this new system, but it hasn’t had another major accident.
[00:17:25] It had two major accidents before introducing this new system – the Macondo accident and, five years earlier than that, the Texas City refinery disaster, which killed 15 people and cost the company billions of dollars. But since the reorganisation, it has not had a major accident. Now, that’s probably not yet statistically significant, but it’s very encouraging for BP and for those who are following the BP model. However, this organisational structural model I’m talking about can be undermined in various ways. The most obvious one is that if you don’t resource it properly, it won’t function properly. So that’s something you have to watch out for; you have to be aware that this is expensive. People will resist this model because it costs money, and that’s one of the problems with trying to introduce a model like this. The second thing that will undermine it is the bonus arrangements of operating companies. If you reward people whose primary function is risk management on the basis of company productivity and profit, you undermine their capacity to perform their risk management job with integrity.
[00:18:37] There are many examples to support what I’ve just said, but I don’t have time to talk about them now. I think also, if your organisation is one which discourages the reporting of bad news, or does not encourage the reporting of bad news, this will also tend to undermine whatever systems you have in place.
[00:18:58] Finally, the attitude of the board of the company is critically important. If the board’s primary concern is to protect itself from bad news, then nothing is going to save the organisation in the long run. I’ve had the experience of talking to people on boards and realising that is their primary concern: they don’t want me to give them the bad news. The point I’m making is that the companies which have had these near-death experiences introduced the kinds of changes that I’m talking about, and that to me is a very powerful piece of evidence in support of what I’m saying.
[00:19:42] Before I finish, I’d like to broaden the discussion a bit and say that this notion that organisational structure creates culture is much broader than safety and applies in situations which have nothing to do with safety. Let me give you two examples. The first concerns a culture which I’ve studied, or had exposure to, in the railways: the culture of punctuality, or what is often called ‘on-time running’. It’s a very strong culture which is maintained in all kinds of ways.
[00:20:18] One of the striking things about the railway system is that on-time running is an absolute value. They don’t always succeed, but there is a very powerful drive to run on time. The way I became aware of it was when I was studying an accident in a railway system where the driver had been speeding. The cause of the accident was speed, and when we looked into why the driver did that, it was because of the power of this on-time culture. He was running late, so he sped, the result of which was a derailment which killed around seven people. The question I then asked myself was, “Where does this come from?” The language that they used to describe it was “On-time running is king!”, and that came up again and again in the inquiry. It was the structure of the organisation that created this. First of all, there is public pressure for trains to run on time, which is translated into political pressure. Often there are regulators who will penalise train companies for not running on time. So the organisation has a whole structure to get people to run on time. There are inspectors, there’s a signalling structure, and they record the data on arrival times, so drivers have to arrive at their destination within, say, three minutes. If they don’t, questions are asked and the drivers are interviewed. The company will normally monitor arrival times twice a day at peak hours, so they are absolutely onto this, and that’s the reason why this culture is paramount.
[00:20:51] The second example I want to give is McDonald’s, the fast-food company. McDonald’s has a highly decentralised business model. Its individual outlets are franchises, which means that McDonald’s itself, the company, is not really too committed to the success or failure of those franchisees. It’s up to them to make money and to be profitable; if they fail, that’s their problem, not really McDonald’s. So it’s a highly decentralised business model from that point of view. What’s really important to McDonald’s is not the survival of any one of those franchise operations; it’s the quality, predictability and uniformity of the service and the product.
[00:22:36] McDonald’s controls that very closely and centrally. It has a system of inspectors and quality control so that anyone going to a McDonald’s store anywhere in the world knows the quality of service they are going to receive. So that’s the point: if something is really important to an organisation, they will control it centrally. If it isn’t important, they will allow more decentralisation. So that is the main argument of my book.
[00:23:11] It includes more critiques along the way. I critique the anarchist school of safety, which I associate with the names of Dekker and Hollnagel. That school basically argues that safety is best left to the workers, which may be true in some circumstances but is certainly not true for major accident risk industries. I also critique the concept of ‘visibly felt leadership’, which is a term that’s very fashionable at the moment. So you’ll find a lot of other kinds of ideas along the way. Thank you.
[00:23:50] Nippin Anand: The terms independence and authority kept coming back in this conversation, and one could argue in some ways that a safety department in most organisations actually is independent and seemingly has the authority to do what it is supposed to do.
Can you elaborate a bit more on this concept of independence and authority? What kind of authority are we talking about here?
[00:24:13] Andrew Hopkins: Yes, some of these very large organisations will have a director of safety who sits on the management board, and in that role they have a degree of independence from any of the business units. So that’s a tick in that respect. However, they don’t exercise real authority. Their role is almost always ultimately advisory. What you find is they’ll say, “We set the standards; we develop particular standards in the corporate centre, and we have a role in enforcing them: from time to time, we carry out audits. But our fundamental role when it comes to dealing with the business units is advisory. We are a resource which is available for those units if that’s what they want, but it is not our responsibility to ensure compliance with those standards, and we don’t have the authority to intervene and shut down an operation if it’s not in compliance with the standards.” So that’s what I mean when I say the role of a safety department doesn’t always carry the requisite authority.
[00:25:23] Nippin Anand: Great! Thank you! Thank you very much for that. I really enjoyed talking to you.
What do you think? I thought it was a thought-provoking session that challenges us at many levels, especially at a time when we have been bogged down by so many competing theories in safety science and this complexity narrative. After listening to Andrew Hopkins, I’m thinking: why would leadership come to terms with admitting that there are recurring patterns of resource constraints or goal conflicts in everyday work, or start to accept that there is a significant disconnect between boardroom thinking and control room realities? No matter how well we dress them up, these are systemic problems that will always be brushed under the carpet if the rewards and incentives are not designed to make them visible and intelligible.
[00:26:17] So, thank you, Professor Andrew Hopkins, for so many empirical examples and for reminding us that any genuine attempt to improve safety should first begin with understanding and addressing the structure of our organisation, and that we should always start from the top. What is more, Hopkins’s work extends far beyond safety to include quality, reliability, operational excellence, technical excellence and even the long-term survival of an organisation. Fascinating! If you found the podcast interesting, I invite and encourage you to read Hopkins’s latest book, ‘Organising for safety: how structure creates culture’; I’ve included the link for you in this podcast transcription. So, thank you once again for listening to the podcast, and next week I will bring you something even more interesting. Till then, bye bye!