Introducing barriers has become a standard way of managing risk and safety – but it is not necessarily the best one
On 6 March 1987, the passenger ferry Herald of Free Enterprise capsized on departure from Zeebrugge, resulting in the death of 193 passengers and crew. It remains one of the most severe accidents in the history of the maritime industry. The official inquiry put the cause down to a ‘disease of sloppiness’ within the organisation. The Herald, along with the Estonia, the Scandinavian Star and similar high-profile accidents, led to the introduction of the ISM Code and the adoption of safety management systems. This year marks the 30th anniversary of the Herald of Free Enterprise disaster, and we are faced with some difficult questions. What has the ISM Code achieved, and what lies ahead? If safety is understood and managed as a measure of ‘unsafety’ (accidents, incidents, near misses, defects and non-conformances), we are faced with a dilemma. On the one hand, we are striving hard to minimise ‘unsafety’ by setting ambitious goals (such as zero accidents). On the other hand, technological advances and cost pressures are making those goals unrealistic. Improving safety by focusing on ‘unsafety’ alone can only work to a certain extent. Beyond that, the approach becomes futile without a shift in thinking.
Plugs and barriers
As an industry, we have made significant progress in disciplining workforces and addressing the problem of ‘human error’. Indeed, most accidents are attributed to human failure. Over the last few decades we have come to understand the various sources of error much better (James Reason, for example, distinguishes between skill-, rule- and knowledge-based errors, building on Jens Rasmussen’s model of human information processing). Numerous studies have expanded this catalogue of errors, with the aim of controlling them by first introducing barriers and then plugging any ‘holes’ in those barriers. (The term ‘barrier’ is commonly used in barrier-based safety management.) But the moment we introduce a barrier or a plug, we are faced with further challenges. First, the commitment to invest in a barrier (or plug) comes with the expectation that it will justify its cost. A key selling point of bridge navigational watch alarm systems, for instance, was that they allowed companies to consider reductions in manning levels. Second, introducing a barrier also means a commitment to maintain the functionality of that barrier throughout its operational life. Without this commitment, the barrier itself becomes a risk. And when technologies advance faster than our ability to regulate them, introducing barriers without suitable control mechanisms – training and development, maintenance regimes or some other form of regulation – creates more problems than it solves. ECDIS-assisted groundings are just one example.

Process safety management
Focusing on malfunctions, failures and errors leads us to another problem. Imagine a system in which energy flows from one end to the other. A threat enters the system at one end. If the system is not adequately designed and protected by barriers, that threat can result in a catastrophic failure at the other end (see Figure 1). Focusing purely on errors makes at least some sense if the system we are dealing with is purely mechanical and the barriers are capable of functioning with minimal human intervention. A seized valve can be replaced, and a corroded section of pipework can be cropped and replaced. The laws of physics determine the safety of such systems. But what if the system is more than just mechanical? What if humans are involved in making the system function, and the success or failure of the system is determined by human action? The industry’s answer is process safety management – and it certainly sounds impressive.
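To make the barrier logic concrete, here is a minimal sketch in Python of how a barrier-based model treats a threat propagating through a system. The barrier names and failure probabilities are entirely hypothetical, chosen only to show the arithmetic behind the model, not to describe any real system.

```python
# Illustrative sketch of the barrier ('Swiss cheese') model described above.
# All probabilities are hypothetical, chosen only to show the arithmetic.
barriers = {
    "design": 0.10,        # probability the barrier fails when challenged
    "alarm_system": 0.05,
    "procedures": 0.20,
    "training": 0.15,
}

def propagation_probability(failure_probs):
    """Probability a threat passes every barrier, assuming the 'holes'
    (failure modes) are independent -- a strong assumption in practice."""
    p = 1.0
    for prob in failure_probs:
        p *= prob
    return p

p = propagation_probability(barriers.values())
print(f"P(threat reaches catastrophic failure) = {p:.5f}")  # 0.00015
```

The catch is the independence assumption. In a purely mechanical system it may roughly hold; in a system run by people under commercial pressure, the ‘holes’ in different barriers have a habit of lining up.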
Patent and latent errors
The Herald of Free Enterprise departed from the port before departure checks were completed. The vessel was still three feet down by the head, with ballasting in progress, when she left her berth. At the time of departure, the chief officer assumed that the assistant bosun had proceeded to harbour stations, when in fact the assistant bosun was fast asleep in his cabin and was not woken by the call to stations. The bosun was of the view that it was not his job to close, or even to check, the bow doors prior to departure. And the Master felt that, unless told otherwise, he could assume that the bow doors had been closed prior to sailing. If we looked at these issues as patent (human) errors and latent (organisational) errors, we would come up with a list of barriers and plugs to put in place to prevent the accident: a rest hour log, bridge team management training to ensure closed-loop communication, an alarm management system, clear reporting lines and detailed job descriptions. But if we examine most accident reports, many of the ‘errors’ are recurring issues. This was not the first time the bow doors on the Herald had been left open. Similarly, the day of the incident was not the first time the fin stabilisers on the Finnarrow had been left deployed. And in the case of the Hoegh Osaka, the practice of loading undeclared cargoes was certainly not new within the industry.
Near miss reporting
Both patent and latent errors are part of normal work in a resource-constrained environment, and most errors are recoverable if detected in time. The trick lies in paying attention to the most frequent errors and in encouraging people to report them. Near miss reporting is an appropriate tool for making this possible. It is unfortunate that parts of the maritime industry seem to have grossly misunderstood the purpose of near miss reporting and turned it into a number-crunching exercise to satisfy insatiable KPIs. An error detection and reporting system that could have been used constructively is instead sometimes employed to police the behaviour of those at the bottom end of the labour market – crew not wearing personal protective equipment, for instance. I am not against reminding people to use PPE. But if that is how we perceive most errors, we have missed the point of near miss reporting. If, on the other hand, we broaden our understanding of ‘errors’, we may become less concerned about missing log book entries and typographical errors and concentrate on what matters. Unfortunately, it is the former that take up most of our resources.
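As a sketch of the constructive use suggested above – ranking near miss reports by frequency so that attention goes to where errors cluster, rather than to the raw report count demanded by a KPI – something as simple as the following would do. The report categories are invented for illustration.

```python
from collections import Counter

# Hypothetical near miss reports; the categories are invented for illustration.
reports = [
    "unclear_helm_order", "ppe_not_worn", "ballast_incomplete_at_departure",
    "ppe_not_worn", "ballast_incomplete_at_departure", "missed_logbook_entry",
    "ballast_incomplete_at_departure", "unclear_helm_order", "ppe_not_worn",
]

# Rank by frequency: the point is where errors cluster, not how many were filed.
for category, count in Counter(reports).most_common():
    print(f"{category}: {count}")
```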
Concurrence of normal events
Imagine standing on the other side of a door as someone slams it open towards you. Chances are it will hurt you. If we approached this problem in a mechanical fashion, we could replace the door with sliding doors (an expensive affair), signpost it, or paint the floor yellow – which would be cheapest. But note that the phenomenon is not permanent; it is transient. Two minor events – the abrupt opening of the door and someone approaching from the other side – have coincided and resulted in an abnormal situation. If we introduce barriers to prevent every such event, workers will very soon stop paying attention to the control measures, and may even laugh at us. Similarly, imagine driving a car in wet weather on tyres that are worn, but within the legal limit, having consumed a glass of wine (still within the legal alcohol limit). Then comes an unexpected bend in the road. If the car rolls, what is the root cause? Many accidents arise from a concurrence of events that were all within defined tolerances, and of actions that are part of normal work, combining to emerge as an undesirable outcome. The cause and the consequence are out of proportion. Of course, you could argue that the person on the other side of the door had lost situational awareness and the driver had become complacent, and overburden the system with even more controls to prevent these things, but this is far from an ideal solution. To overcome this problem, it is important to look at ‘errors’ – minor deviations and actions that are part of normal work – in different situations and test the capability of the system to recover from them. If verbal communication with a seafarer is problematic, it is important to identify the situations where this could become critical for system safety, such as hand steering in narrow channels or during emergencies. In this way we make effective use of resources and exercise control where it matters.
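The driving example can be put in rough numerical terms. In the sketch below, each factor carries a safety margin that is individually acceptable, yet the combined margin collapses – no single factor is ‘the’ root cause. The margin values and the multiplicative combination rule are invented purely for illustration.

```python
# Hypothetical safety margins for the driving example (1.0 = ideal,
# 0.0 = at the legal limit). Values and the combination rule are
# invented purely for illustration.
margins = {
    "tyre_tread": 0.2,  # worn, but still within the legal limit
    "alcohol":    0.3,  # one glass of wine, under the legal limit
    "wet_road":   0.4,  # raining, but the road is still drivable
}

combined = 1.0
for factor, margin in margins.items():
    print(f"{factor}: margin {margin:.1f} -- individually acceptable")
    combined *= margin

# Together the margin collapses, far below any single factor's margin.
print(f"combined margin: {combined:.3f}")  # 0.024
```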
What lies ahead
In the 30 years since the capsize of the Herald of Free Enterprise, we have made significant progress as an industry. But this progress has also led us to believe that we can explain the ‘root cause’ of most accidents by looking for errors and failures. Our search for errors, near misses, incidents and non-conformances has become ever more intense. In an attempt to make the system safer, every inspection and investigation must now come up with a list of root causes and non-conformances. We have arrived at a stage where our control mechanisms no longer match the risks that we face. Further improvement will require a shift in the way we think about managing safety. If we think about safety in a purely mechanistic way, and treat workers as components in the system waiting to be blamed, we have misunderstood what managing safety means. Where humans are involved, things that mostly go right will occasionally go wrong. But this does not mean that we must over-react to every error and frantically introduce barriers and plugs for the sake of doing so. If we did, the system would slow down to such an extent that no work would ever get done. The Hoegh Osaka, for instance, lost stability and ran aground despite – or because of – a total of 213 checks for cargo operations alone. The industry cannot afford more regulation and controls, and the cognitive load on the average seafarer has reached its peak. Perhaps we need to remove some barriers and unplug some holes by looking at safety differently. The alternative is to hold senior management accountable for the functioning of each barrier. That, after all, was the intention some 30 years ago.
Disclaimer – The views expressed in this article may not be the views of the organisation that the author represents.