Statistics can look impressive in mathematical circles. In a nutshell, given sufficient samples and assuming all relevant factors are considered, past trends can be a reasonable indicator of future events. However, as weather forecasters know all too well, statistics can only ever yield a probability. The forecast may put today's chance of rain at 80%, but there is still a 20% chance that it will not rain. Relying on probabilities to forecast specific individual events precisely is therefore flawed: a probability is by definition based on an uncertain set of factors, and it is only valid given sufficient samples.
In the real world, risk is a reality. Throughout history, humans have tried to mitigate risks, from avoiding sabre-toothed tigers to carefully depressurising a vessel before drilling into it for maintenance. Risks that cannot be adequately mitigated need to be either avoided or simply accepted (move far away from the sabre-toothed tigers, or never drill into a pressure vessel).
How is risk usually assessed?
In hazardous industries, risk assessments are a fundamental part of safety management. They are used to identify those risks serious enough to demand attention. A common approach to quantifying risk is to rate the probability of an incident on a scale of 1 to 5 and, at the same time, the consequences should the incident occur, also on a scale of 1 to 5. The product of the two numbers is the overall risk figure. The risk can then be plotted on a graph or so-called ‘heat map’, where the top right quadrant holds risks with a high probability and a serious consequence and the bottom left quadrant holds those with low probability and low consequence.
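As a minimal sketch of this scoring approach (assuming a 5×5 matrix and an illustrative midpoint of 3 to split the quadrants; neither is prescribed by any standard mentioned here), the calculation might look like this:

```python
# Minimal sketch of a 5x5 risk matrix score and heat-map quadrant.
# The midpoint of 3 used to separate "low" from "high" is an assumption.

def risk_score(probability: int, consequence: int) -> int:
    """Overall risk figure: probability (1-5) multiplied by consequence (1-5)."""
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("probability and consequence must be between 1 and 5")
    return probability * consequence

def heat_map_quadrant(probability: int, consequence: int, midpoint: int = 3) -> str:
    """Place a risk in one of the four heat-map quadrants."""
    prob_high = probability >= midpoint
    cons_high = consequence >= midpoint
    if prob_high and cons_high:
        return "top right (high probability, serious consequence)"
    if prob_high:
        return "high probability, low consequence"
    if cons_high:
        return "low probability, serious consequence"
    return "bottom left (low probability, low consequence)"

# Example: a fairly likely incident with moderate consequences
print(risk_score(4, 3), "-", heat_map_quadrant(4, 3))  # 12 - top right quadrant
```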
In general, because companies cannot concentrate on all risks, they tend to focus on a Top 10 or some similar ranking. These Top 10 risks are typically found in the hot zone of the heat map (the top right quadrant). This approach is simple, practical and useful, but it is flawed in three main respects:
1. The probability of a risk occurring is based on judgment, is a statistical estimate, and is therefore imprecise in predicting specific future events.
2. Risks with very low probability and very high consequence are sometimes not in the Top 10 at all (for example a nuclear accident: high consequence, low probability), as the short sketch after this list illustrates.
3. The risk can change over time for any number of reasons, such as plant modifications, operational changes or new factors. The gap between the risk assessment and the actual work in a hazardous environment can be the difference between an accident and apparently ‘safe’ work.
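The second flaw is easy to demonstrate. In the sketch below, using a small hypothetical risk register, the low-probability, high-consequence item scores 1 × 5 = 5 and falls outside the Top 3 even though its consequences are by far the most serious:

```python
# Sketch of Top-N ranking by probability x consequence over a hypothetical
# risk register. The entries and the cut-off of 3 are illustrative only.

risk_register = [
    {"risk": "slips and trips in workshop",     "probability": 4, "consequence": 2},
    {"risk": "forklift collision in warehouse", "probability": 3, "consequence": 3},
    {"risk": "minor chemical spill",            "probability": 4, "consequence": 3},
    {"risk": "reactor containment failure",     "probability": 1, "consequence": 5},
]

for item in risk_register:
    item["score"] = item["probability"] * item["consequence"]

top_3 = sorted(risk_register, key=lambda r: r["score"], reverse=True)[:3]
for item in top_3:
    print(f'{item["risk"]}: {item["score"]}')

# The containment failure (score 5) never appears in the Top 3, even though
# it is the highest-consequence risk in the register.
```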
Leading indicators of safety are sometimes used to predict the underlying probability of an incident. Whether this is a reliable tool is a debate in its own right, but companies often use these indicators because they are practical and useful. For example, the number of accidents per man-hour worked, or the number of near misses, can both signal an increase in the underlying probability of an accident. Furthermore, a near miss usually results in some action being taken to avoid the incident in future, thereby reducing the risk over time. When these indicators rise, further action needs to be taken (so the theory says) to address the factors that are creating unsafe conditions. Again, this approach can be flawed if it is not recognised that leading indicators are also statistically derived and therefore imprecise. Management is also often unaware of what action is actually required to contain rising indicators, especially when the causes are behavioural or cultural in nature.
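A minimal sketch of tracking the two indicators mentioned above might look like the following. The monthly figures, the 200 000-hour normalisation factor and the 20% trend threshold are all illustrative assumptions, not prescriptions from any particular standard or company:

```python
# Sketch: track incidents per hours worked and near misses per month,
# and flag a rising trend. All figures and thresholds are hypothetical.

monthly_data = [
    {"month": "Jan", "incidents": 1, "near_misses": 4, "hours_worked": 50_000},
    {"month": "Feb", "incidents": 1, "near_misses": 6, "hours_worked": 48_000},
    {"month": "Mar", "incidents": 2, "near_misses": 9, "hours_worked": 52_000},
]

def incident_rate(incidents: int, hours: int, per_hours: int = 200_000) -> float:
    """Incidents normalised per a fixed number of hours worked."""
    return incidents / hours * per_hours

rates = [incident_rate(m["incidents"], m["hours_worked"]) for m in monthly_data]
for m, rate in zip(monthly_data, rates):
    print(f'{m["month"]}: rate={rate:.1f}, near misses={m["near_misses"]}')

# So the theory says: a rising indicator calls for further action
if rates[-1] > rates[0] * 1.2:
    print("Indicator up by more than 20% - investigate the underlying causes")
```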
The right software can help
Software systems that address safety holistically need to embrace several factors. They need to recognise the value of leading indicators, provide good incident and near-miss management, and enable behavioural safety observations and measurements. They need to recognise the importance of assessing safety-related risks at multiple levels: in the engineering and design process (such as HAZOP outputs) as well as in actual operations (such as permit to work). They need to recognise the dynamic nature of operational environments and have good change management processes to measure the impact of modifications on operational risk. Finally, they need the capability to relate patterns and links in the data so as to warn people of risks that are the combined result of multiple simultaneous factors.
For example: maintenance work on equipment + recent modification to the equipment + previous incidents related to the equipment + a standing work procedure in use = overall risk. This combined risk is not evident to the people who inspect the work sites; it is the result of advanced system analytics that correlate data intelligently to derive new insights. Few EHS systems achieve this level of vital insight, which is likely to be developed successfully only by vendors who focus on operational safety systems.
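To make the idea concrete, here is a sketch of combining signals from separate modules (maintenance, change management, incident history, permits) into one flag for a piece of equipment. The data model, weights and threshold are hypothetical and do not describe how any particular EHS product works:

```python
# Sketch: combine simultaneous risk factors for one equipment item.
# Field names, weights and the alert threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EquipmentContext:
    tag: str
    maintenance_planned: bool        # open work order on this equipment
    recently_modified: bool          # engineering change in the recent past
    prior_incidents: int             # incidents/near misses linked to it
    standing_procedure_in_use: bool  # generic rather than task-specific procedure

def combined_risk(ctx: EquipmentContext) -> int:
    """Simple additive score over the simultaneous factors."""
    score = 0
    score += 2 if ctx.maintenance_planned else 0
    score += 2 if ctx.recently_modified else 0
    score += min(ctx.prior_incidents, 3)   # cap the history contribution
    score += 1 if ctx.standing_procedure_in_use else 0
    return score

pump = EquipmentContext("P-101", True, True, 2, True)
score = combined_risk(pump)
print(f"{pump.tag}: combined risk score {score}")
if score >= 5:
    print("Flag to site inspectors: multiple simultaneous risk factors present")
```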
Be wary of inappropriate statistics and oversimplified risk management processes. Be thorough in approach and maintain multiple strategies to manage safety. Finally, seek systems that take a holistic view of safety while remaining practical and easy to use. Once a system is in place, look to improve the quality of risk information continuously by adding modules such as incident management, permit to work, engineering change management and advanced analytics that generate new safety-related insights.
For more information contact Gavin Halse, ApplyIT, +27 (0)31 514 7300, [email protected], www.applyit.com