
Why we need a new science of safety

It is often said that our approach to health and safety has gone mad. But the truth is that it needs to go scientific. Managing risk is ultimately linked to questions of engineering and economics. Can something be made safer? How much will that safety cost? Is it worth that cost?

Decisions under uncertainty can be explained using utility, a concept introduced by the Swiss mathematician Daniel Bernoulli 300 years ago to measure the amount of reward received by an individual. But the element of risk will still be there. And where there is risk, there is risk aversion.
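To see how a utility function generates risk aversion, here is a minimal Python sketch using Bernoulli's own choice, logarithmic utility. The wealth figures are invented for illustration, not taken from the article:

```python
import math

def log_utility(wealth):
    """Bernoulli's logarithmic utility: each extra pound is worth
    less the wealthier you already are."""
    return math.log(wealth)

# A 50/50 gamble between 10,000 and 90,000 has the same expected
# wealth as a sure 50,000...
sure_thing = log_utility(50_000)
gamble = 0.5 * log_utility(10_000) + 0.5 * log_utility(90_000)

print(f"Utility of sure 50,000:     {sure_thing:.4f}")  # ~10.82
print(f"Expected utility of gamble: {gamble:.4f}")      # ~10.31
# ...but the gamble's expected utility is lower, so a Bernoulli-style
# decision maker prefers the certain amount: that is risk aversion.
```

Because the logarithm is concave, a certain sum always beats a gamble with the same expected value, which is risk aversion in a nutshell.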
Risk aversion itself is a complex phenomenon, as illustrated by an experiment that psychologist John W. Atkinson ran in the 1950s. Five-year-old children played a game of throwing wooden hoops around pegs, with rewards based on successful throws and on the distance from the peg that each child chose to stand.

The risk-confident stood a challenging but realistic distance away, but the risk-averse children fell into two camps. Either they stood so close to the peg that success was almost guaranteed or, more perplexingly, they positioned themselves so far away that failure was almost certain. Thus some risk-averse children were choosing to increase, not decrease, their chance of failure.

So clearly high aversion to risk can induce some strange effects. These might be unsafe in the real world, as author Robert Kelsey testified when he said that, during his time as a City trader, “bad fear” in the financial world led to either “paralysis… or nonsensical leaps”. Utility theory predicts a similar effect, akin to panic, in a large organisation if the decision maker’s aversion to risk gets too high. At some point it is not possible to distinguish the benefits of implementing a protection system from those of doing nothing at all.
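That saturation effect can be reproduced with a toy calculation. The sketch below uses a standard exponential utility function, U(x) = 1 − exp(−x/τ), where a small risk tolerance τ means high aversion to risk; both the model and the payoff figures are illustrative assumptions rather than the article's own formulation:

```python
import math

def exp_utility(x, tolerance):
    """Exponential utility: low risk tolerance = high risk aversion."""
    return 1.0 - math.exp(-x / tolerance)

# Hypothetical expected outcomes (in £m) of two decisions:
with_protection, do_nothing = 40.0, 25.0

for tol in (100.0, 10.0, 1.0, 0.1):  # increasing risk aversion
    a = exp_utility(with_protection, tol)
    b = exp_utility(do_nothing, tol)
    print(f"tolerance {tol:>5}: protect={a:.6f} nothing={b:.6f} gap={a - b:.1e}")
# As tolerance shrinks, both utilities are crushed towards 1 and the
# gap between the two options vanishes: the decision maker can no
# longer tell the protection system apart from doing nothing.
```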

So when it comes to human lives, how much money should we spend on making them safe? Some people prefer not to think about the question, but those responsible for industrial safety or health services do not have that luxury. They have to ask themselves the question: what benefit is conferred when a safety measure “saves” a person’s life?

The answer is that the saved person is simply left to pursue their life as normal, so the actual benefit is the restoration of that person’s future existence. Since we cannot know how long any particular person is going to live, we do the next best thing and use measured historical averages, as published annually by the Office for National Statistics. The gain in life expectancy that the safety measure brings about can then be weighed against the cost of that measure using the Judgement value, which mediates the balance using risk aversion.

The Judgement (J) value is the ratio of the actual expenditure to the maximum reasonable expenditure. A J-value of two suggests that twice as much is being spent as is reasonably justified, while a J-value of 0.5 implies that safety spend could be doubled and still be acceptable. It is a ratio that throws some past safety decisions into sharp relief.
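A back-of-envelope sketch shows the ratio at work. The published J-value method derives the maximum reasonable expenditure from the gain in life expectancy, moderated by risk aversion; the version below replaces that derivation with an assumed monetary value per life-year, so every number in it is a placeholder:

```python
def j_value(actual_spend, people_protected, life_exp_gain_years,
            value_per_life_year):
    """Toy J-value: actual spend over maximum reasonable spend.
    Here the maximum is approximated as people x life-expectancy
    gain x an assumed value per life-year; the real method derives
    it from life expectancy and risk aversion."""
    max_reasonable = (people_protected * life_exp_gain_years
                      * value_per_life_year)
    return actual_spend / max_reasonable

# Entirely hypothetical safety measure:
j = j_value(actual_spend=2_000_000,      # £2m measure
            people_protected=10_000,
            life_exp_gain_years=0.002,   # ~17.5 hours gained each
            value_per_life_year=50_000)  # assumed £ per life-year
print(f"J = {j:.1f}")  # J = 2.0: twice the justifiable spend
```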

Consider two real cases. A few years ago, energy firm BNFL authorised a nuclear clean-up plant with a J-value of over 100, while at roughly the same time the medical quango NICE was asked to review the economic case for three breast cancer drugs found to have J-values of less than 0.05.
The Government of the time seemed happy to sanction spending on a plant that might just prevent a cancer, but wanted to think long and hard about helping many women actually suffering from the disease. A new and objective science of safety is clearly needed to provide the level playing field that has so far proved elusive.


Putting a price on life


Current safety methods are based on the “value of a prevented fatality” or VPF. It is the maximum amount of money considered reasonable to pay for a safety measure that will reduce by one the expected number of preventable premature deaths in a large population. In 2010, that value was calculated at £1.65m.
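In practice the VPF test is simple arithmetic: divide the cost of a proposed measure by the number of premature deaths it is expected to prevent, and check the result against the VPF. A minimal sketch with a made-up scheme:

```python
VPF = 1_650_000  # £, the 2010 figure quoted above

def justified_under_vpf(cost, expected_deaths_prevented):
    """A measure passes the conventional test if its cost per
    statistical life saved does not exceed the VPF."""
    return cost / expected_deaths_prevented <= VPF

# Hypothetical scheme: £5m expected to prevent four premature deaths,
# i.e. £1.25m per life saved, which is under the £1.65m VPF.
print(justified_under_vpf(5_000_000, 4))  # True
```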

This £1.65m figure applies equally, and simplistically, to a 20-year-old and a 90-year-old, and is in widespread use in the road, rail, nuclear and chemical industries. Some (myself included) argue that the method used to reach it is fundamentally flawed.

In the modern industrial world, however, we are all exposed to dangers at work and at home, on the move and at rest. We need to feel safe, and this comes at a cost. The problems and confusions associated with current methods reinforce the urgent need to develop a new science of safety. Not to do so would be too much of a risk.

---------------------------------------------------------
This blog is written by Cabot Institute member Philip Thomas, Professor of Risk Management, University of Bristol. This article was originally published on The Conversation. Read the original article.
