Technology and Human Behaviour


Technology May Compound Human Behaviour Risk

By Lloyd’s of London, 19 November 2013.

In her recent book, business thinker Margaret Heffernan explores how human behaviour contributes to organisational failure and how human risks can be compounded by technology. Lloyds.com asks what insurers and risk managers can take from her research.

You say that technology can amplify human risks. What are the potential implications for the insurance industry, which increasingly uses modelling to understand and price risk?

My concern is that we will give up on humans (as fallible) and trust instead to technology because, on one level, it is less prone to sloppy mistakes. But since technology is created by humans, and built on, for example, economic or financial principles which have themselves been derived by humans, we risk fatally over-estimating the perfection of technology. The technology is only ever as good as the insight, experience and knowledge embedded in it. The idea that we eliminate risk when we eliminate humans is, I believe, naïve.

What can insurers and risk managers learn from your research?

There is a biological basis to bias, which means we are all biased; it is not an issue of good or bad, smart or stupid people. What this means is that we would do better to identify and understand our biases in order to ensure a balance of bias; rather like a balanced portfolio, this could reduce risk by ensuring that we have a variety of views and see different things.

I think risk managers have a particularly difficult job because no one really wants them to do anything except provide a clean bill of health. This puts them under pressure to certify things that are dangerous and to minimise risks that can end up being very costly. Moreover, they usually have authority to look only at some things and not others. A whole chain of decisions has contributed to past industrial disasters, but who was willing to speak up and say: when you combine all of these strategies, you compound risk?

And what about the insurance industry, what should they be doing as the experts on risk?

As experts on risk, insurance companies need to think long and hard about the complex consequences of business models and the fact that businesses are dependent on many other businesses. How far, for example, are supermarkets dependent on suppliers who are themselves dependent on suppliers – who may in turn depend on a vast array of independent suppliers? Where are the risks – and how many are there? How many different companies are embedded in the company’s network, and how exposed is the business to a wide range of national and regulatory differences? It is harder than ever today to gain a clear line of sight across businesses, and thus into the absolute risk that they carry. As businesses grow more complex in their partnerships and alliances, there is a significant issue for insurers about boundaries: where exposures start and end.
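The supplier-of-a-supplier chain described above can be made concrete as a small graph-traversal sketch. All company names and links here are hypothetical, purely to illustrate how quickly indirect dependencies multiply beyond the direct relationships a business can see:

```python
from collections import deque

# Hypothetical supplier network: each company maps to its direct suppliers.
# The names and links are illustrative only, not real data.
suppliers = {
    "supermarket": ["wholesaler_a", "wholesaler_b"],
    "wholesaler_a": ["farm_1", "packaging_co"],
    "wholesaler_b": ["farm_2", "packaging_co"],
    "packaging_co": ["plastics_inc"],
    "farm_1": [],
    "farm_2": [],
    "plastics_inc": [],
}

def transitive_suppliers(company, graph):
    """Breadth-first walk of the supply graph: every company the
    given business depends on, directly or indirectly."""
    seen, queue = set(), deque(graph.get(company, []))
    while queue:
        s = queue.popleft()
        if s not in seen:
            seen.add(s)
            queue.extend(graph.get(s, []))
    return seen

exposure = transitive_suppliers("supermarket", suppliers)
print(len(exposure))  # distinct companies in the full dependency chain
```

Even in this toy network, the supermarket’s two visible relationships fan out into six distinct exposures – the “line of sight” problem in miniature.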

During your career you have looked into many cases of organisational failure. What is often the biggest cause?

I have a perhaps unhealthy interest in what I think of as corporate car crashes. It’s hard to say what the single biggest cause is, but one contender would be bad management.

In your book Willful Blindness you argue that the biggest risks are often the ones we don’t see. Is wilful blindness just another way of describing cultural or behavioural risk?

Wilful blindness is just one form of human risk to which all organisations are susceptible and to which few (if any) are immune. I’m also particularly interested in the degree to which wilful blindness can be embedded inside algorithms, because algorithms contain assumptions. As they become more complex, seeing what they assume and what they ignore becomes increasingly difficult.
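A toy sketch of the point about assumptions embedded in algorithms – the function and numbers below are entirely hypothetical, not drawn from any real pricing model:

```python
# Illustrative only: a toy joint-failure estimate with a hidden assumption.
# Multiplying per-event probabilities assumes the events are independent;
# correlated failures violate that assumption silently, and nothing in
# the code itself signals it.
def prob_all_fail(event_probs):
    p = 1.0
    for q in event_probs:
        p *= q  # hidden assumption: the events are independent
    return p

print(prob_all_fail([0.01, 0.01, 0.01]))  # 1e-06, but only if independent
```

Nothing in the function’s interface reveals the independence assumption; a reader of the output alone cannot see what the algorithm has ignored, which is precisely the blindness described above.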

See Lloyd’s of London, Technology May Compound Human Behaviour Risk, Lloyd’s, 19 November 2013.

(Emphasis added)