The dragons of the unknown

This is a series about problems that fascinate me and that I believe are important. The series draws on work from the field of safety critical systems and from the study of complexity, specifically complex socio-technical systems. The issues raised here will become increasingly important in coming years as systems become more complex and the problems that people encounter as a result become more serious and intractable. This is the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post is a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “Facing the dragons part 1 – corporate bureaucracies”. The second post is about the nature of complex systems, “part 2 – crucial features of complex systems”, and argues that the response of software testers to the problems arising from complexity has been too limited. The third follows on from part 2 and discusses the impossibility of knowing exactly how complex socio-technical systems will behave, which means it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

The fourth post, “part 4 – a brief history of accident models”, looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

The fifth post, “part 5 – accident investigations and treating people fairly”, looks at weaknesses in the way that we have traditionally investigated accidents and failures; we have assumed neat linearity with clear cause and effect. In particular, our use of root cause analysis and our willingness to blame people for accidents are hard to justify on practical and ethical grounds.

The sixth post, “part 6 – Safety II, a new way of looking at safety”, looks at the response of the safety critical systems community to these problems.

In the seventh post, when I get round to it, I will try to discuss some of the implications for software testing of the issues I have raised here: the need to look from more than one viewpoint, to think about how users can keep systems going, and to deal with the inevitability of failure. That will lead us to resilience engineering.
