The dragons of the unknown; part 6 – Safety II, a new way of looking at safety

Introduction

This is the sixth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “Facing the dragons part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

The fourth post, “part 4 – a brief history of accident models”, looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

The fifth post, “part 5 – accident investigations and treating people fairly”, looks at the weaknesses of the way that we have traditionally investigated accidents and failures, assuming neat linearity with clear cause and effect. In particular, our use of root cause analysis and our willingness to blame people for accidents are hard to justify.

This post looks at the response of the safety critical community to such problems and the necessary trade-offs that a practical response requires. The result, Safety II, is intriguing and has important lessons for software testers.

More safety means less feedback

In 2017 nobody was killed on a scheduled passenger flight (sadly that won’t be the case in 2018). This prompted the South China Morning Post to produce this striking graphic, which I’ve reproduced here in butchered form. Please, please look at the original. My version is just a crude taster.

Increasing safety is obviously good news, but it poses a problem for safety professionals. If you rely on accidents for feedback then reducing accidents will choke off the feedback you need to keep improving, to keep safe. The safer that systems become the less data is available. Remember what William Langewiesche said (see part 4).

“What can go wrong usually goes right – and then people draw the wrong conclusions.”

If accidents have become rare, but are extremely serious when they do occur, then it will be highly counter-productive if investigators pick out people’s actions that deviated from, or adapted, the procedures that management or designers assumed were being followed.

These deviations are always present in complex socio-technical systems that are running successfully and it is misleading to focus on them as if they were a necessary and sufficient cause when there is an accident. The deviations may have been a necessary cause of that particular accident, but in a complex system they were almost certainly not sufficient. These very deviations may have previously ensured the overall system would work. Removing the deviation will not necessarily make the system safer.

There might be fewer opportunities to learn from things going wrong, but there’s a huge amount to learn from all the cases that go right, provided we look. We need to try and understand the patterns, the constraints and the factors that are likely to amplify desired emergent behaviour and those that will dampen the undesirable or dangerous. In order to create a better understanding of how complex socio-technical systems can work safely we have to look at how people are using them when everything works, not just when there are accidents.

Safety II – learning from what goes right

Complex systems and accidents might be beyond our comprehension but that doesn’t mean we should just accept that “shit happens”. That is too flippant and fatalistic, two words that you can never apply to the safety critical community.

Safety I is shorthand for the old safety world view, which focused on failure. Its utility has been hindered by the relative lack of feedback from things going wrong, and the danger that paying insufficient attention to how and why things normally go right will lead to the wrong lessons being learned from the failures that do occur.

Safety I assumed linear cause and effect with root causes (see part 5). It was therefore prone to reaching a dangerously simplistic verdict of human error.

This diagram illustrates the focus of Safety I on the unusual, on the bad outcomes. I have copied, and slightly adapted, the Safety I and Safety II diagrams from a document produced by Eurocontrol (The European Organisation for the Safety of Air Navigation), “From Safety-I to Safety-II: A White Paper” (PDF, opens in new tab).

Incidentally, I don’t know why Safety I and Safety II are routinely illustrated using a normal distribution with the Safety I focus kicking in at two standard deviations. I haven’t been able to find a satisfactory explanation for that. I assume that this is simply for illustrative purposes.
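
As a rough illustration of why that focus matters, here is a minimal Python sketch of my own, not taken from the Eurocontrol paper. It shows how thin a slice of events a two-standard-deviation “failure” cut-off covers if outcomes really were normally distributed; both the distribution and the cut-off are purely illustrative assumptions.

from statistics import NormalDist
outcomes = NormalDist(mu=0, sigma=1)   # an idealised, purely illustrative distribution of outcomes
tail = outcomes.cdf(-2)                # proportion of outcomes beyond two standard deviations on the bad side
print(f"Share of outcomes in the 'failure' tail: {tail:.1%}")            # roughly 2.3%
print(f"Share of everyday outcomes Safety I never examines: {1 - tail:.1%}")  # roughly 97.7%

Whatever the real shape of the distribution, the point of the diagrams stands: almost all of the evidence about how a system behaves lies outside the region that Safety I examines.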

Where Safety I wants to prevent bad outcomes, Safety II looks at how good outcomes are reached. Safety II is rooted in a more realistic understanding of complex systems than Safety I and extends the focus to what goes right in systems. That entails a detailed examination of what people are doing with the system in the real world to keep it running. Instead of people being regarded as a weakness and a source of danger, Safety II assumes that people, and the adaptations they introduce to systems and processes, are the very reasons we usually get good, safe outcomes.

If we’ve been involved in the development of the system we might think that we have a good understanding of how the system should be working, but users will always, and rightly, be introducing variations that designers and testers had never envisaged. The old, Safety I, way of thinking regarded these variations as mistakes, but they are needed to keep the systems safe and efficient. We expect systems to be both, which leads on to the next point.

There’s a principle in safety critical systems called ETTO, the Efficiency Thoroughness Trade Off. It was devised by Erik Hollnagel, though it might be more accurate to say he made it explicit and popularised the idea. The idea should be very familiar to people who have worked with complex systems. Hollnagel argues that it is impossible to maximise both efficiency and thoroughness. I’m usually reluctant to cite Wikipedia as a source, but its article on ETTO explains it more succinctly than Hollnagel himself did.

“There is a trade-off between efficiency or effectiveness on one hand, and thoroughness (such as safety assurance and human reliability) on the other. In accordance with this principle, demands for productivity tend to reduce thoroughness while demands for safety reduce efficiency.”

Making the system more efficient makes it less likely to achieve its other important goals, such as safety; pursuing those goals in turn comes at the expense of efficiency. That has huge implications for safety critical systems. Safety requires some redundancy, duplication and fallbacks. These are inefficient. Efficiencies eliminate margins of error, with potentially dangerous results.
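
To make the trade-off concrete, here is a toy Python sketch of my own, not Hollnagel’s formulation: each extra independent check makes it less likely that a problem slips through (thoroughness), but adds elapsed time (efficiency). The miss probability and the time per check are invented numbers.

P_MISS_PER_CHECK = 0.10    # assumed chance that any single check misses a problem
MINUTES_PER_CHECK = 15     # assumed cost in elapsed time of running one check
for checks in range(1, 5):
    p_slips_through = P_MISS_PER_CHECK ** checks    # a problem escapes only if every check misses it
    print(f"{checks} check(s): chance a problem slips through {p_slips_through:.4f}, "
          f"extra elapsed time {checks * MINUTES_PER_CHECK} minutes")

Each extra layer of thoroughness buys a smaller reduction in risk at the same cost in time, which is why the pressure to trim “inefficient” checks never goes away.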

ETTO recognises the tension between organisations’ need to deliver a safe, reliable product or service, and the pressure to do so at the lowest cost possible. In practice, the conflict in goals is usually fully resolved only at the sharp end, where people do the real work and run the systems.

As an example, an airline might offer a punctuality bonus to staff. For an airline safety obviously has the highest priority, but if it were an absolute priority, the only consideration, then it could not contemplate any incentive that would encourage crews to speed up turnarounds on the ground, or to persevere with a landing when prudence would dictate a “go around”. In truth, if safety were an absolute priority, with no element of risk being tolerated, would planes ever take off?

People are under pressure to make the systems efficient, but they are expected to keep the system safe, which inevitably introduces inefficiencies. This tension results in a constant, shifting, pattern of trade-offs and compromises. The danger, as “drift into failure” predicts (see part 4), is that this can lead to a gradual erosion of safety margins.

The old view of safety was to constrain people, reducing variability in the way they use systems. Variability was a human weakness. In Safety II variability in the way that people use the system is seen as a way to ensure the system adapts to stay effective. Humans aren’t seen as a potential weakness, but as a source of flexibility and resilience. Instead of saying “they didn’t follow the set process therefore that caused the accident”, the Safety II approach means asking “why would that have seemed like the right thing to do at the time? Was that normally a safe action?”. Investigations need to learn through asking questions, not making judgments – a lesson it was vital I learned as an inexperienced auditor.

Emergence means that the behaviour of a complex system can’t be predicted from the behaviour of its components. Testers therefore have to think very carefully about when we should apply simple pass or fail criteria. The safety critical community explicitly reject the idea of pass/fail, or the bimodal principle as they call it (see part 4). A flawed component can still be useful. A component working exactly as the designers, and even the users, intended can still contribute to disaster. It all depends on the context, what is happening elsewhere in the system, and testers need to explore the relationships between components and try to learn how people will respond.

Safety is an emergent property of the system. It’s not possible to design it into a system, to build it, or implement it. The system’s rules, controls and constraints might prevent safety from emerging, but the most they can do is enable it. They can create the potential for people to keep the system safe but they cannot guarantee it. Safety depends on user responses and adaptations.

Adaptation means the system is constantly changing as the problem changes, as the environment changes, and as the operators respond to change with their own variations. People manage safety with countless small adjustments.

There is a popular internet meme, “we don’t make mistakes – we do variations”. It is particularly relevant to the safety critical community, who have picked up on it because it neatly encapsulates their thinking, e.g. this article by Steven Shorrock, “Safety-II and Just Culture: Where Now?”. Shorrock, in line with others in the safety critical community, argues that if the corporate culture is to be just and treat people fairly then it is important that the variations that users introduce are understood, rather than being used as evidence to punish them when there is an accident. Pinning the blame on people is not only an abdication of responsibility, it is unjust. As I’ve already argued (see part 5), it’s an ethical issue.

Operator adjustments are vital to keep systems working and safe, which brings us to the idea of trust. A well-designed system has to trust the users to adapt appropriately as the problem changes. The designers and testers can’t know the problems the users will face in the wild. They have to confront the fact that dangerous dragons are lurking in the unknown, and the system has to trust the users with the freedom to stay in the safe zone, clear of the dragons, and out of the disastrous tail of the bell curve that illustrates Safety II.

Safety II and Cynefin

If you’re familiar with Cynefin then you might wonder about Safety II moving away from a focus on the tail of the distribution. Cynefin helps us understand that the tail is where we can find opportunities as well as threats. It’s worth stressing that Safety II does encompass Safety I and the dangerous tail of the distribution. It must not be a binary choice of focusing on either the tail or the body. We have to try to understand not only what happens in the tail, how people and systems can inadvertently end up there, but also what operators do to keep out of the tail.

The Cynefin framework and Safety II share a similar perspective on complexity and the need to allow for, and encourage, variation. I have written about Cynefin elsewhere, e.g. in two articles I wrote for the Association for Software Testing, and there isn’t room to repeat that here. However, I do strongly recommend that testers familiarise themselves with the framework.

To sum it up very briefly, Cynefin helps us to make sense of problems by assigning them to one of four categories: the obvious, the complicated (these two being related, in that their problems have predictable causes and resolutions), the complex and the chaotic. Depending on the category, different approaches are required. In the case of software development the challenge is to learn more about the problem in order to turn it from a complex activity into a complicated one that we can manage more easily.

Applying Cynefin would result in more emphasis on what’s happening in the tails of the distribution, because that’s where we will find the threats to be avoided and the opportunities to be exploited. Nevertheless, Cynefin isn’t like the old Safety I just because they both focus on the tails. They embody totally different worldviews.

Safety II is an alternative way of looking at accidents, failure and safety. It is not THE definitive way, that renders all others dated, false and heretical. The Safety I approach still has its place, but it’s important to remember its limitations.

Everything flows and nothing abides

Thinking about linear cause and effect, and decomposing components are still vital in helping us understand how different parts of the system work, but they offer only a very limited and incomplete view of what we should be trying to learn. They provide a way of starting to build our understanding, but we mustn’t stop there.

We also have to venture out into the realms of the unknown and often unknowable, to try to understand more about what might happen when the components combine with each other and with humans in complex socio-technical systems. This is when objects become processes, when static elements become part of a flow that is apparent only when we zoom out to take in a bigger picture in time and space.

The idea of understanding objects by stepping back and looking at how they flow and mutate over time has a long, philosophical and scientific history. 2,500 years ago Heraclitus wrote.

“Everything flows and nothing abides. Everything gives way and nothing stays fixed.”

Professor Michael McIntyre (Professor of Atmospheric Dynamics, Cambridge University) put it well in a fascinating BBC documentary, “The secret life of waves”.

“If we want to understand things in depth we usually need to think of them both as objects and as dynamic processes and see how it all fits together. Understanding means being able to see something from more than one viewpoint.”

In my next post I will try to discuss some of the implications for software testing of the issues I have raised here, the need to look from more than one viewpoint, to think about how users can keep systems going, and dealing with the inevitability of failure. That will lead us to resilience engineering.


The dragons of the unknown; part 5 – accident investigations and treating people fairly

Introduction

This is the fifth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “Facing the dragons part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

The fourth post, “part 4 – a brief history of accident models”, looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them. This post looks at the weaknesses of the way that we have traditionally investigated accidents and failures, assuming neat linearity with clear cause and effect. In particular, our use of root cause analysis and our willingness to blame people for accidents are hard to justify.

The limitations of root cause analysis

Once you accept that complex systems can’t have clear and neat links between causes and effects then the idea of root cause analysis becomes impossible to sustain. “Fishbone” cause and effect diagrams (like those used in Six Sigma) illustrate traditional thinking, that it is possible to track back from an adverse event to find a root cause that was both necessary and sufficient to bring it about.

The assumption of linearity with tidy causes and effects is no more than wishful thinking. Like the Domino Model (see “part 4 – a brief history of accident models”) it encourages people to think there is a single cause, and to stop looking when they’ve found it. It doesn’t even offer the insight of the Swiss Cheese Model (also see part 4) that there can be multiple contributory causes, all of them necessary but none of them sufficient to produce an accident. That is a key idea. When complex systems go wrong there is rarely a single cause; causes are necessary, but not sufficient.

Here is a more realistic depiction of a complex socio-technical system. It is a representation of the operations control system for an airline. The specifics don’t matter. It is simply a good illustration of how messy a real, complex system looks when we try to depict it.

This is actually very similar to the insurance finance applications diagram I drew up for Y2K (see “part 1 – corporate bureaucracies”). There was no neat linearity. My diagram looked just like this, with a similar number of nodes, or systems, most of which had multiple two-way interfaces with others. And that was just at the level of applications. There was some intimidating complexity within these systems.

As there is no single cause of failure the search for a root cause can be counter-productive. There are always flaws, bugs, problems, deviances from process, variations. So you can always fix on something that has gone wrong. But it’s not really a meaningful single cause. It’s arbitrary.

The root cause is just where you decide to stop looking. The cause is not something you discover. It’s something you choose and construct. The search for a root cause can mean attention will focus on something that is not inherently dangerous, something that had previously “failed” repeatedly but without any accident. The response might prevent that particular failure and therefore ensure there’s no recurrence of an identical accident. However, introducing a change, even if it’s a fix, to one part of a complex system affects the system in unpredictable ways. The change therefore creates new possibilities for failure that are unknown, even unknowable.

It’s always been hard, even counter-intuitive, to accept that we can have accidents & disasters without any new failure of a component, or even without any technical failure that investigators can identify and without external factors interfering with the system and its operators. We can still have air crashes for which no cause is ever found. The pressure to find an answer, any plausible answer, means there has always been an overwhelming temptation to fix the blame on people, on human error.

Human error – it’s the result of a problem, not the cause

If there’s an accident you can always find someone who screwed up, or who didn’t follow the rules, the standard, or the official process. One problem with that is the same applies when everything goes well. Something that troubled me in audit was realising that every project had problems, every application had bugs when it went live, and there were always deviations from the standards. But the reason smart people were deviating wasn’t that they were irresponsible. They were doing what they had to do to deliver the project. Variation was a sign of success as much as failure. Beating people up didn’t tell us anything useful, and it was appallingly unfair.

One of the rewarding aspects of working as an IT auditor was conducting post-implementation reviews and being able to defend developers who were being blamed unfairly for problem projects. The business would give them impossible jobs, complacently assuming the developers would pick up all the flak for the inevitable problems. When auditors, like me, called them out for being cynical and irresponsible they hated it. They used to say it was because I had a developer background and was angling for my next job. I didn’t care because I was right. Working in a good audit department requires you to build up a thick skin, and some healthy arrogance.

There were always deviations from standards, and the tougher the challenge the more obvious they would be, but these allegedly deviant developers were the only reason anything was delivered at all, albeit by cutting a few corners.

It’s an ethical issue. Saying the cause of an accident is that people screwed up is opting for an easy answer that doesn’t offer any useful insights for the future and just pushes problems down the line.

Sidney Dekker used a colourful analogy. Dumping the blame on an individual after an accident is “peeing in your pants management” (PDF, opens in new tab).

“You feel relieved, but only for a short while… you start to feel cold and clammy and nasty. And you start stinking. And, oh by the way, you look like a fool.”

Putting the blame on human error doesn’t just stink. It obscures the deeper reasons for failure. It is the result of a problem, not the cause. It also encourages organisations to push for greater automation, in the vain hope that will produce greater safety and predictability, and fewer accidents.

The ironies of automation

An important part of the motivation to automate systems is that humans are seen as unreliable & inefficient. So they are replaced by automation, but that leaves the humans with jobs that are even more complex and even more vulnerable to errors. The attempt to remove errors creates fresh possibilities for even worse errors. As Lisanne Bainbridge wrote in a 1983 paper “The ironies of automation”;

“The more advanced a control system is… the more crucial may be the contribution of the human operator.”

There are all sorts of twists to this. Automation can mean the technology does all the work and operators have to watch a machine that’s in a steady-state, with nothing to respond to. That means they can lose attention & not intervene when they need to. If intervention is required the danger is that vital alerts will be lost if the system is throwing too much information at operators. There is a difficult balance to be struck between denying operators feedback, and thus lulling them into a sense that everything is fine, and swamping them with information. Further, if the technology is doing deeply complicated processing, are the operators really equipped to intervene? Will the system allow operators to override? Bainbridge makes the further point;

“The designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.”

This is a vital point. Systems are becoming more complex and the tasks left to the humans become ever more demanding. System designers have only a very limited understanding of what people will do with their systems. They don’t know. The only certainty is that people will respond and do things that are hard, or impossible, to predict. That is bound to deviate from formal processes, which have been defined in advance, but these deviations, or variations, will be necessary to make the systems work.

Acting on the assumption that these deviations are necessarily errors and “the cause” when a complex socio-technical system fails is ethically wrong. However, there is a further twist to the problem, summed up by the Law of Stretched Systems.

Stretched systems

Lawrence Hirschhorn’s Law of Stretched Systems is similar to the Fundamental Law of Traffic Congestion. New roads create more demand to use them, so new roads generate more traffic. Likewise, improvements to systems result in demands that the system, and the people, must do more. Hirschhorn seems to have come up with the law informally, but it has been popularised by the safety critical community, especially by David Woods and Richard Cook.

“Every system operates always at its capacity. As soon as there is some improvement, some new technology, we stretch it.”

And the corollary, furnished by Woods and Cook.

“Under resource pressure, the benefits of change are taken in increased productivity, pushing the system back to the edge of the performance envelope.”

Every change and improvement merely adds to the stress that operators are coping with. The obvious response is to place more emphasis on ergonomics and human factors, to try and ensure that the systems are tailored to the users’ needs and as easy to use as possible. That might be important, but it hardly resolves the problem. These improvements are themselves subject to the Law of Stretched Systems.

This was all first noticed in the 1990s after the First Gulf War. The US Army hadn’t been in serious combat for 18 years. Technology had advanced massively. Throughout the 1980s the army reorganised, putting more emphasis on technology and training. The intention was that the technology should ease the strain on users, reduce fatigue and be as simple to operate as possible. It didn’t pan out that way when the new army went to war. Anthony H. Cordesman and Abraham Wagner analysed in depth the lessons of the conflict. They were particularly interested in how the technology had been used.

“Virtually every advance in ergonomics was exploited to ask military personnel to do more, do it faster, and do it in more complex ways… New tactics and technology simply result in altering the pattern of human stress to achieve a new intensity and tempo of combat.”

Improvements in technology create greater demands on the technology – and the people who operate it. Competitive pressures push companies towards the limits of the system. If you introduce an enhancement to ease the strain on users then managers, or senior officers, will insist on exploiting the change. Complex socio-technical systems always operate at the limits.

This applies not only to soldiers operating high tech equipment. It applies also to the ordinary infantry soldier. In 1860 the British army was worried that troops had to carry 27kg into combat (PDF, opens in new tab). The load has now risen to 58kg. US soldiers have to carry almost 9kg of batteries alone. The Taliban called NATO troops “donkeys”.

These issues don’t apply only to the military. They’ve prompted a huge amount of new thinking in safety critical industries, in particular healthcare and air transport.

The overdose – system behaviour is not explained by the behaviour of its component technology

Remember the traditional argument that any system that was not deterministic was inherently buggy and badly designed? See “part 2 – crucial features of complex systems”.

In reality that applies only to individual components, and even then complexity & thus bugginess can be inescapable. When you’re looking at the whole socio-technical system it just doesn’t stand up.

Introducing new controls, alerts and warnings doesn’t just increase the complexity of the technology, as I mentioned earlier with the MiG jet designers (see part 4). These new features add to the burden on the people. Alerts and error messages can swamp users of complex systems, so they miss the information they really need to know.
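
A toy illustration of that swamping effect, assuming (purely for the sake of argument) that an operator can genuinely review only a fixed number of alerts per shift. This is simple arithmetic of my own, not a model from the human factors literature.

ATTENTION_BUDGET = 20    # assumed number of alerts an operator can properly review in a shift
for alerts_raised in (20, 50, 200, 1000):
    p_vital_alert_reviewed = min(1.0, ATTENTION_BUDGET / alerts_raised)
    print(f"{alerts_raised:>5} alerts raised: chance the one vital alert is reviewed ~ {p_vital_alert_reviewed:.0%}")

The numbers are invented, but the shape of the problem is real: every additional alert dilutes the attention available for the one that matters.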

I can’t recommend strongly enough the story told by Bob Wachter in “The overdose: harm in a wired hospital”.

A patient at a hospital in California received an overdose of 38½ times the correct amount. Investigation showed that the technology worked fine. All the individual systems and components performed as designed. They flagged up potential errors before they happened. So someone obviously screwed up. That would have been the traditional verdict. However, the hospital allowed Wachter to interview everyone involved in each of the steps. He observed how the systems were used in real conditions, not in a demonstration or test environment. Over five articles he told a compelling story that will force any fair reader to admit “yes, I’d have probably made the same error in those circumstances”.

Happily the patient survived the overdose. The hospital staff involved were not disciplined and were allowed to return to work. The hospital had to think long and hard about how it would try to prevent such mistakes recurring. The uncomfortable truth they had to confront was that there were no simple answers. Blaming human error was a cop out. Adding more alerts would compound the problems staff were already facing; one of the causes of the mistake was the volume of alerts swamping staff, making it hard, or impossible, to sift out the vital warnings from the important and the merely useful.

One of the hard lessons was that focussing on making individual components more reliable had harmed the overall system. The story is an important illustration of the maxim in the safety critical community that trying to make systems safer can make them less safe.

Some system changes were required and made, but the hospital realised that the deeper problem was organisational and cultural. They made the brave decision to allow Wachter to publicise his investigation and his series of articles is well worth reading.

The response of the safety critical community to such problems, and the necessary trade-offs that a practical response requires, is intriguing, with important lessons for software testers. I shall turn to this in my next post, “part 6 – Safety II, a new way of looking at safety”.

The dragons of the unknown; part 4 – a brief history of accident models

Introduction

This is the fourth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “The dragons of the unknown; part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

This post looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

Why do accidents happen?

I want to take you back to the part of the world I come from, the east of Scotland. The Tay Bridge is 3.5 km long, the longest railway bridge in the United Kingdom. It’s the second railway bridge over the Tay. The first was opened in 1878 and came down in a storm in 1879, taking a train with it and killing everyone on board.

The stumps of the old bridge were left in place because of concern that removing them would disturb the riverbed. I always felt they were there as a lesson for later generations. Children in that part of Scotland can’t miss these stumps. I remember being bemused when I learned about the disaster. “Mummy, Daddy, what are those things beside the bridge? What…? Why…? How…?” So bridges could fall down. Adults could get it wrong. Things could go badly wrong. There might not be a happy ending. It was an important lesson in how the world worked for a small child.

Accident investigations are difficult and complex even for something like a bridge, which appears, at first sight, to be straightforward in concept and function. The various factors that featured in the inquiry report for the Tay Bridge disaster included the bridge design, manufacture of components, construction, maintenance, previous damage, wind speed, train speed and the state of the riverbed.

These factors obviously had an impact on each other. That’s confusing enough, and it’s far worse for complex socio-technical systems. You could argue that a bridge is either safe or unsafe, usable or dangerous. It’s one or the other. There might be argument about where you would draw the line, but most people would be comfortable with the idea of a line. Safety experts call that idea of a line separating the unbroken from the broken the bimodal principle (not to be confused with Gartner’s bimodal IT management); a system is either working or it is broken.

Thinking in bimodal terms becomes pointless when you are working with systems that run in a constantly flawed state, one of partial failure, when no-one knows how these systems work or even exactly how they should work. This is all increasingly recognised. But when things go wrong and we try to make sense of them there is a huge temptation to arrange our thinking in a way with which we are comfortable; we fall back on mental models that seem to make sense of complexity, however far removed they are from reality. These are the envisioned worlds I mentioned in part 1.

We home in on factors that are politically convenient, the ones that will be most acceptable to the organisation’s culture. We can see this, and also how thinking has developed, by looking at the history of the conceptual models that have been used for accident investigations.

Heinrich’s Domino Model (1931)

The Domino Model was a traditional and very influential way to help safety experts make sense of accidents. Accidents happened because one factor kicked into another, and so on down the line of dominos, as illustrated by Heinrich’s figure 3. Problems with the organisation or environment would lead to people making mistakes and doing something dangerous, which would lead to accidents and injury. It assumed neat linearity & causation. Its attraction was that it appealed to management. Take a look at the next two diagrams in the sequence, figures 4 and 5.

The model explicitly states that taking out the unsafe act will stop an accident. It encouraged investigators to look for a single cause. That was immensely attractive because it kept attention away from any mistakes the management might have made in screwing up the organisation. The chain of collapsing dominos is broken by removing the unsafe act and you don’t get an accident.

The model is consistent with beating up the workers who do the real work. But blaming the workers was only part of the problem with the Domino Model. It was nonsense to think that you could stop accidents by removing unsafe acts, variations from process, or mistakes from the chain. It didn’t have any empirical, theoretical or scientific basis. It was completely inappropriate for complex systems. Thinking in terms of a chain of events was quite simply wrong when analysing these systems. Linearity, neat causation and decomposing problems into constituent parts for separate analysis don’t work.

Despite these fundamental flaws the Domino Model was popular. Or rather, it was popular because of its flaws. It told managers what they wanted to hear. It helped organisations make sense of something they would otherwise have been unable to understand. Accepting that they were dealing with incomprehensible systems was too much to ask.

Swiss Cheese Model (barriers)

James Reason’s Swiss Cheese Model was the next step and it was an advance, but limited. The model did recognise that problems or mistakes wouldn’t necessarily lead to an accident. That would happen only if a series of them lined up. You can therefore stop the accident recurring by inserting a new barrier. However, the model is still based on the idea of linearity and of a sequence of cause and effect, and also the idea that you can and should decompose problems for analysis. This is a dangerously limited way of looking at what goes wrong in complex socio-technical systems, and the danger is very real with safety critical systems.
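
The intuition behind the model can be sketched in a few lines of Python. This is my own toy simulation with invented probabilities, not anything from Reason: each barrier has occasional “holes”, but an accident needs every hole to line up at the same moment.

import random
random.seed(1)
HOLE_PROBABILITIES = [0.05, 0.10, 0.02, 0.08]    # assumed chance each barrier fails on a given occasion
TRIALS = 1_000_000
accidents = sum(
    all(random.random() < p for p in HOLE_PROBABILITIES)    # accident only if every barrier is breached
    for _ in range(TRIALS)
)
print(f"Accidents in {TRIALS:,} trials: {accidents}")        # a handful, even though individual holes are common

Each individual failure is commonplace and usually harmless; only the rare alignment produces the accident, which is why no single hole is a sufficient cause.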

Of course, there is nothing inherently wrong with analysing systems and problems by decomposing them or relying on an assumption of cause and effect. These both have impeccably respectable histories in science and philosophy. Reducing problems to their component parts has its intellectual roots in the work of Rene Descartes. This approach implies that you can understand the behaviour of a system by looking at the behaviour of its components. Descartes’ approach (the Cartesian) fits neatly with a Newtonian scientific worldview, which holds that it is possible to identify definite causes and effects for everything that happens.

If you want to understand how a machine works and why it is broken, then these approaches are obviously valid. They don’t work when you are dealing with a complex socio-technical system. The whole is quite different from the sum of its parts. Thinking of linear flows is misleading when the different elements of a system are constantly influencing each other and adapting to feedback. Complex systems have unpredictable, emergent properties, and safety is an emergent outcome of complex socio-technical systems.

All designs are a compromise

Something that safety experts are keenly aware of is that all designs are compromises. Adding a new barrier, as envisaged by the Swiss Cheese Model, to try and close off the possibility of an accident can be counter-productive. Introducing a change, even if it’s a fix, to one part of a complex system affects the whole system in unpredictable and possibly harmful ways. The change creates new possibilities for failure, that are unknown, even unknowable.

It’s not a question of regression testing. It’s bigger and deeper than that. The danger is that we create new pathways to failure. The changes might initially seem to work, to be safe, but they can have damaging results further down the line as the system adapts and evolves, as people push the system to the edges.

There’s a second danger. New alerts or controls increase the complexity with which the user has to cope. That was a problem I now recognise with our approach as auditors. We didn’t think through the implications carefully enough. If you keep throwing in fixes, controls and alerts then the user will miss the ones they really need to act on. That reduces the effectiveness, the quality and ultimately the safety of the system. I’ll come back to that later. This is an important paradox. Trying to make a system more reliable and safer can make it more dangerous and less reliable.

The designers of the Soviet Union’s MiG-29 jet fighter observed, “the safest part is the one we could leave off”, (according to Sidney Dekker in his book “The field guide to understanding human error”).

Drift into failure

A friend once commented that she could always spot one of my designs. They reflected a very pessimistic view of the world. I couldn’t know how things would go wrong, I just knew they would and my experience had taught me where to be wary. Working in IT audit made me very cautious. Not only had I completely bought into Murphy’s Law, “anything that can go wrong will go wrong”, I also had my own variant; “and it will go wrong in ways I couldn’t have imagined”.
William Langewiesche is a writer and former commercial pilot who has written extensively on aviation. He provided an interesting and insightful correction to Murphy, and also to me (from his book “Inside the Sky”).

“What can go wrong usually goes right”.

There are two aspects to this. Firstly, as I have already discussed, complex socio-technical systems are always flawed. They run with problems, bugs, variations from official process, and in the way people behave. Despite all the problems under the surface everything seems to go fine, till one day it all goes wrong.

The second important insight is that you can have an accident even if no individual part of the system has gone wrong. Components may have always worked fine, and continue to do so, but on the day of disaster they combine in unpredictable ways to produce an accident.

Accidents can happen when all the components have been working as designed, not just when they fail. That’s a difficult lesson to learn. I’d go so far as to say we (in software development and engineering and even testing) didn’t want to learn it. However, that’s the reality, however scary it is.

Sidney Dekker developed this idea in a fascinating and important book, “Drift Into Failure”. His argument is that we are developing massively complex systems that we are incapable of understanding. It is therefore misguided to think in terms of system failure arising from a mistake by an operator or the sudden failure of part of the system.

“Drifting into failure is a gradual, incremental decline into disaster driven by environmental pressure, unruly technology and social processes that normalise growing risk. No organisation is exempt from drifting into failure. The reason is that routes to failure trace through the structures, processes and tasks that are necessary to make an organization successful. Failure does not come from the occasional, abnormal dysfunction or breakdown of these structures, processes or tasks, but is an inevitable by-product of their normal functioning. The same characteristics that guarantee the fulfillment of the organization’s mandate will turn out to be responsible for undermining that mandate…

In the drift into failure, accidents can happen without anything breaking, without anybody erring, without anyone violating rules they consider relevant.”

The idea of systems drifting into failure is a practical illustration of emergence in complex systems. The overall system adapts and changes over time, behaving in ways that could not have been predicted from analysis of the components. The fact that a system has operated safely and successfully in the past does not mean it will continue to do so. Dekker says;

“Empirical success… is no proof of safety. Past success does not guarantee future safety.”

Dekker’s argument about drifting to failure should strike a chord with anyone who has worked in large organisations. Complex systems are kept running by people who have to cope with unpredictable technology, in an environment that increasingly tolerates risk so long as disaster is averted. There is constant pressure to cut costs, to do more and do it faster. Margins are gradually shaved in tiny incremental changes, each of which seems harmless and easy to justify. The prevailing culture assumes that everything is safe, until suddenly it isn’t.

Langewiesche followed up his observation with a highly significant second point;

“What can go wrong usually goes right – and people just naturally draw the wrong conclusions.”

When it all does go wrong the danger is that we look for quick and simple answers. We focus on the components that we notice are flawed, without noticing all the times everything went right even with those flaws. We don’t think about the way people have been keeping systems running despite the problems, or how the culture has been taking them closer and closer to the edge. We then draw the wrong conclusions. Complex systems and the people operating them are constantly being pushed to the limits, which is an important idea that I will return to.

It is vital that we understand this idea of drift, and how people are constantly having to work with complex systems under pressure. Once we start to accept these ideas it starts to become clear that if we want to avoid drawing the wrong conclusions we have to be sceptical about traditional approaches to accident investigation. I’m talking specifically about root cause analysis, and the notion that “human error” is a meaningful explanation for accidents and problems. I will talk about these in my next post, “part 5 – accident investigations and treating people fairly”.

The dragons of the unknown; part 3 – I don’t know what’s going on

Introduction

This is the third post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “The dragons of the unknown; part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. This one follows on from part 2, which talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely.

The starting point for a system audit

When we audited a live system the specifications of the requirements and design didn’t matter – they really didn’t. This was because;

  • specs were a partial and flawed picture of what was required at the time that the system was built,
  • they were not necessarily relevant to the business risks and problems facing the company at the time of the audit,
  • the system’s compliance, or failure to comply, with the specs told us nothing useful about what the system was doing or should be doing (we genuinely didn’t care about “compliance”),
  • we never thought it was credible that the specs would have been updated to reflect subsequent changes,
  • we were interested in the real behaviour of the people using the system, not what the analysts and designers thought they would or should be doing.

It was therefore a complete waste of time in a tightly time-boxed audit if we waded through the specs. Context-driven testers have been fascinated when I’ve explained that we started with a blank sheet of paper. The flows we were interested in were the things that mattered to the people that mattered.

We would identify a key person and ask them to talk us through the business context of the system, sketching out how it fitted into its environment. The interfaces were where we always expected things to go wrong. The scope of the audit was dictated by the sketches of the people who mattered, not the system documentation.

We might have started with a blank sheet but we were highly rigorous. We used a structured methods modelling technique called IDEF0 to make sense of what we were learning, and to communicate that understanding back to the auditees to confirm that it made sense.

We were constantly asking, “How do you know that the system will do what we want? How will you get the outcomes you need? What must never happen? How does the system prevent that? What must always happen? How does the system ensure that?” It’s a similar approach to the safety critical idea of always events and never events. It is particularly popular in medical safety circles.

We were dealing with financial systems. Our concern could be summarised as; how do we know that the processing is complete, accurate, authorised and timely? It was almost a mantra; complete, accurate, authorised and timely.

These are all constrained by each other, and informed by the context, i.e. sufficiently accurate for business objectives given the need to provide the information within an acceptable time. We had to understand the current context. Context was everything.
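
To show what that mantra means in practice, here is a minimal sketch in Python of the kind of checks it implies. The record layout, control totals and 24-hour timeliness threshold are all invented for illustration; this is the idea, not a reconstruction of the audit tools we actually used.

from datetime import datetime, timedelta
payments = [
    {"id": 1, "amount": 250.0, "requested_by": "alice", "approved_by": "bob",
     "received": datetime(2018, 11, 1, 9, 0), "processed": datetime(2018, 11, 1, 17, 0)},
    {"id": 2, "amount": 400.0, "requested_by": "carol", "approved_by": "carol",
     "received": datetime(2018, 11, 1, 9, 5), "processed": datetime(2018, 11, 3, 9, 0)},
]
control_count, control_total = 2, 650.0    # control totals reported by the feeding system
complete = len(payments) == control_count                                  # nothing lost, nothing duplicated
accurate = abs(sum(p["amount"] for p in payments) - control_total) < 0.01  # the values reconcile
authorised = all(p["approved_by"] != p["requested_by"] for p in payments)  # separation of duties
timely = all(p["processed"] - p["received"] <= timedelta(hours=24) for p in payments)  # processed within a day
print(f"complete={complete} accurate={accurate} authorised={authorised} timely={timely}")
# The second payment fails two checks: it was self-approved and processed late.

The point is not the code; it is that each word in the mantra translates into a question the system, and its controls, must be able to answer.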

Once we had a good idea of the processes, the outputs, the key risks and the controls that were needed, we would attack the system to see if we could force it to do what it shouldn’t, or prevent it doing what it was required to do. We would try to approach the testing with the mindset of a dishonest or irresponsible user. At that time I had never heard of exploratory testing. Training in that would have been invaluable.

We would also speak to ordinary users and watch them in action. Our interviews, observations, and our own testing told us far more about the system and how it was being used than the formal system documentation could. It also told us more than we could learn from the developers who looked after the systems. They would often be taken by surprise by what we discovered about how users were really working with their systems.

We were always asking questions to help us identify the controls that would give us the right outcomes. This is very similar to the way experts look at safety critical systems. Safety is a control problem, a question of ensuring there are mechanisms or practices in place that will keep the system and its users from straying into dangerous territory. System developers cannot know how their systems will be used as part of a complex socio-technical system. They might think they do, but users will always take the system into unknown territory.

“System as imagined” versus “system as found”

The safety critical community makes an important distinction between the system as imagined and the system as found. The imagined system is neat and tidy. It is orderly, without noise, confusion and distraction. Real people are absent, or not meaningfully represented.

A user who is working with a system for several hours a day for years on end will know all about the short cuts, hacks and vulnerabilities that are available. They make the system do things that the designers never imagined. They will understand the gaps in the system, the gaps in the designers’ understanding. The users would then have to use their own ingenuity. These user variations are usually beneficial and help the system work as the business requires. They can also be harmful and provide opportunities for fraud, a big concern in an insurance company. Large insurers receive and pay out millions of pounds a day, with nothing tangible changing hands. They have always been vulnerable to fraud, both by employees and outsiders.

I investigated one careful user who stole over a million pounds, slice by slice, several thousand pounds a week, year after year, all without attracting any attention. He was exposed only by an anonymous tip off. It was always a real buzz working on those cases trying to work out exactly what the culprit had done and how they’d exploited the systems (note the plural – the best opportunities usually exploited loopholes between interfacing systems).

What shocked me about that particular case was that the fraudster hadn’t grabbed the money and run. He had settled in for a long term career looting systems we had thought were essentially sound and free of significant bugs. He was confident that he would never be caught. After piecing together the evidence I knew that he was right. There was nothing in the systems to stop him or to reveal what he had done, unless we happened to investigate him in detail.

Without the anonymous tip from someone he had double crossed he would certainly have got away with it. That forced me to realise that I had very little idea what was happening out in the wild, in the system as found.

The system as found is messy. People are distracted and working under pressure. What matters is the problems and the environment the people are dealing with, and the way they have to respond and adapt to make the system work in the mess.

There are three things you really shouldn’t say to IT auditors. In ascending facepalm order.

“But we thought audit would expect …”.

“But the requirements didn’t specify…”.

“But users should never do that”.

The last was the one that really riled me. Developers never know what users will do. They think they do, but they can’t know with any certainty. Developers don’t have the right mindset to think about what real users will do. Our (very unscientific and unevidenced) rule of thumb was as follows. 10% of people will never steal, regardless of the temptation. 10% will always try to steal, so systems must produce and retain the evidence to ensure they will be caught. The other 80% will be honest so long as we don’t put temptation in their way, so we have to explore the application to find the points of weakness that will tempt users.

Aside from their naivety, in auditors’ eyes, regarding fraud and deliberate abuse of the system, developers, and even business analysts, don’t understand the everyday pressures users will be under when they are working with complex socio-technical systems. Nobody knows how these systems really work. It’s nothing to be ashamed of. It’s the reality and we have to be honest about that.

One of the reasons I was attracted to working in audit and testing, and lost my enthusiasm for working in information security, was that these roles required me to think about what was really going on. How is this bafflingly complex organisation working? We can’t know for sure. It’s not a predictable, deterministic machine. All we can say confidently is that certain factors are more likely to produce good outcomes and others are more likely to give us bad outcomes.

If anyone does claim they do fully understand a complex socio-technical system then one of the following applies.

  • They’re bullshitting, which is all too common, and are happy to appear more confident than they have any right to be. Sadly it’s a good look in many organisations.
  • They’re deluded and genuinely have no idea of the true complexity.
  • They understand only part of the system – probably one of the less complex parts, and they’re ignoring the rest. In fairness, they might have made a conscious decision to focus only on the part that they can understand. However, other people might not appreciate the significance of that qualification, and no-one might spot that the self-professed expert has defined the problem in a way that is understandable but not realistic.
  • They did have a good understanding of the system once upon a time, when it was simpler, before it evolved into a complex beast.

It is widely believed that mapmakers in the Middle Ages would fill in the empty spaces with dragons. It’s not true. It’s just a myth, but it is a nice idea. It is a neat analogy because the unknown is scary and potentially dangerous. That’s been picked up by people working with safety critical systems, specifically the resilience engineering community. They use phrases like “jousting with dragons” and “facing the dragons at the borderlands”.

Safety critical experts use this metaphor of dangerous dragons for reasons I have been outlining in this series. Safety critical systems are complex socio-technical systems. Nobody can specify how these systems will behave, what people will have to do to keep them running, running safely. The users will inevitably take these systems into unknown, and therefore dangerous, territory. That has huge implications for safety critical systems. I want to look at how the safety community has responded to the problem of trying to understand why systems can have bad outcomes when they can’t even know how systems are supposed to behave. I will pick that up in later posts in this series.

In the next post I will talk about the mental models we use to try and understand failures and accidents, “part 4 – a brief history of accident models”.

The dragons of the unknown; part 2 – crucial features of complex systems

Introduction

This is the second post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “The dragons of the unknown; part 1 – corporate bureaucracies”. This post is about the nature of complex systems and discusses some features that have significant implications for testing. We have been slow to recognise the implications of these features.

Complex systems are probabilistic (stochastic) not deterministic

A deterministic system will always produce the same output, starting from a given initial state and receiving the same input. Probabilistic, or stochastic, systems are inherently unpredictable and therefore non-deterministic. Stochastic is defined by the Oxford English Dictionary as “having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely.”
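To make the distinction concrete, here is a minimal Python sketch of my own (the functions and figures are invented for illustration, not taken from any real system). The first function is deterministic: the same input always produces the same output. The second can only be described statistically, which is exactly what the dictionary definition is getting at.

    import random
    import statistics

    def deterministic_total(premiums):
        # Same input, same output, every time.
        return sum(premiums)

    def stochastic_settlement(claim_estimate, rng):
        # The outcome depends on factors we can only model as random noise:
        # negotiation, litigation, human judgement under pressure.
        return claim_estimate * rng.lognormvariate(0, 0.4)

    rng = random.Random(42)
    outcomes = [stochastic_settlement(1000.0, rng) for _ in range(10_000)]

    print(deterministic_total([100, 250, 75]))                  # always 425
    print(round(statistics.mean(outcomes), 2))                  # analysable statistically...
    print(round(statistics.quantiles(outcomes, n=20)[-1], 2))   # ...but no single run is predictable

No amount of re-running the deterministic function tells you anything new; every run of the stochastic one tells you a little more about the distribution, and nothing certain about the next outcome.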

Traditionally, non-determinism meant a system was badly designed, inherently buggy, and untestable. Testers needed deterministic systems to do their job, so it was the job of designers to produce systems that were deterministic, and the job of testers to demonstrate whether or not the systems met that benchmark. Any non-determinism was a bug that had to be removed.

Is that right or nonsense? Well, neither, or rather it depends on the context you choose. It depends what you choose to look at. You can restrict yourself to a context where determinism holds true, or you can expand your horizons. The traditional approach to determinism is correct, but only within carefully defined limits.

You can argue, quite correctly, that a computer program cannot have the properties of a true complex system. A program does what it’s coded to do: outputs can always be predicted from the inputs, provided you’re clever enough and you have enough time. For a single, simple program that is certainly true. A fearsomely complicated program might not be meaningfully deterministic, but we can respond constructively to that with careful design, and sensitivity to the needs of testing and maintenance. However, the wider we draw the context beyond individual programs, the weaker our confidence becomes that we can know what should happen.

Once you’re looking at complex socio-technical systems, i.e. systems where people interact with complex technology, then any reasonable confidence that we can predict outcomes accurately has evaporated. These are the reasons.

Even if the system is theoretically still deterministic we don’t have brains the size of a planet, so for practical purposes the system becomes non-deterministic.

The safety critical systems community likes to talk about tractable and intractable systems. They know that the complex socio-technical systems they work with are intractable, which means that they can’t even describe with confidence how they are supposed to work (a problem I will return to). Does that rule out the possibility of offering a meaningful opinion about whether they are working as intended?

That has huge implications for testing artificial intelligence, autonomous vehicles and other complex technologies. Of course testers will have to offer the best information they can, but they shouldn’t pretend they can say these systems are working “as intended” because the danger is that we are assuming some artificial and unrealistic definition of “as intended” that will fit the designers’ limited understanding of what the system will do. I will be returning to that. We don’t know what complex systems will do.

In a deeply complicated system things will change that we are unaware of. There will always be factors we don’t know about, or whose impact we can’t know about. Y2K changed the way I thought about systems. Experience had made us extremely humble and modest about what we knew, but there was a huge amount of stuff we didn’t even know we didn’t know. At the end of the lengthy, meticulous job of fixing and testing we thought we’d allowed for everything, in the high risk, date sensitive areas at least. We were amazed how many fresh problems we found when we got hold of a dedicated mainframe LPAR, effectively our own mainframe, and booted it up with future dates.

We discovered that there were vital elements (operating system utilities, old vendor tools etc) lurking in the underlying infrastructure that didn’t look like they could cause a problem but which interacted with application code in ways we could not have predicted when run with Y2K dates. The systems had run satisfactorily in test environments that were built to mirror production, but they crashed when they ran on a mainframe with the future dates. We were experts, but we hadn’t known what we didn’t know.

The behaviour of these vastly complicated systems was indistinguishable from complex, unpredictable systems. When a test passes with such a system there are strict limits to what we should say with confidence about the system.

As Michael Bolton tweeted:

“A ‘passing’ test doesn’t mean ‘no problem’. It means ‘no problem *observed*. This time. With these inputs. So far. On my machine’.”

So, even if you look at the system from a narrow technical perspective, the computerised system only, the argument that a good system has to be deterministic is weak. We’ve traditionally tested systems as if they were calculators, which should always produce the same answers from the same sequence of button presses. That is a limited perspective. When you factor in humans then the ideal of determinism disintegrates.

In any case there are severe limits to what we can say about the whole system from our testing of the components. A complex system behaves differently from the aggregation of its components. It is more than the sum. That brings us to an important feature of complex systems. They are emergent. I’ll discuss this in the next section.

My point here is that the system that matters is the wider system. In the case of safety critical systems, the whole, wider system decides whether people live or die.

Instead of thinking of systems as being deterministic, we have to accept that complex socio-technical systems are stochastic. Any conclusions we reach should reflect probability rather than certainty. We cannot know what will happen, just what is likely. We have to learn about the factors that are likely to tip the balance towards good outcomes, and those that are more likely to give us bad outcomes.

I can’t stress strongly enough that lack of determinism in socio-technical systems is not a flaw, it’s an intrinsic part of the systems. We must accept that and work with it. I must also stress that I am not dismissing the idea of determinism or of trying to learn as much as possible about the behaviour of individual programs and components. If we lose sight of what is happening within these it becomes even more confusing when we try to look at a bigger picture. Likewise, I am certainly not arguing against Test Driven Development, which is a valuable approach for coding. Cling to determinism whenever you can, but accept its limits – and abandon all hope that it will be available when you have to learn about the behaviour of complex socio-technical systems.

We have to deal with whole systems as well as components, and that brings me to the next point. It’s no good thinking about breaking the system down into its components and assuming we can learn all we need to by looking at them individually. Complex systems have emergent behaviour.

Complex systems are emergent; the whole is greater than the sum of the parts

It doesn’t make sense to talk of an H2O molecule being wet. Wetness is what you get from a whole load of them. The behaviour or the nature of the components in isolation doesn’t tell you about the behaviour or nature of the whole. However, the whole is entirely consistent with the elements. The H2O molecules are governed by the laws of chemistry, and that remains so regardless of whether they are combined. But once they are combined they become water, which is unquestionably wet and is governed by the laws of fluid dynamics. If you look at the behaviour of free surface water in the oceans under the influence of wind then you are dealing with a stochastic process. Individual waves are unpredictable, but reasonable predictions can be made about the behaviour of a long series of waves.

As you draw back and look at the wider picture, rather than the low level components, you see that the components are combining in ways that could not possibly have been predicted simply by looking at the components and trying to extrapolate.

Starlings offer another good illustration of emergence. These birds combine in huge flocks to form murmurations, amazing, constantly evolving aerial patterns that look as if a single brain is in control. The individual birds are aware of only seven others, rather than the whole murmuration. They concentrate on those neighbours and respond to their movements. Their behaviour isn’t any different from what they can do on their own. However well you understood the individual starling and its behaviour you could not possibly predict what these birds do together.
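For anyone who likes to see the principle in code, here is a toy Python sketch of my own (it is not drawn from the starling research; the rules and weights are invented purely for illustration). Each simulated bird reacts only to its seven nearest neighbours, knowing nothing about the flock as a whole, yet the headings of the whole population fall into line.

    import math
    import random

    K = 7      # each bird watches only its K nearest neighbours
    STEP = 0.5

    def nearest(i, birds):
        """Indices of the K nearest neighbours of bird i."""
        x, y, _ = birds[i]
        dists = sorted(
            (math.hypot(bx - x, by - y), j)
            for j, (bx, by, _) in enumerate(birds) if j != i
        )
        return [j for _, j in dists[:K]]

    def turn_towards(heading, target, weight):
        """Nudge a heading towards a target angle, handling wrap-around."""
        return heading + weight * math.atan2(math.sin(target - heading),
                                             math.cos(target - heading))

    def step(birds):
        new = []
        for i, (x, y, heading) in enumerate(birds):
            neigh = nearest(i, birds)
            # Alignment: steer towards the neighbours' average heading.
            avg = math.atan2(sum(math.sin(birds[j][2]) for j in neigh),
                             sum(math.cos(birds[j][2]) for j in neigh))
            heading = turn_towards(heading, avg, 0.3)
            # Cohesion: drift gently towards the neighbours' centre.
            cx = sum(birds[j][0] for j in neigh) / K
            cy = sum(birds[j][1] for j in neigh) / K
            heading = turn_towards(heading, math.atan2(cy - y, cx - x), 0.05)
            new.append((x + STEP * math.cos(heading),
                        y + STEP * math.sin(heading), heading))
        return new

    random.seed(1)
    birds = [(random.uniform(0, 50), random.uniform(0, 50),
              random.uniform(0, 2 * math.pi)) for _ in range(100)]
    for _ in range(200):
        birds = step(birds)

    # Crude measure of emergent order: how aligned the headings have become.
    order = math.hypot(sum(math.cos(h) for _, _, h in birds),
                       sum(math.sin(h) for _, _, h in birds)) / len(birds)
    print(f"alignment after 200 steps: {order:.2f}")   # near 1.0 means a coherent flock

The point is not the simulation itself but that the global pattern cannot be read off from the rules given to any individual bird; it only appears when you look at the whole.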


Likewise with computer systems, even if all of the components are well understood and working as intended, the behaviour of the whole is different from what you’d expect from simply looking at these components. This applies especially when humans are in the loop. Not only is the whole different from the sum of the parts, the whole system will evolve and adapt unpredictably as people find out what they have to do to make the system work, as they patch it and cover for problems and as they try to make it work better. This is more than a matter of changing code to enhance the system. It is about how people work with the system.

Safety is an emergent property of complex systems. The safety critical experts know that they cannot offer a meaningful opinion just by looking at the individual components. They have to look at how the whole system works.

In complex systems success & failure are not absolutes

Success & failure are not absolutes. A system might be flawed, even broken, but still valuable to someone. There is no right, simple answer to the question “Is it working? Are the figures correct?”

Appropriate answers might be “I don’t know. It depends. What do you mean by ‘working’? What is ‘correct’? Who is it supposed to be working for?”

The insurance finance systems I used to work on were notoriously difficult to understand and manipulate. 100% accuracy was never a serious, practicable goal. As I wrote in “Fix on failure – a failure to understand failure”;

“With complex financial applications an honest and constructive answer to the question ‘is the application correct?’ would be some variant on ‘what do you mean by correct?’, or ‘I don’t know. It depends’. It might be possible to say the application is definitely not correct if it is producing obvious garbage. But the real difficulty is distinguishing between the seriously inaccurate, but plausible, and the acceptably inaccurate that is good enough to be useful. Discussion of accuracy requires understanding of critical assumptions, acceptable margins of error, confidence levels, the nature and availability of oracles, and the business context of the application.”

I once had to lead a project to deliver a new sub-system that would be integrated into the main financial decision support system. There were two parallel projects, each tackling a different line of insurance. I would then be responsible for integrating the new sub-systems into the overall system, a big job in itself.

The other project manager wanted to do his job perfectly. I wanted to do whatever was necessary to build an acceptable system in time. I succeeded. The other guy delivered late and missed the implementation window. I had to carry on with the integration without his beautiful baby.

By the time the next window came around there were no developers available to make the changes needed to bring it all up to date. The same happened next time, and then the next time, and then… and eventually it was scrapped without ever going live.

If you compared the two sub-systems in isolation there was no question that the other man’s was far better than the one I lashed together. Mine was flawed but gave the business what they needed, when they needed it. The other was slightly more accurate but far more elegant, logical, efficient and lovingly crafted. And it was utterly useless. The whole decision support system was composed of sub-systems like mine, flawed, full of minor errors, needing constant nursing, but extremely valuable to the business. If we had chased perfection we would never have been able to deliver anything useful. Even if we had ever achieved perfection it would have been fleeting as the shifting sands of the operational systems that fed it introduced new problems.

The difficult lesson we had to learn was that flaws might have been uncomfortable but they were an inescapable feature of these systems. If they were to be available when the business needed them they had to run with all these minor flaws.

Richard Cook expanded on this point in his classic and highly influential 1998 article, “How complex systems fail”. He put it succinctly.

“Complex systems run in degraded mode.”

Cook’s arguments ring true to those who have worked with complex systems, but they haven’t been widely appreciated in the circles of senior management where budgets, plans and priorities are set.

Complex systems are impossible to specify precisely

Cook’s 1998 paper is important, and I strongly recommend it, but it wasn’t quite groundbreaking. John Gall covered the same themes in a slightly whimsical and comical book back in 1975, “Systemantics: how systems work and especially how they fail”. Despite the jokey tone he made serious arguments about the nature of complex systems and the way that organisations deal, and fail to deal, with them. Here is a selection of his observations.

“Large systems usually operate in failure mode”.

“The behaviour of complex systems… living or non-living, is unpredictable”.

“People in systems do not do what the system says they are doing”.

“Failure to function as expected is an intrinsic feature of systems”.

John Gall wrote that fascinating and hugely entertaining book more than forty years ago, and he nailed the problems we would face with complex socio-technical systems. How can we say a system is working properly if we know neither how it is working nor how it is supposed to work? Or what the people within the system are doing?

The complex systems we have to deal with are usually socio-technical systems. They operate in a social setting, with humans. People make the systems work and they have to make decisions under pressure in order to keep the system running. Different people will do different things. Even the same person might act differently at different times. That makes the outcomes from such a system inherently unpredictable. How can we specify such a system? What does it even mean to talk of specifying an unpredictable system?

That’s something that the safety critical experts focus on. People die because software can trip up humans even when it is working smoothly as designed. This has received a lot of attention in medical circles. I’ll come back to that in a later post.

That is the reality of complex socio-technical systems. These systems are impossible to specify with complete accuracy or confidence, and certainly not at the start of any development. Again, this is not a bug, but an inescapable feature of complex socio-technical systems. Any failure may well be in our expectations, a flaw in our assumptions and knowledge, and not necessarily the system.

This reflected my experience with the insurance finance systems, especially for Y2K, and it was also something I had to think seriously about when I was an IT auditor. I will turn to that in my next post, “part 3 – I don’t know what’s going on”.

The dragons of the unknown; part 1 – corporate bureaucracies

Introduction

This is the first post in a series about problems that fascinate me, that I think are important and interesting. The series will draw on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. I’m afraid I will probably dwell longer on problems than answers. One of the historical problems with software development and testing has been an eagerness to look for and accept easy, but wrong answers. We have been reluctant to face up to reality when we are dealing with complexity, which doesn’t offer simple or clear answers.

This will be the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

Complexity is intimidating and it’s tempting to pretend the world is simpler than it is. We’ve been too keen to try & reshape reality so that it will look like something we can manage neatly. That mindset often dovetails neatly with the pressures of corporate life and it is possible to go far in large organisations while denying and evading reality. It is, however, bullshit.

A bit about my background

When I left university I went to work for one of the big, international accountancy firms as a trainee chartered accountant. It was a bewildering experience. I felt clueless. I didn’t understand what was going on. I never did feel comfortable that I understood what we were doing. It wasn’t that I was dimmer than my colleagues. I was the only one who seemed to question what was going on and I felt confused. Everyone else took it all at face value but the work we were doing seemed to provide no value to anyone.

At best we were running through a set of rituals to earn a fee that paid our salaries. The client got a clean, signed off set of accounts, but I struggled to see what value the information we produced might have for anyone. None of the methods we used seemed designed to tell us anything useful about what our clients were doing. It all felt like a charade. I was being told to do things and I just couldn’t see how anything made sense. I may as well have been trying to manage that flamingo, from Alice in Wonderland. That picture may seem a strange one to use, but it appeals to me; it sums up my confusion well. What on earth was the point of all these processes? They might as well have been flamingos for all the use they seemed. I hadn’t a clue.

I moved to a life assurance company, managing the foreign currency bank accounts. That entailed shuffling tens of millions of dollars around the world every day to maximise the overnight interest we earned. The job had a highly unattractive combination of stress and boredom. A simple, single mistake in my projections of the cash flowing through the accounts on one day would cost far more than my annual salary. The projections weren’t an arithmetical exercise. They required judgment and trade offs of various factors. Getting it right produced a sense of relief rather than satisfaction.

The most interesting part of the job was using the computer systems to remotely manage the New York bank accounts (which was seriously cutting edge for the early 1980s) and discussing with the IT people how they worked. So I decided on a career switch into IT, a decision I never regretted, and arrived in the IT division of a major insurance company. I loved it. I think my business background got me more interesting roles in development, involving lots of analysis and design as well as coding.

After a few years I had the chance to move into computer audit, part of the group audit department. It was a marvellous learning experience, seeing how IT fitted into the wider business, and seeing all the problems of IT from a business perspective. That transformed my outlook and helped me navigate my way round corporate bureaucracies, but once I learned to see problems and irresponsible bullshit I couldn’t keep my mouth shut. I didn’t want to, and that’s because of my background, my upbringing, and training. I acquired the reputation for being an iconoclast, an awkward bastard. I couldn’t stand bullshit.

The rise of bullshit jobs

My ancestors had real jobs, tough, physical jobs as farmhands, stonemasons and domestic servants, till the 20th century when they managed to work their way up into better occupations, like shopkeeping, teaching and sales, interspersed with spells in the military during the two world wars. They were still real jobs, where it was obvious if you didn’t do anything worthwhile, if you weren’t achieving anything.

I had a very orthodox Scottish Presbyterian upbringing. We were taught to revere books and education. We should work hard, learn, stand our ground when we know we are right and always argue our case. We should always respect those who have earned respect, regardless of where they are in society.

In the original Star Trek Scotty’s accent may have been dodgy, but that character was authentic. It was my father’s generation. As a Star Trek profile puts it; “rank means nothing to Scotty if you’re telling him how to do his job”.

A few years ago Better Software magazine introduced an article I wrote by saying I was never afraid to voice my opinion. I was rather embarrassed when I saw that. Am I really opinionated and argumentative? Well, probably (definitely, says my wife). When I think that I’m right I find it hard to shut up. Nobody does righteous certainty better than Scottish Presbyterians! In that, at least, we are world class, but I have to admit, it’s not always an attractive quality (and the addictive yearning for certainty becomes very dangerous when you are dealing with complex systems). However, that ingrained attitude, along with my experience in audit, did prepare me well to analyse and challenge dysfunctional corporate practices, to challenge bullshit and there has never been any shortage of that.

Why did corporations embrace harmful practices? A major factor was that they had become too big, complex and confusing for anyone to understand what was going on, never mind exercise effective control. The complexity of the corporation itself is difficult enough to cope with, but the problems it faces and the environment it operates in have also become more complex.

Long and painful, if educational, experience has allowed me to distill the lessons I’ve learned into seven simple statements.

  • Modern corporations, the environment they’re operating in and the problems they face are too complex for anyone to control or understand.
  • Corporations have been taken over by managers and are run for the managers’ own benefit, rather than for customers, shareholders, the workforce or wider society.
  • Managers need an elaborate bureaucracy to maintain even a semblance of control, though it’s only the bureaucracy they control, not the underlying reality.
  • These managers struggle to understand the jobs of the people who do the productive work.
  • So the managers value protocol and compliance with the bureaucracy over technical expertise.
  • The purpose of the corporate bureaucracy therefore becomes the smooth running of the bureaucracy.
  • Hence the proliferation of jobs that provide no real value and exist only so that the corporate bureaucracy can create the illusion of working effectively.

I have written about this phenomenon in a blog series, “Corporate bureaucracy and testing”, and also reflected, in “Testing: valuable or bullshit?”, on the specific threat to testing if it becomes a low skilled, low value corporate bullshit job.

The aspect of this problem that I want to focus on in this series is our desire to simplify complexity. We furnish simple explanations for complex problems. I did this as a child when I decided the wind was caused by trees waving their branches. My theory fitted what I observed, and it was certainly much easier for a five year old to understand than variations in atmospheric pressure. We also make convenient, but flawed, assumptions that turn a messy, confusing, complex problem into one that we are confident we can deal with. The danger is that in doing so we completely lose sight of the real problem while we focus on a construct of our own imagination. This is hardly a recent phenomenon.

The German military planners of World War One provide a striking example of this escape from reality. They fully appreciated what a modern, industrial war would be like, with huge armies and massively destructive armaments. The politicians didn’t get it but, according to Barbara Tuchman in “The Guns of August”, the German military staff did understand. They just didn’t know how to respond. So they planned to win the sort of war they were already familiar with, a 19th century war.

(General Moltke, the German Chief of Staff,) said to the Kaiser in 1906, ‘It will be a long war that will not be settled by a decisive battle but by a long wearisome struggle with a country that will not be overcome until its whole national force is broken, and a war that will utterly exhaust our own people, even if we are victorious.’ It went against human nature, however – and the nature of General Staffs – to follow the logic of his own prophecy. Amorphous and without limits, the concept of a long war could not be scientifically planned for as could the orthodox, predictable and simple solution of decisive battle and short war. The younger Moltke was already Chief of Staff when he made his prophecy, but neither he nor his Staff, nor the Staff of any other country made any effort to plan for a long war.

The military planners yearned for a problem that allowed an “orthodox, predictable and simple solution”, so they redefined the problem to fit that longing. The results were predictably horrific.

There is a phrase for the mental construct the military planners chose to work with; an “envisioned world” (PDF – opens in new tab). That paper, by David Woods, Paul Feltovich, Robert Hoffman, and Axel Roesler is a fairly short and clear introduction to the dangers of approaching complex systems with a set of naively simplistic assumptions. Our natural, human bias towards over-simplification has various features. In each case the danger is that we opt for a simplified perspective, rather than a more realistic one.

  • We like to think of activities as a series of discrete steps that can be analysed individually, rather than as continuous processes that cannot meaningfully be broken down.
  • We prefer to see processes as separable and independent, rather than envisaging them all interacting with the wider world.
  • We are inclined to consider activities as if they were sequential when they actually happen simultaneously.
  • We instinctively want to assume homogeneity rather than heterogeneity, so we mentally class similar things as if they were exactly the same, losing sight of nuance and important distinctions; we assume regularity when the reality is irregular.
  • We look at elements as if there is only one perspective when there might be multiple viewpoints.
  • We like to assume any rules or principles are universal when they might really be local and conditional, relevant only to the current context.
  • We inspect the surface and shy away from deep analysis that might reveal awkward complications and subtleties.

These are all relevant considerations for testers, but there are three more that are all related and are particularly important when trying to learn how complex socio-technical systems work.

  • We look on problems as if they are static objects, when we should be thinking of them as dynamic, flowing processes. If we focus on the static then we lose sight of the problems or opportunities that might arise as the problem, or the application, changes over time or space.
  • We treat problems as if they are simple and mechanical, rather than organic with unpredictable, emergent properties. The implicit assumption is that we can know how whole systems will behave simply by looking at the behaviour of the components.
  • We pretend that the systems are subject to linear causes and effects, with the same cause always producing the same effect. The possibility of tipping points and cascading effects is ignored.

Complex socio-technical systems are not static, simple or linear. Testers have to recognise that and frame their testing to take account of the reality, that these systems are dynamic, organic and non-linear. If they don’t, and if they try to restrict themselves to the parts of the system that can be treated as mechanical rather than truly complex, the great danger is that testing will become just another pointless, bureaucratic job producing nothing of any real value.

I have worked both as an external auditor and an internal auditor. Internal audit has a focus and a mindset that allows it to deliver great value, when it is done well. External audit has been plagued by a flawed business model that is struggling with the complexity of modern corporations and their accounts. The external audit model requires masses of inexperienced, relatively lowly paid staff, carrying out unskilled checking of the accounts and producing output of dubious value. The result can fairly be described as a crisis of relevance for external audit.

I don’t want to see testing suffer the same fate, but that is likely if we try to define the job as one that can be carried out by large squads of poorly skilled testers. We can’t afford to act as if the job is easy. That is the road to irrelevance. In order to remain relevant we must try to meet the real needs of those who employ us. That requires us to deal with the world as it is, not as we would like it to be.

My spell in IT audit forced me to think seriously about all these issues for the first time. The audit department in which I worked was very professional and enlightened, with some very good, very bright people. We carried out valuable, risk-based auditing when that was at the leading edge of internal audit practice. Many organisations have still not caught up and are mired in low-value, low-skilled compliance checking. That style of auditing falls squarely into the category of pointless, bullshit jobs. It is performing a ritual for the sake of appearances.

My spell as an auditor transformed my outlook. I had to look at, and understand the bigger picture, how the various business critical applications fitted together, and what the implications were of changing them. We had to confront bullshitters and “challenge the intellectual inadequates”, as the Group Chief Auditor put it. We weren’t just allowed to challenge bullshit; it was our duty. Our organisational independence meant that nobody could pull rank on us, or go over our heads.

I never had a good understanding of what the company was doing with IT till I moved into audit. The company paid me enough money to enjoy a good lifestyle while I played with fun technology. As an auditor I had to think seriously about how IT kept the company competitive and profitable. I had to understand how everything fitted together, understand the risks we faced and the controls we needed.

I could no longer just say “well, shit happens”. I had to think “what sort of shit?”, “how bad is it?”, “what shit can we live with?”, “what shit have we really, really got to avoid?”, “what are the knock-on implications?”, “can we recover from it?”, “how do we recover?”, “what does ‘happen’ mean anyway?”, “who does it happen to?”, “where does it happen?”.

Everything that mattered fitted together. If it was stand alone, then it almost certainly didn’t matter and we had more important stuff to worry about. The more I learned the more humble I became about the limits of my knowledge. It gradually dawned on me how few people had a good overall understanding of how the company worked, and this lesson was hammered home when we reached Y2K.

When I was drafted onto the Y2K programme as a test manager I looked at the plans drawn up by the Y2K architects for my area, which included the complex finance systems on which I had been working. The plans were a hopelessly misleading over-simplification. There were only three broad systems defined, covering 1,175 modules. I explained that it was nonsense, but I couldn’t say for sure what the right answer was, just that it was a lot more.

I wrote SAS programs to crawl through the production libraries, schedules, datasets and access control records to establish all the links and outputs. I drew up an overview that identified 20 separate interfacing applications with 3,000 modules. That was a shock to management because it had already been accepted that there would not be enough time to test the lower number thoroughly.
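The original work was done in SAS against mainframe libraries, schedules and access control records, so I can only offer a rough, hypothetical Python sketch of the general idea here (the directory name, file layout and READS/WRITES format are all invented for illustration). The principle is simply to treat a dataset written by one job and read by another as an interface, then count and map the interfaces you find.

    import re
    from collections import defaultdict
    from pathlib import Path

    # Hypothetical input: one text file per batch job, containing lines like
    #   READS  FIN.PREMIUM.MASTER
    #   WRITES FIN.CLAIMS.EXTRACT
    JOB_DIR = Path("job_definitions")
    IO_LINE = re.compile(r"^(READS|WRITES)\s+(\S+)", re.IGNORECASE)

    readers = defaultdict(set)   # dataset -> jobs that read it
    writers = defaultdict(set)   # dataset -> jobs that write it

    for job_file in JOB_DIR.glob("*.txt"):
        for line in job_file.read_text().splitlines():
            match = IO_LINE.match(line.strip())
            if not match:
                continue
            verb, dataset = match.group(1).upper(), match.group(2)
            (readers if verb == "READS" else writers)[dataset].add(job_file.stem)

    # An interface exists wherever one job writes a dataset that another job reads.
    interfaces = {
        (writer, reader, dataset)
        for dataset, writing_jobs in writers.items()
        for writer in writing_jobs
        for reader in readers.get(dataset, set())
        if reader != writer
    }

    print(f"{len(interfaces)} job-to-job interfaces via shared datasets")
    for writer, reader, dataset in sorted(interfaces)[:10]:
        print(f"{writer} -> {reader} via {dataset}")

Even a crude crawl like this turns up far more connections than anyone’s mental model allows for, which was precisely the point.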

My employers realised I was the only available person who had any idea of the complexity of both the technical and business issues. They put me in charge of the development team as well as the testers. That was an unusual outcome for a test manager identifying a fundamental problem. I might not have considered myself an expert, but I had proved my value by demonstrating how much we didn’t know. That awareness was crucial.

That Y2K programme might have been 20 years ago, but it was painfully clear at the time that we had totally lost sight of the complexity of these finance applications. I was able to provide realistic advice only because of my deep expertise, and thereafter I was always uncomfortably aware that I never again had the time to acquire such deep knowledge.

These applications, for all their complexity, were at least rigidly bounded. We might not have known what was going on within them, but we knew where the limits lay. They were all internal to the corporation with a tightly secured perimeter. That is a different world from today. The level of complexity has increased vastly. Web applications are built on layers of abstraction that render the infrastructure largely opaque. These applications aren’t even notionally under the control of organisations in the way that our complex insurance applications were. That makes their behaviour impossible to control precisely, and even to predict, as I will discuss in my next post, “part 2 – crucial features of complex systems”.

Has opposition to ISO 29119 really died down?

One of my concerns about the Stop 29119 campaign, ever since it was launched four years ago, was that ISO would try to win the debate by default, by ignoring the opposition. In my CAST 2014 talk, which kicked off the campaign, I talked about ISO’s attempt to define its opponents as being irrelevant. ISO breached its own rules requiring consensus from the profession, and in order to justify doing so they had to maintain a pretence that testers who opposed their efforts were a troublesome, old-fashioned faction that should be ignored.

That’s exactly what has happened. ISO have kept their collective heads down, tried to ride out the storm and emerged to claim that it was all a lot of fuss about nothing; the few malcontents have given up and gone away.

I have just come across a comment in the “talk” section of the Wikipedia article on ISO 29119, arguing for some warning flags on the article to be removed.

“…finally, the objection to this standard was (a) from a small but vocal group and (b) died down – the ballots of member National Bodies were unanimous in favour of publication. Furthermore, the same group objected to IEEE 829 as well.”

The opposition is significantly more than “a small but vocal group”, but I won’t dwell on that point. My concern here is point b. Have the objections died down? Yes, they have in the sense that the opponents of ISO 29119 have been less vocal. There have been fewer talks and articles pointing out the flaws in the principle and the detail of the standard.

However, there has been no change in the beliefs of the opposition. There comes a point when it feels unnecessary, even pointless, to keep repeating the same arguments without the other side engaging. You can’t have a one-sided debate. The Stop 29119 campaigners have other things to do. Frankly, attacking ISO 29119 is a dreary activity compared with most of the alternatives. I would prefer to do something interesting and positive rather than launching another negative attack on a flawed standard. However, needs must.

The argument that “ballots of member National Bodies were unanimous in favour of publication” may be true, but it is a circular point. The opponents of ISO 29119 argued convincingly that software testing is not an activity that lends itself to ISO style standardisation and that ISO failed to gain any consensus outside its own ranks. The fact that ISO are quite happy with that arrangement is hardly a convincing refutation of our argument.

The point about our opposition to the IEEE 829 standard is also true, but it’s irrelevant. Even ISO thought that standard was dated and inadequate for modern purposes. It decided to replace it rather than try to keep updating it. Unfortunately the creators of ISO 29119 repeated the fundamental mistakes that rendered IEEE 829 flawed and unsuitable for good testing.

I was pleased to discover that the author of the Wikipedia comment was on the ISO working group that developed ISO 29119 and wrote a blog defending the standard, or rather dismissing the opposition. It was written four years ago in the immediate aftermath of the launch of Stop 29119. It’s a pity it didn’t receive more attention at the time. The debate was far too one sided and we badly needed contributions from ISO 29119’s supporters. So, in order to provide a small demonstration that opposition to the standard is continuing I shall offer a belated response. I’ll quote Andrew’s arguments, section by section, in dark blue and then respond.

“As a member of the UK Mirror Panel to WG26, which is responsible for the ISO 29119 standard, I am disappointed to read of the objection to the standard led by the International Society for Software Testing, which has resulted in a formal petition to ISO.

I respectfully suggest that their objections would be more effective if they engaged with their respective national bodies, and sought to overcome their objections, constructively.

People who are opposing ISO 29119 claim:

  1. It is costly.
  2. It will be seen as mandatory skill for testers (which may harm individuality and freedom).
  3. It may reduce the ability to experiment and try non-conventional ways.
  4. Once the standard is accepted, testers can be held responsible for project failures (or non-compliance).
  5. Effort will be more on documentation and process rather than testing.
    Let us consider each of these in turn.”

The International Society for Software Testing (ISST) launched the petition against ISO 29119, but this was merely one aspect of the campaign against the standard. Opposition was certainly not confined to ISST. The situation is somewhat confused by the fact that ISST disbanded in 2017. One of the prime reasons was that the “objectives set out by the founders have been met, or are in the capable hands of organisations that we support”. The main organisation referred to here is the larger and more established Association for Software Testing (AST), which can put more resources into the fight. I always felt the main differences between ISST and AST were in style and approach rather than principles and objectives.

The suggestion that the opponents of ISO 29119 should have worked through national ISO bodies is completely unrealistic. ISO’s approach is fundamentally wrong and opponents would have been seen as a wrecking crew preventing any progress. I know of a couple of people who did try to involve themselves in ISO groups and gave up in frustration. The debate within ISO about a standard like 29119 concerns the detail, not the underlying approach. In any case the commitment required to join an ISO working group is massive. Meetings are held all over the world. They take up a lot of time and require huge expenses for travel and accommodation. That completely excludes independent consultants like myself.

“Costly

Opponents object to this standard because it is not freely available.

While this is a fair point, it is no different from every other standard that is in place – and which companies follow, often because it gives them a competitive advantage.

Personally, I would like to see more standards placed freely in the public domain, but I am not in a position to do it!”

The cost of the standard is a minor criticism. As a member of the AST’s Committee on Standards and Professional Practice I am fortunate to have access to the documents comprising the standard. These cost hundreds of dollars and I would baulk at buying them for myself. The full set would cost as much as a family holiday. I know which would be more important!

However, the cost does hamper informed debate about the content, and that was the true concern. The real damage of a poorly conceived standard will be poorer quality testing and that will be far more costly than the initial cost of the documents.

“Mandatory

Opponents claim this standard will be seen as a mandatory skill for testers (which may harm individuality and freedom).

ISO 29119 replaces a number of IEEE and British standards that have been in place for many years. And while those standards are seen to represent best practice, they have not been mandatory.”

I have two big issues with this counter argument. Firstly, the standards that ISO 29119 replaced were emphatically not “seen to represent best practice”. If they were best practice there would have been no need to replace them. They were hopelessly out of date but IEEE 829 was unhelpful, even damaging, when it was new.

My second concern is about the way that people respond to a standard. Back in 2009 I wrote this article “Do standards keep testers in the kindergarten?” in Testing Experience magazine arguing against the principle of testing standards, the idea of best practice and the inevitable danger of an unhelpful and dated standard like IEEE 829 being imposed on unwilling testers.

Once you’ve called a set of procedures a standard the argument is over in many organisations; testers are required to use them. It is disingenuous to say that standards are not mandatory. They are sold on the basis that they offer reassurance and that the wise, safe option is to make them compulsory.

I made this argument nine years ago thinking the case against standards had been won. I was dismayed to discover subsequently that ISO was trying to take us back several decades with ISO 29119.

“Experimentation

A formal testing environment should be a place where processes and procedures are in place, and is not one where ‘experiment and non-conventional’ methods are put in place. But having said that, there is nothing within ISO 29199 that prevents other methods being used.”

There may be a problem over the word “experiment” here. Andrew seems to think that testers who talk of experimentation are admitting they don’t know what they’re doing and are making it up as they go along. That would be an unfortunate interpretation. When testers from the Context Driven School refer to experimentation they mean the act of testing itself.

Good testing is a form of exploration and experimentation to find out how the product behaves. Michael Bolton describes that well here. A prescriptive standard that focuses on documentation distracts from, and effectively discourages, such exploring and experimentation. We have argued that at length and convincingly. It would be easier to analyse Andrew’s case if he had provided links to arguments from opponents who had advocated a form of experimentation he disapproves of.

“Accountability

Opponents claim that, once the standard is accepted, testers can be held responsible for project failures (or non-compliance).

As with any process or procedure, all staff are required to ensure compliance with the company manual – and project managers should be managing their projects to ensure that all staff are doing so.

Whether complying with ISO 29119 or any other standard or process, completion of testing and signing off as ‘passed’ carries accountability. This standard does not change that.”

This is a distortion of the opponents’ case. We do believe in accountability, but that has to be meaningful. Accountability must be based on something to which we can reasonably sign up. We strongly oppose attempts to enforce accountability to an irrelevant, poorly conceived and damaging standard. Complying with such a standard is orthogonal to good testing; there is no correlation between the two activities.

At best ISO 29119 would be an irrelevance. In reality it is more likely to be a hugely damaging distraction. If a company imposes a standard that all testers should wear laboratory technicians’ white coats it might look impressively professional, but complying with the standard would tell us nothing about the quality of the testing.

As a former auditor I have strong, well informed, views about accountability. One of ISO 29119’s serious flaws is that it fails to explain why we test. We need such clarity before we can have any meaningful discussion about compliance. I discussed this here, in “Do we want to be ‘compliant’ or valuable?”

The standard defines in great detail the process and the documents for testing, but fails to clarify the purpose of testing, the outcomes that stakeholders expect. To put it bluntly, ISO 29119 is vague about the ends towards which we are working, but tries to be precise about the means of getting there. That is an absurd combination.

ISO 29119 tries to set out rules without principles. Understanding the distinction between rules and principles is fundamental to the process of crafting professional standards that can hold practitioners meaningfully to account. I made this argument in the Fall 2015 edition of Better Software magazine. The article is also available on my blog, “Why ISO 29119 is a flawed quality standard”.

This confusion of rules and principles, means and ends, has led to an obsessive focus on delivering documentation rather than valuable information to stakeholders. That takes us on to Andrew’s next argument.

“Documentation

Opponents claim that effort will be more on documentation and process rather than testing.

I fail to understand this line of reasoning – any formal test regime requires a test specification, test cases and recorded test results. And the evidence produced by those results need argument. None of this is possible without documentation.”

Opponents of ISO 29119 have argued repeatedly and convincingly that a prescriptive standard which concentrates on documentation will inevitably lead to goal displacement; testers will concentrate on the documentation mandated by the standard and lose sight of why they are testing. That was our experience with IEEE 829. ISO 29119 repeats the same mistake.

Andrew’s second paragraph offers no refutation of the opponents’ argument. He apparently believes that we are opposed to documentation per se. That’s a straw man. Andrew justifies ISO 29119’s demand for documentation, which I believe is onerous and inappropriate, by asserting that it serves as evidence. Opponents argue that the standard places far too much emphasis on advance documentation and neglects evidence of what was discovered by the testing.

The statement that any formal test regime requires a test specification and test cases is highly contentious. Auditors would expect to see evidence of planning, but test specifications and test cases are just one way of doing it, the way that ISO 29119 advocates. In any case, advance planning is not evidence that good testing was performed any more than a neat project plan provides evidence that the project ran to time.

As for the results, the section of ISO 29119 covering test completion reports is woefully inadequate. It would be possible to produce a report that complied fully with the standard and offered nothing of value. That sums up the problems with ISO 29119. Testers can comply while doing bad testing. That is in stark contrast to the standards governing more established professions, such as accountants and auditors.

“Conclusion

Someone wise once said:

  1. Argument without Evidence is unfounded.
  2. Evidence without Argument is unexplained.

Having considered the argument put forward, and the evidence to support the case:

  • The evidence is circumstantial with no coherence.
  • The argument is weak, and seems only to support their vested interests.

For a body that represents test engineers, I would have expected better.”

The quote that Andrew uses from “someone wise” actually comes from the field of safety critical systems. There is much that we in conventional software testing can learn from that field. Perhaps the most important lessons are about realism and humility. We must deal with the world as it is, not as we would like it to be. We must accept the limitations of our knowledge and what we can realistically know.

The proponents of ISO 29119 are too confident in their ability to manage in a complex, evolving field using techniques rooted in the 1970s. Their whole approach tempts testers to look for confirmation of what they think they know already, rather than explore the unknown and explain what they cannot know.

Andrew’s verdict on the opposition to ISO 29119 should be turned around and directed at ISO and the standard itself. It was developed and launched in the absence of evidence that it would help testers to do a better job. The standard may have internal consistency, but it is incoherent when confronted with the complexities of the real world.

Testers who are forced to use it have to contort their testing to fit the process. Any good work they do is in spite of the standard and not because of it. It might provide a welcome route map to novice testers, but it offers a dangerous illusion. The standard tells them how to package their work so it appears plausible to those who don’t know any better. It defines testing in a way that makes it appear easier than it really is. But testing is not meant to be easy. It must be valuable. If you want to learn how to provide value to those who pay you then you need to look elsewhere.

Finally, I should acknowledge that some of the work I have cited was not available to Andrew when he wrote his blog in 2014. However, all of the underlying arguments and research that opponents of 29119 have drawn on were available long before then. ISO simply did not want to go looking for them. Our arguments about ISO 29119 being anti-competitive were at the heart of the Stop 29119 campaign. Andrew has not addressed those arguments.

If ISO wants to be taken seriously it must justify ISO 29119’s status as a standard. Principled, evidenced and coherent objections deserve a response. In a debate all sides have a duty to respond to their opponents’ strongest case, rather than evading difficult objections, setting up straw men and selectively choosing the arguments that are easiest to deal with.

ISO must provide coherent argument and evidence that 29119 is effective. Simply labelling a set of prescriptive processes as a standard and expecting the industry to respect it for that reason will not do. That is the sign of a vested interest seeking to impose itself on a whole profession. No, the opposition has not died down; it has simply not been given anything credible to respond to.