An abdication of managerial responsibility?

The two recent Boeing 737 MAX crashes have been grimly absorbing for software developers and testers. It seems that the crashes were caused by the MCAS system, which is meant to prevent a stall, responding to false data from a sensor by forcing the planes into steep dives despite the pilots’ attempts to make the planes climb. The MCAS problem may have been a necessary condition for disaster, but it clearly was not sufficient. There were many other factors involved. Most strikingly, it seems that MCAS itself may have been working as specified, but there were problems in the original design and in the way it interfaces with the sensor and the crew.

I have no wish to go into all this in serious detail (yet), but I read an article on the Bloomberg website, “Boeing’s 737 Max software outsourced to $9-an-hour engineers”, which contained many sentences and phrases that jumped off the screen at me. These snippets all point towards issues that concern me, that I’ve been talking and writing about recently, or that I’ve long been aware of. I’d like to run through them. I’ll use a brief quote from the Bloomberg article in each section before discussing the implications. All software designers and testers should reflect on these issues.

The commoditization of software development and testing

“Boeing has also expanded a design center in Moscow. At a meeting with a chief 787 engineer in 2008, one staffer complained about sending drawings back to a team in Russia 18 times before they understood that the smoke detectors needed to be connected to the electrical system, said Cynthia Cole, a former Boeing engineer who headed the engineers’ union from 2006 to 2010.

‘Engineering started becoming a commodity’, said Vance Hilderman, who co-founded a company called TekSci that supplied aerospace contract engineers and began losing work to overseas competitors in the early 2000s.”

The threat of testing becoming a commodity has been a long-standing concern amongst testers. To a large extent we’re already there. However, I’d assumed, naively perhaps, that this was a route chosen by organisations that could get away with poor testing, in the short term at least. I was deeply concerned to see it happening in a safety critical industry.

To summarise the problem: if software development and testing are seen as commodities, bought and sold on the basis of price, then commercial pressures will push quality downwards. The inevitable pressure sends costs and prices spiralling down to the level set by the lowest-cost supplier, regardless of value. Testing is particularly vulnerable. When the value of the testing is low, whatever cost does remain becomes more visible and harder to justify.

There is pressure to keep reducing costs, and if you’re getting little value from testing just about any cost-cutting measure is going to look attractive. If you head down the route of outsourcing, offshoring and increasing commoditization, losing sight of value, you will lock yourself into a vicious circle of poor quality.

Iain McCowatt’s EuroSTAR webinar on “The commoditization of testing” is worth watching.

ETTO – the efficiency-thoroughness trade-off

“…the planemakers say global design teams add efficiency as they work around the clock.”

Ah! There we have it! Efficiency. Isn’t that a good thing? Of course it is. But there is an inescapable trade-off, and organisations must understand what they are doing. There is a tension between the need to deliver a safe, reliable product or service and the pressure to do so at the lowest possible cost. The idea of ETTO, the efficiency-thoroughness trade-off, was popularised by Erik Hollnagel.

Making the organisation more efficient means it is less likely to be thorough in pursuing its important goals. Pursuing vital goals, such as safety, comes at the expense of efficiency; pursuing efficiency erodes margins of error and engineering redundancy, with potentially dangerous results. This is well recognised in safety critical industries, obviously including air transport. I’ve discussed this further in my blog, “The dragons of the unknown; part 6 – Safety II, a new way of looking at safety”.

Drift into failure

“’Boeing was doing all kinds of things, everything you can imagine, to reduce cost, including moving work from Puget Sound, because we’d become very expensive here,’ said Rick Ludtke, a former Boeing flight controls engineer laid off in 2017. ‘All that’s very understandable if you think of it from a business perspective. Slowly over time it appears that’s eroded the ability for Puget Sound designers to design.’”

“Slowly over time”. That’s the crucial phrase. Organisations drift gradually into failure. People are working under pressure, constantly making the trade-off between efficiency and thoroughness. They keep the show on the road, but the pressure never eases. So margins are increasingly shaved. The organisation finds new and apparently smarter ways of working. Redundancy is eliminated. The workers adapt the official processes. The organisation seems efficient, profitable and safe. Then BANG! Suddenly it isn’t. The factors that had made it successful turn out to be responsible for disaster.

“Drifting into failure” is an important concept to understand for anyone working with complex systems that people will have to use, and for anyone trying to make sense of how big organisations should work, and really do work. See my blog “The dragons of the unknown; part 4 – a brief history of accident models” for a quick introduction to the drift into failure. The idea was developed by Sidney Dekker. Check out his work.

Conway’s Law

“But outsourcing has long been a sore point for some Boeing engineers, who, in addition to fearing job losses, say it has led to communications issues and mistakes.”

This takes me to one of my favourites, Conway’s Law. In essence it states that the design of a system reflects the structure of the organisation that builds it. It’s not a normative rule, saying that this should (or shouldn’t) happen. It merely observes that it generally does happen. Traditionally the organisation’s design shaped the technology. Nowadays the causation might be reversed, with the technology shaping the organisation. Conway’s Law was intended as a sound heuristic, never a hard and fast rule.

[Slide from one of my courses: Conway's Law]

Perhaps it is less universally applicable today, but for large, long-established corporations I think it still generally holds true.

I’m going to let you in on a little trade secret of IT auditors. Conway’s Law was a huge influence on the way we audited systems and development projects.

[Slide from one of my courses: a corollary to Conway's Law]

Audits were always strictly time-boxed. We had to be selective in how we used our time and what we looked at. Modern internal auditing is risk-based, meaning we would focus on the risks that posed the greatest threat to the organisation, concentrating on the areas most exposed to risk and looking for assurance that those risks were being managed effectively.

Conway’s Law guided the auditors towards low-hanging fruit. We knew that we were most likely to find problems at the interfaces, and that those problems were likely to be particularly serious. This was also my experience as a test manager. In both jobs I saw the same pattern unfold when different development teams, or different companies, worked on different parts of a project.

Development teams would be locked into their delivery schedules before the high-level requirements were clear and complete, or even mutually consistent. The different teams, especially if they were in different companies, based their estimates on assumptions that were flawed, or inconsistent with other teams’ assumptions. Under pressure to reduce estimates and deliver quickly, each team might assume they’d be able to do the minimum necessary, especially at the interfaces; other teams would pick up the trickier stuff.

This would create gaps at the interfaces, and cries of “but I thought you were going to do that – we can’t possibly cope in time”. Or the data that was passed from one suite couldn’t be processed by the next one. Both might have been built correctly to their separate specs, but they weren’t consistent. The result would be last minute solutions, hastily lashed together, with inevitable bugs and unforeseen problems down the line – ready to be exposed by the auditors.
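
Here is a deliberately trivial sketch of the kind of interface gap I mean. The module names, field names and units are my own invention, not from any real project; the point is simply that each side can be correct against its own spec while the hand-off between them is broken.

```python
# Illustrative sketch only: the teams, field names and units are hypothetical.

# Team A's spec: export each record with the weight in kilograms.
def export_record(item_id: str, weight_kg: float) -> dict:
    """Team A: builds the record exactly as Team A's own spec describes."""
    return {"id": item_id, "weight": weight_kg}        # weight in kg

# Team B's spec: import records whose weight field is in pounds.
def import_record(record: dict) -> float:
    """Team B: converts the incoming weight to kg, assuming it arrived in pounds."""
    return record["weight"] * 0.453592                 # lb -> kg

if __name__ == "__main__":
    # Each function is correct against its own spec, and each team's unit tests
    # would pass. The inconsistency only appears at the interface: a 100 kg item
    # is silently turned into roughly 45 kg, because nobody owned the shared definition.
    record = export_record("engine-42", 100.0)          # Team A exports 100 kg
    print(import_record(record))                        # Team B reads it as ~45.36 kg
```

Neither team’s testing, run in isolation against its own spec, would catch this. Only someone looking across the interface at the whole data flow would see it, which is exactly why Conway’s Law pointed us towards the joins.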

Splitting the work across continents and suppliers always creates big management problems. You have to be prepared for these. The additional co-ordination, chasing, reporting and monitoring takes a lot of effort. This all poses big problems for test managers, who have to be strong, perceptive and persuasive to ensure that the testing is planned consistently across the whole solution, rather than being segmented and bolted together at a late stage for cursory integration testing.

Outsourcing and global teams don’t provide a quick fix. Without strong management and a keen awareness of the risks it’s a sure way to let serious problems slip through into production. Surely safety critical industries would be smarter, more responsible? I learned all this back in the 1990s. It’s not new, and when I read Bloomberg’s account of Boeing’s engineering practices I swore, quietly and angrily.

Consequences

“During the crashes of Lion Air and Ethiopian Airlines planes that killed 346 people, investigators suspect, the MCAS system pushed the planes into uncontrollable dives because of bad data from a single sensor.

That design violated basic principles of redundancy for generations of Boeing engineers, and the company apparently never tested to see how the software would respond, Lemme said. ‘It was a stunning fail,’ he said. ‘A lot of people should have thought of this problem – not one person – and asked about it.’

So the consequences of commoditization, ETTO, the drift into failure and complacency about developing and testing complex, safety critical systems with global teams all came together disastrously in the Lion Air and Ethiopian Airlines crashes.
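
To make the redundancy point concrete, here is a minimal sketch of the difference between acting on a single input and cross-checking redundant ones. Everything in it (the names, the threshold, the fallback) is invented for illustration; it is emphatically not a description of MCAS or of how real flight control software is written. It simply shows why a single sensor is a single point of failure, and how cross-checking changes the failure mode from acting on bad data to detecting the disagreement.

```python
# A minimal, hypothetical sketch of sensor redundancy; not real flight-control logic.
from statistics import median

DISAGREEMENT_LIMIT = 5.0   # invented threshold (degrees) for cross-checking sensors


def single_sensor_reading(sensor_value: float) -> float:
    """No redundancy: whatever the one sensor says is passed straight on."""
    return sensor_value


def cross_checked_reading(sensor_values: list[float]) -> float | None:
    """With redundant sensors: use the median, and refuse to act if they disagree.

    Returning None stands in for 'flag the fault and fall back to a safe mode'
    instead of feeding a suspect value into an automatic response.
    """
    if max(sensor_values) - min(sensor_values) > DISAGREEMENT_LIMIT:
        return None                       # sensors disagree: trust none of them
    return median(sensor_values)


if __name__ == "__main__":
    # One sensor failing high: the single-sensor design passes the bad value on,
    # while the cross-checked design detects the disagreement instead.
    print(single_sensor_reading(74.5))                # 74.5 -> acted on as if real
    print(cross_checked_reading([74.5, 2.1, 1.9]))    # None -> disagreement flagged
```

The principle is old and well understood, which is what makes the design decision described in the quote above so startling.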

A lot of people should certainly have thought of this problem. As a former IT auditor I thought of this passage by Norman Marks, a distinguished commentator on auditing. Writing about risk-based auditing he said:

A jaw-dropping moment happened when I explained my risk assessment and audit plan to the audit committee of the oil company where I was CAE (Tosco Corp.). The CEO asked whether I had considered risks relating to the blending of gasoline, diesel, and jet fuel.

As it happened, I had — but it was not considered high risk; it was more a compliance issue than anything else. But, when I talked to the company’s executives I heard that when Exxon performed an enterprise-wide risk assessment, this area had been identified as their #1 risk!

Poorly-blended jet fuel could lead to Boeing 747s dropping out of the sky into densely-packed urban areas — with the potential to bankrupt the largest (at that time) company in the world. A few years later, I saw the effect of poor blending of diesel fuel when Southern California drivers had major problems and fingers were pointed at us as well as a few other oil companies.

In training courses, when I’ve been talking about the big risks that keep top management awake at night, I’ve used this very example: planes crashing. In big corporations it’s easy for busy people to obsess about the smaller risks, those that delay projects, waste money, or disrupt day-to-day work. These problems hit us all the time. Disasters happen rarely, and we can lose sight of the way the organisation is drifting into catastrophic failure.

That’s where auditors, and I believe testers too, come in. They should be thinking about these big risks. In the case of Boeing the engineers, developers and testers should have spoken out about the problems. The internal auditors should certainly have been looking out for these risks, and they are the people with the organisational independence and power to object. They have to be listened to.

An abdication of managerial responsibility?

“Boeing also has disclosed that it learned soon after Max deliveries began in 2017 that a warning light that might have alerted crews to the issue with the sensor wasn’t installed correctly in the flight-display software. A Boeing statement in May, explaining why the company didn’t inform regulators at the time, said engineers had determined it wasn’t a safety issue.

‘Senior company leadership,’ the statement added, ‘was not involved in the review.’”

Senior management was not involved in the review. Doubtless there are a host of reasons why they were not involved. The bottom line, however, is that it was their responsibility. I spent six years as an IT auditor. In that time only one of my audits led to the group’s chief auditor using that nuclear phrase, which incidentally was not directed at IT management. A very senior executive was accused of “abdicating managerial responsibility”. The result was a spectacular display of bad temper and attempted intimidation of the auditors. We didn’t back down. That controversy related to shady behaviour at a subsidiary where the IT systems were being abused and frauds had become routine. It hardly compared to a management culture that led to hundreds of avoidable deaths.

One of the core tenets of Safety II, the new way of looking at safety, is that there is never a single root cause for failure in complex systems. There are always multiple causes, all of them necessary, but none of them sufficient, on their own, for disaster. The Boeing 737 MAX case bears that out. No one person was responsible. No single act led to disaster. The fault lies with the corporate culture as a whole, with a leadership culture that abdicated responsibility, that “wasn’t involved”.

The dragons of the unknown; part 4 – a brief history of accident models

Introduction

This is the fourth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the field of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This was the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “The dragons of the unknown; part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

This post looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

Why do accidents happen?

[Photo: the Tay Bridge]

I want to take you back to the part of the world I come from, the east of Scotland. The Tay Bridge is 3.5 km long, the longest railway bridge in the United Kingdom. It’s the second railway bridge over the Tay. The first was opened in 1878 and came down in a storm in 1879, taking a train with it and killing everyone on board.

The stumps of the old bridge were left in place because of concern that removing them would disturb the riverbed. I always felt they were there as a lesson for later generations. Children in that part of Scotland can’t miss these stumps. I remember being bemused when I learned about the disaster. “Mummy, Daddy, what are those things beside the bridge? What…? Why…? How…?” So bridges could fall down. Adults could screw up. Things could go badly wrong. There might not be a happy ending. It was an important lesson in how the world worked for a small child.

Accident investigations are difficult and complex even for something like a bridge, which might appear, at first sight, to be straightforward in concept and function. The various factors that featured in the inquiry report for the Tay Bridge disaster included the bridge design, manufacture of components, construction, maintenance, previous damage, wind speed, train speed and the state of the riverbed.

These factors obviously had an impact on each other. That’s confusing enough, and it’s far worse for complex socio-technical systems. You could argue that a bridge is either safe or unsafe, usable or dangerous. It’s one or the other. There might be argument about where you would draw the line, and about the context in which the bridge might be safe, but most people would be comfortable with the idea of a line. Safety experts call that idea of a line separating the unbroken from the broken the bimodal principle (not to be confused with Gartner’s bimodal IT management); a system is either working or it is broken.

[Image: Tay Bridge 2.0 'pass']

Thinking in bimodal terms becomes pointless when you are working with systems that run in a constantly flawed state, one of partial failure, when no-one knows how these systems work or even exactly how they should work. This is all increasingly recognised. But when things go wrong and we try to make sense of them, there is a huge temptation to arrange our thinking in a way with which we are comfortable; we fall back on mental models that seem to make sense of complexity, however far removed they are from reality. These are the envisioned worlds I mentioned in part 1.

We home in on factors that are politically convenient, the ones that will be most acceptable to the organisation’s culture. We can see this, and also how thinking has developed, by looking at the history of the conceptual models that have been used for accident investigations.

Heinrich’s Domino Model (1931)

[Figure: the Domino Model, Heinrich's figure 3]

The Domino Model was a traditional and very influential way to help safety experts make sense of accidents. Accidents happened because one factor kicked into another, and so on down the line of dominoes, as illustrated by Heinrich’s figure 3. Problems with the organisation or environment would lead to people making mistakes and doing something dangerous, which would lead to accidents and injury. The model assumed neat linearity and causation, and much of its attraction was that it appealed to management. Take a look at the next two diagrams in the sequence, figures 4 and 5.

[Figure: the Domino Model, Heinrich's figures 4 and 5]

The model explicitly stated that taking out the unsafe act would stop an accident. It encouraged investigators to look for a single cause. That was immensely attractive because it kept attention away from any mistakes the management might have made in screwing up the organisation. The chain of collapsing dominoes is broken by removing the unsafe act, and you don’t get an accident.

The model is consistent with beating up the workers who do the real work. But blaming the workers was only part of the problem with the Domino Model. It was nonsense to think that you could stop accidents by removing unsafe acts, variations from process, or mistakes from the chain. It didn’t have any empirical, theoretical or scientific basis. It was completely inappropriate for complex systems. Thinking in terms of a chain of events was quite simply wrong when analysing these systems. Linearity, neat causation and decomposing problems into constituent parts for separate analysis don’t work.

Despite these fundamental flaws the Domino Model was popular. Or rather, it was popular because of its flaws. It told managers what they wanted to hear. It helped organisations make sense of something they would otherwise have been unable to understand. Accepting that they were dealing with incomprehensible systems was too much to ask.

Swiss Cheese Model (barriers)

[Figure: the Swiss Cheese Model]

James Reason’s Swiss Cheese Model was the next step, and it was an advance, but a limited one. The model did recognise that problems or mistakes wouldn’t necessarily lead to an accident. That would happen only if a series of them lined up, like the holes in slices of Swiss cheese. You can therefore stop the accident recurring by inserting a new barrier. However, the model is still based on the idea of linearity and of a sequence of cause and effect, and also on the idea that you can and should decompose problems for analysis. This is a dangerously limited way of looking at what goes wrong in complex socio-technical systems, and the danger is very real with safety critical systems.

Of course, there is nothing inherently wrong with analysing systems and problems by decomposing them, or with relying on an assumption of cause and effect. Both have impeccably respectable histories in science and philosophy. Reducing problems to their component parts has its intellectual roots in the work of René Descartes. This approach implies that you can understand the behaviour of a system by looking at the behaviour of its components. Descartes’ approach (the Cartesian) fits neatly with a Newtonian scientific worldview, which holds that it is possible to identify definite causes and effects for everything that happens.

If you want to understand how a machine works and why it is broken, then these approaches are obviously valid. They don’t work when you are dealing with a complex socio-technical system. The whole is quite different from the sum of its parts. Thinking of linear flows is misleading when the different elements of a system are constantly influencing each other and adapting to feedback. Complex systems have unpredictable, emergent properties, and safety is an emergent outcome of complex socio-technical systems.

All designs are a compromise

Something that safety experts are keenly aware of is that all designs are compromises. Adding a new barrier, as envisaged by the Swiss Cheese Model, to try to close off the possibility of an accident can be counter-productive. Introducing a change, even a fix, to one part of a complex system affects the whole system in unpredictable and possibly harmful ways. The change creates new possibilities for failure that are unknown, even unknowable.

It’s not a question of regression testing. It’s bigger and deeper than that. The danger is that we create new pathways to failure. The changes might initially seem to work, to be safe, but they can have damaging results further down the line as the system adapts and evolves, as people push the system to the edges.

There’s a second danger. New alerts or controls increase the complexity with which the user has to cope. That was a problem I now recognise with our approach as auditors. We didn’t think through the implications carefully enough. If you keep throwing in fixes, controls and alerts then the user will miss the ones they really need to act on. That reduces the effectiveness, the quality and ultimately the safety of the system. I’ll come back to that later. This is an important paradox. Trying to make a system more reliable and safer can make it more dangerous and less reliable.

[Photo: a MiG-29]

The designers of the Soviet Union’s MiG-29 jet fighter observed that “the safest part is the one we could leave off” (according to Sidney Dekker in his book “The field guide to understanding human error”).

Drift into failure

A friend once commented that she could always spot one of my designs. They reflected a very pessimistic view of the world. I couldn’t know how things would go wrong, I just knew they would, and my experience had taught me where to be wary. Working in IT audit made me very cautious. Not only had I completely bought into Murphy’s Law, “anything that can go wrong will go wrong”, I also had my own variant: “and it will go wrong in ways I couldn’t have imagined”.

William Langewiesche is a writer and former commercial pilot who has written extensively on aviation. He provided an interesting and insightful correction to Murphy, and also to me (from his book “Inside the Sky”).

“What can go wrong usually goes right”.

There are two aspects to this. Firstly, as I have already discussed, complex socio-technical systems are always flawed. They run with problems, with bugs, with variations from official process and in the way people behave. Despite all the problems under the surface, everything seems to go fine, till one day it all goes wrong.

The second important insight is that you can have an accident even if no individual part of the system has gone wrong. Components may have always worked fine, and continue to do so, but on the day of disaster they combine in unpredictable ways to produce an accident.

Accidents can happen when all the components have been working as designed, not just when they fail. That’s a difficult lesson to learn. I’d go so far as to say we (in software development and engineering, and even testing) didn’t want to learn it. But that’s the reality, however scary it is.

Sidney Dekker developed this idea in a fascinating and important book, “Drift Into Failure”. His argument is that we are developing massively complex systems that we are incapable of understanding. It is therefore misguided to think in terms of system failure arising from a mistake by an operator or the sudden failure of part of the system.

“Drifting into failure is a gradual, incremental decline into disaster driven by environmental pressure, unruly technology and social processes that normalise growing risk. No organisation is exempt from drifting into failure. The reason is that routes to failure trace through the structures, processes and tasks that are necessary to make an organization successful. Failure does not come from the occasional, abnormal dysfunction or breakdown of these structures, processes or tasks, but is an inevitable by-product of their normal functioning. The same characteristics that guarantee the fulfillment of the organization’s mandate will turn out to be responsible for undermining that mandate…

“In the drift into failure, accidents can happen without anything breaking, without anybody erring, without anyone violating rules they consider relevant.”

The idea of systems drifting into failure is a practical illustration of emergence in complex systems. The overall system adapts and changes over time, behaving in ways that could not have been predicted from analysis of the components. The fact that a system has operated safely and successfully in the past does not mean it will continue to do so. Dekker says:

“Empirical success… is no proof of safety. Past success does not guarantee future safety.”

Dekker’s argument about drifting into failure should strike a chord with anyone who has worked in large organisations. Complex systems are kept running by people who have to cope with unpredictable technology, in an environment that increasingly tolerates risk so long as disaster is averted. There is constant pressure to cut costs, to do more and do it faster. Margins are gradually shaved in tiny incremental changes, each of which seems harmless and easy to justify. The prevailing culture assumes that everything is safe, until suddenly it isn’t.

Langewiesche followed up his observation with a highly significant second point:

“What can go wrong usually goes right – and people just naturally draw the wrong conclusions.”

When it all does go wrong the danger is that we look for quick and simple answers. We focus on the components that we notice are flawed, without noticing all the times everything went right even with those flaws. We don’t think about the way people have been keeping systems running despite the problems, or how the culture has been taking them closer and closer to the edge. We then draw the wrong conclusions. Complex systems and the people operating them are constantly being pushed to the limits, which is an important idea that I will return to.

It is vital that we understand this idea of drift, and how people are constantly having to work with complex systems under pressure. Once we accept these ideas it becomes clear that, if we want to avoid drawing the wrong conclusions, we have to be sceptical about traditional approaches to accident investigation. I’m talking specifically about root cause analysis, and the notion that “human error” is a meaningful explanation for accidents and problems. I will talk about these in my next post, “part 5 – accident investigations and treating people fairly”.