Cynefin, testing & auditing

Over the last few weeks following my CAST 2014 talk in New York, while the Stop 29119 campaign has been raging, I have been thinking more about some of the underlying issues.

One of these has been the idea of “best practice”, which led me back to the Cynefin Framework. If you don’t know about Cynefin then I strongly recommend that you learn about it and reflect on its implications. The Wikipedia article is a good start, not least because Dave Snowden, Cynefin’s creator, keeps an eye on it. This short video by Snowden is also helpful. NB: the Simple domain has been renamed Obvious since the video was made.

An overview of the Cynefin Framework

[Figure: the Cynefin Framework as of 1st June 2014]

I have carelessly described the Cynefin Framework as being a quadrant in the past, but that was sloppy. It isn’t. It merely looks like one. It is a collection of five domains that are distinct and clearly defined in principle, but which blur into one another in practice.

In addition to the four domains that look like the cells of a quadrant there is a fifth, in the middle, called Disorder, and this one is crucial to an understanding of the framework and its significance.

Cynefin is not a categorisation model, as would be implied if it were a simple matrix. It is not a matter of dropping data into the framework then cracking on with the work. Cynefin is a framework that is designed to help us make sense of what confronts us, to give us a better understanding of our situation and the approaches that we should take.

The first domain is Obvious, in which there are clear and predictable causes and effects. The second is Complicated, which also has definite causes and effects, but where the connections are not so obvious; expert knowledge and judgement are required.

The third is Complex, where there is no clear cause and effect. We might be able to discern it with hindsight, but that knowledge doesn’t allow us to predict what will happen next; the system adapts continually. Snowden and Boone used a key phrase in their Harvard Business Review article about Cynefin.

“…hindsight does not lead to foresight because the external conditions and systems constantly change.”

The fourth domain is Chaotic. Here, urgent action, rather than reflective analysis, is required. The participants must act, sense feedback and respond. Complex situations might be suited to safe probing, which can teach us more about the problem, but such probing is a luxury in the Chaotic domain.

The appropriate responses in all four of these domains are different. In Obvious, the categories are clearly defined, one simply chooses the right one, and that provides the right route to follow. Best practices are appropriate here.

In the Complicated domain there is no single, right category to choose. There could be several valid options, but an expert can select a good route. There are various good practices, but the idea of a single best practice is misconceived.

In the Complex domain it is essential to probe the problem and learn by trial and error. The practices we might follow will emerge from that learning. In Chaos, as I mentioned, we simply have to start with action, firefighting to stop the situation getting worse.
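The domain-by-domain responses above can be summarised in a toy sketch. The decision models (sense-categorise-respond and so on) and the practice types are Snowden’s own; the code itself is purely my illustration, not part of the Cynefin material.

```python
# Each domain paired with Snowden's decision model and the kind of
# practice appropriate to it. Disorder is deliberately absent: it is
# the state of not yet knowing which domain applies.
CYNEFIN_DOMAINS = {
    "Obvious":     (["sense", "categorise", "respond"], "best practice"),
    "Complicated": (["sense", "analyse", "respond"],    "good practice"),
    "Complex":     (["probe", "sense", "respond"],      "emergent practice"),
    "Chaotic":     (["act", "sense", "respond"],        "novel practice"),
}

def response_for(domain: str) -> str:
    """Return a one-line summary of the appropriate response for a domain."""
    steps, practice = CYNEFIN_DOMAINS[domain]
    return f"{domain}: {' -> '.join(steps)} ({practice})"

for d in CYNEFIN_DOMAINS:
    print(response_for(d))
```

Note that only the Obvious domain earns the label “best practice”; that point matters later when we come to ISO 29119.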

The fifth domain is that of Disorder. This is the default position in a sense. It’s where we find ourselves when we don’t know which domain we should really be in. It’s therefore the normal starting point. The great danger is that we don’t choose the appropriate domain, but simply opt for the one that fits our instincts or our training, or that is aligned with the organisation’s traditions and culture, regardless of the reality.

The different domains have blurred edges. In any context there might be elements that fit into different domains if they are looked at independently. I don’t see that as being a problem. As I said, Cynefin is not a neat categorisation model. It is intended to help us make sense of what we face. If reality is messy and blurred then there’s no point trying to force it into a straitjacket.

There is, however, one boundary that is fundamentally different from the others; the border between Obvious and Chaotic. This is not really a boundary at all. It is more of a cliff. If you move from Obvious to Chaotic you don’t glide smoothly into a subtly changing landscape. You fall off the cliff.

Within the Obvious domain the area approaching the cliff is the complacent zone. Here, we think we are working in a neat, ordered environment and “we believe our own myths” as Snowden puts it in the video above. The reality is quite different and we are caught totally unaware when we hit a crisis and plunge off the cliff into chaos.

Cynefin and testing

Cynefin has obvious relevance to testing and to the debate about standards. The testing strategy that we should choose will depend on the domain that makes most sense given our problems. However, testing standards not only assume that testers are operating in the Obvious or possibly Complicated domains, they effectively encourage organisations to constrain testers so that they have to conduct testing there.

Testers who are subjected to inappropriate standards based on heavy advance documentation are forced into the Obvious domain, with management happily ensconced, temporarily, in the complacent zone. Test planning proceeds in a neat and ordered fashion. Reality invariably bites during test execution and the project crashes over the cliff into Chaos.

Time and again this is attributed to bad luck, or individual failings, or specific problems that the project hit. It is useful to look at this pattern through the lens of the Cynefin Framework to see that it is a predictable consequence of pretending that software development is neater and more ordered than it really is. If we have learned anything over the last 20 years it should be that software development is not obvious. It is sometimes complicated, but more often it is complex.

Cynefin and auditing

Cynefin has attracted a good deal of interest in software development circles, but only very limited attention has been paid to its relevance to testing. I have yet to see any discussion of how it relates to auditing, and that surprises me. It helps me to make sense of the way that my audit department operated when I was an IT auditor. It also explains the problems that less professional audit departments both suffered and caused.

The large insurance company where I worked as an auditor had three distinct types of auditor. The branch auditors performed compliance auditing at the 100 or so branches around the UK. They followed a standard audit plan that did not vary from one audit to the next. The operational auditors had a more flexible approach, but they were still largely driven by an advance audit plan.

The IT auditors operated on a quite different basis. There were no standardised plans or checklists. Each audit was different from the last, and they were all planned loosely at first with the course of the audit being determined by the initial findings. There were very few hard and fast rules, and those we did follow applied only within narrow constraints. Basically, our approach was “it all depends”.

That reflected the areas we were looking at; software developments, IT security, live applications, fraud investigations. There were plenty of good practices, but few inviolable best practices.

This all fitted the Cynefin Framework. Put crudely, the branch auditors were checking an Obvious domain, in which each branch followed strictly defined processes that fitted the needs of the staff, office and company. The IT auditors were working in a Complex domain, with the operational auditors somewhere in between, dealing with a largely Complicated domain.

When I observed a less professional audit department, its auditors approached situations similar to those I had faced as an IT auditor, but they were armed with a checklist, like the ones that I’d seen the branch auditors using.

Reflecting on this I see how Cynefin is relevant to auditors in three different and important ways.

Firstly, Cynefin should help auditors to make sense of the area under audit. Whether or not the people in that area are taking appropriate actions, aligned with the problems and risks, obviously depends on the domain. I’ll call this the natural domain.

Secondly, auditors can use Cynefin to understand the way the area under audit is being managed. Of course this should be the same as the natural domain, but it isn’t necessarily so. Cynefin helps us to understand that, for various reasons, the management regime may be different. The domain of Disorder goes some way to explaining that. Management have opted for the domain that feels comfortable, rather than the one that fits the work.

Thirdly, Cynefin can help auditors to make sense of their job conducting the audit. This should be the same as the natural domain, but it could be different.

The auditors’ perspective has two aspects. How do they conduct the audit? And what influence will the audit, or even the knowledge of an impending audit, have on the area under audit?

It is a fatal trap for auditors to perform an audit with the unchallenged assumption that the audit area belongs in one domain when it should really be in a different one. The classic mistake is to approach a Complex problem with a checklist designed for an Obvious domain. Even if the management believes that the area should be Obvious the underlying reality, the natural domain, may be different. Auditors must expose that disconnect, not reinforce the delusion.

That brings me on to the second aspect of the auditors’ perspective. The act of the audit affects the area being audited. It’s not just a crude case of people cleaning up their act temporarily for the audit and lying to the auditors. That happens of course, but people also try to second-guess the auditors, or prepare defensive documentation that has little intrinsic value, and in many cases no value at all.

It is vitally important that good auditors can distinguish between a plausible, auditable surface and the underlying reality. It is all too easy to forge a cosy, informal consensus with the auditees. “You produce something we can check easily, and we’ll say you’re doing a good job.”

Auditing is a difficult and challenging job. It becomes much easier if there is a clear benchmark, or standard, or set of rules to audit against. If auditors fix on a deceptive, irrelevant benchmark then they encourage management to head for the Obvious domain and settle down in the complacent zone when they should really be taxing their brains coping with Complexity.

Cynefin and the flaws at the heart of ISO 29119

I can’t stress strongly enough that Cynefin recognises the value of best practice only in the Obvious domain. Attempting to shoehorn such an inflexible concept into Complicated and Complex environments isn’t just unhelpful. It is damaging.

Cynefin has helped me to articulate some of my deep objections to ISO 29119. The standard encourages everyone to ignore reality and head for Obvious complacency. It sends a very dangerous signal to auditors, offering them a tempting benchmark that will make their job easier. As I’ve said often, neither testing nor auditing is meant to be easy; they are meant to be valuable. ISO 29119 is selling a dangerous delusion that testing is easier than it is and that it is an activity that can be reviewed and audited by checking against a list of recognised best, or good, practices.

ISO 29119 represents an attempt to return to the 70s and 80s, but to do it better this time. It is fundamentally misconceived, and it ignores the thinking and changes that have transformed development and testing over the last couple of decades. It also ignores wider developments in management and complexity theory. How can it possibly claim to be relevant to the 21st century?

4 thoughts on “Cynefin, testing & auditing”

  1. Hi James,

    Thank you so much for bringing Cynefin into the testing & auditing context.

    It’s an idea I’ve been mulling over but actually haven’t had the chance to get my teeth into.

    It’s interesting to see the sense/illusion of control through the lens of the Cynefin framework.

    Another resource that I’ve found useful is “Cynefin for Devs” by Liz Keogh – there’s a blog & video.

    Duncs

  2. James, one thing you could add is the use of heuristics in place of specific policy/procedures as these provide for emergent behaviour. Have a look at Dan Ward’s FIRE: How Fast, Inexpensive, Restrained, and Elegant Methods Ignite Innovation

    Checklists do have value when there is a defined procedure – I do think Gawande’s Checklist Manifesto is worth a read

    Regards

    • Thanks Greg. FIRE does look interesting. I’ll have to check it out further. At first sight it does look consistent with Cynefin as a suitable approach for complex areas.

      I haven’t yet read the Checklist Manifesto. I agree that checklists are valuable, so long as we use them in the appropriate places. They are definitely right for the Obvious domain. They are also potentially valuable for Complicated problems, so long as we use them to check we’ve not forgotten something. Once we start using them as a script that tells us all we need to know we’re in trouble, and that’s what I’ve seen. It puts blinkers on the user. In Complex problems their potential use is even more constrained.

      The distinction I like to make is that checklists can suggest questions, but they don’t necessarily give us answers, and certainly not ALL the answers. They can be a starting point for a discussion and exploration. Once you assume they give you all the answers you’re basically saying that you were able to anticipate in advance what might go wrong and what was important. It inhibits learning.
