ISO 29119 and “best practice”: a confused mess

In yesterday’s post about the Cynefin Framework, testing and auditing I wrote that best practice belongs only in situations that are simple and predictable, with clear causes and effects. That aligned closely with what I’ve seen in practice and helped me to make sense of my distrust of the notion of best practice in software development. I’ve discussed that before here.

In stark contrast to the Cynefin Framework, the ISO 29119 lobby misunderstands the concept of best practice and its relevance to software development and testing.

Wikipedia defines a best practice as,

…a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.

Proponents of best practice argue that it is generally applicable. It is hard to get to grips with them in argument because they constantly shift their stance and reinterpret the meaning of words to help them evade difficult questions.

If one challenges them with examples of cases when “best practice” might not be useful they will respond that “best” does not necessarily mean best. There might indeed be situations when it doesn’t apply; there may be better choices than “best”. However, “best practice” is a valid term, they argue, even if they admit it really only means “generally the right thing to do”.

Sadly some people really do believe that when defenders of best practice say a practice is “best” they actually mean it, and write contracts or take legal action based on that naïve assumption.

This evasion and confusion is reflected in the promotion of ISO 29119. Frankly, I don’t really know whether the standard is supposed to promote best practice, because its producers don’t seem to know either. They don’t even seem clear about what best practice is.

Is ISO 29119 built on “best practice”?

These two quotes are extracts from “ISO/IEC/IEEE 29119 The New International Software Testing Standards”, written by Stuart Reid, the convener of the ISO 29119 working group. The current version of the document is dated 14th July 2014, but it dates back to last year at least.

Parts of ISO/IEC/IEEE 29119 have already been released in draft form for review (and subsequently been updated based on many thousands of comments) and are already being used within a number of multinational organizations. These organizations are already seeing the benefits of reusing the well-defined processes and documentation provided by standards reflecting current industry best practices.

Imagine an industry where qualifications are based on accepted standards, required services are specified in contracts that reference these same standards, and best industry practices are based on the foundation of an agreed body of knowledge – this could easily be the testing industry of the near future.

This next quote comes from the ISO 29119 website itself.

A risk-based approach to testing is used throughout the standard. Risk-based testing is a best-practice approach to strategizing and managing testing…

That all seems clear enough. Standards reflect best practices. However, Stuart Reid has this to say in the YouTube video currently promoting ISO 29119 (5 minutes 25 seconds in).

A common misconception with standards is that they define best practice. This is obviously not a sensible approach as only one organisation or person can ever be best at any one time and there is only one gold medal for any Olympic event. We all know there is no one best approach to testing. It should be context driven.

If standards defined best practice then most of us would never even bother trying to achieve this unattainable goal knowing it was probably too far away from our current situation. Instead standards should define good practice with the aim of providing an achievable goal for those many organisations whose current practices are deficient in many ways.

Well, I think the world can be forgiven for the misconception that ISO 29119 is intended to define best practice since that is what the convener of the ISO 29119 working group has clearly said.

Stuart Reid repeated that last quote in Madrid in May this year, and the accompanying slide provides plenty to mull over, especially in the light of the assertion that standards are not about “best practice”.

[Slide: quality & standards]

The slide assumes quality improves as one moves from current practice, to good practice and on to best practice. This depiction of neat linear progression implied by the rising quality arrow is flawed. It seems to be based on the assumptions that quality can be tested into products, that potential users of the standard (and every organisation is considered a potential user) are currently not even doing good things, that best practice is the preserve of an elite and that it is by definition better than good practice.

Not only do these assumptions look indefensible to me, they are not even consistent with the words Stuart Reid uses. He says that there is no “best approach” and that testing should be “context driven”, yet uses a slide that clearly implies a standards driven progression from good to best. This is not the meaning that people usually ascribe to “best practice”, defined above. Best practice does not mean doing the best possible job in the current context regardless of what others are doing. If that were its generally accepted meaning then it would not be a controversial concept.

Muddled language, muddled thinking, or both?

It’s hard to say whether all these inconsistencies are just muddled thinking or whether ISO 29119’s defenders are starting to give way on some weak points in an attempt to evade challenge on the core message, that standards represent an accepted body of knowledge and we should buy into them. Is “best practice” now acknowledged to be an inherently unjustifiable concept in testing? Has it therefore been thrown overboard to stop it discrediting the rest of the package, and the various ISO 29119 materials have not all been brought into line yet?

The tone I infer from the various documents is a yearning to be seen as relevant and flexible. Thus we see “best practice” being suddenly disowned because it is becoming an embarrassment. The nod towards “context driven” is deeply unimpressive. It is clearly a meaningless gesture rather than evidence of any interest in real context driven testing. Trotting out the phrase glibly does not make a tester context driven and nor does it make the speaker seem any more relevant, flexible or credible.

If a standard is to mean anything then it must be clear. ISO 29119 seems hopelessly confused about best practice and how it relates to testing. Comparing the vague marketing froth on which ISO 29119 is based with the clear perspective offered by the Cynefin Framework reveals ISO 29119 as intellectually incoherent and irrelevant to the realities of development and testing.

Ironically the framework that helps us see software development as a messy, confusing affair is intellectually clear and consistent. The framework that assumes development to be a neat and orderly process is a fog of confusion. How can we take a “standard” seriously when its defenders don’t even put up a coherent defence?

Cynefin, testing & auditing

Over the last few weeks following my CAST 2014 talk in New York, while the Stop 29119 campaign has been raging, I have been thinking more about some of the underlying issues.

One of these has been the idea of “best practice”, which led me back to the Cynefin Framework. If you don’t know about Cynefin then I strongly recommend that you learn about it and reflect on its implications. The Wikipedia article is a good start, not least because Dave Snowden, Cynefin’s creator, keeps an eye on it. This short video by Snowden is also helpful. NB: the Simple domain has been renamed Obvious since the video was made.

An overview of the Cynefin Framework


In the past I have carelessly described the Cynefin Framework as a quadrant, but that was sloppy. It isn’t. It merely looks like one. It is a collection of five domains that are distinct and clearly defined in principle, but which blur into one another in practice.

In addition to the four domains that look like the cells of a quadrant there is a fifth, in the middle, called Disorder, and this one is crucial to an understanding of the framework and its significance.

Cynefin is not a categorisation model, as would be implied if it were a simple matrix. It is not a matter of dropping data into the framework then cracking on with the work. Cynefin is a framework that is designed to help us make sense of what confronts us, to give us a better understanding of our situation and the approaches that we should take.

The first domain is Obvious, in which there are clear and predictable causes and effects. The second is Complicated, which also has definite causes and effects, but where the connections are not so obvious; expert knowledge and judgement are required.

The third is Complex, where there is no clear cause and effect. We might be able to discern it with hindsight, but that knowledge doesn’t allow us to predict what will happen next; the system adapts continually. Snowden and Boone used a key phrase in their Harvard Business Review article about Cynefin.

“…hindsight does not lead to foresight because the external conditions and systems constantly change.”

The fourth domain is Chaotic. Here, urgent action, rather than reflective analysis, is required. The participants must act, sense feedback and respond. Complex situations might be suited to safe probing, which can teach us more about the problem, but such probing is a luxury in the Chaotic domain.

The appropriate responses in all four of these domains are different. In Obvious, the categories are clearly defined, one simply chooses the right one, and that provides the right route to follow. Best practices are appropriate here.

In the Complicated domain there is no single, right category to choose. There could be several valid options, but an expert can select a good route. There are various good practices, but the idea of a single best practice is misconceived.

In the Complex domain it is essential to probe the problem and learn by trial and error. The practices we might follow will emerge from that learning. In Chaos, as I mentioned, we simply have to start with action, firefighting to stop the situation getting worse.

The fifth domain is that of Disorder. This is the default position in a sense. It’s where we find ourselves when we don’t know which domain we should really be in. It’s therefore the normal starting point. The great danger is that we don’t choose the appropriate domain, but simply opt for the one that fits our instincts or our training, or that is aligned with the organisation’s traditions and culture, regardless of the reality.

The different domains have blurred edges. In any context there might be elements that fit into different domains if they are looked at independently. I don’t see that as being a problem. As I said, Cynefin is not a neat categorisation model. It is intended to help us make sense of what we face. If reality is messy and blurred then there’s no point trying to force it into a straitjacket.

There is, however, one boundary that is fundamentally different from the others; the border between Obvious and Chaotic. This is not really a boundary at all. It is more of a cliff. If you move from Obvious to Chaotic you don’t glide smoothly into a subtly changing landscape. You fall off the cliff.

Within the Obvious domain the area approaching the cliff is the complacent zone. Here, we think we are working in a neat, ordered environment and “we believe our own myths” as Snowden puts it in the video above. The reality is quite different and we are caught totally unaware when we hit a crisis and plunge off the cliff into chaos.

Cynefin and testing

Cynefin has obvious relevance to testing and to the debate about standards. The testing strategy that we should choose will depend on the domain that makes most sense given our problems. However, testing standards not only assume that testers are operating in the Obvious or possibly Complicated domains, they effectively encourage organisations to constrain testers so that they have to conduct testing there.

Testers who are subjected to inappropriate standards based on heavy advance documentation are forced into the Obvious domain, with management happily ensconced, temporarily, in the complacent zone. Test planning proceeds in a neat and ordered fashion. Reality invariably bites during test execution and the project crashes over the cliff into Chaos.

Time and again this is attributed to bad luck, or individual failings, or specific problems that the project hit. It is useful to look at this pattern through the lens of the Cynefin Framework to see that it is a predictable consequence of pretending that software development is neater and more ordered than it really is. If we have learned anything over the last 20 years it should be that software development is not obvious. It is sometimes complicated, but more often it is complex.

Cynefin and auditing

Cynefin has attracted a good deal of interest in software development circles. There has been only very limited attention paid to its relevance to testing. I have yet to see any discussion of how it relates to auditing, and that surprises me. It helps me to make sense of the way that my audit department operated when I was an IT auditor. It also explains the problems that less professional audit departments both suffered and caused.

The large insurance company where I worked as an auditor had three distinct types of auditor. The branch auditors performed compliance auditing at the 100 or so branches around the UK. They followed a standard audit plan that did not vary from one audit to the next. The operational auditors had a more flexible approach, but they were still largely driven by an advance audit plan.

The IT auditors operated on a quite different basis. There were no standardised plans or checklists. Each audit was different from the last, and they were all planned loosely at first with the course of the audit being determined by the initial findings. There were very few hard and fast rules, and those we did follow applied only within narrow constraints. Basically, our approach was “it all depends”.

That reflected the areas we were looking at; software developments, IT security, live applications, fraud investigations. There were plenty of good practices, but few inviolable best practices.

This all fitted the Cynefin Framework. Put crudely, the branch auditors were checking an Obvious domain, in which each branch followed strictly defined processes that fitted the needs of the staff, office and company. The IT auditors were working in a Complex domain, with the operational auditors somewhere in between, dealing with a largely Complicated domain.

When I observed a less professional audit department, its auditors approached situations similar to the ones I faced as an IT auditor, but they were armed with a checklist, like the ones that I’d seen the branch auditors using.

Reflecting on this I see how Cynefin is relevant to auditors in three different and important ways.

Firstly, Cynefin should help auditors to make sense of the area under audit. Whether or not the people in that area are taking appropriate actions, aligned with the problems and risks, obviously depends on the domain. I’ll call this the natural domain.

Secondly, auditors can use Cynefin to understand the way the area under audit is being managed. Of course this should be the same as the natural domain, but it isn’t necessarily so. Cynefin helps us to understand that, for various reasons, the management regime may be different. The domain of Disorder goes some way to explaining that. Management have opted for the domain that feels comfortable, rather than the one that fits the work.

Thirdly, Cynefin can help auditors to make sense of their job conducting the audit. This should be the same as the natural domain, but it could be different.

The auditors’ perspective has two aspects. How do they conduct the audit? And what influence will the audit, or even the knowledge of an impending audit, have on the area under audit?

It is a fatal trap for auditors to perform an audit with the unchallenged assumption that the audit area belongs in one domain when it should really be in a different one. The classic mistake is to approach a Complex problem with a checklist designed for an Obvious domain. Even if the management believes that the area should be Obvious the underlying reality, the natural domain, may be different. Auditors must expose that disconnect, not reinforce the delusion.

That brings me on to the second aspect of the auditors’ perspective. The act of the audit affects the area being audited. It’s not just a crude case of people cleaning up their act temporarily for the audit and lying to the auditors. That happens of course, but people also try to second guess the auditors, or prepare defensive documentation that has no intrinsic value, or any value at all in many cases.

It is vitally important that good auditors can distinguish between a plausible, auditable surface and the underlying reality. It is all too easy to forge a cosy, informal consensus with the auditees. “You produce something we can check easily, and we’ll say you’re doing a good job.”

Auditing is a difficult and challenging job. It becomes much easier if there is a clear benchmark, or standard, or set of rules to audit against. If auditors fix on a deceptive, irrelevant benchmark then they encourage management to head for the Obvious domain and settle down in the complacent zone when they should really be taxing their brains coping with Complexity.

Cynefin and the flaws at the heart of ISO 29119

I can’t stress strongly enough that Cynefin recognises the value of best practice only in the Obvious domain. Attempting to shoehorn such an inflexible concept into Complicated and Complex environments isn’t just unhelpful. It is damaging.

Cynefin has helped me to articulate some of my deep objections to ISO 29119. The standard encourages everyone to ignore reality and head for Obvious complacency. It sends a very dangerous signal to auditors, offering them a tempting benchmark that will make their job easier. As I’ve said often, neither testing nor auditing is meant to be easy; they are meant to be valuable. ISO 29119 is selling a dangerous delusion that testing is easier than it is and that it is an activity that can be reviewed and audited by checking against a list of recognised best, or good, practices.

ISO 29119 represents an attempt to return to the 70s and 80s, but to do it better this time. It is fundamentally misconceived, and it ignores the thinking and changes that have transformed development and testing over the last couple of decades. It also ignores wider developments in management and complexity theory. How can it possibly claim to be relevant to the 21st century?

Know thyself (social media and self-knowledge)

I’ve been reading an interesting and thoughtful post by Adam Knight about the effect of social media on the testing community. Adam was right to say that the response to my CAST talk was possible only because of social media. I’d go further than that. I would never have been in New York at all if it had not been for social media. When I became self-employed I had no option but to get out there on the internet, to network and write. That opened doors and eventually led me to CAST in New York.

One of the surprising aspects of my story is that I never intended to go down the route of being a Stop 29119 guy, or even a critic of standards. I wanted to concentrate on other things.

However, I got better and more interesting feedback from what I’d written about standards and governance. I responded to that and the story developed naturally in a direction I’d not anticipated. I hadn’t realised what aspects of my career were unusual, even unique, to me. It was only through interacting with other people and responding to their interest that I came to understand what particular insights I could offer.

If I had set out a fixed timeline of action for the topics I was originally interested in I would probably just have hit a wall, and never noticed the more realistic and relevant alternatives.

I am still plagued by the fear that I’m just bluffing, and that I’m not really qualified to tell people anything. I try to get past that by telling myself that it’s true, but the same applies to everyone else, most of the time, in most contexts. What sets me apart isn’t that I know a whole load about testing, or about auditing.

What does make me unusual is that I’ve worked inside both professions, and also dealt with both from the outside. That makes me extremely unusual, and it is that rarity which gives me an interesting perspective. It was only through being open and public on Twitter, blogs and articles that I was able to work through to that insight and see what I could offer. In marketing terms I didn’t know my own USP (unique selling proposition).

Tacit and explicit knowledge are fascinating and important topics, especially for testers. A blinding flash of understanding that we all need to grasp is that we do not know which aspects of our knowledge and experience are of greatest value. We think we do, and we may be reasonably accurate in our judgement, but these assets change over time. I don’t know all that I’m capable of doing or contributing, but I now have a better understanding that my grasp of my intellectual inventory has always been limited.

Social media, conferences, Twitter, networking all help me enormously to know other people and their work. But they’ve also helped me to know myself better too, and that might be the most important revelation of all.

CAST Live interviews in New York

While I was at CAST 2014 in New York I was interviewed by Benjamin Yaroch and Dee Ann Pizzica for CAST Live. I was a bit apprehensive about doing it, but once we got going I forgot about the camera and was just chatting to Ben and Dee Ann.

Also, Karen Johnson gave a very good interview talking about certification and standards. It explains the background to the Professional Tester’s Manifesto and ISST’s Stop 29119 petition. It is well worth watching.

“Software testers balk at ISO 29119 standards proposal” – my interview

This is the full text of the email interview I gave, which appeared on August 25th. They used only part of it, which was fine by me. I was delighted they approached me and was happy to get any part of my message out.

What don’t you like about ISO 29119? Will this “standard” have much impact anyway?

ISO 29119 puts too much emphasis on process and documentation, rather than the real testing. Of course that is not its purpose, or the intention of the people who have developed it. However, I have seen in practice how people react when they are dealing with a messy, complex problem, and there are detailed, prescriptive standards and processes on hand. They focus on complying with the standard, and lose sight of the real goal. This sort of goal displacement is a familiar problem in many situations. It is frustrating that so many influential people in IT have failed to acknowledge the problem.

This would not matter so much if testing standards were closely aligned with real, valuable testing. However, the emphasis on documentation encourages heavy up-front commitment to producing paperwork, at a time when the nature of the problem isn’t yet understood. I’ve worked on too many projects where most of my test management effort went into detailed documents that were of disappointingly little value when it came to test execution.

There is a danger that lawyers, procurement managers and project managers will focus on the standard and the documentation, and insist that it is always produced. They assume that the standard defines best practice and that failure to comply is evidence of a lack of professionalism. ISO 29119 does allow for “tailored compliance”, where testers might opt out of parts of the standard and document their justification. But the promoters of the standard are pushing the line that failure to comply with the standard will leave testers in a difficult position if there are problems. Stuart Reid has gone on the record saying:

“imagine something goes noticeably wrong. How easy will you find it to explain that your testing doesn’t comply with international testing standards? So, can you afford not to use them?”

Faced with that, many testers will feel pressured, or will simply be told, to comply with the full standard to cover their asses.

If ISO 29119 is taken up by governments and big companies, and they insist that suppliers have to comply, then it will force large parts of the testing profession to work in this document driven way. It will restrict the opportunities for good, thoughtful testers, and so I believe it is anti-competitive.

That’s a strong accusation, but ISO knows its standards must be credible, and that they must be able to demonstrate that they are not anti-competitive. That is why they have rules about the consensus that standards require. Consensus is defined by ISO as “general agreement, characterized by the absence of sustained opposition… by any important part of the concerned interests”.

That is why I called for action at CAST 2014. Testers should speak out so that it is clear that there isn’t general agreement, that there is sustained opposition, and that thoughtful, professional testers are very much a concerned interest.

Are there any improvements you could suggest?

No, not really. I think the idea of a standardised approach to software testing is misconceived. It is far more useful to talk about practices and guidelines that are useful in particular contexts. That doesn’t require a standard. If guidelines are abstracted to the level that they are always applicable then they are so vague that they become vacuous and don’t offer helpful and practical advice. If you make them practical then they won’t always be applicable.

However, one improvement I would certainly like to see is not in the standard itself. It’s in the way it’s marketed. I really disapprove of the way that it’s being sold as being more responsible than alternatives, and that it can provide a more certain outcome. There’s a strong message coming from suppliers who use ISO 29119, and who contributed to its creation, that compliance with the standard makes them better than their competitors and guarantees a better service.

What is your particular interest in this specification?

I have worked as an IT auditor as well as a tester. When I look at how ISO 29119 is being sold I see an appeal to fear, a message that signing up will protect people if things go wrong. The implied message is that compliance will allow testers to face audits with confidence. I know from my experience that such a standard will encourage too much emphasis on the wrong thing.

Auditors want to see evidence of good and appropriate testing, not necessarily good planning and documentation. They need to see an appropriate process that works for the company, not a generic standard. A project plan is poor evidence that a project was well managed. It’s the results that matter. Likewise heavy advance documentation isn’t evidence that the testing was good. As an ex-auditor the approach of ISO 29119 concerned me.

I am also keen to help stop poor auditors fixing onto ISO 29119 as something they can audit against to make their job easier. Neither testing nor auditing is meant to be easy. They’re meant to be valuable, and document driven standards encourage the illusion that testing can be easier, and performed by less skilled staff, than is really the case.

“ISO 29119: Why it is Dangerous to the Software Testing Community” (from uTest)

This article originally appeared on the uTest blog on September 2nd 2014.

Two weeks ago, I gave a talk at CAST 2014 (the conference of the Association for Software Testing) in New York, titled “Standards: Promoting quality or restricting competition?”

It was mainly about the new ISO 29119 software testing standard (according to ISO, “an internationally agreed set of standards for software testing that can be used within any software development life cycle or organization”), though I also wove in arguments about ISTQB certification.

My argument was based on an economic analysis of how ISO (the International Organization for Standardization) has gone about developing and promoting the standard. ISO’s behavior is consistent with the economic concept of rent seeking. This is where factions use power and influence to acquire wealth by taking it from others — rigging the market — rather than by creating new wealth.

I argued that ISO has not achieved consensus from the whole testing profession, nor even attempted to gain it. Those who disagree with the need for ISO 29119 and its underlying approach have been ignored. The opponents have been defined as irrelevant.

If ISO 29119 were expanding the market, and if it merely provided another alternative — a fresh option for testers, their employers and the buyers of testing services — then there could be little objection to it. However, it is being pushed as the responsible, professional way to test — it is an ISO standard, and therefore, by implication, the only responsible and professional way.

What is Wrong With ISO 29119?

Well, it embodies a dated, flawed and discredited approach to testing. It requires a commitment to heavy advance documentation. In practice, this documentation effort is largely wasted and serves as a distraction from useful preparation for testing.

Such an approach blithely ignores developments in both testing and management thinking over the last couple of decades. ISO 29119 attempts to update a mid-20th century worldview by smothering it in a veneer of 21st century terminology. It pays lip service to iteration, context and Agile, but the beast beneath is unchanged.

The danger is that buyers and lawyers will insist on compliance as a contractual requirement. Companies that would otherwise have ignored the standard will feel compelled to comply in order to win business. If the contract requires compliance, then the whole development process could be shaped by a damaging testing standard. ISO 29119 could affect anyone involved in software development, and not just testers.

Testing will be forced down to a common, low standard, a service that can be easily bought and sold as a commodity. It will be low quality, low status work. Good testers will continue to do excellent testing. But it will be non-compliant, and the testers who insist on doing the best work that they can will be excluded from many companies and many opportunities. Poor testers who are content to follow a dysfunctional standard and unhelpful processes will have better career opportunities. That is a deeply worrying vision of the future for testing.

The response to my talk

I was astonished at the response to my talk. I was hoping that it would provoke some interest and discussion. It certainly did that, but it was immediately clear that there was a mood for action. Two petitions were launched. One was targeted at ISO to call for the withdrawal of ISO 29119 on the grounds that it lacked consensus. This was launched by the International Society for Software Testing.

The other petition was a more general manifesto that Karen Johnson organized for professional testers to sign. It allows testers to register their opposition to ISTQB certification and attempts to standardize testing.

A group of us also started to set up a special interest group within the Association for Software Testing so that we could review the standard, monitor progress, raise awareness and campaign.

Since CAST 2014, there has been a blizzard of activity on social media that has caught the attention of many serious commentators on testing. Nobody pretends that a flurry of Tweets will change the world and persuade ISO to change course. However, this publicity will alert people to the dangers of ISO 29119 and, I hope, persuade them to join the campaign.

This is not a problem that testers can simply ignore in the hope that it will go away. It is important that everyone who will be affected knows about the problem and speaks out. We must ensure that the rest of the world understands that ISO is not speaking for the whole testing profession, and that ISO 29119 does not enjoy the support of the profession.

ISO – the dog that hasn’t barked

On August 12th I gave a talk at CAST 2014, the conference of the Association for Software Testing (AST) in New York, “Standards; promoting quality or restricting competition.” It was mainly about the new ISO 29119 software testing standard, though I also wove in arguments about ISTQB certification.

I was staggered at the response. Iain McCowatt asked what we should do next. Karen Johnson proposed a petition, which subsequently became two. Iain set up a petition through the International Society for Software Testing (ISST), directly targeted at ISO.

Karen’s petition is a more general manifesto for all professional testers to sign if they agree with its stance on certification and standards.

I strongly commend both the petition and the manifesto to all testers.

Eric Proegler also set up an AST special interest group to monitor and review the issue.

This action was not confined to the conference. In the last three weeks there has been a blizzard of activity and discussion on social media and throughout the testing community. Many people have blogged and spoken out. I gave a brief interview to, and wrote an article for, the uTest blog.

Huib Schoots has drawn together many of the relevant articles on his blog. It’s a valuable resource.

However, my own blog has been embarrassingly silent. I’ve not had time, till now, to write anything here. I’ve seen a big rise in traffic as people have hunted down articles I have previously written. August was the busiest month since I started the blog in 2010.

Significant and sustained opposition

Finally I have had a chance to set some thoughts down. I never expected my talk to get such a huge reaction. The response has not been because of what I said. It has happened because it came at the right time. I caught the mood. Many influential people, whom I respect, realised that it was time to speak out, and to do it in unison.

There has been significant and sustained opposition to ISO 29119 as it has developed over the years. However, it has been piecemeal. Individuals have argued passionately, authoritatively and persuasively against the standard, and they have been ignored.

The most interesting response since my talk, however, has been from the dog that didn’t bark, ISO. Neither ISO nor the 29119 working group has come out with a defence of the standard or the way that it was developed.

They have been accused of failing to provide a credible justification for the standard, of ignoring and excluding opponents, and of engaging in rent-seeking by promoting a factional interest as a generic standard.

And their response? Silence. In three weeks we have heard nothing. In their silence ISO have effectively conceded all the points that the Stop 29119 campaigners have been arguing.

There have been some sporadic and entirely unconvincing attempted defences of the standard, mixed up with the odd risibly offensive ad hominem attack. Collectively the weakness of these defences exposes ISO 29119 for the sham that it is.

Defence #1 – “It’s a standard”

This is possibly the weakest and most wrong-headed argument. ISO are trading on their brand, their image as a disinterested promoter of quality. It is disappointing how many people accept anything that ISO does at face value.

It is important that promoters of any standard justify it, demonstrate that it is relevant to those who will have to use it and that they enjoy a consensus amongst that community. In practice, ISO can spin the argument around.

Once ISO anoints a document with the magic word “standard”, too many people suspend their own judgement. Instead, they expect opponents to justify their opposition by pointing to the specific details of the standard that they object to.

In the absence of such detailed arguments against the standard’s content people feel content with saying “it’s a standard, therefore it is a good thing”. That argument might impress those who know nothing about testing or ISO 29119. It lacks any credibility amongst those who know what they are talking about.

Defence #2 – “It’s better than nothing”

Whoops. Sorry. I have to backtrack rapidly. This argument is even worse than the last one. Something that is lousy is emphatically not better than nothing. Just doing something because it’s better than doing nothing? Oh dear. It’s like being lost in a strange city, without a map, and putting a blindfold over your eyes. Well, why not? You’re lost. You don’t know what you’re doing. Blindfolding yourself has to be better than nothing. Right? Oh dear.

This is the politicians’ fallacy, as explained in the classic comedy “Yes Minister”: something must be done; this is something; therefore we must do it. Thanks to Scott Nickell for reminding me about it (see comments below).

If an organisation’s testing is so disorganised and immature that ISO 29119 might look like the answer then it is ripe ground for a far better approach. Incompetence is not a justification for wheeling in ISO 29119. It is a justification for doing a better job. Next argument please.

Defence #3 – “ISO 29119 doesn’t have to be damaging. It’s possible to work around it.”

The arguments in favour of ISO 29119 aren’t really getting a whole lot more convincing. Sure, you might be able to mitigate some of the worst effects by tailoring your compliance, or ignoring parts that seem particularly unhelpful. However, that is putting effort into ensuring you aren’t worse off than you would be if you did nothing.

Also, if standards and rules are so detailed and impractical that they cannot be followed in full then it exposes people to arbitrary enforcement. Once users start tailoring the standard they will leave themselves open to the accusation that they did not comply in full. If things go well there will be no thanks for tailored compliance.

If there are problems, and there are always problems, then any post mortem will reveal that the standard wasn’t followed in full. The implication will be that the problems were caused by deviation, not that the deviation was the only reason anything worthwhile was achieved. Testers and developers will rightly fear that retribution will be arbitrary and quite unrelated to the level of care and professionalism they brought to their work.

Further, ISO 29119 is being marketed with an appeal to fear, as a way of protecting individuals’ backsides if things go wrong. If managers buy into the standard on that basis, are they really likely to take a chance with tailoring the standard?

ISO 29119 is also being pushed to buyers who don’t understand what good practice in testing means. Are they not likely to play safe by insisting in contracts and internal standards that ISO 29119 is complied with?

No, the possibility that we can “work around it” is not a credible argument in favour of ISO 29119.

Defence #4 – “The dissenters are just self-interested”

The standard is apparently a threat to our “craft” and to our businesses. Well, there’s more to it than that, but even if that were the only objection it would still be an entirely valid one. We have argued strenuously that the standard is anti-competitive. A riposte that we fear it will hit us financially is merely to concede the argument in a graceless way.

Anyway, if I were chasing the money I could have made a lot more, an awful lot more, by taking lucrative test management contracts to do poor quality work managing the testing process on dysfunctional projects.

I can do great documentation. I am literate, organised and have the knack of getting tuned into corporate politics. My ability to churn out high quality, professional looking documents that would get the money in from clients was a great asset to my last employer.

Testing? Sorry, I forgot about that. I thought we were talking about documentation and money.

Defence #5 – “They’re whingers who wouldn’t get involved.”

This point ignores the argument that the ISO process for developing standards is carefully designed to exclude those who disagree. Again, the “whingers” line spins the argument around. It is not our responsibility to justify our failure to get involved with rent-seekers and try to persuade their committees to desist.

I have seen the ISO 29119 working group’s schedule for meetings worldwide over the last few years. In the unlikely event that I would have been allowed to join, my expenses would have wiped out a huge slice of my income. It would certainly have cost me far more than we’ve spent on family holidays in that period. And for what? I’d have sacrificed all that time and money in order to be ignored. Those gallant souls who have tried to fight from the inside have been ground down and spat out by the system. That’s how the system works, how it is intended to work.

No, I didn’t get involved. I had better things to do with my time. In any case, it seems that my campaigning from the outside has not been ignored!

So what now?

We keep going. All this activity has just been the start. ISO is not going to withdraw the standard. However, the blogs, articles and petitions lay down a marker, showing that the standard was not developed according to any plausible definition of consensus, and that it lacks credibility.

The opposition will strengthen the resolve of testers who wish to stop their organisations buying into the standard. It will provide ammunition to those who want to dissuade lawyers and purchasing managers from requiring compliance with ISO 29119.

This one is going to run. We are not going away, and if ISO continue to ignore us then they will simply make themselves look foolish. Do they care? Or are they confident they have enough power and can just sail on regardless?