What does the audit profession have to say about testing standards?

Yesterday I attended a half-day meeting of the Scottish Testing Group in Edinburgh. As ever it was an enjoyable and interesting event that allowed me to catch up with people, meet new folk and listen to some thought-provoking talks.

The first talk was by Steve Green. He started with a brisk critique of the ISO 29119 software testing standard, then followed up with his vision of how testing should be performed, based on an exploratory approach.

Steve’s objections to ISO 29119 were as follows.

  1. It is based on the flawed assumption that testing is a mature profession.
  2. ISO’s claim that consensus has been achieved is unjustified.
  3. It fosters the illusion that burying people in documentation produces “thoroughness” in testing.
  4. It is a documentation standard, not a testing standard.
  5. It will be mandated by organisations that do not understand testing; indeed, it is targeted at such organisations.
  6. It misunderstands and misrepresents exploratory testing.
  7. It reinforces the factory model of testing, recommending the separation of test design from test execution, with no feedback loop.
  8. “It emphasises ass coverage over test coverage” (Michael Bolton).

After this section of the talk Steve took questions, and I joined him in answering a few. The question that intrigued me most followed my observation that, when you plough through the detailed guidance from ISACA (the Information Systems Audit and Control Association) and the IIA (the Institute of Internal Auditors), there is nothing whatsoever to support the use of ISO 29119, IEEE 829 or testing standards in general.

One tester asked if there was anywhere she could go that authoritatively demonstrated the lack of a link. Is there anything that maps the various documents against each other and demonstrates that they don’t align? In effect she was asking for something that says ISO 29119 requires A, B & C, but ISACA & IIA do not require them, and say instead X, Y & Z.

There is no such reference point, and that’s hardly surprising. Neither ISACA nor the IIA has anything to say about the relevance of astrology to testing either. They simply ignore it. The only way to establish the lack of relevance is to plough through all the documentation and see that there is nothing to require, or even justify, the use of ISO 29119.

I’ve done that for all the ISACA and IIA material as preparation for my tutorial on auditors and testing at EuroSTAR last year, and I’ve just checked again prior to repeating the tutorial at STARWEST next week.

However, it’s one thing sifting through all the guidance to satisfy myself and be able to state confidently that testing standards are irrelevant to auditors. It’s another matter to demonstrate that objectively to other people. It’s harder to prove a negative than it would be to prove that links exist. It might be worthwhile taking the work I’ve already done a bit further so that testers like the one who asked yesterday’s question could say to managers, “ISO 29119 is irrelevant to the auditors, and here’s the proof”. I intend to do that.

ISACA’s guidance material is particularly important because it’s all tied up in the COBIT 5 framework, which is aligned to the requirements of Sarbanes-Oxley (SOX). So if you comply with COBIT 5 then you can be extremely confident that you comply with SOX.

Maybe it’s because I was an auditor and I have an inflated sense of the importance of audit, but I think it was extremely ill-advised for ISO to develop a software testing standard while totally ignoring what ISACA were doing with COBIT 5. The ISO working group might argue that ISO 29119 is new and that the auditors have not caught up. However, the high-level principles of both COBIT 5 and the IIA guidance, and the detailed recommendations within COBIT 5, are all inconsistent with the approach that ISO 29119 takes.

To quote ISACA:

“COBIT … concentrates on what should be achieved rather than how to achieve effective governance, management and control….

Implementation of good practices should be consistent with the enterprise’s governance and control framework, appropriate for the organisation, and integrated with other methods and practices that are being used. Standards and good practices are not a panacea. Their effectiveness depends on how they have been implemented and kept up to date. They are most useful when applied as a set of principles and as a starting point for tailoring specific procedures. To avoid practices becoming shelfware, management and staff should understand what to do, how to do it and why it is important.”

That quote comes from the introduction to COBIT 4.1, but the current version moves clearly in the direction signposted here.

You could comply with ISO 29119 and still be compliant with COBIT 5, but why would you? You can achieve compliance with COBIT 5 without the bureaucracy and documentation implied by the standard, and astute auditors might well point out that excessive and misplaced effort was going into a process that produced little value.

It’s not that either ISACA or the IIA ignores ISO standards. They refer to many of them in varying contexts. They just don’t refer to testing standards. Why? The simple answer is that ISO have taken years to produce a brand new standard that is now utterly irrelevant to testers, developers, stakeholders and auditors.

3 thoughts on “What does the audit profession have to say about testing standards?”

  1. For brevity I kept my list of objections to ISO 29119 to just 8, but I could have added a lot more. For example:

    9. There is no evidence that following the standard improves the efficiency or effectiveness of the testing. I suspect that the opposite will be the case, because the emphasis on documentation will divert finite resources away from testing. It does maximise revenue for test consultancies that are cynical enough to persuade their clients to implement it, but that could not possibly have been the intention of the authors. Or could it?

    10. There was (and remains) no demand for the standard. No clients were asking for it, nor were any test professionals. One can only speculate about the motivation for its creation, but it is easy to conclude that it is purely commercial – a means for a small group of consultants to carve off a lucrative portion of the testing market.

    11. Whilst anyone could apply to join a working group to influence the contents of the standard, there was no mechanism for preventing its creation. Our fundamental objection is that it should not exist at all.

    12. Dissenters on the working groups were ignored and their only choice was to put their name to a standard they didn’t believe in or to leave the group. Over the years most did the latter.

    13. The standard is virtually unreadable – you have to translate every sentence into real English in order to interpret what it means. I’ve read a lot of standards and this is by some way the worst-written of them all.

    14. The emphasis on metrics for managing projects. Annex B of Part 1 states “all test efforts need to define and use metrics” and lists several really bad ones, including “Defect detection percentage” (a sketch of how that metric is usually calculated follows this list). That statement is demonstrably untrue, and I would fire anyone I caught using this particular metric.

    I have never seen any software testing metric that is statistically valid, and metrics are a terrible way to measure progress. Software development is NOT manufacturing.

    15. Page 13 of Part 1 states “Testing is a process”. No it isn’t. Checking is (or can be) a process. Testing is an investigation – it’s predominantly skills-based with just enough process to be useful. This misunderstanding pretty much undermines everything that follows.

    16. The excessive focus on documentation risks bringing the testing profession into even further disrepute. It is already seen as being inefficient and ineffective and this will just make matters worse.
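
    For context, the “Defect detection percentage” metric mentioned in point 14 is usually formulated as the proportion of all defects eventually found that were found by the test effort. The sketch below shows only that common textbook formulation; the function and variable names are illustrative and are not taken from ISO 29119 or the COBIT material.

        # Rough sketch of Defect Detection Percentage (DDP) as commonly defined:
        # defects found by the test effort as a percentage of all defects
        # eventually found, including those that escaped into live use.
        # Names are illustrative only.
        def defect_detection_percentage(found_in_test, found_after_release):
            total = found_in_test + found_after_release
            if total == 0:
                return 0.0  # no defects recorded anywhere, so the ratio is meaningless
            return 100.0 * found_in_test / total

        # Example: 80 defects found in test and 20 found later gives a DDP of 80%.
        print(defect_detection_percentage(80, 20))  # 80.0

    Note that the denominator depends on defects that only surface after release, so the figure cannot be finalised until long after testing has ended.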
