Sick bureaucracies and mere technicians

Why do organisations do things that appear to be downright irrational? The question is of vital importance to testing because many corporations undertake testing in a way that is ineffective and expensive, following techniques (and standards) that lack any basis in evidence, and indeed which are proven to be lousy. If we are to try and understand what often seems to be senseless we have to think about why organisations behave the way that they do. It’s pointless to dismiss such behaviour as stupid. Big organisations are usually run by very intelligent people, and senior managers are generally acting rationally given the incentives and constraints that they see facing them.

In order to make sense of their behaviour we have to understand how they see the world in which they are competing. Unless you are working for a specialist testing services firm you are engaged in a niche activity as far as senior management is concerned. If they think about testing at all it’s an offshoot of software development, which itself is probably not a core activity.

Whatever wider trends, economic forces or fads are affecting corporations, as perceived by their management, will have a knock-on impact all the way through the organisation, including testing. If these wider factors are making an organisation think about adopting testing standards then it is extremely difficult to argue against them using evidence and logic. It’s hard to win an argument that way if you’re dealing with people who are starting from a completely different set of premises.

I’ve looked at this sort of topic before, concentrating more on individuals and the way they behave in organisations. See “Why do we think we’re different?” and “Teddy bear methods”. Here I want to develop an argument that looks at a bigger picture, at the way big corporations are structured and how managers see their role.

Bureaupathology and the denigration of competence

I recently came across this fascinating paper, “Bureaupathology and the denigration of competence”, by Edward Giblin from way back in 1981. As soon as I saw the title I couldn’t wait to read it. I wasn’t disappointed. Giblin identified problems that I’m sure many of you will recognise. His argument was that big corporations often act in ways that are inconsistent with their supposed goals, and that a major reason is the way the hierarchy is structured and managers are rewarded.

Giblin drew on the work of the German sociologist Claus Offe, who argued against the widely accepted belief that managers are rewarded according to their performance. The more senior the manager, the less relevant specific job knowledge becomes and the more important social skills are. These social skills, and the way they are applied, are harder to assess than straightforward task skills.

Offe distinguished between “task-continuous” organisations and those which are “task-discontinuous”. In a task-continuous organisation the hierarchy, and therefore seniority, is aligned with the core task: the further up the hierarchy you look, the deeper the expertise you find. In a task-discontinuous organisation the higher levels require quite distinct, separate skills.

Good examples of task-continuous organisations would be small family businesses, or firms run by experienced experts with skilled workers and apprentices underneath them. Task-discontinuous organisations are far more common nowadays. Modern corporations are almost invariably run by specialist managers who have only a loose grip on the core activity, if they have any proficiency in it at all. I think this distinction between the two types of organisation has big implications for software testing, and I’ll return to that in my next post.

Social skills trump technical skills

The perceived ability of managers in large corporations depends mainly on their social skills, on how good they are at persuading other people to do what they want. Such abilities are hard to measure, and managers are more likely to be rewarded for their ability to play the corporate game than for driving towards corporate objectives. Conventional performance evaluation and compensation techniques lack credibility; they don’t effectively link performance with rewards in a way that helps the business.

Back in the early 1960s Victor Thompson coined the term bureaupathology to describe organizations in which the smooth running of the bureaucracy took precedence over their real work. In such organizations the bureaucracy is no longer a means to an end; it has become the end. In bureaupathic organizations managers have the freedom and the incentive to pursue personal goals that are at variance with the official corporate goals.

With senior managers pursuing their own agendas, Giblin argued, twin systems of authority emerge in large organizations. There is the official system, exercised by the generalist managers, and there is an unofficial system used by experienced staff relying on “surreptitious behaviour to accomplish the work of the organization”. With much of the real work being carried out informally, even invisibly, the senior managers’ perspective naturally becomes warped.

“As many generalists either didn’t have the technical knowledge to begin with, or allowed it to atrophy over time, they depend solely on the power of their position to maintain their status. Their insecurity… leads to needs that seem irrational from the standpoint of organizational goals, because these needs do not advance these goals. This also leads them, defensively, to denigrate the importance of the technical competence they don’t have.”

Giblin cited the example of a consultancy firm in which the senior partners spent their time on administration and supervision of the professional consultants, with minimal client contact.

“(It) became fashionable among the administrators to refer to outstanding consultants as ‘mere technicians’. This despite the fact that such ‘mere technicians’ were the firm’s only revenue-producing asset.”

Even though these observations and arguments were made several decades ago, the problems will be familiar to many today. I certainly recognise them: a lack of respect for key practitioners, and the informal networks and processes that are deployed to make the organisation work in spite of itself.

Big corporations charge further down the wrong road

Giblin had several suggestions about how organizations could improve matters. These focused on simplifying the management hierarchy and communication, re-establishing the link between managers and the real work, and pushing more decision-making further down the hierarchy. In particular, he advocated career development for managers that ensured they acquired a solid grounding in the corporation’s business before they moved into management.

The reason I found Giblin’s paper so interesting is that it described and explained damaging behaviour with which I was very familiar and it came up with practical, relevant solutions that have been totally ignored. Over the following decades the problems identified by Giblin grew worse and corporations did the exact opposite of what he recommended. This didn’t happen by accident. It was the result of global trends that swamped any attempts individuals might have made to adopt a more rational approach. These trends and their repercussions have had a significant impact on testing, and they will continue to do so. I will explain in my next post.

Principles for Testing?

In June (2015) I gave a talk, “Principles for Testing?” at a one-day conference of SIGIST (the British Computer Society’s Specialist Group in Software Testing) in London. I intended to work my talk up into an article, but haven’t got round to it (yet). In the meantime I’m posting a PDF of the slides (opens in new tab). Unusually for my talks these slides, along with the notes, carry enough of the story to make some sense on their own.

This was the abstract for the talk.

“There has been much debate in recent years about the balance between principles and rules when regulation is framed. Software development and testing are complex activities that do not lend themselves to fixed rules or prescriptive “best practice”. If stakeholders are to be confident that testers will provide value then perhaps we need clear principles against which testing can be evaluated. Testing lacks principles intended to guide and shape behaviour. I will show how this has contributed to some of the confusion and disagreement arising from ISO 29119 and the Stop 29119 campaign. I will also argue that we can learn from the “rules based versus principles based” debate and I will initiate a discussion about what form principles might take for testing, and where we should look for sources for these principles.”

Plenty of people contributed to the discussion, which was interesting but inconclusive. This is something I will have to persevere with. Please get in touch if you want anything clarified, or if you want to discuss these issues.

I used a brief clip from Apollo 13, which doesn’t appear in the PDF. It was an example of a complex problem requiring experimentation, the sort of situation where vague principles are more helpful than fixed rules. Here is a slightly longer version of that clip.

Audit and Agile (part 2)

This is the second part of my article about the training day I attended on October 8th, organised by the Scottish Chapter of ISACA and presented by Christopher Wright.

In the first part I set the scene and explained why good auditors have no problem in principle with agile development. However, it does pose problems in practice, especially for inexperienced auditors, who can find agile terrifyingly vague.

In this second part I’ll talk about how auditors should get involved in agile projects and about testing. The emphasis was very much on Scrum, but the points apply in general to all agile development.

The importance of working together constructively

Christopher emphasised that auditors should be proactive. They should get involved in developments as early as possible and encourage developers to speak to them. Developers are naturally suspicious of auditors. They think auditors “want us to stop having fun”. These are messages I’ve been harping on about ever since I started in audit.

Developers, and testers, make assumptions about what auditors will do, and what they will expect. These assumptions can shape behaviour, for the worse, if they are not discussed with the auditors. That can waste a huge amount of time.

Christopher developed an argument that I have also often made. Auditors can see a bigger picture than developers. They will often have wider experience of what can go wrong, and what controls should be in place to protect the company. Auditors can be a useful source of misuse stories. They can even usefully embed themselves in an agile development team writing stories and tests that should help make the application more robust.

Auditors have to go native to a certain extent, accepting agile on its terms and adapting to the culture. Christopher advised the audience to conform to the developers’ dress code; “lose the tie” and remove any unnecessary barriers between the auditors and the developers. The final tip was one that will resonate with context driven testers. “Focus on people and product – not paperwork”.


Discussion of testing comprised just one, relatively small, part of the day. Obviously I would have preferred more time. However, the general guidance throughout the day about working with agile applies just as well to auditing agile testing. Auditors who have absorbed those general lessons should be able to handle such an audit.

Christopher did have a couple of specific criticisms of agile testing. He thinks the standard of testing on agile projects is generally fairly poor, though he did not offer any comparison with more traditional testing. He also expects to see testing that is repeatable, and wants to see more automated testing where possible. Christopher observed that too few projects develop repeatable, automated tests for regression testing. He is probably right on that point, but I’m not sure that it is really just an agile problem.

Traditional projects were often planned and costed in a way that gave the test manager little incentive to make an investment for the future by automating tests. The difference under an agile regime is that a failure to invest in appropriate automation is likely to create problems for the current project rather than leave them lurking for the future support team.

In addition to his comments on automation Christopher’s key points were that auditors should look for evidence of appropriate planning and preparation, and evidence of the test results. There might not be a single, standard, documented agile development method, but each organisation should be clear and consistent about how it does agile.

Christopher did use the word “scripts” a lot, but he made it clear that auditors should not expect an agile test script to be as detailed and prescriptive as a traditional script; it shouldn’t go down to the level of specifying the values to go into every field. Together with the results the script should allow testing to be recreated. The auditor should be able to see exactly what was planned, what was tested and what was found.


The day was interesting and very worthwhile. It was reassuring to see auditors being encouraged to engage with agile in an open-minded and constructive manner. It was also encouraging to see auditors responding positively to the message, even if the reality of dealing with agile is rather frightening for many of them. Good auditors are accustomed to being daunted by the prospect of difficult jobs; coping with that is a yardstick of a good auditor.

I don’t have a great deal of sympathy with auditors who shy away from auditing agile projects because it’s too difficult, or with those who bring inappropriate prejudices or assumptions to the audit. Auditing is like testing; it isn’t meant to be easy, it’s meant to be valuable.

Internal auditors should not be aliens who beam down to a project to beat up the developers and testers, then shoot off to their next assignment leaving bruised and hurt victims. I’m afraid that is how some auditors have worked, but good auditors are broadly on the same side as developers and testers. They are trying to achieve the same ends, but they have a different perspective. They should have wider knowledge of the context in which the project is working, and they should have a bleaker view of reality and of human nature. They should know what can go wrong, how that can happen, and what might be effective and efficient ways of preventing it.

Following the happy path is a foreign concept to good, experienced auditors. Their path is a narrow one. They strive to warn of the unseen dangers lurking all around while also providing constructive input, all the time maintaining sufficient independence so that they can comment dispassionately and constructively on what they see. As I’ve said, it’s not easy.

Auditors and testers should resist any attempts to redefine their difficult jobs to make them appear easier. Such attempts require a refusal to deal with reality, and a pretence, a delusion, that we can do something worthwhile without engaging with a complex and messy world.

Testing and auditing are both jobs that it is possible to fake, going through the motions in a plausible manner while producing nothing of value. That approach is easier in the short run, but it is deeply short-sighted and irresponsible. It trashes the credibility and reputation of practitioners, it short-changes people who expect to receive valuable information, and it leaves both testers and auditors open to being replaced by semi-skilled competition. If you’re doing a lousy job and focusing on cost, there is always someone who can do it cheaper.

Audit and Agile (part 1)

Training in stuff I know

Last Thursday, October 8th, I went to a training day organised by the Scottish Chapter of ISACA, the international body representing IT governance, control, security and audit professionals. The event was entitled “Audit and Control of Agile Projects”. It was presented by Christopher Wright, who has over thirty years’ experience in IT audit and risk management.

This is a topic about which I am already very well informed but I was keen to go along for a mixture of reasons. When discussing auditing and the expectations of auditors we are not dealing with absolutes. There is no hard and fast right answer. We can’t say “this interpretation is correct and that one is wrong”. Well, we could, but that ignores the possibility that others might disagree, and they might be auditors who are conducting audits on a basis that we believe to be flawed.

It is therefore important to think deeply about, and understand, the approach that we believe to be appropriate, but also to consider alternative opinions, who holds them and why. Consultants offering advice in this area have to know the job, but they must also be able to advise clients about other approaches. We have to stay in touch with what other people are saying and doing, regardless of whether we agree with them.

I was keen to hear what Christopher Wright had to say, and also to talk to other attendees, who work for a wide range of employers. Happily the event was free to members, which made the decision to attend even easier!

As it transpired I didn’t hear anything from Christopher that was really new to me, but that was reassuring. His message was very much aligned with the advice I’ve been pushing over the last few years to clients, in my writing and in training tutorials. What was also reassuring was that the attendees were receptive to Christopher’s message and there was no sign of lingering attachment to older, traditional ways of working.

I don’t want to cover everything Christopher said. I will just focus on a few points that I thought were key. I will split them into two posts. I will try to make it clear when I am offering my own opinion rather than reporting Christopher’s views.

Agile is potentially an appropriate response to the problems of software development

Firstly, and crucially, agile can be entirely consistent with good governance. The “can be” is an important qualification: you can do agile well or badly. It is not a cop-out by organisations that couldn’t make traditional methods work; in many situations it is the most appropriate response. Christopher ran through a quick discussion of the factors involved in deciding whether to go with an agile or a traditional, waterfall approach. His views reflected orthodox, current software development thinking rather than a grudging auditor’s acceptance of what has been happening in development circles.

Put simply, Christopher argued that a waterfall approach is fine when the problem is stable and the solution is well understood and predictable. If we know at the outset, with justified confidence, what we are going to do then there’s no problem with waterfall, indeed we might as well use it because it is simpler to manage. We are dealing with a well defined problem and we are not going to be swamped by a succession of changes late in the project. I would argue that that is usually not the case when we are developing new software intended to provide a new solution to a problem.

If the problem is not trivial then it is unlikely that we can understand it fully at the start of the project; we can only build our understanding once the project is well underway. Agile is appropriate in these circumstances. We need to learn what is needed through a process of iteration, experimentation and discovery.

We had a brief discussion about predictable solutions. I would happily have spent much longer mulling over the idea of predictability. My view is that software development has been plagued throughout its history by a yearning for unattainable predictability. That has been damaging because we have pretended that we could predict solutions and project outcomes when the reality has been that we didn’t know, and couldn’t have known at the time. The paradox is that if we try to define a predictable outcome too early that pushes back the point when we truly can predict with confidence what is required and possible.
Mary Poppendieck made the point very well back in 2003 with an excellent article “Lean Development and the Predictability Paradox” (PDF, opens in new window).

Agile is scary for auditors

Christopher made some important points about why many traditional auditors are suspicious of agile, and I was very much in agreement with him. There are many agile methods, and none of them are rigidly defined or standardised. Auditors can’t march into a project with a checklist and say “you’ve not produced a burndown chart in the form prescribed by… ”.

The auditors have to base their work on the Agile Manifesto, but they need to read and understand it carefully. It is not a matter of choosing either working software or comprehensive documentation, but valuing the former over the latter. Auditors have to satisfy themselves that an appropriate balance is being struck, and this requires judgment and experience. The Agile Manifesto can guide them to ask useful questions, but it can’t provide them with answers.

Crucially, auditors have to ask open questions when they are auditing an agile project. This is one of my hobby horses, and I have often written and spoken about it. Auditors must have highly developed questioning skills. They need the soft skills that will allow them to draw out the information they need from auditees. They must not rely on checklists that encourage them to ask simplistic questions inviting simple, binary answers: yes or no.

Christopher told a sadly plausible story of an audit team that reviewed an agile project and produced a report listing 50 detailed problems. The auditors had no experience with agile and had conducted the audit using a PRINCE2 checklist designed for waterfall projects.

This might seem a ludicrously extreme example, but I’ve seen similar appallingly unprofessional and incompetent behaviour by auditors. I believe that when this happens the best outcome is that the auditors have wasted everyone’s time, failed to do anything useful and trashed their credibility.

What is far more damaging, in my opinion, is a situation where the auditors have real power, regardless of their competence, and auditees start to tailor their work to fit the prejudices and assumptions of the auditors. Managers start to do things they know are wrong, wasteful and damaging because they fear criticism from auditors. Clients become infuriated because the supplier staff are ignoring their needs and focusing on appeasing the auditors.

In this scenario the auditors are taking commercial decisions, but are not accountable for the consequences. They are probably not even aware that this is what they are doing. They lack the experience, knowledge and insight to understand the damage they are doing.

I therefore agree with Christopher that auditing IT projects is no job for the inexperienced. Closed questions are dangerous, and should be used only to confirm information that has already been elicited. The auditor must not ask “do you have a detailed stage plan?”, but ask “can you show me how you plan the work?”. The former question might simply produce a “no”. The latter question will allow the auditor to see, and assess, how the planning is being performed. Auditees are naturally suspicious of auditors and many people will offer as little information as they can. Allowing them to answer with a curt yes or no helps nobody.

Of course, the problem for inexperienced auditors is that if they ask open-ended questions they have to understand and interpret the answers, then vary their follow-up depending on what they are told. That can be difficult. Well, tough. Auditing isn’t meant to be easy. Like testing, it should offer valuable information, and redefining the job to make it easy but pointless is deeply unprofessional.

And in part two…

I am splitting this article into two parts and will post the second one as soon as possible. You could easily write a book about this topic, as indeed Christopher has already done. It’s called “Agile Governance and Audit”.

In part two I cover how auditors can work constructively with agile teams, and I also discuss testing. I wanted to set the scene in part one before moving on to testing. I don’t think it’s worth treating testing in isolation, and it’s useful to understand first where modern auditors are coming from. The way they should approach testing follows on naturally from the way that they engage with agile in general.

Audit, testers & whistleblowers – further thoughts on the Volkswagen scandal

After writing my first blog on the Volkswagen emissions scandal I thought I should expand and explain why auditors would have a responsibility to act. I think many people are still sceptical about whether employees should involve internal auditors if they have concerns about what they are being asked to do.

Modern internal auditors should not be focusing on whether processes have been followed, but on the big risks that keep the board awake at night. Normally, internal auditors would base their audits on an assessment of the risks they had identified. However, they would also act if risks were flagged up to them. Internal auditors might not have a formal role operating a whistleblowing process (PDF, opens in a new tab), though they would certainly be able to steer whistleblowers in the right direction and ensure that their concerns were acted upon. Once internal auditors are involved they have a responsibility to ensure that serious concerns are not simply ignored or dropped. Their place in the organisation means they have the ability, and the power, to persevere if they have valid concerns.

Internal auditor – power through independence

Internal auditors are in a powerful position because they are independent of the normal management hierarchy. They are accountable to the board. Good internal auditors cannot be intimidated by the threat to go over their heads; they know that is a bluff. If internal auditors have a concern they will raise it with senior management. If the concern is not addressed, and it is sufficiently serious, then they have the right and the duty to escalate their concern all the way to the board, where there should be non-executive directors who are not involved in the management of the corporation. The VW scandal would certainly have been sufficiently serious to require internal audit to escalate to the very top – if they had been aware of what was happening.

Internal auditors have to report on significant risks that affect the corporation. The risk to which VW was exposed by the emissions cheating was obviously massive. Just look at the consequences of the scandal being exposed. Also, consider how likely they were to be caught. There was always a serious risk of that because independent emissions testers who were checking emissions during road running would obtain dramatically different results from the official tests. That is how VW were caught. So a risk with dramatic, adverse consequences, and a significant probability of being realised, would be off the scale of any risk assessment.
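As a rough illustration of why I say the risk would be off the scale, here is a minimal sketch of the sort of probability-impact scoring that many risk assessments use. The 1–5 scales and the example scores are my own illustrative assumptions, not taken from any particular audit methodology.

    # Illustrative probability-impact scoring, as used in many risk assessments.
    # The 1-5 scales and the example scores below are my own assumptions.

    def risk_score(probability: int, impact: int) -> int:
        """Both factors scored 1 (low) to 5 (high); the product drives priority."""
        assert 1 <= probability <= 5 and 1 <= impact <= 5
        return probability * impact

    # A routine project risk: moderately likely, moderate impact.
    routine = risk_score(probability=3, impact=3)             # 9

    # Emissions cheating: detection by independent road testing was highly
    # likely sooner or later, and the consequences were catastrophic.
    emissions_cheating = risk_score(probability=5, impact=5)  # 25, top of the scale

    print(routine, emissions_cheating)

On any scoring scheme like this, the cheating sits in the corner of the matrix that demands escalation.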

So, in principle, the situation is clear for internal auditors who discover, or are tipped off about illegal behaviour. Breaking the law introduces big risks and the auditors have a duty to act, regardless of any ethical considerations.

Internal audit and ethical concerns

The situation is murkier if it is a strictly ethical issue. Internal auditors might have personal views, but they wouldn’t necessarily have a professional duty to act. As a rule of thumb I would describe ethical issues that require audit interest as being those which concern actions that are not illegal but which would be difficult to defend in public. They would entail some reputational damage. One possibility is developing software that is quite legal where it is being developed and tested, but which is intended for use in a jurisdiction where it would be illegal. Alternatively, using the software might actually be illegal in that country, but developing it would be within the law and the company is intending to sell it or use it elsewhere.

Another possibility is “creative compliance”, where software is intended to exploit a loophole and defeat the ends of regulation. That could be particularly dodgy, because it could rest on a mistaken interpretation of the law and be genuinely illegal, or it might expose the company to very damaging publicity, or to damaging legal action, before it could be established that it wasn’t illegal. These are all the sorts of things in which I believe auditors would have a legitimate interest.

It’s hard to say where auditors or testers should draw the line. I wouldn’t expect either to have any responsibility to act on the sort of routine usability tricks that some companies get up to: website features that designers know will trick users into selecting add-ons or more expensive purchases. In usability circles these are known as dark patterns. It’s an interesting subject. I find the use of these tricks distasteful, and wouldn’t want to be involved, but that is a personal judgment rather than a professional one. Auditors would have a responsibility to get involved if the dark patterns edged over into fraud, or if there was a serious risk of damaging publicity. Companies that use them, but stay on the right side of the law, are generally known for that sort of behaviour and have decided to live with the image. I’m not naming any airlines!

VW and the testing role

I have been wondering since I wrote my previous blog whether I have been unduly harsh on the VW testers in the absence of clear evidence. That would be a fair charge if I had named testers, or if I had suggested there had been a specific failure at a certain time and place. I stand by my belief that there was a moral and professional failure on the part of the VW testing community, a failure of the testing role. Given the complexity of software development in a large corporation it seems quite possible that the engine control software was assembled and tested over a lengthy period, in such a way that it would be unreasonable to pin a charge of negligence on any individual person or even team.

Nevertheless, my understanding of testing is that it should provide an assessment of what the product does, and that the testing role should enjoy some independence, or at least protection, from project management and development. If there had been such an assessment of the full functionality of the engine control software then the managers responsible for software testing would have known about the illegal defeat device.

I believe that the testers would then have had a duty to raise concerns with development management and, if they did not receive a satisfactory response, escalate the matter to internal audit, or to the compliance professionals, who would have had a clear legal responsibility to act. Whether the testers’ duty to report their concerns was an ethical or a legal duty is debatable, and that may well be argued in court. My personal stance is that testers should always have open access to internal audit. Internal auditors have to report on the risks that scare the top people, the risks that would keep them awake at night. If testers uncover information about such risks can there be any defence for them if they keep quiet? And if they fail to find out anything about big risks that are present in the software, then what is the point of testing?

The Volkswagen emissions scandal; responsible software testing?

The scandal blows up in Volkswagen’s face

The Volkswagen emissions scandal has been all over the media worldwide since the US Environmental Protection Agency hit VW with a notice of violation on 18th September.

This is a sensational story and there are many important and fascinating aspects to it, but there is one angle I haven’t seen explored that intrigues me. Many of the early reports focused on the so-called “defeat device” that the EPA referred to. That gave the impression the problem was a secret, discrete piece of kit hidden away in the engine. A defeat device, however, is just EPA shorthand for any illegal means of subverting its regulations: a mechanism that alters the emissions controls in normal running, outside a test. In the VW case the device is code within the engine control software that could detect the special conditions under which emissions testing is performed. This is how the EPA reported the violation in its formal notice.

“VW manufactured and installed software in the electronic control module (ECM) of these vehicles that sensed when the vehicle was being tested for compliance with EPA emission standards. For ease of reference, the EPA is calling this the ‘switch’. The ‘switch’ senses whether the vehicle is being tested or not based on various inputs including the position of the steering wheel, vehicle speed, the duration of the engine’s operation, and barometric pressure. These inputs precisely track the parameters of the federal test procedure used for emission testing for EPA certification purposes.

During EPA emission testing, the vehicles’ ECM ran software which produced compliant emission results under an ECM calibration that VW referred to as the ‘dyno calibration’ (referring to the equipment used in emissions testing, called a dynamometer). At all other times during normal vehicle operation, the ‘switch’ was activated and the vehicle ECM software ran a separate ‘road calibration’ which reduced the effectiveness of the emission control system.”

What did Volkswagen’s testers know?

What interests me about this is that the defeat device is integral to the control system (ECM); the switch has to operate as part of the normal running of the car. The software is constantly checking the car’s behaviour to establish whether it is taking part in a federal emissions test or just running about normally. The testing of this switch would therefore have been part of the testing of the ECM. There’s no question of some separate piece of kit or software over-riding the ECM.
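To make the mechanism concrete, here is a minimal sketch of what such a “switch” might look like. To be clear, this is not VW’s code; the input names, thresholds and calibration labels are my own invented illustrations, based only on the inputs listed in the EPA notice.

    # Purely illustrative sketch of a "defeat device" switch. NOT VW's code;
    # the inputs, thresholds and calibration names are hypothetical, based
    # only on the inputs named in the EPA notice of violation.

    from dataclasses import dataclass

    @dataclass
    class SensorInputs:
        steering_angle_deg: float       # steering wheel position
        vehicle_speed_kmh: float
        engine_runtime_s: float         # duration of engine operation
        barometric_pressure_hpa: float

    def looks_like_dyno_test(s: SensorInputs) -> bool:
        """Guess whether the driving pattern matches the tightly specified
        federal test cycle (thresholds invented for illustration)."""
        return (
            abs(s.steering_angle_deg) < 1.0    # wheel barely moves on a dynamometer
            and s.vehicle_speed_kmh < 120.0
            and s.engine_runtime_s < 1400.0    # the test cycle has a fixed duration
            and 900.0 < s.barometric_pressure_hpa < 1100.0
        )

    def select_calibration(s: SensorInputs) -> str:
        # The crucial point: this branch runs as part of normal ECM operation,
        # so any thorough test of the ECM would exercise it.
        return "dyno_calibration" if looks_like_dyno_test(s) else "road_calibration"

The only point of the sketch is that the branch sits in the ECM’s normal control flow, which is why I find it so hard to believe it could have escaped the attention of anyone testing the software properly.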

This means the software testers were presumably complicit in the conspiracy. If they were not complicit then that would mean that they were unaware of the existence of the different dyno and road calibrations of the ECM. They would have been so isolated from the development and the functionality of the ECM that they couldn’t have been performing any responsible, professional testing at all.

Passing on bad news – even to the very top

That brings me to my real interest. What does responsible and professional testing mean? That is something that the broadly defined testing community hasn’t resolved. The ISTQB/ISO community and the Context Driven School have different ideas about that, and neither has got much beyond high level aspirational statements. These say what testers believe, but don’t provide guiding principles that might help them translate their beliefs into action.

Other professions, or rather serious, established professions, have such guiding principles. After working as an IT auditor I am familiar with the demands that the Institute of Internal Auditors makes on the profession. If internal auditors were to discover the existence of the defeat device then their responsibility would be clear.

Breaking the law by cheating on environmental regulation introduces huge risk to the corporation. The auditors would have to report that and escalate their concern to the Audit Committee, on which non-executive directors should sit. In the case of VW the Audit Committee is responsible for risk management and compliance. Of its four members one is a senior trade union official and another is a Swedish banker. Such external, independent scrutiny is essential for responsible corporate governance. The internal auditors are accountable to them, not the usual management hierarchy.

Of course escalation to the Audit Committee would require some serious deliberation and would be no trivial matter. It would be the nuclear option for internal auditors, but in principle their responsibility is simple and clear; they must pursue and escalate the issue or they are guilty of professional misconduct or negligence. “In principle”; that familiar phrase that is meaningless in software testing.

If internal auditors had detected the ECM defeat device they might have done so when conducting audit tests on the software as part of a risk-based audit, having decided that the regulatory implications meant the ECM was extremely high-risk software. However, it is far more likely that they would have discovered it after a tip-off from a whistleblower (as is often the case with serious incidents).

What is the responsibility of testers?

This takes us back to the testers. Just what was their responsibility? I know what I would have considered my moral duty as a tester, but I know that I would have left myself in a very vulnerable position if I had been a whistleblower who exposed the existence of the defeat device. As an auditor I would have felt bullet proof. That is what auditor independence means.

So what should testers do when they’re expected to be complicit in activities that are unethical or illegal or which have the whiff of negligence? Until that question is resolved and testers can point to some accepted set of guiding principles then any attempts to create testing standards or treat testing as a profession are just window dressing.

Addendum – 30th September 2015

I thought I’d add this afterthought. I want to be clear that I don’t think the answer to the problem would be to beef up the ISTQB code of ethics and enforce certification on testers. That would be a depressingly retrograde step. ISTQB lacks any clear and accepted vision of what software testing is and should be. The code of ethics is vague and inconsistent with ISTQB’s own practices. It would therefore not be in a credible position to enforce compliance, which would inevitably be selective and arbitrary.

On a more general note, I don’t think any mandatory set of principles is viable or desirable under current and foreseeable circumstances. By “mandatory” I mean principles to which testers would have to sign up and adhere to if they wanted to work as testers.

As for ISO 29119, I don’t think that it is relevant one way or another to the VW case. The testers could have complied with the standard whilst conspiring in criminal acts. That would not take a particularly imaginative form of creative compliance.

I have followed up this article with a second post, written on 7th October.

Sarbanes-Oxley & scripted testing

This post was prompted by an article from 2013 by Mukesh Sharma that Sticky Minds recycled this week. I disagree with much of the article, about exploratory and scripted testing and about the nature of checklists. However, I’m going to restrict myself to Mukesh Sharma’s comments about regulatory compliance, specifically Sarbanes Oxley.

“In such (regulatory) scenarios the reliance on scripted testing is heavy, with almost no room for exploratory testing. Other examples include testing for Sarbanes-Oxley… and other such laws and acts, which are highly regulated and require strict adherence to defined guidelines.”

Let’s be clear. The Sarbanes-Oxley legislation does not mention software testing, never mind prescribe how it should be performed. It does mention testing, but this is the testing that auditors perform. Standards and quality control also feature, but these relate to the work of accountants and auditors.

Nevertheless, compliance with Sarbanes-Oxley does require “strict adherence to defined guidelines”, but this is a requirement inferred from the legislation rather than stated in the law itself. The guidelines with which software testers must comply are locally defined testing policies and processes. Each compliant organisation must be able to point to a document that says “this is how we test here”. The legislation does have plenty to say about guidelines, but these are guidelines for sentencing miscreants. I suppose the serious consequences of non-compliance go a long way towards explaining the over-reaction to Sarbanes-Oxley.

I suspect the pattern was that companies and consultants looked at how they could comply by following their existing approach to development and testing, then reinforced that. Having demonstrated that this would produce compliance, they claimed it was the way to comply. Big consultancies have always been happy to sell document-heavy, process-driven solutions because this gives them plenty of opportunity to wheel out inexperienced young graduates to do the grunt work of tailoring the boilerplate templates and documents.

I used to detest Sarbanes-Oxley, but that was because I saw it as reinforcing damaging practices. I’m still hardly a fan, but I eventually came to take a more considered view because it doesn’t have to be that way. If you look at what the auditors have to say about Sarbanes-Oxley you get a very different perspective. ISACA (the professional body for IT auditors) provides a guide to SOX compliance (free to members only) and it doesn’t mention scripts at all. Appropriate test environments are a far bigger concern.

ISACA’s COBIT 5 model for IT governance (the full model is free to members only) doesn’t refer to manual test scripts. It does require testers to “consider the appropriate balance between automated scripted tests and interactive user testing”. For manual testing COBIT 5 prefers the phrase “clearly defined test instructions” to “scripts”. The requirement is for testers to be clear about what will be done, not to document traditional test scripts in great detail in advance. COBIT 5 is far more insistent on the need to plan your testing carefully, have proper test environments and retain the evidence. You have to do all that properly; it’s non-negotiable.

COBIT 5 matters because if you comply with that then you will comply with Sarbanes-Oxley. Consultancies who claim that you have to follow their heavyweight, document driven processes in order to comply are being misleading. You can do it that way, just like you could drive from New York to Miami via Chicago. You get there in the end, but there are better ways!

Exploratory testing, Context Driven Testing and Bach & Bolton’s Rapid Software Testing are all consistent with the demands of Sarbanes-Oxley compliance, provided you know what you’re doing and take the problem seriously, caveats that apply to any testing approach. If anyone tells you that Sarbanes-Oxley requires you to test in a particular way, challenge them to quote the relevant piece of legislation or an appropriate auditor’s interpretation. You can be sure that it’s a veiled sales pitch – or they don’t know what they are talking about. Or both, perhaps!