Not “right”, but as good as I can do

This is probably the last in the series of articles running up to my upcoming EuroSTAR tutorial “Questioning auditors questioning testing”.

In my last blog post I explained my concerns about the way that testers have traditionally adopted an excessively positivist attitude, i.e. they conducted testing as if it were a controlled scientific experiment that would allow them to announce definite, confident answers.

I restricted that article to my rejection of the positivist approach. However, I need to follow it up with my explanation of why I can’t accept the opposite position, that of the anti-positivist or interpretivist. The interpretivist would argue that there is no single, fixed reality. Everything we know is socially constructed. Researchers into social activities have to work with the subjects of the research, learning together and building a joint account of what reality might be. Everything is relative. There is no single truth, just truths.

I don’t think that is any more helpful than rigid positivism. This article is about how I groped for a reasonable position that avoided the extremes.

Understanding, not condemning

My experience as an auditor forced me to think about these issues. That shaped my attitudes to learning and uncertainty when I switched to testing.

I’ve seen internal auditors taking what I considered a disastrously unprofessional approach largely because they never bothered to consider such points. Their lack of intellectual curiosity meant that they unwittingly adopted a rigid scientific positivist approach, totally unaware that a true scientist would never start off by relying on unproven assumptions. That rigid approach was the only option they could consider. That was their worldview. Anything that didn’t match their simplistic, binary view was wrong, an audit finding. They’d plough through audits relying to a ludicrous extent on checklists, without acquiring an adequate understanding of the business context they were trampling around in. They would ask questions that required yes/no answers, and that was all they would accept.

The tragedy was that the organisation where I saw this was in fear of corporate audit, and so the perspective of the auditors shaped commercial decisions. It was better to lose money than to receive a poor audit report. This fear reinforced the dysfunctional approach to audit. Sadly that approach deskilled the job. Audits could be performed by anyone who was literate, so it became a low status job. Auditors were feared but not respected, a dreadful outcome for both audit and the whole company.

Prior to this I had worked as an auditor in a company that had adopted a risk based approach before it became the norm. Compliance, and simple checks against defined standards and controls, were seen as interesting but of limited value. We had to consider the bigger picture, understand the reasons for apparent non-compliance and try to explain the significance of our findings.

The temptations that can seduce a novice

At first I found all this intimidating. The temptation for a novice is to adopt one of two extremes. I could either get too close to the subjects of my audits and accept the excuses of IT management, which would have been very easy given that I’d just moved over from being a developer and expected to resume my career in IT one day. Or I could veer to the other extreme and rely on formal standards, processes and predefined checklists all of which offer the seductive illusion of a Right Answer.

This is essentially the dichotomy between interpretivism and positivism. Please don’t misunderstand me. I am not rejecting either outright. I am not qualified to do that. I just don’t believe that either is a helpful worldview for testers or auditors.

I knew I had to avoid the extremes and position myself in the middle. This was simply being pragmatic. I had to provide an explanation of what was going on, and reach conclusions. This required me to say that certain things could and should be done better. Senior management were not helped by simplistic answers that didn’t allow for the context, but neither were they interested in waffle that failed to offer helpful advice. They were paying me to make informed judgements, not make excuses.

Maybe not “right”, but as good as I can do

It took me a couple of years to feel comfortable and confident in my judgement. Where should I position myself on any particular complicated problem? I never felt convinced I was right, but I learned to be confident that I was taking a stance that was justifiable in the circumstances and reflected the context and the risks. I learned to be comfortable with “this is as good as I can do”, and take comfort that this was far more valuable to my employers than being either a hardline positivist or interpretivist.

As I conducted each audit I would fairly quickly build up a rough picture, a model, of what I thought should be happening. This understanding was always provisional, capable of being tested and revised all the way through the audit in a constant cycle of revision. Occasionally the audit would have to be aborted because it quickly revealed some fundamental problem that we had not envisaged. In such cases there was no point pursuing the detail in the audit plan. It would have been like reviewing the decoration of a house when the roof had been blown off.

On other occasions our findings were radically different from those that we had expected. We had been sent in because of particular concerns, but the apparent problems were merely symptoms of a quite separate problem with roots elsewhere. It was essential that we had the mental flexibility and openness to realise that what we thought we knew was wrong.

The dangers of the extremes

I’m not comfortable positioning myself, and the learning approach that we took, in any particular school between positivism and relativism. It would be interesting to pin it down but working out the right label isn’t a priority. The important point, for my attitude towards auditing and testing, is that I have to be wary of the dangers inherent in the extremes.

In testing, the greater and more obvious danger is that of positivism, or rather an excessive regard for its validity and relevance in this context. In auditing, relativism is also a huge danger, and it’s fatal to the independence and credibility of auditors if they start identifying too closely with the auditees. There can be great commercial and organisational pressure to go with the flow. There have been countless examples of internal and external auditors who blew it in this way and failed to say “this is wrong”.

The dangerous temptation of going with the flow also exists in testing. Indeed, it could be argued that the greatest danger of relativism in testing is to accept the naïve positivism embodied in traditional approaches, and to pretend that detailed scripts, and meticulous documentation can convey the Truth against which the testers have to measure the product. That’s an interesting paradox, but not one I intend to pursue now.

Frustration with “faking it”

Anyway, after learning how to handle myself in a professional and enlightened audit department I couldn’t contain my frustration when I later had to deal with blinkered auditors who couldn’t tell the difference between an audit and a checklist. I don’t regard that as real auditing. It’s faking it, in very much the same style that James Bach has described in testing; running through the process, producing immaculate documents, but taking great care not to learn anything that might really matter – or that might disrupt plans, budgets or precious assumptions.

Inevitably, when I switched to testing, and had to endure the problems of the traditional document driven approach I experienced very much the same frustrations that I had had with poor auditing. It was with great relief that I came across the work of the Context Driven School, and realised that testing wasn’t a messed up profession after all; it was just a profession with some messed up practices, that we can go along with or resist. It’s our choice, and the practical and learning experiences that I enjoyed as an auditor meant I could go only one way.

Testing inside the box?

The EuroSTAR online software testing summit took place yesterday. I missed the morning, but there were a few things that leapt out at me in the afternoon.

When Fiona Charles was talking about test strategies she said, “we always have a model, whether conscious or unconscious”.

Keith Klain later talked about the dangers of testers trying to provide confidence rather than information. Confidence should never be the goal of testing. If we try to instil confidence then we are likely to be reinforcing biases and invalid assumptions.

In between Fiona’s and Keith’s talks I came across this quote from the Economist about the reasons for the 2007/8 financial crash.

With half a decade’s hindsight, it is clear the crisis had multiple causes. The most obvious is the financiers themselves – especially the irrationally exuberant Anglo-Saxon sort, who claimed to have found a way to banish risk when in fact they had simply lost track of it.

These three points resonated with me because over the last day or so I have been thinking about the relationship between risk, our view of the world and our inbuilt biases as part of my upcoming EuroSTAR tutorial.

During the tutorial I want to discuss a topic that is fundamental to the work that testers and auditors do. We might not think much about it but our attitudes towards ontology and epistemology shape everything that we do.

Don’t switch off! I’m only going to skim the surface. I won’t be getting heavy.

Lessons from social science

Ontology means looking at whether things exist, and in what form. Epistemology is about how and whether we can know about anything. So “is this real?” is an ontological question, and “how do I know that it is real, and what can I know about it?” are epistemological questions.

Testing and auditing are both forms of exploration and research. We have to understand our mindset before we go off exploring. Our attitudes and our biases will shape what we go looking for. If we don’t think about them beforehand we will simply go and find what we were looking for, or interpret whatever we do find to suit our preconceptions. We won’t bother to look for anything else. We will not even be aware that there is anything else to find.

Both testing and auditing have been plagued by an unspoken, unrecognised devotion to a positivist outlook. This makes the ontological assumption that we are working in a realm of unambiguous, objective facts and the epistemological assumption that we can observe, measure and conduct objective experiments (audits or tests) on these facts. Crucially this mindset implicitly assumes that we can know all that is relevant.

If you have this mindset then reality looks like the subject of a scientific experiment. We can plan, conduct and report on our test or audit as if it were an experiment; the subject is under our control, we manipulate the inputs, check the output against our theory and reach a clear, objective conclusion.

If we buy into the positivist attitude then we can easily accept the validity of detailed test scripts and audit by checklist. We’ve defined our reality and we run through binary checks to confirm that all is as it should be. Testing standards, CMMI maturity levels, certification all seem natural progressions towards ever greater control and order.

We don’t need to worry about the unknown, because our careful planning and meticulous documentation have ruled that out. All we need to do is run through our test plan and we will have eliminated uncertainty. We can confidently state our conclusions and give our stakeholders a nice warm glow of confidence.

Reality in business and software development is nothing like that. If we adopt an uncritical positivist approach we have just climbed into a mental box to ensure that we cannot see what is really out there. We have defined our testing universe as being that which is within our comprehension and reach. We have redefined risk as being that which is within our control and which we can manage and remove.

Getting out of the box

Perhaps the testers, and certainly the auditors, at the banks that crashed failed to realise they were in a world in which risk had not been banished. It was still out there, bigger and scarier than they could ever have imagined, but they had chosen to adopt a mindset that kept them in a neat and tidy box.

When you forget about the philosophical and social science jargon it all comes down to whether you want to work in the box, pretending the world suits your worldview, or get out of the box and help your stakeholders understand scary, messy reality.

Social science researchers know the pitfalls of unconsciously assuming a worldview without realising how it shapes our investigation. Auditors are starting to realise this and discuss big questions about what they can know and how they can know it. Testers? Well, not all of them, not yet, but we all need to start thinking along those lines. That’s part of the message I hope to get across at Gothenburg and elsewhere over the coming months.

Questioning auditors questioning testing

Since I chose the title for my EuroSTAR tutorial this year, “Questioning auditors questioning testing”, I have become increasingly aware of how relevant both the title and the topic are.

Testers are used to being questioned, whether it’s by project managers, senior management, users – and auditors too of course. It’s easy to get wrapped up in the problems and pressures of our own profession and forget that other people are working under scrutiny too.

When I worked as an internal auditor I always knew we had to demonstrate that we were “adding value” to the company. I dislike that phrase, but the underlying point is crucial. Auditors can go through the motions and produce detailed, unhelpful reports that are of little real value to the organisation. Alternatively they can get to the heart of their role and provide advice that makes a difference. Only by doing so can they justify their salaries.

Auditors are under scrutiny too

There are certainly some audit departments that just go through the motions. A couple of decades ago such auditors would have accounted for the vast majority of the profession. Happily my audit experience was in a company that was in the vanguard of the new approach to audit. Our work was risk based, always aware of the wider context and absolutely never driven by checklists. We were in the minority then but the tide has now turned emphatically. I don’t know whether the time-serving, low value, low quality auditors are now in the minority but they are certainly under serious pressure from their own profession and the regulators to up their game and adjust to the modern world.

The UK branch of the Institute of Internal Auditors has just issued guidance to its members that internal audit departments in financial services should include within their scope…

…the risk and control culture of the organisation. This should include assessing whether the processes (e.g. appraisal and remuneration), actions (e.g. decision making) and “tone at the top” are in line with the values, ethics, risk appetite and policies of the organisation.

Audit departments should have an internal QA team with highly experienced auditors. Their role would be to “ensure” (the IIA’s bold choice of word) that audit plans and reports are risk based, with opinions that are “adequately evidenced”. These uber-auditors would be expected to challenge their colleagues, even their own management, and report directly to the board if there are problems. It sounds an interesting job!

The world has changed since the turn of the millennium. We have seen huge corporations collapse, and banks that were too big to fail crash spectacularly. There has been widespread and legitimate disappointment, anger even, at the performance of both external and internal auditors.

Auditors are going to have to be more accountable. They in turn are going to be audited. That’s the way that the world is going.

Will auditors who are increasingly expected to query the culture of a company and the “tone at the top”, who are themselves subject to intense scrutiny, really be content with working their way down a checklist? Will they really be happy to tick boxes as you show them the shelfware, the endless documents meticulously completed to IEEE 829 standard templates? Or will they want to sit down with you and understand the risks you identified and investigated so that they can relate them to the risks that keep the stakeholders awake at night?

Just today I was looking at the questions asked about testing practices by a company that sells liability insurance. The questions reflect a dated view of testing, and assume that effective, responsible testing follows the old document driven approach.

Risk management is vital for all companies, but those in financial services have to be particularly skilled. It’s not simply a matter of protecting the company. They are selling their expertise at handling risk through their products.

Any insurer who views testing in the same way as that liability insurer has lost sight of the true risks. That would be a legitimate concern for the auditors of that company. Now, in the UK at least, the auditors will have an explicit responsibility to challenge poor practice.

Why am I telling you this? Testers have to work constructively with auditors, and to do that they have to understand where the auditors are coming from.

Winning friends and influencing auditors

The subtitle of my tutorial is a conscious nod towards Dale Carnegie’s phenomenally successful self-help book “How to win friends and influence people”. You might dismiss Carnegie’s book as a trite collection of obvious pieces of advice, but the sorry truth is that many of the basic truths about human nature are still routinely neglected in business.

I chose the subtitle because I wanted to stress the importance of understanding auditors and where they are coming from, rather than assuming that they are the hard-nosed suits from head office. If we are defensive, expecting a battle with auditors, then that is what we are likely to get. The relationship between auditors and testers is a human relationship every bit as much as a business one. Whether that relationship is good or bad is largely a matter of how well the personal relationship is handled.

One of Carnegie’s key points is a quote from the Roman writer Publilius Syrus.

We are interested in others when they are interested in us.

If we show an interest in other people’s problems then they will be interested in ours. Show an interest in what the auditors are trying to do and they are more likely to take a positive interest in your work.

From personal experience I know how scary it is to embark on a new audit when you can’t use a checklist that pretends to provide all the questions and answers. Look at it from the auditors’ point of view. They have to learn quickly about a new project, a new business area, or some new technology. They have to be able to discuss the important risks and issues with people who are highly experienced and possibly hostile. The auditors know that within a few weeks (at most) they will have to issue an intelligent report that tells a persuasive story identifying problems and possible improvements. It was always a relief to meet people who understood our role, who wanted to work with us and who saw us as valuable allies in their attempt to do a better job.

Auditors can help testers

I have seen both testing and auditing change enormously over the past couple of decades. Enlightened testers and auditors now have far more in common with each other than they do with the old school, checklist/script practitioners in their own professions.

Both testers and auditors are, or should be, enquiring, inquisitive people trying to provide more information about new products and applications and crucially about the risks that their employers and clients are facing.

If auditors are viewed in that positive light then testers should want to get involved with them as early as possible, and to keep speaking to them. It shouldn’t be a one way conversation, with testers justifying themselves. Good auditors will have a different perspective from normal project members. They will help testers to see a bigger picture, to understand the business risks better, and their input should help testers to come up with important new ideas for testing.

My tutorial won’t be a simple matter of telling people the magic tricks, or correct phrases to get the auditors off your back. What I do hope it will provide is an insight into why good auditors will support good testing, rather than impressive documentation. I also hope it will show testers how they can fight back constructively against the poor auditors who are still out there. If auditors are giving you a hard time over your failure to write unnecessary documents then it is good to have the ammunition you need to defend yourself, to turn the tables on them and help them to do a better job!