Test management – the good, the bad and the ugly

Paul Carvalho has written a very interesting blog post, “Test Management is Wrong”, about how we place too much emphasis on test management rather than on how the whole project provides value to the stakeholders.

I do agree broadly with Paul, but I think his argument needs to be clarified and refined.

The good – letting testers do their job

Management per se is not bad. It can be done well or badly, tightly or loosely; managers can inspire or demoralise. There always needs to be some form of management to see the big picture; to coach, direct and lead testing by personal example; to report, co-ordinate and assign work; to deal with other managers and stakeholders; to protect the testers from time-wasting crap so that they can be productive. That is all good and worthwhile, and it is all test management.

We need to be clear about the distinction between managing testing and managing the testing process. A couple of years ago I had a great discussion with Fiona Charles at the Test Management Summit in London. Fiona worked it up into a marvellous talk for STPCon (PDF, opens in a new window).

The bad, part 1 – managing the process

The distinction between managing the activity and managing the process is vital, and it applies equally to other disciplines in software development. Testing, though, has a particular problem: it is a profession where it is possible to get away with blurring the distinction.

Perhaps that’s because testing, as a concept, is widely misunderstood and therefore poorly defined. When the defined process veers away from the true nature of the activity, the eyes of management can remain fixed on the process, unaware of the misalignment.

Sure, the thought leaders of testing are crystal clear about what they are doing, about the distinction between process and task, but do they influence a majority of testers? I’m not sure. They certainly influence those who are listening and whose minds are open. However, there are huge regiments of testers working away in traditional environments, drawing up detailed plans and scripts; test execution becomes a matter of plodding through the scripts, with progress measured in numbers of scripts, or test cases, rather than in a growing understanding of the real nature of the application.

Managers can spend all their time managing the process but doing very little that could reasonably be called testing. I don’t mean that they spend all their time managing and don’t have time to perform any testing, though that is a problem. I mean that their management activities are only loosely connected to genuine testing, i.e. providing information to stakeholders about the state of the product under test.

Managers in such companies often don’t really understand testing. They confuse the process with the reality, seeing testing as a process that must be followed, and ticked off on the project plan, before the application goes live. Project managers don’t want test managers to manage testing; they want them to manage the testing process so that the project can be tracked neatly through the MS Project plan.

The bad, part 2 – faking the testing

Sadly, testing is an activity that can be faked, as James Bach memorably put it (PDF, opens in a new window).

James’ talk was wickedly satirical, but it is based in truth. The deeper truth, the more challenging message of his talk, isn’t that managers consciously fake the testing. A few test managers might do that; many more do it unwittingly, seduced by the painful effort into the delusion that they must be doing something useful.

The heavyweight, documentation-addicted traditional approach to testing is so tough, laborious and stressful that it really does require strong management. It also almost guarantees conflict: when the testing window is squeezed, when the defects disrupt the test execution schedule, when 80% of the testing window has elapsed and only 20% of the test cases have been executed. The unpleasantness adds to the illusion that something important and worthwhile must be going on. Surely all that grief, stress and sweaty effort couldn’t be pointless?

Well, actually that’s exactly what it could be. There have been times when I have felt futility and despair when I realised that I was adding nothing to the real testing even though I was the test manager. I was spending most of my time producing meaningless reports for people who didn’t really need them, didn’t understand them and certainly didn’t understand the reality. I was contorting the test execution plan so that I could produce reports showing steady progress through the plan, rather than reports that enhanced knowledge. If my reports had been scrapped all that would have been missed was the chance to tick the box “report produced”.

The ugly – battlefield testing

Whilst I was writing this I received an email asking if I was interested in this role.

[Screenshot of the job advert: a lousy test management role]

Crisis management? Conflict resolution? I think it’s safe to assume from the skills required that the test lead will be going into a battle zone, expected to flog the testing team along the testing leg of the project death march. How much testing, or even genuine test management, will the poor sucker be able to do? How much time will be spent battling through the bureaucracy and politics?

Test managers can run through the whole testing process entirely plausibly, without doing any real testing. I find it hard to imagine that happening with other IT disciplines; the folly would be immediately apparent. Yet in testing, conflict resolution and crisis management are sadly important skills for test managers on traditional projects battling their way through the process.

It’s a lousy, unsatisfying way to work; it is inefficient and ineffective. It crushes the people working that way. That’s why we need to be clear about what test management is, and angry when managing the testing is confused with managing the process.

Paul’s follow-up

Paul Carvalho has since clarified his argument in a follow-up comment on his blog. His comment confirms that we are pretty much in agreement about test management. Paul’s blog has interesting things to say about development projects providing value to stakeholders, which takes us into territory explored by Tom Gilb and Gojko Adzic.

Testing always requires strong test managers. It’s a valuable role and an important skill. What test management cannot do, especially test management that is no more than management of the process, is to provide value.

Good test management creates the conditions that allow testers to tell people about value, but only if the whole development has been focussed on delivering value to the stakeholders. You cannot dump responsibility for value, or quality, onto the poor testers at the end. You cannot do it with a beautifully engineered Test Management Process. As Paul put it, “saying that you will assure Quality through [only] Test Management is wrong.”