But how many test cases?

One aspect of traditional test management that has always troubled me is the obsession, in some places, with counting test cases.

If you don’t understand what’s going on, if you don’t really understand what software testing is all about, then start counting, and you’ve immediately got some sort of grip on, well, er, something.

100 is bigger than 10. 10,000 is pretty impressive, and 100,000 is satisfyingly humongous. You might not really understand what’s happening, but when you face up to senior management and tell them that you’re managing thousands of things, well, they’ve got to be impressed.

But counting test cases? Is that useful? Well, it can be, if it helps you manage the problem, but it’s not an end in itself. It’s nonsense to expect all testing to be measurable by the number of test cases. It can even be a damaging distraction.

I was once interviewed for a test management role. I was asked about my most challenging testing problem. I replied that it was working as a Y2K test manager. It seemed like a good answer. It was a huge task. There was nowhere near enough time to do all the testing we’d have liked. The dates couldn’t be pushed back, and we were starting too late. We had to take a ruthless risk-based approach, triaging some applications out of sight. They’d have to run over the millennium and we’d wait and see what would happen. The cost of testing, and the limited damage if the applications failed, meant we had to forget about them and put our effort where it would count.

What seemed like a good answer was really a big mistake! “How many test cases did you have?”

I was surprised. The question made no sense. The applications were insurance management information systems. There was an on-line front end, but technically that was pretty simple. My responsibility was the huge and fearsomely complex back end processing. The strategy was to get all the code fixed, then hit the most complex and date sensitive areas hard in testing.

We were looking at batch files. We had a perfect test oracle. We ran the batch suites with 1996 data and the non-Y2K-compliant code. We then date-shifted the input data forward to 2000 (the next leap year) and ran the Y2K-compliant code. The 2000 output files should be identical to the 1996 output files in everything except the years. It was much more complex than that, but in principle it was pretty simple.
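For what it’s worth, here is a minimal sketch, in Python, of that comparison-oracle idea. Everything concrete in it – the record layout, the year offsets, the field-by-field four-year shift – is a hypothetical illustration, not the actual format or tooling we used.

```python
# Minimal sketch of a date-shift comparison oracle. The record layout and
# year offsets are invented for illustration; the real batch records were
# far more complex than this.

YEAR_SHIFT = 4                        # 1996 data re-run as 2000
YEAR_OFFSETS = [(10, 14), (42, 46)]   # assumed positions of YYYY fields


def normalise(record: str) -> str:
    """Blank out the year fields so the rest of the record can be compared."""
    chars = list(record)
    for start, end in YEAR_OFFSETS:
        chars[start:end] = "####"
    return "".join(chars)


def years_shifted_correctly(old: str, new: str) -> bool:
    """Check that every year field in the new record is YEAR_SHIFT years ahead."""
    return all(
        int(new[start:end]) == int(old[start:end]) + YEAR_SHIFT
        for start, end in YEAR_OFFSETS
    )


def compare_outputs(path_1996: str, path_2000: str) -> list[str]:
    """Return a list of discrepancies between the two output files."""
    discrepancies = []
    with open(path_1996) as f_old, open(path_2000) as f_new:
        for line_no, (old, new) in enumerate(zip(f_old, f_new), start=1):
            old, new = old.rstrip("\n"), new.rstrip("\n")
            if normalise(old) != normalise(new):
                discrepancies.append(f"record {line_no}: non-date difference")
            elif not years_shifted_correctly(old, new):
                discrepancies.append(f"record {line_no}: year not shifted by {YEAR_SHIFT}")
    return discrepancies
```

A real comparison would also have to reconcile record counts and allow for fields that legitimately change between runs; the sketch only shows the shape of the oracle.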

There was a huge amount of preparation: ensuring all the reference tables were set up correctly, allowing for any hard-coded dates in the programs, setting up the data and test jobs. The investigation of the inevitable discrepancies was painstaking and time-consuming, but it worked. We got through the planned high-priority testing on time, and there were no serious incidents over the millennium. I explained all this. We didn’t have “test cases”.

My interviewers asked me to try and put a figure on how many test cases we would have had, as if it were merely a matter of terminology. Even if you multiplied all the files we were checking by each time frame in which we were running tests, the total number would have been under a hundred. You could have called them “test cases” but it wouldn’t have meant anything. I explained this.

“So, under a hundred”. Pens scribbled furiously, and I seethed as I saw all our challenging, sometimes quite imaginative, and ultimately successful testing being reduced to “under a hundred” by people who hadn’t a clue.

I wasn’t hired. It didn’t bother me. I’d lost interest. I could have just lied and said, “oh, about 10,000 – yeah it was pretty big”, but I wouldn’t have wanted to work there.

I’ve seen the obsession with counting test cases taken to extremes. The need to count test cases created huge pressure on one project to execute testing in a way that facilitated reporting, not testing.

It was another big batch financial system, and there were some strong similarities with Y2K. However, this time we had to have “test cases” we could count, and report progress on. We had to “manage the stakeholders”. It was important that test execution should show that we’d got through 10% of the test cases in only 9% of the testing window, and that everything was just dandy.

Sadly, reports like that meant absolutely nothing. Easy test cases got passed quickly; complex ones took far longer, and we were running badly late – according to the progress reports and the metrics. The trouble was that the easy test cases were insignificant. The complex ones were what counted, and there were many inter-dependencies between them. The testers were finding out more and more about the application and the data, and if all went satisfactorily there would be a rush of test cases getting cleared at the end, as eventually happened.
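A toy calculation shows how wide the gap can be. The figures below are invented purely to illustrate the point; they are not from the project.

```python
# A hypothetical illustration of why raw test-case counts mislead.
# All of the figures are invented for the example.

easy_cases, complex_cases = 80, 20        # assumed mix of test cases
easy_effort, complex_effort = 0.5, 8.0    # assumed days of work per case

total_effort = easy_cases * easy_effort + complex_cases * complex_effort

# Suppose every easy case has been executed and none of the complex ones:
progress_by_count = easy_cases / (easy_cases + complex_cases)
progress_by_effort = (easy_cases * easy_effort) / total_effort

print(f"progress by test-case count: {progress_by_count:.0%}")   # 80%
print(f"progress by effort:          {progress_by_effort:.0%}")  # 20%
```

Both numbers are “true”, but only the second says anything about how much work is actually left.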

The reaction of senior management to the daily progress reports was frantic concern. We weren’t getting through the test cases. That was all that mattered. No amount of explanation made a difference. I was spending most of my time on reports, explanations and meetings; very little with the testers. Management thought that we were politically naïve, and didn’t understand the need to keep the stakeholders happy. Bizarrely, the people who knew almost nothing about the work, or its complexities, thought that we were out of touch with reality.

Reality for them was managing the process, counting the test cases, playing the organisational game. “The quality of the application? Whoa, guys! You testers are missing the point. How many test cases have you got?”

Testers are like auditors

No, this isn’t another “life is like a box of chocolates”. It’s not a tortured metaphor to try and make a point. I mean it straight up. Testers and auditors are very similar, or they should be.

In May I went to hear Michael Bolton speaking in Edinburgh. He touched on the distinction between testing and checking. If you’re not familiar with this, then go and check out this article now. Sure, I want you to read my article, but Michael’s argument is important and you really need to understand it.

I talked to Michael for a few minutes afterwards and said that the distinction between testing and checking also applied to auditors, and that I’d had arguments about that very point: the need for good auditors to find out what’s really going on rather than work their way down a checklist.

I mentioned other similarities between testing and auditing and Michael agreed. Usually people are surprised, or think the connection is tenuous. Recruitment consultants have been the most frustrating. They see my dual experience in testing and auditing as an oddity, not an asset.

I said I’d been meaning to write about the link between testing and auditing, especially the audit of applications. Michael told me to do it! So here it is.

It’s about using your brain – not a script

If you don’t know what you’re doing then a script to follow is handy. However, that doesn’t mean that unskilled people with prepared scripts, or checklists, are adequate substitutes for skilled people who know what they’re doing.

As Michael said in his talk, scripts focus our attention on outputs rather than outcomes. If you’re just doing a binary yes/no check then all you need to know is whether the auditees complied. You don’t need to trouble your brain with tricky questions like “does it matter?” and “how significant is it?”

I’ve seen auditors whose job was simply to check whether people and applications were complying with the prescribed controls. They asked “have you done x to y” and would accept only the answers “yes” or “no”. I thought that was unprofessional and potentially dangerous. It encourages managers to take decisions based on the fear of getting beaten up for “failing” an audit rather than decisions based on commercial realities.

However, that style of auditing is easier and cheaper than doing a thorough, professional job. At least it’s cheaper if your aim is to go through the motions and do a low quality job at the lowest possible cost. You can fulfil your regulatory obligations to have an internal audit department without having to pay top dollar for skilled staff. The real costs of that cheapskate approach can be massive and long term.

What’s the risk?

Good auditing, like good testing, has to be risk-based. You concentrate on the areas that matter, where the risk is greatest. You then assess your findings in the light of the risk and the business context of the application.

The questions I asked myself when I approached the audit of an application were:

“What must the application do, and how will I know that it’s doing it?”

“What must the application prevent, and how will I know that it really will stop it happening?”

“What controls are in place to make sure that all processing is on time, complete, accurate and authorised by the right people?”

The original requirements might be of interest, but if they were missing or hopelessly out of date it was no big deal. What mattered was the relationship between the current business environment and the application. Who cared if the application perfectly reflected the requirements if the world had moved on? What really mattered were the context and the risk.

Auditing with exploratory testing

Application audits were invariably time boxed. There was no question of documenting our planned tests in detailed scripts. We worked in a highly structured manner, but documentation was light. Once we understood the application’s context, and the functions that were crucial, we’d identify the controls that were needed to make sure the right things happened, and the wrong things didn’t.

I’d then go in and see if it worked out that way. Did the right things always happen? Could we force through transactions that should be stopped? Could we evade controls? Could we break the application? I’d approach each piece of testing with a particular mindset, out to prove a particular point.

It’s maybe stretching a point to call it true exploratory testing, but that’s only because we’d never even heard of the term. It was just an approach that worked for us. In principle, though, I think it was no different from exploratory testing, and we could have done our job much better if we had been trained in it.

Apart from the fun and adrenalin rush of catching criminals on fraud investigations (no use denying it – that really was good fun), this was the best part of auditing. You’d built up a good knowledge of the application and its context, then you went in and tried to see how it really worked. It was always fascinating, and it was nothing like the stereotype of the checklist-driven compliance auditor. It was an engrossing intellectual exercise, learning more and applying each new piece of understanding to learn still more.

“It’s your decision – but this is what’s going on”

I have a horror of organisations that are in hock to their internal auditors. Such bodies are deeply dysfunctional. The job of good auditors is to shine a light on what’s really going on, not to beat people up for breaking the rules. It’s the responsibility of management to act on the findings and recommendations of auditors. It should never be the responsibility of auditors to tell people what they must do. It happens though, and corporations that allow it are sick. It means that auditors effectively take commercial decisions for which they are neither trained nor accountable.

It’s just like letting testers make the decision to ship, without being responsible for the commercial consequences.

Both testers and auditors are responsible for letting the right people have the best available information to make the right decisions. The context varies, and the emphasis is different, but many of the techniques are interchangeable. Of course auditors look at things that testers don’t, but these differences can still offer the chance to learn from each other.

For instance, auditors will pay close attention to the controls and procedures surrounding an application. How else could they judge the effectiveness of the application’s controls? They have to understand how those controls fit into the wider business if they are to assess the impact and risk of weaknesses. Maybe that broader vision is something testers could work on?

Why not try cultivating your internal audit department? If they’re any good you could learn something. If they’re no good then they could learn a lot from you!