No, this isn’t another “life is like a box of chocolates”. It’s not a tortured metaphor trying to make a point. I mean it straight up. Testers and auditors are very similar, or they should be.
In May I went to hear Michael Bolton speaking in Edinburgh. He touched on the distinction between testing and checking. If you’re not familiar with this then go and check out this article now. Sure, I want you to read my article, but Michael’s argument is important and you really need to understand it.
I talked to Michael for a few minutes afterwards, and I said that the distinction between testing and checking applied also to auditors and that I’d had arguments about that very point, about the need for good auditors to find out what’s really going on rather than to work their way down a checklist.
I mentioned other similarities between testing and auditing and Michael agreed. Usually people are surprised, or think the connection is tenuous. Recruitment consultants have been the most frustrating. They see my dual experience in testing and auditing as an oddity, not an asset.
I said I’d been meaning to write about the link between testing and auditing, especially the audit of applications. Michael told me to do it! So here it is.
It’s about using your brain – not a script
If you don’t know what you’re doing then a script to follow is handy. However, that doesn’t mean that unskilled people with prepared scripts, or checklists, are adequate substitutes for skilled people who know what they’re doing.
As Michael said in his talk, scripts focus our attention on outputs rather than outcomes. If you’re just doing a binary yes/no check then all you need to know is whether the auditees complied. You don’t need to trouble your brain with tricky questions like: “does it matter?”, “how significant is it?”.
I’ve seen auditors whose job was simply to check whether people and applications were complying with the prescribed controls. They asked “have you done x to y” and would accept only the answers “yes” or “no”. I thought that was unprofessional and potentially dangerous. It encourages managers to take decisions based on the fear of getting beaten up for “failing” an audit rather than decisions based on commercial realities.
However, that style of auditing is easier and cheaper than doing a thorough, professional job. At least it’s cheaper if your aim is to go through the motions and do a low quality job at the lowest possible cost. You can fulfil your regulatory obligations to have an internal audit department without having to pay top dollar for skilled staff. The real costs of that cheapskate approach can be massive and long term.
What’s the risk?
Good auditing, like good testing, has to be risk based. You concentrate on the areas that matter, where the risk is greatest. You then assess your findings in the light of the risk and the business context of the application.
The questions I asked myself when I approached the audit of an application were:
“What must the application do, and how will I know that it’s doing it?”
“What must the application prevent, and how will I know that it really will stop it happening?”
“What controls are in place to make sure that all processing is on time, complete, accurate and authorised by the right people?”
The original requirements might be of interest, but if they were missing or hopelessly out of date it was no big deal. What mattered was the relationship between the current business environment and the application. Who cared if the application perfectly reflected the requirements if the world had moved on? What really mattered were the context and the risk.
Auditing with exploratory testing
Application audits were invariably time boxed. There was no question of documenting our planned tests in detailed scripts. We worked in a highly structured manner, but documentation was light. Once we understood the application’s context, and the functions that were crucial, we’d identify the controls that were needed to make sure the right things happened, and the wrong things didn’t.
I’d then go in and see if it worked out that way. Did the right things always happen? Could we force through transactions that should be stopped? Could we evade controls? Could we break the application? I’d approach each piece of testing with a particular mindset, out to prove a particular point.
It’s maybe pushing the point to call it true exploratory testing, but that’s only because we’d never even heard of it. It was just an approach that worked for us. However, in principle I think it was no different from exploratory testing and we could have done our job much better if we had been trained.
Apart from the fun and adrenalin rush of catching criminals on fraud investigations (no use denying it – that really was good fun) this was the best part of auditing. You’d built up a good knowledge of the application and its context, then you went in and tried to see how it really worked. It was always fascinating, and it was nothing like the stereotype of the checklist-driven compliance auditor. It was an engrossing intellectual exercise, learning more and applying each new piece of understanding to learn still more.
“It’s your decision – but this is what’s going on”
I have a horror of organisations that are in hock to their internal auditors. Such bodies are deeply dysfunctional. The job of good auditors is to shine a light on what’s really going on, not to beat people up for breaking the rules. It’s the responsibility of management to act on the findings and recommendations of auditors. It should never be the responsibility of auditors to tell people what they must do. It happens though, and corporations that allow it are sick. It means that auditors effectively take commercial decisions for which they are neither trained nor accountable.
It’s just like letting testers make the decision to ship, without being responsible for the commercial consequences.
Both testers and auditors are responsible for letting the right people have the best available information to make the right decisions. The context varies, and the emphasis is different, but many of the techniques are interchangeable. Of course auditors look at things that testers don’t, but these differences can still offer the chance to learn from each other.
For instance, auditors will pay close attention to the controls and procedures surrounding an application. How else could they judge the effectiveness of the application’s controls? They have to understand how they fit into the wider business if they are to assess the impact and risk of weaknesses. Maybe the broader vision is something testers could work on?
Why not try cultivating your internal audit department? If they’re any good you could learn something. If they’re no good then they could learn a lot from you!