Does “best practice” protect clients?

This article follows on from yesterday’s, “Best practice, or just okay practice?”.

One of the most superficially impressive justifications for “best practice” is that it protects the interests of clients and users. I really don’t accept that “best practice” is the best, or even an effective, way to protect them. This isn’t just a theoretical position. Nor have I been swayed by the Rapid Software Testing people and gone looking for confirmation in practice. I agree with most of what they say because it matches, and helps explain, the experience that I’d already acquired.

I lost faith in “best practice” because of what I saw out in the field. The objections in my previous piece were largely academic, but very real. I will tell you some of the experiences that have shaped my views.

I have seen customers getting a lousy service because the supplier was insisting on inappropriate “best practice”. The client staff who were responsible for the contract were going along with that because it protected them if things went wrong. The attitude could be summed up as “Ok, it turned out badly, but we were following best practice, so I guess we were just unlucky. No-one did anything wrong, nothing to learn, we’ll do it all the same way next time, we’re sure to be luckier”.

I have seen places where people were doing things that almost guaranteed failure because they believed it was better to adhere rigidly to “best practice” than make the calculated compromises, i.e. take intelligent risks, that were necessary to be successful. I’ve even seen cases where individual success or failure was judged more by adherence to “best practice” than by results. It was a far better career move to deliver late and lose money while following “best practice” than to please the client while taking a more selective approach.

I’ve seen supplier staff doing utterly pointless work that infuriated the client because the supplier was insisting on best practice that bore no relation to the risks facing the client. A drugs manufacturer who is subject to FDA regulation faces very different problems from a retailer selling consumer goods.

Trying to apply the same laborious “best practice” to both can mean the retailer can’t move fast enough in the market, and thus introduces serious commercial risks that far outweigh the technology risks on which the IT supplier is focused.

That retailer client was exasperated because key supplier staff were tied up doing work the client thought was unnecessary and refused to pay for. Meanwhile work that the client was desperate to start was being delayed. So the client couldn’t do commercially important work, and the supplier lost money doing work it couldn’t invoice to the client. The supplier staff also hated the unnecessary work because it was so tedious, and they openly doubted the value of it. But “best practice” was followed.

It was a lose-lose situation, but the supplier management felt they had to do it because they feared the consequences of neglecting “best practice”. Everyone was unhappy, everyone was frustrated. Everyone lost money.

Another troubling example of “best practice” causing problems was when a company needed to get a product to the market very quickly for commercial reasons. The internal developers had to build the supporting application to a brutally tight timetable.

There were arguments with the users, who were stalling over agreeing the detailed requirements. So the developers chose to ignore “best practices” and actually developed the application in full without the requirements being signed off. Was that best practice? Of course not. Naturally there were problems, and the user management tried to blame the developers for not adhering to basic best practice.

The developers’ management argued that there was a pressing commercial need to implement the application and that the users had been playing a political game to try and ensure that they would not be blamed for the inevitable problems. The developers were very experienced with that business area and took the decision to press ahead and build the system while the users were wasting time.

Would the developers have been right to stand by “best practice” to protect their skins, even if it was damaging to the company? I don’t think so. That isn’t just a personal opinion. I had to give a professional opinion on the matter.

I was called in as an auditor to review the project. Everyone assumed I would kick the developers for not following “best practice”. Instead I praised them for doing a good job in unreasonable circumstances and I criticised the users for being obstructive. I thought the attitude of the users was lamentable, though I was careful with the wording of my report. They were cynical and devious, keen to blame everyone but themselves. They were relying on “best practices” to try and absolve themselves from any blame. My report was accepted by senior management and I think it contributed towards an improvement in working practices.

Ok, that was an extreme case of irresponsible abuse of “best practice”, but it neatly highlights one of the flaws with the concept.

If commercial pressures, the needs of the users, the risk to the company and conventional “best practice” are inconsistent then what gives way? I’m not saying that developers or testers should abandon all responsible, professional behaviour the moment someone senior shouts “jump”.

What I am saying is that the problems of IT are much more complicated and subtle, messy even, than can be resolved by stating that certain techniques or processes or standards are “best practices”. Contracts written by lawyers who cite “best practice” with no real understanding of what work might be required do not offer real protection to clients. They can even hurt the people they are meant to protect.

What is best for clients, and for users, is the best people available being allowed to do the best work they can. That is a trickier matter than writing a contract or defining a new standard. If you feel you really need to rely on contracts, or “best practice” defined by other people, you’re probably on a loser anyway.


Best practice, or just okay practice?

The testing community has been conducting a lively debate on Twitter over the last week about the merits of “best practice”. Rex Black kicked it off with this provocative tweet.
[Embedded image: Rex Black’s tweet]

Not surprisingly Rex got a vigorous response and soon the debate was raging. Many people provided valid objections to the idea that there are such things as best practices, but Rex had no intention of backing down.

In response to criticism that “best practice” is just a marketing term, Rex wrote:

It’s not a marketing term. “Best practices” is a widely used management term. By refusing to use this term in its common meaning, you’re just removing yourself from mainstream discourse.

Yes, I’d agree with the first part of that, but not in a way that would make Rex feel comfortable, because the reasons for my agreement undermine Rex’s position. It is a weaselly and damaging management term, however popular it might be. The second part, about dissenters “removing themselves from mainstream”, is a non sequitur. Disagreement and disengagement are very different things.

“Best practices” certainly is a widely used management term, and thoughtful testers do need to engage with it. We must not remove ourselves from the mainstream. However, by refusing to use it “in its common meaning” we are bringing much needed clarity of language and thought to the problem.

“Best practices” may be widely used but it is also bollocks. I’d like to offer just some of the reasons why.

“Best practices” hinder experts

My first objection is that the idea of “best practice” constrains practitioners. Beginners need the rules and formal structure that “best practice” and standards provide. Experts do not, and their creativity is stifled by “best practices” that are usually defined by people with less experience and skill.

I touched on this in an article I wrote about testing standards a few years ago.

I referred in that article to a talk that Lloyd Roden gave at Starwest 2009. I’ve just found this video of Lloyd’s talk on YouTube. It is under nine minutes long and well worth watching. It’s had only eight views so far and deserves many more.

“Best practices” foster mediocrity

If experts are constrained and frustrated then that leads on to my second objection to “best practice”. The term implies that certain methods and techniques cannot be improved upon and are required in all circumstances. Paradoxically “best practice” encourages mediocre conformity.

Practitioners have to conform to an industry norm and are discouraged from improving upon “best practice”. If we have already achieved “best” then how can we get better? Any deviance must be inferior. That, I’m afraid, may be utter nonsense, but it is the subtext to much of what goes on in reality.

Dilbert nails “best practices” in this classic strip, “stop making mediocrity sound bad”.

Don’t dismiss Dilbert’s view as being cynical and unfair, at odds with serious professional opinion. Read this.

I have always had difficulties with the term “best practice.” Who is to say which practice is best, which is almost as good, which is really not good enough? The members of the standards committee have been appointed (who appoints them, anyway?) to define best practice at a point in time, but as I stated previously the best sinks to “just okay” practice with the passage of time. A standard is obsolete the day it is published.

Is a standards committee empowered to describe what they believe are generally the best methods, tools and techniques as evidenced by what most organizations are doing? If so, the committee is describing standard practice that has been overtaken by best practice, however defined. The process of standardization drives the thought process to the middle. Carried forward over time, standard practice is mediocrity.

This isn’t a tract from the “context-driven/RST” camp. It is an extract from an article by Steven Ross of Risk Masters Inc entitled “Just okay practice” in ISACA Journal, vol 2, 2013. This is the official magazine of the Information Systems Audit and Control Association, of which Ross is a former president.

Unfortunately the article is available online only to members. Is it a controversial view? I doubt it. Two months after it was published the only comments are ones that agree with the writer.

Context comes first, practices come second

Although Steven Ross was talking in general about standards the article was prompted by a specific concern about the effect of standards on information security practice. So do the auditors think differently when it comes to software development?

I don’t think so. Try this.

Internal auditors should not expect organizations to fully implement PMBOK, PRINCE2, COBIT, or any other large set of best practices. Rather, they should expect to see that these practices have been customized and integrated into the organization’s project management methodology.

“Not … fully implement… best practices”, “customized”. This pragmatic statement comes from “Global Technology Audit Guide 12 – Auditing IT Applications”, an official research paper issued by the Institute of Internal Auditors to its members.

COBIT (Control Objectives for Information and Related Technologies) is the governance framework for IT created by ISACA. The latest version, COBIT 5, makes it clear that “best practices” are neither mandatory nor inflexible. They should be tailored to each organisation’s objectives and needs. Understanding the context comes first. Choosing, refining and deploying the techniques come second.

Does anyone believe “best practice” means what it says?

Clearly “best practice” doesn’t mean exactly what it says. “Best” is only good practice in a particular context. That is probably not contentious, and I doubt if supporters of “best practice” such as Rex Black would contest the point.

However, in practice many influential people would dispute the point and they do expect “best practice” to mean exactly what it says.

Whatever the defenders say, it is used loosely as a marketing term. It is used to persuade clients that the highest professional standards will be deployed. Clients can also insist on suppliers complying with “best practice” in the naïve belief that this will protect them.

During the Twitter debate Suresh Nageswaran made a very revealing comment in defending “best practice” with the analogy of checks one should perform before driving.

Before car starts, checking tire pressure, fuel, coolant, water is best practice. Cars have numerous points of failure. This practice will reduce the chances of running into them. Ergo best practice. “Best practice” reduces risk in the path to achieving a goal.

Rex Black backed up Suresh saying that he was “exactly right”.

I think he was exactly wrong. It would be best practice to perform such checks before a journey only if there is time available and the consequences of failing to reach the destination on time are sufficiently serious to justify the delay.

Given the reliability of modern cars the risk does not justify performing these checks every time. If you have to get to work on time it would be prudent to trust in the reliability of your vehicle and perform checks at leisure. It is dangerous nonsense to insist that whatever reduces risk is “best practice”.

However, if we neglect to perform the checks then we are guilty of ignoring a “best practice”. In business neglecting documented and agreed “best practice” might get the lawyers interested. So clients and suppliers agree to use “best practice”. It suits both to pretend that it raises quality and protects them.

If you use words loosely don’t be surprised when people believe you mean what you say. “Best practice” can’t be treated as innocuous management terminology when it shapes contracts and introduces damaging inflexibility to working practices.

If you mean that a particular technique is a good practice in certain contexts then say that. Don’t pretend it is a best practice. If the auditors are careful to qualify the term shouldn’t we also be cautious about how we use “best practice”? Some lawyer might just believe you actually mean it! Why on earth would we want to use a term that suggests we are negligent if we don’t follow “best practice”?

Postscript – I intended to leave the matter at that, but I was challenged in a comment by Suresh Nageswaran (see below) that “best practice” is an important form of protection for clients and users. Thinking back on my own experience I reflected that this wasn’t the case, and I decided to write a second article explaining why I’ve found the idea of “best practice” unhelpful.

Fun and games in audit (adventures with Big Data part 2)

While I was writing my recent blog about working with insurance management information systems I thought about my other experience of trying to make sense out of huge volumes of data, working on fraud investigations using SAS. I toyed with the idea of including that, but decided to keep it for a separate article.

I spent a few years working as a computer auditor at a large UK insurance company. Fraud investigation was not a core part of the job, but it was a frequent and exciting diversion from the routine. If there was a concern that a serious fraud had taken place then one of the computer auditors would be told to drop everything and do whatever was necessary to piece the story together.

The largest, and most interesting, investigation concerned an employee at an office in Lancashire. He managed the team that settled insurance claims. He authorised the claims payments and entered the cheque requests, which were always approved by someone more senior (although usually without detailed checking). We received an anonymous tip-off that we should take a look at what he had been doing. That was all we had to go on.

I was asked to take a quick look. I extracted all the claims that he’d authorised over the previous couple of years and looked for interesting patterns. There were quite a few totally separate claims that just happened to have identical values, down to the last penny. That was interesting. Claims frauds often involve old invoices being recycled. These payments were all motor claims for third party damage, i.e. for damage done to someone else’s vehicle. Third party claims were notoriously popular with fraudsters.
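
A minimal sketch of that first pass, in Python rather than the SAS I actually used, assuming the extract had been dumped to a flat file with illustrative column names (claim_ref, claim_date, payee, amount):

```python
import pandas as pd

# Hypothetical extract of the claims the suspect authorised;
# the file name and column names are illustrative, not the real system's.
claims = pd.read_csv("suspect_claims.csv")  # claim_ref, claim_date, payee, amount

# Count how many distinct claims share exactly the same settlement value.
value_counts = claims.groupby("amount")["claim_ref"].nunique()

# Amounts that recur across separate claims deserve a closer look;
# recycled invoices tend to reproduce the figure down to the last penny.
suspicious_amounts = value_counts[value_counts > 1].index

flagged = claims[claims["amount"].isin(suspicious_amounts)].sort_values("amount")
print(flagged[["claim_ref", "claim_date", "payee", "amount"]])
```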

This was sufficient evidence to launch a full investigation. I searched for all claims and cheques that the suspect had worked on over the previous eight years. I then interrogated the data looking for suspicious patterns.

In addition to duplicate values, other red flags were duplicate addresses to which cheques were sent, and duplicate payees. Identifying these is a much more complicated exercise than it might seem. If you want to find duplicate addresses you can’t simply sort the data and check for matches. Fraudsters are rarely stupid, and if they are using a particular address to receive the cheques they will take care to disguise it. So they enter the address slightly differently each time.

Just think how many different ways you could write your own address and still be confident that letters will reach you; slip in extra spaces or commas, break the address up differently between “address line 1” and “address line 2”, make simple spelling mistakes. What about post codes? They are pretty useless. If the post code is wrong the letter will still get there. So fraudsters would keep changing the post code each time they used an address.

You have similar problems with payee names, though there is less scope for misleading mischief if the cheque has to be paid into a bank account.

SAS was incredibly valuable for these investigations. It was extremely powerful for quick manipulation and comparison of large files, doing sort and match routines. However, it was also very flexible for painstakingly detailed low level work: chopping up and manipulating data byte by byte, and even bit by bit when necessary. I would try to standardise addresses, stripping out all spaces, punctuation and special characters, and separating building numbers from street names. I would reduce the street name to the first six characters to reduce the chance of being caught out by a deliberate misspelling.

I would then reassemble the addresses and search for duplicates. In the case of this fraud I kept finding more and more. Each time I found some I would look at the other factors in the claim, and look for other claims that matched these factors. This produced other claims that our suspect had not worked on, but which were highly suspicious. Perhaps a third party claims cheque for an identical payee and amount had gone to a different address, but had been authorised by another member of staff.
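
For illustration, here is a rough sketch of that kind of address standardisation, written in Python rather than the SAS I used at the time; the example addresses and the six-character street key are simplifications of what I describe above, not the actual routines I wrote:

```python
import re

def normalise_address(raw: str) -> str:
    """Collapse an address into a crude fingerprint so that disguised
    variants of the same address end up sorting together."""
    text = raw.upper()
    # Strip spaces, punctuation and special characters.
    text = re.sub(r"[^A-Z0-9]", "", text)
    # Separate a leading building number from the street name.
    match = re.match(r"(\d+)(\D.*)", text)
    number, street = (match.group(1), match.group(2)) if match else ("", text)
    # Keep only the first six characters of the street name, so a deliberate
    # misspelling further along the word does not defeat the match.
    return f"{number}-{street[:6]}"

# Usage: fingerprint every payment address, group on the fingerprint,
# then review any group containing more than one claim.
examples = [
    "12 Acacia Avenue, Anytown",
    "12, Acacia Av.  ANYTOWN",    # the same house, lightly disguised
    "7 Station Road, Othertown",
]
for address in examples:
    print(normalise_address(address), "<-", address)
```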

As well as interrogating the historic data from the business applications I also matched it against network data. That allowed me to see which desk the user had been sitting at when a claim had been approved. It was clear that the same desk, and therefore terminal, was being used even when different user accounts were approving claims. It looked like our suspect was using other people’s accounts.
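
A rough sketch of that cross-match, again in Python for illustration; the session log layout and the column names are invented, since the real exercise relied on whatever the network monitoring tool could export:

```python
import pandas as pd

# Hypothetical extracts: claim approvals, and logon sessions from the network data.
approvals = pd.read_csv("claim_approvals.csv",
                        parse_dates=["approved_at"])            # claim_ref, user_id, approved_at
sessions = pd.read_csv("network_sessions.csv",
                       parse_dates=["logon_at", "logoff_at"])   # user_id, terminal_id, logon_at, logoff_at

# Attach each approval to the session that was active when it happened.
merged = approvals.merge(sessions, on="user_id")
merged = merged[(merged["approved_at"] >= merged["logon_at"]) &
                (merged["approved_at"] <= merged["logoff_at"])]

# A single terminal approving claims under several different user accounts
# was the pattern suggesting the suspect was borrowing colleagues' logins.
accounts_per_terminal = merged.groupby("terminal_id")["user_id"].nunique()
print(accounts_per_terminal[accounts_per_terminal > 1].sort_values(ascending=False))
```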

Each time I found something interesting I would feed that back into the interrogation routines, gradually building up a clearer picture.

After a few days of very long hours, crawling all over 25 million historic records, I’d identified 555 highly suspicious payments totalling £1.1 million. That was enough for us to go to the police. Some companies had a policy of keeping frauds quiet and dismissing the culprit. This insurance company took a far tougher line. It wanted staff to know that if they stole from the company the police would always be called in.

We packaged up the evidence and took it to the Lancashire Constabulary. The Fraud Squad was delighted to get such strong evidence handed to them on a plate. They checked out our suspect in ways we could not possibly have done and quickly came up with fascinating information about his lifestyle, which was wildly out of line with his salary.

Whilst they were investigating him they happened to mention in a phone call to one of my colleagues that they were tailing him in Liverpool that day. I was surprised to hear that because I had been watching what he was doing on a network monitoring tool. He was sitting at his usual desk! We got back to the police and told them they were following the wrong guy. Shortly afterwards they came back and admitted their mistake.

The police chose 13 addresses to raid, along with the home of the suspect. They obtained search warrants for all the houses and arrest warrants for the householders. They told us that they’d be going in simultaneously at all the addresses at 5 o’clock in the morning.

That night was the only time in my whole career that I’ve lain awake wondering if I’d done everything properly. Could I have made any mistakes? Were any completely innocent people going to be shocked out of their beds by a police raid?

I needn’t have worried. The conspirators were taken completely by surprise. The main suspect had a huge amount of incriminating evidence in his home and quickly caved in, admitting his guilt. The police picked off all the others in short order, telling each that the others had all owned up.

The leader of the conspiracy, our employee, had taken nearly all of the money. The other people had simply been paid to receive the cheques, or to launder them through their bank accounts. The leader pleaded guilty and received a three year jail sentence. That spared me the need to give evidence in court, which would have been an interesting experience.

Afterwards we gave a lot of thought to how we should handle fraud investigations. In this case we’d been dependent on a tip off. Once we had that lead it was possible to keep plugging away, trying out ideas, learning more, refining our theories about what might have happened until we had a clear picture. The question that bugged us was, “how do we get started when we don’t have any reason to be suspicious?”.

There were certain patterns that the company could always look out for, but every case had something new, and often a fraud might use a pattern that was unique.

We started to delve into neural networks, which seemed to offer a promising way of monitoring vast datasets to learn about patterns that could be suspicious. Neural networks have since become popular for identifying fraud. Credit card companies use them widely.

One of the great things about doing this sort of work was that our rivals were crooks, not other companies. I went to speak to other insurance companies to learn and exchange ideas. Everyone was quite happy to talk about what they were doing. We were all better off if we could nail the bastards who were ripping us off. No-one felt there was any particular competitive advantage in trying to do it alone.

Unfortunately, and to my huge frustration, our commitment weakened. It was clear that nothing exciting was going to happen with fraud detection in the near future, so when I was offered an interesting new job I happily accepted it.

Still, I did miss the buzz of these investigations. It was exciting and utterly engrossing to be faced with the intellectual challenge of extracting a clear and convincing story from a vast mess. I always had a great sense of achievement when I could produce evidence that was compelling enough to convict a fraudster. It really wasn’t a moral position. Of course I knew I was on the right side and the fraudsters were wrong, but there was no sense of righteousness. It was a thrilling game, even better than being a kid playing cops and robbers. Nothing in my career has ever been quite as much pure fun as catching crooks.