The quality gap – part 1

Lately I’ve been doing a lot of reading and thinking about standards in software testing and how they contribute to high-quality applications.

I went back to read a couple of articles Robert Glass wrote in 2001: “Quality: what a fuzzy term” and “Revisiting the definition of software quality”.

I found Glass’s articles thought-provoking. They set off a chain of thought that I’ve decided to put in writing. It’s maybe a bit obscure and theoretical for many readers, but I found it helpful, so, for better or worse, I’m leaving it for posterity.

Glass’s formulation of quality

In the first article Glass explains how quality is related to several factors: user satisfaction, compliance with the user requirements, and whether the application was delivered on time and on budget.

Glass presented the relationship in the form of a simple equation. Framing his argument in this way shed light on contradictions and weaknesses in the arguments for standards and for traditional development methods.

According to Glass, user satisfaction = good quality + compliant product (i.e. compliant with the requirements) + delivery within budget/schedule.

I think it’s reasonable to infer from Glass’s articles that the factors on the right of the equation actually represent levels of satisfaction derived from these factors. Therefore, “good quality” in this equation represents the amount of satisfaction the users derive from good quality. Likewise, “compliant product” represents the satisfaction users get from a product that complies with the requirements, and “delivery within budget/schedule” is the user satisfaction from a slickly run project.

S is total user satisfaction.
Q is the satisfaction from good quality.
C is the satisfaction from compliance with requirements.
B is the satisfaction from delivery within budget/schedule.

So Glass’s equation is: S=Q+C+B.

This equation means that a given level of satisfaction can be achieved by varying any of the three elements. If the requirements are poorly defined and don’t reflect the users’ real needs, but the application matches those requirements and is delivered on budget and schedule, then user satisfaction would presumably be low, which implies low quality.
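Purely to make that trade-off visible, here is a minimal sketch in Python. The satisfaction scores are invented numbers with no objective meaning (a point I return to below); they only show how different mixes of the three factors can yield the same total.

```python
# Illustrative only: these satisfaction "scores" are invented, not
# objective measurements. The function simply restates Glass's
# equation S = Q + C + B.

def total_satisfaction(q, c, b):
    """S = Q + C + B: satisfaction from quality, from compliance with
    requirements, and from delivery within budget/schedule."""
    return q + c + b

# A slickly run project delivering a compliant but mediocre product...
print(total_satisfaction(q=2, c=5, b=5))  # 12

# ...can total the same as a shakier project delivering a fine product.
print(total_satisfaction(q=7, c=3, b=2))  # 12
```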

Restating Glass’s equation

For my purposes the equation is better presented as: Q=S-(C+B).

The cost and schedule factors (i.e. B) are certainly important, but these apply to the quality of the project, not the quality of the product. Returning to Glass’s equation, total user satisfaction is the sum of satisfaction from the project and from the product. Let’s call project satisfaction S1 and product satisfaction S2.

(S1+S2)=Q+C+B

By definition S1=B. Testers might have an interest in the quality of the project, which affects their ability to do their job, but their real role is providing information on the quality of the product, so the equation we are interested in is: S2=Q+C, or Q=S2-C.

The quality of the product is therefore the gap between user satisfaction with the product and the satisfaction they derive from the level of compliance with the requirements. Compliance with the requirements has effectively been discounted. It’s a given. If you deliver exactly what the users require and the users are happy with that then, in golf terms, you’d have scored level par. That’s pretty good. If your users are less happy than you’d expect from hitting the requirements then the quality score is negative. If the product pleases the users more than compliance alone would predict then you get a positive quality score.

An obvious reason for a negative score would be the example I mentioned earlier, a system that matched flawed requirements. The level of satisfaction will be less than you would expect from delivering the required solution, so the quality score will be negative. If the requirements were seriously flawed, but the solution overcame those problems, then you should get a positive score.

Of course these equations merely illustrate the relationship. It’s not possible to substitute numbers into the equations and get an objective numerical result. Well, you could do it, but it wouldn’t mean anything objectively justifiable.
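To show what such a substitution would look like, and nothing more, here is a toy sketch of the restated equation Q=S2-C. Again, the scores are invented and carry no objective weight; the signs of the results are the only point.

```python
# Illustrative only: invented scores with no objective meaning.
# Q = S2 - C: user satisfaction with the product, minus the satisfaction
# expected purely from compliance with the requirements.

def quality_gap(product_satisfaction, compliance_satisfaction):
    return product_satisfaction - compliance_satisfaction

print(quality_gap(8, 8))  #  0: level par, users got exactly what they asked for
print(quality_gap(5, 8))  # -3: compliant product, but less happy users
print(quality_gap(9, 6))  # +3: flawed requirements, but the solution overcame them
```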

The crucial point is that if anyone wants to know what the quality of the product is, they cannot find out simply by testing against the requirements. They must test the product itself to find out what it does for the users. The requirements are obviously an important input to the testing exercise, but they cannot be regarded as the sole input.

In order to learn about the product’s quality testers have to investigate that quality gap, the difference between what the users asked for and what they actually got. You cannot provide information about the quality of the product if you look only for confirmation that the users got what you expected. You must also take account of what the product does, regardless of whether it was asked for, and whether it matches the users’ needs.

Is quality “conformance to requirements”?

It was fascinating to read the comments on Robert Glass’s articles. Many of the readers refused to accept that quality can ever be any more than compliance with requirements.

Intriguingly, there were two strands of criticism that boil down to the same thing in practice. The first conflates quality with compliance not because that is necessarily the correct definition, but because it is the easiest concept to cope with. The definition of quality is irrelevant. All that matters is complying with the requirements. The reasoning is that we can know only what the users tell us, so we should be judged on nothing else.

The second strand of argument tried to justify the notion that quality really is a question of complying with requirements, not on brutally pragmatic grounds, but as a matter of principle. A few commenters referred to Philip Crosby, whose maxim was “the definition of quality is conformance to requirements”.

These comments missed the point by miles. Crosby was a passionate advocate of rigour in the whole development process. He believed that rigour was the key, and that it was therefore possible to get the requirements right: to produce requirements that were measurable and met the real business needs. I side with those who think his work is of more relevance to conventional production and engineering than to software development, and that Tom Gilb has far more to offer us.

However, it is a travesty of Crosby’s work to suggest that he ever advocated that we should take whatever requirements the users give us and treat them as sacrosanct. That is the implication of quoting his precept “quality is conformance to requirements” without saying anything about how to ensure that the requirements are right, or even acknowledging that Crosby’s precept depended on that.

These two strands come together in practice with the same result: the requirements are blithely assumed to be correct. Any user dissatisfaction can be shrugged off: “it’s not our fault, you got what you asked for”.

Glass’s equation and analysis create a dilemma for advocates of standards and traditional scripted testing. If they want to argue that quality is the same thing as compliance with requirements, they must justify the assumption that the requirements are not only correct but also complete, and no requirements specification can possibly detail everything an application must not do.

The implications of Glass’s equation

If the traditionalists accept that quality is distinct from requirements then they have a choice. Do they try to explain how advance scripting based on requirements (or even on expectations about how the application should behave) can be effective in learning about the real behaviour of an application that is liable to do different things, and more things, than the testers expected?

Or do the traditionalists accept that it is not the testers’ job to say anything about the total quality of the application, and that they provide only a narrow subset of the possible knowledge that is available?

I don’t believe that the traditionalists have ever addressed this dilemma satisfactorily. They cannot justify the effectiveness of scripted testing, and they dare not concede that their vision of testing is connected only loosely with quality, when their whole worldview is rooted in the assumption that they are the high priests of Quality.

If documentation-heavy, scripted testing (as implied by formal testing standards such as IEEE 829) is not about quality then the whole traditional edifice comes tumbling down. The only way it can survive is by ignoring the difficult questions that Glass’s equation poses.

Quality and “engineering”

The conflation of quality with requirements is either delusional or a symptom of indifference to the real needs of users. Paradoxically, it is often portrayed as hard-nosed professionalism, part of the quest to make software development more like civil engineering.

In reality it was part of a mindset that crippled developers for many years by adopting the trappings of engineering whilst ignoring the realities of software development. This mindset spawned practices that were counter-productive in developing high-quality applications. They were nevertheless pursued with a wilful blindness that regarded evidence of failure as proof that the dysfunctional methods had not been applied sufficiently rigorously.

In my next blog I will try to explain how this came about, and I will develop my argument about how it is relevant to the debate about standards.
