The quality gap – part 2

In my last blog, the first of two on the theme of “The Quality Gap”, I discussed the harmful conflation of quality with requirements and argued that it was part of a mindset that hindered software development for decades.

In my studies and reading of software development and its history, I’ve come to believe that academics and industry gurus misunderstood the nature of software development, regarding it as a more precise and orderly process than it really was, or at least than it could reasonably be. They saw practitioners managing projects with a structured, orderly process that superficially resembled civil engineering or construction management, and that fitted their perception of what development ought to be.

They missed the point that developers were managing projects that way because it was the simplest way to manage a chaotic and unpredictable process, and not because it was the right way to produce high quality software. The needs of project management were dictating development approaches, not the needs of software development.

The pundits drew the wrong conclusion from observing the uneasy mixture of chaos and rigid management. They decided that the chaos wasn’t the result of developers struggling to cope with the realities of software development and with an inappropriate management regime; it was the result of a lack of formal methods and tools and, crucially, a lack of discipline.

Teach them some discipline!

The answer wasn’t to support developers in coming to grips with the problems of development; it was to crack down on them and call for greater order and formality.

Some of the comments from the time are amusing, and highly revealing. Barry Boehm approvingly quoted a survey in 1976: “the average coder… [is]… generally introverted, sloppy, inflexible, in over his head, and undermanaged”.

Even in the 1990s Paul Ward and Ed Yourdon, two of the proponents of structured methods, were berating developers for their sins and moral failings.

Ward – “the wealth of ignorance… the lack of professional discipline among the great unwashed masses of systems developers”.

Yourdon – “the majority of software development organisations operate in a ‘Realm of Darkness’, blissfully unaware of even the rudimentary concepts of structured analysis and design”.

This was pretty rich considering the lack of theoretical and practical underpinning of structured analysis and design, as promoted by Ward and Yourdon. See this part of an article I wrote a few years ago for a fuller explanation. The whole article gives a more exhaustive argument against standards than I’m providing here.

Insulting people is never a great way to influence them, but that hardly mattered. Nobody cared much about what the developers themselves thought. Senior managers were convinced and happily signed the cheques for massive consultancy fees to apply methods built on misconceived conceptual models. These methods reinforced development practices which were damaging, certainly from the perspective of quality (and particularly usability).

Quality attributes

Now we come to the problem of quality attributes. For many years there has been a consensus that a high quality application should deliver the required levels of certain quality attributes, pretty much the same set that Glass listed in the article I referred to in part 1: reliability, modifiability, understandability, efficiency, usability, testability and portability. There is debate over the members of this set, and their relative importance, but there is agreement that these are the attributes of quality.

They are also called “non-functional requirements”. I dislike the name, but it illustrates the problem. The relentless focus of traditional, engineering-obsessed development was on the function, and thus the functional requirements, supposedly in the name of quality. Yet the very attributes that a system needed in order to be of high quality were shunted to one side in the development process and barely considered.

I have never seen these quality attributes given the attention they really require. They were often considered only as a lame afterthought, specified as vague aspirations that lacked the precision needed for testing. Where there were clear criteria and targets, they could usually be assessed only after the application had been running for months, by which time the developers would have been long gone. What they did not do, or did not do effectively, was shape the design.

The quality attributes are harder to specify than functional requirements; harder, but not impossible. However, the will to specify clear and measurable quality requirements was sadly lacking. All the attention was directed at the function, a matter of logical relationships, data flows and business rules.
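To make that contrast concrete, here is a minimal sketch of how a vague aspiration such as “the system should be fast” might instead be pinned down as a measurable, testable quality requirement. Everything in it is hypothetical: the 500 millisecond budget, the 95th percentile target and the `fetch_order_page` stand-in are illustrative assumptions, not drawn from any real specification or standard.

```python
import time

# Hypothetical, illustrative targets: "95% of order-page requests
# complete within 500 ms" instead of "the system should be fast".
RESPONSE_TIME_BUDGET_MS = 500
PERCENTILE = 95
SAMPLE_SIZE = 100

def fetch_order_page():
    """Stand-in for a real request to the system under test."""
    start = time.perf_counter()
    # ... issue the real request here ...
    return (time.perf_counter() - start) * 1000  # elapsed time in ms

def test_order_page_response_time():
    # Collect a sample of response times and check the chosen
    # percentile against the budget; a failure is an early,
    # unambiguous signal rather than a post-launch surprise.
    timings = sorted(fetch_order_page() for _ in range(SAMPLE_SIZE))
    p95 = timings[int(len(timings) * PERCENTILE / 100) - 1]
    assert p95 <= RESPONSE_TIME_BUDGET_MS, (
        f"95th percentile was {p95:.1f} ms; "
        f"budget is {RESPONSE_TIME_BUDGET_MS} ms"
    )
```

The specific numbers matter less than the shape: a quality attribute expressed this way can be checked while the system is still being built, and can therefore shape the design, rather than being assessed months after go-live when the developers are long gone.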

The result was designs that reflected what the application was supposed to do and neglected how it would do it.

This problem was not attributable to incompetent developers and designers who failed to follow the prescribed methods properly. The problem was a consequence of the method, and one of the main reasons was the difficulty of finding the right design.

The design paradox

Traditional development, and structured methods in particular, had a fundamental problem, quite apart from the neglect of quality attributes, in trying to derive the design from the requirements. Again, that same part of my article on testing and standards explains how these methods matched the mental processes of bad designers and ignored the way that successful designers think.

It’s a paradox of the traditional approach to software development that developers did their designing both too early and too late. They subconsciously fixed on design solutions too early, while they should only have been trying to understand the users’ goals and high level requirements. The requirements would be captured in a way that assumed and constrained the solution. The analysts and designers would then work their way through detailed requirements to a design that was not exposed to testing until it was too late to change easily, if it was possible to change it at all.

Ignoring reality

So software development, in attempting to be more like a conventional engineering discipline, was adopting the trappings of formal engineering, whilst ignoring its own inability to deal with issues that a civil engineer would never dream of neglecting.

If software engineering really was closely aligned to civil engineering it would have focussed relentlessly on practical problems. Civil engineering has to work. It is a pragmatic discipline and cannot afford to ignore practical problems. Software engineering, or rather the sellers of formal methods, could be commercially highly successful by ignoring problems and targeting their sales pitch at senior managers who didn’t understand software development, but wrote the cheques.

Civil engineering has sound scientific and mathematical grounding. The flow from requirements to design is just that: a flow, rather than a series of jumps from hidden assumptions to arbitrary solutions.

Implicit requirements (e.g. relating to safety) in civil engineering are quite emphatically as important as those that are documented. They cannot be dismissed just because the users didn’t request them. The nature of the problem engineers are trying to solve must be understood so that the implicit requirements are exposed and addressed.

In civil engineering, designs are not turned into reality before anyone is certain that they will work.

These discrepancies between software development and civil engineering have been casually ignored by the proponents of the civil engineering paradigm.

So why did the civil engineering paradigm survive so long?

There are two simple reasons for the enduring survival of this deeply flawed worldview. It was comforting, and it has been hugely profitable.

Developers had to adopt formal methods to appear professional and win business. They may not have really believed in their efficacy, but it was reassuring to be able to follow an orderly process. Even the sceptics could see the value of these methods in providing commercial advantage regardless of whether they built better applications.

The situation was summed up well by Brian Fitzgerald, back in 1995.

In fact, while methodologies may contribute little to either the process or product of systems development, they continue to be used in organisations, principally as a “comfort factor” to reassure all participants that “proper” practices are being followed in the face of the stressful complexity associated with system development.

Alternately, they are being used to legitimate the development process, perhaps to win development contracts with government agencies, or to help in the quest for ISO-certification. In this role, methodologies are more a placebo than a panacea, as developers may fall victim to goal displacement, that is, blindly and slavishly following the methodology at the expense of actual systems development. In this mode, the vital insight, sensitivity and flexibility of the developer are replaced by automatic, programmed behaviour.

The particular methodologies about which Fitzgerald was writing may now be obsolete and largely discredited, but the mindset he describes is very much alive. The desire to “legitimate the development process” is still massively influential, and it is that desire that the creators of ISO 29119 are seeking to feed, and to profit from.

However, legitimising the development process, in so far as it means anything, requires only that developers should be able to demonstrate that they are accountable for the resources they use, and that they are acting in a responsible manner, delivering applications as effectively and efficiently as they can given the nature of the task. None of that requires exhaustive, prescriptive standards. Sadly many organisations don’t realise that, and the standards lobby feeds off that ignorance.

The quality equation that Robert Glass described, and which I discussed in the first post of this short series, may be no more than a simple statement of the obvious. Software quality is not simply about complying with the requirements. That should be obvious, but it is a statement that many people refuse to acknowledge. They do not see that there is a gap between what the users expected to get and their perceptions of the application when they get it.

It is that gap which professional testers seek to illuminate. Formal standards are complicit in obscuring the gap. Instead of encouraging testers and developers to understand reality, they encourage a focus on documentation and on what was expected. They reinforce assumptions rather than question them. They confuse the means with the end.

I’ll sign off with a quote that sums up the problem with testing standards. It’s from “Information Systems Development: Methods-in-Action” by Fitzgerald, Russo and Stolterman. The comment came from a developer interviewed during research for their book.

“We may be doing wrong but we’re doing so in the proper and customary manner”.


2 thoughts on “The quality gap – part 2”

  1. Great read there, James. Thanks for sharing your insights. I like your point about standards reinforcing assumptions rather than helping people challenge them. To me, standardisation is dangerous precisely because it can put people off challenging things because “it’s what’s written down”.

    Thanks,

    Stephen
