Tom Gilb and project requirements

This piece appeared in much the same form on the Software Testing Club in February 2010. I’m repeating it here because it states some points about requirements that I want to emphasise, and I want them to appear on my own site. This is what I believe, so I should say it here!

Earlier this year I went on one of Tom Gilb’s courses, the Management-Level Project Communication Workshop in London. It was a full five days, 9 to 5 each day.

It was a fantastic opportunity to hear Tom explain his ideas in some depth. I learnt a lot, and, more importantly, I’m sure I’ll be continuing to learn for a long time to come. It’s not just a matter of being taught new techniques; having one’s preconceptions challenged, and seeing doors open that can lead to more knowledge, is ultimately far more important.

Clarity of project goals

I had become disillusioned by the idea of software development as a branch of engineering (see this article on my website). That was understandable. The software development I’ve seen over the years has either been rigidly structured in an inappropriate form dictated by the needs of project management, accounting and procurement, or just too messy and improvised.

Traditional techniques and the Waterfall lifecycle were concerned with plodding through a project in a manageable manner. They didn’t offer a predictable route from an organisation’s needs or goals to a high-quality application. The predictability was restricted to the process, not the output. It didn’t matter too much where the project ended up, so long as managers could say how long it would take them to get there, even if their confidence was entirely misplaced!

Agile is great in the way it addresses the rigidity and lack of feedback that have plagued traditional projects, but I’ve worried about its ability to deal with the vision, the big picture. Prioritisation seemed random and arbitrary. I feared that resources might be deployed to satisfy the loudest manager rather than to move towards the right goal.

To be fair to Agile, this is also a failing of traditional techniques. Lengthy Waterfall projects have frequently been launched with only the vaguest understanding of what they were meant to achieve. At least the gap was more obvious with Agile, focussed as it is on the “how” rather than the “what” or “why”. Maybe I should restate that as “obvious with the benefit of hindsight”. Many of the early Agilistas insisted there was no problem.

Clarity of requirements

Throughout the week Tom kept on hammering home the point that clarity with the requirements is essential. Sure, we all know that. Clear requirements, freedom, justice, good health, love; they’re all good things, we’re all in favour of them. The trouble is that we know it, but then charge blindly off into our projects with requirements that are vague rubbish. We’ve always done it.

There have been many attempts to tackle the question of high-level Agile requirements. However, the starting point is often a “vision” that is as vague as any corporate mission statement. The vision is then broken down into more detailed elements that usually anticipate the physical solution, i.e. there is a sudden leap from vague vision to high-level design. Alternatively, the starting point is a product vision, which already anticipates the solution.

I’m not saying that is dreadfully bad. In many ways it’s better than the charade of the old structured methods, which blithely ignored their fundamental difficulty in moving from requirements to design. However, Agile can do better, and crucially its flexibility and willingness to adapt make that improvement possible.

As a test manager I’ve never been afraid to challenge vague requirements, but Tom taught me some important lessons. I’ve got to admit that sometimes it’s been a token effort: I’ve said my piece, then stepped out of the way to avoid getting flattened by the implementation express. It’s important to do what’s right, and to persevere so that the project doesn’t proceed on a fundamentally unsound basis, even though it doesn’t make you popular.

What about the non-functional requirements?

I once received a statement of non-functional requirements that didn’t have a single testable requirement in its 75 pages. It was full of fluffy ideals and targets that could be assessed only after the application had been running for months, by which time the developers would have been long gone.

We effectively re-opened the process by holding workshops to thrash out quality attributes against which we could be measured. It didn’t make me popular with my employers (though I was about to hand in my notice, so idealism came cheap) and it didn’t even fill the client with gratitude when they realised there was still plenty of work to do.

What we’ve traditionally called “non-functional” requirements (a dodgy name, but we’re stuck with it) can often be the real requirements, yet these requirements are usually specified atrociously, or missed altogether.

Tom argues, and I agree, that what we call the functional requirements are very often high-level design statements. The implication of this is that we invariably plunge into the design before we understand the requirements. The real requirements should be all about what the organisation needs to achieve, and what its goals are.

This confusion of ends and means isn’t restricted to software development. The most glaring example I’ve seen was a company that was worried about the physical security of its head office. It was decided that a barrier was required at the main entrance to the site. A camera would identify the registration numbers of employees’ cars and open the barrier.

The police were concerned about tailbacks onto the main road, so the barrier was put at the exit. This meant that criminals or terrorists would have no problems gaining access to the site, and if they chose to steal a car from the car park then the barrier would obligingly open for them. That did happen!

The real requirement was for improved security. The barrier was a design option – and not an effective one either.

Another example was the time I set up a security management process for a large client, who wanted assurance that the process would work effectively from the very start. This had been interpreted by the supplier as meaning that the client required a full-scale test, with simulated security breaches to establish whether they would be reported and how we would respond. This would have been very difficult to arrange, and extremely expensive to carry out. However, the requirement was written into the contract so I was told we would have to do it.

I was sceptical, and went back to the client to discuss their needs in detail. It turned out that their requirement was simply to be reassured that the process would work, smoothly and quickly. Pulling the right people together for a morning to walk through the process in detail would do just as well, at a tiny fraction of the cost. Once I had secured their agreement it was straightforward to have the contract changed so that it reflected their real requirement, rather than a possible solution.

Confusing possible solutions with the requirements is potentially a catastrophic mistake that can waste huge amounts of time and money and send the project down the wrong path, frantically trying to deliver things that are not really needed.

If testers are to add more value to projects than just ticking boxes, they should be vigorously challenging the confusion of requirements and design.

Quantifying the requirements

A glib high-level summary of Tom’s thinking in this area is that software development is about delivering value to the stakeholders, and value is always measurable. Always? Well, I’m not sure I can accept that in an abstract intellectual sense.

However, looking at these value requirements in a way that does make them quantifiable can provide an approximate measurement, and taking a few of these quantifiable measurements should allow a sort of triangulation that helps you home in on the real requirement. That’s my interpretation, not Tom’s argument.
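Here’s a minimal sketch, in Python, of what a quantified requirement might look like in practice. The field names loosely echo the Scale/Meter/Past/Goal keywords of Tom’s Planguage, but the requirement, the numbers and the code are all my own invention for illustration:

```python
# A vague wish: "the new system must be easy to learn".
# The quantified version pins down a scale (what we measure),
# a meter (how we measure it), a baseline, and a target level.

from dataclasses import dataclass


@dataclass
class QuantifiedRequirement:
    name: str
    scale: str   # the unit of measurement
    meter: str   # the test or process that produces the measurement
    past: float  # the current (baseline) level on that scale
    goal: float  # the level the project must reach

    def is_met(self, measured: float) -> bool:
        # On this scale lower is better; a real tool would
        # also record the direction of improvement.
        return measured <= self.goal


learnability = QuantifiedRequirement(
    name="Ease of learning",
    scale="minutes for a new clerk to complete a standard order unaided",
    meter="timed task, five randomly chosen new starters, median taken",
    past=45.0,
    goal=15.0,
)

print(learnability.is_met(12.0))  # True: the measured median beats the target
```

The point is not the code but the shift it represents: once a requirement is pinned to a scale and a meter, a tester can pass or fail it instead of arguing about adjectives.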

That would give you a much clearer target than the wishful thinking approach, but I won’t be fully convinced till I’ve applied this thinking and succeeded in converting woolly wishes into hard numbers. However, what I fully accept is that this effort is worthwhile; quantified requirements are far better than vague aspirations. As testers it’s worth campaigning for better, more objective, more testable requirements.

Is software development “engineering”?

Until I went on Tom’s course, neither traditional techniques nor Agile seemed remotely credible as “engineering”. Now I’m not so sure. I’m open to being persuaded. However, Tom has convinced me that greater rigour with the requirements, tougher challenges from the testers, and a sharper vision of what we’re trying to do are not just possible, but necessary. Whether this is engineering or not is maybe slightly academic. I’m convinced it’s the right direction.


