Tom Gilb and project requirements

This piece appeared in much the same form on the Software Testing Club in February 2010. I’m repeating it here because it states some points about requirements that I want to emphasise, and I want them to appear on my own site. This is what I believe, so I should say it here!

Earlier this year I went on one of Tom Gilb’s courses, the Management-Level Project Communication Workshop in London. It was a full five days, 9 to 5 each day.

It was a fantastic opportunity to hear Tom explain his ideas in some depth. I learnt a lot, and what is even more important is that I’m sure I’ll be continuing to learn for a long time to come. It’s not just a matter of being taught new techniques. Having one’s preconceptions challenged, and seeing doors open that can lead to more knowledge is ultimately far more important.

Clarity of project goals

I had become disillusioned by the idea of software development as a branch of engineering (see this article on my website). That was understandable. The software development I’ve seen over the years has either been rigidly structured in an inappropriate form that was dictated by the needs of project management, accounting and procurement, or it was just too messy and improvised.

Traditional techniques and the Waterfall lifecycle were concerned with plodding through a project in a manageable manner. They didn’t offer a predictable route from an organisation’s needs or goals to a high-quality application. The predictability was restricted to the process, not the output. It didn’t matter too much where the project ended up, so long as managers could say how long it would take to get there, even if their confidence was entirely misplaced!

Agile is great in the way it addresses the rigidity and lack of feedback that have plagued traditional projects, but I’ve worried about its ability to deal with the vision, the big picture. Prioritisation seemed random and arbitrary. I feared that resources might be deployed to satisfy the loudest manager rather than to move towards the right goal.

To be fair to Agile, this is also a failing of traditional techniques. Lengthy Waterfall projects have frequently been launched with only the vaguest understanding of what they were meant to achieve. At least the gap was more obvious with Agile, focussed as it is on the “how” rather than the “what” or “why”. Maybe I should restate that as “obvious with the benefit of hindsight”. Many of the early Agilistas insisted there was no problem.

Clarity of requirements

Throughout the week Tom kept on hammering home the point that clarity with the requirements is essential. Sure, we all know that. Clear requirements, freedom, justice, good health, love; they’re all good things, we’re all in favour of them. The trouble is that we know it, but then charge blindly off into our projects with requirements that are vague rubbish. We’ve always done it.

There have been many attempts to tackle the question of high-level Agile requirements. However, the starting point is often a “vision” that is as vague as any corporate mission statement. The vision is then broken down into more detailed elements that usually anticipate the physical solution, i.e. there is a sudden leap from vague vision to high-level design. Alternatively, the starting point is a product vision, which already anticipates the solution.

I’m not saying that is dreadfully bad. In many ways it’s better than the charade of the old structured methods, which blithely ignored their fundamental difficulty in moving from requirements to design. However, Agile can do better, and crucially its flexibility and yearning to improve do make it open to improvement.

I’ve never been afraid to challenge vague requirements as a test manager, but Tom taught me some important lessons. I’ve got to admit that sometimes it’s been a token effort. I’ve said my piece, then stepped out of the way to avoid getting flattened by the implementation express. It’s important to do what’s right, and to persevere so that the project doesn’t proceed on a fundamentally unsound basis. It doesn’t make you popular.

What about the non-functional requirements?

I once received a statement of non-functional requirements that didn’t have a single testable requirement in its 75 pages. It was full of fluffy ideals and targets that could be assessed only after the application had been running for months, by which time the developers would have been long gone.

We effectively re-opened the process by holding workshops to thrash out quality attributes against which we could be measured. It didn’t make me popular with my employers (though I was about to hand in my notice, so idealism came cheap) and it didn’t even fill the client with gratitude when they realised there was still plenty of work to do.

What we’ve traditionally called “non-functional” requirements (a dodgy name, but we’re stuck with it) can often be the real requirements, yet these requirements are usually specified atrociously, or missed altogether.

Tom argues, and I agree, that what we call the functional requirements are very often high-level design statements. The implication of this is that we invariably plunge into the design before we understand the requirements. The real requirements should be all about what the organisation needs to achieve, and what its goals are.

This confusion of ends and means isn’t restricted to software development. The most glaring example I’ve seen was a company that was worried about the physical security of its head office. It was decided that a barrier was required at the main entrance to the site. A camera would identify the registration numbers of employees’ cars and open the barrier.

The police were concerned about tailbacks onto the main road, so the barrier was put at the exit. This meant that criminals or terrorists would have no problems gaining access to the site, and if they chose to steal a car from the car park then the barrier would obligingly open for them. That did happen!

The real requirement was for improved security. The barrier was a design option – and not an effective one either.

Another example was the time I set up a security management process for a large client, who wanted assurance that the process would work effectively from the very start. This had been interpreted by the supplier as meaning that the client required a full-scale test, with simulated security breaches to establish whether they would be reported and how we would respond. This would have been very difficult to arrange, and extremely expensive to carry out. However, the requirement was written into the contract so I was told we would have to do it.

I was sceptical, and went back to the client to discuss their needs in detail. It turned out that their requirement was simply to be reassured that the process would work, smoothly and quickly. Pulling the right people together for a morning to walk through the process in detail would do just as well, at a tiny fraction of the cost. Once I had secured their agreement it was straightforward to have the contract changed so that it reflected their real requirement, rather than a possible solution.

Confusing possible solutions with the requirements is potentially a catastrophic mistake, one that can waste huge amounts of time and money and send the project down the wrong path, frantically trying to deliver things that are not really needed.

If testers are to add more value to projects than just ticking boxes they should be vigorously challenging the confusion of requirements and design.

Quantifying the requirements

A glib high-level summary of Tom’s thinking in this area is that software developments are about delivering value to the stakeholders, and value is always measurable. Always? Well, I’m not sure I can accept that in an abstract intellectual sense.

However, looking at these value requirements in a way that does make them quantifiable can provide an approximate measurement, and taking a few of these quantifiable measurements should allow a sort of triangulation that helps you home in on the real requirement. That’s my interpretation, not Tom’s argument.

That would give you a much clearer target than the wishful thinking approach, but I won’t be fully convinced till I’ve applied this thinking and succeeded in converting woolly wishes into hard numbers. However, what I fully accept is that this effort is worthwhile; quantified requirements are far better than vague aspirations. As testers it’s worth campaigning for better, more objective, more testable requirements.
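
To make that concrete, here is a minimal sketch, in Python, of what pinning a woolly wish down to numbers might look like. It is loosely in the spirit of Tom’s quantification ideas rather than his actual notation, and every name, figure and measurement method in it is invented purely for illustration.

    # A sketch of a quantified requirement. All names and numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class QuantifiedRequirement:
        name: str    # the stakeholder value we care about
        scale: str   # the unit we will measure it on
        meter: str   # how we will actually take the measurement
        past: float  # the baseline level today
        goal: float  # the level we are aiming for

    registration_speed = QuantifiedRequirement(
        name="Ease of registration",
        scale="Minutes for a first-time customer to complete registration",
        meter="Timed sessions with five first-time users",
        past=12.0,
        goal=3.0,
    )

    # "Make registration easier" is wishful thinking; "cut registration time
    # from 12 minutes to 3, measured in timed user sessions" can be tested.
    print(registration_speed)

Even if the figures are only approximations, a handful of measurements like this gives you something to triangulate against, which is all I meant above.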

Is software development “engineering”?

Until I went on Tom’s course, neither traditional techniques nor Agile seemed remotely credible as “engineering”. Now I’m not so sure. I’m open to being persuaded. However, Tom has convinced me that greater rigour with the requirements, tougher challenges from the testers, and a sharper vision of what we’re trying to do are not just possible, but necessary. Whether this is engineering or not is maybe slightly academic. I’m convinced it’s the right direction.

I wish I’d known I was right!

As the years have passed I’ve noticed something interesting about the way my knowledge of IT and my attitudes have changed.

When I started out I was painfully aware that I knew nothing, and that there was a whole vast universe of knowledge out there. The more I learnt, the more I realised there was to learn.

As I became more competent I started to get cocky about the little areas that I was proficient in. I could do clever things with IBM mainframe utilities, with VM/CMS execs, and with SAS routines and macros. I remained duly humble about the rest of IT.

When I started to get involved in big, conventional developments I was hugely impressed by the size and complexity of these applications. I quickly realised that such developments were enormously difficult and that there was the constant danger of screwing up.

As a humble novice I largely accepted the prevailing wisdom of the times, and tried to understand it better. Yet, there were some things I didn’t really get, and which I had to accept largely on trust, assuming that greater experience would give me a deeper, truer understanding.

As the years passed I gradually realised that my youthful puzzlement and suspicion had been well justified. I’d actually known more than I realised, but I’d been the little boy who didn’t cry out that the emperor was wearing no clothes!

What were the things I didn’t get?

Scripted testing

I remember looking on in wonder at the squads of test analysts who were working long hours churning out scripts by the hundreds, thousands even. They were in every weekend, months before the start of testing.

How could they be confident that the application would actually be delivered to match the specifications they were working from? What about the changes? How could they anticipate everything before they saw the application running? If they missed something from their spec would they really improvise appropriately when they executed the testing?

It seemed strange to me, but I accepted that these were the experts. It might look hugely inefficient, but obviously that had to be the most effective way to ensure that testing was as thorough as it had to be. No!

Getting the requirements right

That leads on to the next thing that left me uncomfortable as a beginner. The process of deriving the requirements was horribly prolonged and laborious. Even assuming that the business didn’t change, it hardly seemed plausible that the analysts could get all the enormous detail right and documented precisely before the programming could start. But then what did I know? The analysts were obviously more experienced, patient and painstaking than I was.

Well, they were, but that didn’t mean they were getting it right. It was only years later that I realised that the problem wasn’t that we were deriving the requirements badly. The problem was that you cannot get the requirements right first time, and the users cannot specify requirements till they have seen what is possible.

The process of building the application changes the requirements, and good development practices have to accept that, rather than pretending that the users are feckless and undisciplined.

Code it all, then test it all

When I was developing small scale applications in tiny teams we didn’t follow formal standards. We worked the way that suited us.

My preferred technique was to work out what I had to do, splitting the work into discrete chunks. Then I’d think through how I’d get each bit to work. I wasn’t thinking “test before I code”. It was just that it seemed natural to know before I started coding how I’d be able to satisfy myself the code was working; to make sure I’d code something I knew I’d be happy with, rather than something that would leave me wondering about the quality.

So I’d build a bit, test a bit, and gradually extend my programs into something useful, then the programs into the full application. I never thought of it as testing. It was just easy coding.
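
If it helps to picture the habit, it looked something like this little Python sketch; the function and its checks are invented for illustration, and the point is only that the check is thought out alongside each small chunk of code rather than bolted on at the end.

    # "Build a bit, test a bit": each small chunk comes with its own check.
    def parse_amount(text: str) -> float:
        """Turn a user-entered amount like ' 12.50 ' into a number."""
        return float(text.strip())

    def check_parse_amount() -> None:
        # I knew before coding what "working" would mean for this chunk.
        assert parse_amount("12.50") == 12.5
        assert parse_amount("  3 ") == 3.0

    if __name__ == "__main__":
        check_parse_amount()
        print("parse_amount does what I expected")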

I could see that the developers of the big applications didn’t do it that way. They’d churn out whole programs before testing, zip through a few unit tests and bolt things together into an integration test. I just accepted that my way was only possible with the small apps. The big boys had to do it differently.

Incidentally, my training had left me deeply committed to structure and code quality. After writing a program I’d always go back through it to remove redundant code, ensuring that common processing was pulled together into common routines and modules. Elegance was everything, spaghetti code was an abomination. It took a while for me to twig that refactoring wasn’t some fancy new trick. It was just good coding.
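
As a trivial, invented illustration of the sort of tidying I mean (again in Python, for consistency with the sketches above): the duplicated processing is pulled into one shared routine, and the behaviour does not change.

    # Before the tidy-up, the same clean-up logic was repeated in two places:
    #   customer_name = raw["name"].strip().lower()
    #   supplier_name = raw["name"].strip().lower()

    # After the tidy-up, the common processing lives in a single routine.
    def normalise_name(raw_name: str) -> str:
        """Clean-up shared by every record type."""
        return raw_name.strip().lower()

    def load_customer(raw: dict) -> dict:
        return {"name": normalise_name(raw["name"]), "type": "customer"}

    def load_supplier(raw: dict) -> dict:
        return {"name": normalise_name(raw["name"]), "type": "supplier"}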

Structured design (boo!)

I was sent on various courses on structured techniques. Structured analysis made sense, more or less. At least, I could follow the internal logic of the whole thing, though I wasn’t entirely convinced about whether it really worked in practice.

The thing that I really didn’t get, however, was how you derived the outline design from the requirements. It seemed suspiciously like you relied on your own personal judgement, guesswork even. The design didn’t seem to flow logically and inevitably from the precision and detail of the structured analysis. I concluded that maybe I wasn’t as smart as I thought, and that all would become clearer in time. I might be winging it using intuition, but the experts were doing it properly.

It never did become clearer, and I was amazed to discover years later that structured techniques had been rumbled. The founders of structured techniques really had been relying on intuition. There was no empirical basis to their work, and there was a genuine discontinuity between the requirements and the design. See my article on testing and standards from the December 2009 edition of Testing Experience magazine for an explanation, and some references. The relevant part is “IEEE829 & the motivation behind documentation”.

The lesson?

The lesson from this is definitely not that I was an unrecognised genius. Nor is the lesson that youngsters probably know better than the old hands who’ve got set in their ways. Bright youngsters have a lot to offer by asking penetrating questions, and forcing experienced staff to explain and justify why things are done the way they are. But that’s not my point.

The real point is that when I was working in these small teams, without heavyweight processes and standards, we defaulted to a natural way of developing software. The Agile movement goes with the grain of such natural development practices. Structured techniques chased after the chimera of software development as a variant of civil engineering. In reality they were no more than a mutant that was neither engineering nor good software development.

I’m not dismissing standards and processes, but they should be guidance, rather than rules. They should promote good, natural practices rather than the rigid, sclerotic approaches that attempted to resolve problems by churning out more documents. It’s a shame that I had to figure things out for myself as a beginner, rather than being guided by useful standards.

It’s been great to see the way that the more organic, natural approaches such as Agile and Exploratory Testing have energised the industry. They might not always be right or appropriate, but they do reflect deep truths about the right way to produce good software. Any plausible approach has to take these truths on board. Ignoring them guarantees waste, repeated failure, demoralised developers and cynical users. Twenty years ago, that was pretty much business as usual!

Have you ever instinctively known better as a beginner than you realised? Did you have the nerve to criticise the emperor’s clothing?

The O2 registration series – wrapping up

This is the fourth, and last, entry in a series about how testers should remember usability when they’re testing websites, using the O2 registration process as an example.

The previous posts can be found at “Usability and O2 registration”, “O2 website usability: testing, secrets and answers” and “O2 website usability: beating the user up”.

If you remember, I only started out on this expedition through O2’s registration process because I’d linked my personal and business phones. My business phone then disappeared from view. There was no sign of it on my personal account and the business account ID was crippled. I couldn’t log into it, and so I decided to re-register the business phone and plodded my painful way through O2’s registration process.

The mystery of the missing phone account – solved!

[Screenshot: the “My O2” page]

After successfully re-registering my business phone I logged back into the personal account and stared at the “My O2” page, wondering why the original business account had vanished.

I decided to go ahead and link the new business account to the personal one. Maybe I should pretend I was trying to replicate a bug, but who am I trying to kid? There was no-one to report it to. Just like a little boy, I broke it once, then wanted to do the same thing again to see if it would break again.

Whatever I was doing when I clicked again on the button under “Link my accounts”, I was not looking for my business phone. Why would I look there? But there it was.

[Screenshot: Link/View my other accounts]

This is the screen that faced me. I’ve masked the IDs and my personal number. My original business phone account hadn’t disappeared after all. It had just been remarkably well hidden.

Information architecture – it matters

I’ve seen information architecture derided as a trendy and pretentious field – but only by people who don’t know what they’re talking about. The O2 site is a good example of why it’s needed. Nielsen and Loranger have this to say in their book “Prioritizing Web Usability”.

“Chaotic design leads to dead ends and wasted effort. Hastily thrown-up web sites without effective information schemes prevent users from getting to the information they seek. When this happens, they may give up, or even worse, go to a different site.

A well-structured site gives users what they want when they want it. One of the biggest compliments a site can get is when people don’t comment on its structure in user testing. It’s the designers’ job to worry about the site’s structure, not the users’”.

Precisely; a well structured site gives users what they want, when they want it. They shouldn’t have to puzzle it out. They shouldn’t have to think, “now what do I do?”. They should just be able to do it. It’s that simple, but companies keep on screwing it up and wasting our time. Steve Krug has written an excellent short book on the subject with the very apt title “Don’t make me think!”.

When I originally linked my accounts I should have been able to look at the “My O2” page and instantly know where I’d have to go to see my business phone account. The site works functionally, but burying the account details of the other phones beneath “Link my accounts … Find out more” was guaranteed to confuse users.

Actually, on second thoughts, this is maybe not such a great example of the need for information architecture. The problem should have been so obvious that anyone could have spotted it early. You shouldn’t have to be any sort of specialist to see this one. The testers should have called it out, unless of course they were sticking rigidly to scripts intended to establish whether the functionality was working according to the documented requirements.

Did no-one put their hands up and say, “this is daft”? Surely that is part of a tester’s job?

You wouldn’t even need to re-visit the site structure to address it. Simply changing the labels and text would have made the page much less confusing. It might not have been elegant, but the change could have been made very late and it would have worked.

Ideally, however, the problem would have been detected early enough for the site structure to be changed easily, i.e. during design itself. The same applies to the other O2 usability problems I discussed in previous blogs. If usability defects are found only during the user testing at the end of the project then it’s usually too late. The implementation express is at full steam, and it’s stopping for no-one. The project manager has got the team shovelling coal into the engine as fast as they can, the destination’s in sight, and if some testers or users start whining about usability then they need to be reminded of the big picture. “Ach, it’s just a cosmetic problem and, even if it isn’t, there’s a workaround”.

Early testing – what it should mean

If usability testing is to be effective it has to be built into the design process. Under the V Model testers learn the “test early” mantra. In practice it often amounts to little more than signing off massive specifications, without enough time to read them properly, never mind actually inspect them carefully. The testers then get to work churning out vast numbers of test scripts, the more the better. The more test scripts, the more thorough the testing? Please don’t lift that quote out of context. I value my reputation!

No. Early testing has to be formative, early enough to shape the requirements and the design. We have a lot to learn from User Experience (UX) professionals, who are not called in as often as they should be. There is no end of techniques that can be used, yet they don’t seem to be part of the testers’ normal repertoire.

Nielsen’s Discount Usability Engineering and Steve Krug’s “lost our lease” usability testing in particular are worth checking out. This article I wrote for Testing Experience has a bit more on the subject, and detailed references. See the section “Tactical improvements testers can make”.

What this series has been all about

There are other problems I could have discussed. In particular, there’s evidence of a lack of basic data analysis; it looks as though the designers never clarified exactly what the data meant. I’ve never seen a defect report stating that the data analysis was flawed, yet that would be the root cause; the visible defects would probably just be symptoms of that deeper problem. The point is that proactive user involvement at an early stage could have detected such root problems.

The theme of this series of articles about O2 isn’t how awful that company is. They’re no worse than many others. They just happened to come to my attention at the right time. It’s only partly about the importance of usability, and incorporating that into our testing. That is certainly important, and much neglected, but the real message is that testers have to be involved early and asking challenging questions of the whole team throughout the development.

Testers should never be constrained by scripts. They should never be expected to derive all their tests from documented requirements. Taking either approach means having your head down, focussing on one little bit after another. We should have our heads up, making sure that people understand and remember the big picture.

“What are we doing? What will the users want to do? Does this particular feature help? Is it a hindrance? Why are we doing it this way? Could it be done better?”

Questions like that should be constantly running through our minds. As Lisa Crispin put it in a tweeted comment on one of the earlier O2 posts, we shouldn’t be testing slavishly to the requirements. We should be thinking about the real user experience, and using our own business knowledge. Exactly!