Games & scripts – stopping at red lights

I’ve been reading about game development lately. It’s fascinating. I’m sure there is much that conventional developers and testers could learn from the games industry. Designers need to know their users, how they think and what they expect. That’s obvious, but we’ve often only paid lip service to these matters. The best game designers have thought much more seriously and deeply about the users than most software developers.

Game designers have to know what their customers want from games. They need to understand why some approaches work and others fail. Developing the sort of games that players love is expensive. If the designers get it wrong then the financial consequences are ruinous. Inevitably the more thoughtful designers stray over into anthropology.

That is a subject I want to write about in more depth, but in the meantime here is a short essay, in slightly amended form, that I wrote for Eurostar’s Test Huddle, prompted by a blog post by the game designer Chris Bateman. Bateman was talking about the nature of play and games, and an example he used made me think about how testing has traditionally been conducted.

“Ramon Romero of Microsoft’s Games User Research showed footage of various random people playing games for the first time. I was particularly touched by the middle aged man who drove around in Midtown Madness as if he was playing a driving simulator. ‘This is a great game’, he said, as he stopped at a red light and waited for it to change.”

That’s ridiculous isn’t it? Treating a maniacal race game as a driving simulator? Maybe, but I’m not sure. That user was enjoying himself, playing the game entirely in line with his expectations of what the game should be doing.

The story reminded me of testers who embark on their testing armed with beliefs about what the game, sorry, the application, should be doing that are just as wildly misplaced. They might have exhaustive scripts generated from requirements documents that tell them exactly what the expected behaviour should be, and they could still be dangerously deluded.

Most of my experience has been with financial applications. They have been highly complicated, and the great concern was always about the unexpected behaviour, the hidden functionality that could allow users to steal money or screw up the data.

Focusing on the documented requirements, happy paths and the expected errors is tackling an application like that Midtown Madness player: driving carefully around the city, stopping at red lights and scrupulously obeying the law. Then, when the application is released, the real users rampage around discovering what it can really do.

Was that cheerfully naïve Midtown Madness player really ridiculous? He was just having fun his way. He wasn’t paid a good salary to play the game. The ones who are truly ridiculous are the testers who are paid to find out what applications can do and naively think they can do so by sticking to their scripts. Perhaps test scripts are rather like traffic regulations. Both say what someone thinks should be going on, but how much does that actually tell you about what is really happening out there, on the streets, in the wild where the real users play?


Service Virtualization interview about usability

This interview with Service Virtualization appeared in January 2015. Initially when George Lawton approached me I wasn’t enthusiastic. I didn’t think I would have much to say. However, the questions set me thinking, and I felt they were relevant to my experience so I was happy to take part. It gave me something to do while I was waiting to fly back from EuroSTAR in Dublin!

How does usability relate to the notion of the purpose of a software project?

When I started in IT over 30 years ago I never heard the word usability. It was “user friendliness”, but that was just a nice thing to have. It was nice if your manager was friendly, but that was incidental to whether he was actually good at the job. Likewise, user friendliness was incidental. If everything else was ok then you could worry about that, but no-one was going to spend time or money, or sacrifice any functionality, just to make the application user friendly. And what did “user friendly” mean anyway? “Who knows? Who cares? We’ve got serious work to do. Forget about that touchy feely stuff.”

The purpose of software development was to save money by automating clerical routines. Any online part of the system was a mildly anomalous relic of the past. It was just a way of getting the data into the system so the real work could be done. Ok, that’s an over-simplification, but I think there’s enough truth in it to illustrate why developers just didn’t much care about the users and their experience. Development moved on from that to changing the business, rather than merely changing the business’s bureaucracy, but it took a long time for these attitudes to shift.

The internet revolution turned everything upside down. Users are no longer employees who have to put up with whatever they’re given. They are more likely to be customers. They are ruthless and rightly so. Is your website confusing? Too slow to load? Your customers have gone to your rivals before you’ve even got anywhere near their credit card number.

The lesson that’s been getting hammered into the heads of software engineers over the last decade or so is that usability isn’t an extra. I hate the way that we traditionally called it a “non-functional requirement”, or one of the “quality criteria”. Usability is so important and integral to every product that telling developers that they’ve got to remember it is like telling drivers they’ve got to remember to use the steering wheel and the brakes. If they’re not doing these things as a matter of course they shouldn’t be allowed out in public. Usability has to be designed in from the very start. It can’t be considered separately.

What are the main problems in specifying for and designing for software usability?

Well, who’s using the application? Where are they? What is the platform? What else are they doing? Why are they using the application? Do they have an alternative to using your application, and if so, how do you keep them with yours? All these things can affect decisions you take that are going to have a massive impact on usability.

It’s payback time for software engineering. In the olden days it would have been easy to answer these questions, but we didn’t care. Now we have to care, and it’s all got horribly difficult.

These questions require serious research plus the experience and nous to make sound judgements with imperfect evidence.

In what ways do organisations lose track of the usability across the software development lifecycle?

I’ve already hinted at a major reason. Treating usability as a non-functional requirement or quality criterion is the wrong approach. That segregates the issue. It’s treated as being like the other quality criteria, the “…ities” like security, maintainability, portability, reliability. It creates the delusion that the core function is of primary importance and the other criteria can be tackled separately, even bolted on afterwards.

Lewis & Rieman came out with a great phrase fully 20 years ago to describe that mindset. They called it the peanut butter theory of usability. You built the application, and then at the end you smeared a nice interface over the top, like a layer of peanut butter (PDF, opens in new tab).

“Usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. … If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take.”

Of course they were talking specifically about the idea that usability was a matter of getting the interface right, and that it could be developed separately from the main application. This was an incredibly damaging fallacy amongst usability specialists in the 80s and 90s. Experts like Hartson & Hix, Edmonds, and Green made a huge effort to justify the idea. Perhaps the arrival of Object Oriented technology contributed to the confusion. A low level of coupling, so that different parts of the system are independent of each other, is a good thing. I wonder if that lured usability professionals into believing what they wanted to believe, that they could be independent from the grubby developers.
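To make the architectural idea concrete, here is a minimal Python sketch of what that low coupling looks like; it is my own hypothetical illustration, not drawn from Lewis & Rieman or the OO literature. The core logic knows nothing about presentation, so the interface layer can in principle be swapped out wholesale. The trap described above is reading that independence as meaning the user experience can be designed separately too; in practice any interface can only expose the workflow the core permits.

```python
# Hypothetical example: a loosely coupled core and presentation layer.

class AccountService:
    """Domain logic: knows nothing about how results are presented."""

    def __init__(self):
        self._balances = {}

    def deposit(self, account, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account):
        return self._balances.get(account, 0)


def render_balance_text(service, account):
    """Presentation layer: one of many possible 'skins' over the core.

    Low coupling means this could be replaced by a GUI or web view
    without touching AccountService. It does NOT make the user
    experience independent of the core: the interface can only
    surface the workflow the core allows.
    """
    return f"Account {account}: {service.balance(account)} GBP"


service = AccountService()
service.deposit("ACC-1", 150)
print(render_balance_text(service, "ACC-1"))
```

The names and the banking scenario are invented for illustration; the point is only that swappable interfaces (an architectural property) were mistaken for separable usability (a design property).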

Usability professionals tried to persuade themselves that they could operate a separate development lifecycle that would liberate them from the constraints and compromises that would be inevitable if they were fully integrated into development projects. The fallacy was flawed conceptually and architecturally. However, it was also a politically disastrous approach. The usability people made themselves even less visible, and were ignored at a time when they really needed to be getting more involved at the heart of the development process.

As I’ve explained, the developers were only too happy to ignore the usability people. They were following methods and lifecycles that couldn’t easily accommodate usability.

How can organisations incorporate the idea of usability engineering into the software development and testing process?

There aren’t any right answers, certainly none that will guarantee success. However, there are plenty of wrong answers. Historically in software development we’ve kidded ourselves into thinking that the next fad, whether Structured Methods, Agile, CMMi or whatever, will transform us into rigorous, respected professionals who can craft high quality applications. Now some (like Structured Methods) suck, while others (like Agile) are far more positive, but the uncomfortable truth is that it’s all hard and the most important thing is our attitude. We have to acknowledge that development is inherently very difficult. Providing good UX is even harder and it’s not going to happen organically as a by-product of some over-arching transformation of the way we develop. We have to consciously work at it.

Whatever the answer is for any particular organisation it has to incorporate UX at the very heart of the process, from the start. Iteration and prototyping are both crucial. One of the few fundamental truths of development is that users can’t know what they want and like till they’ve seen what is possible and what might be provided.

Even before the first build there should have been some attempt to understand the users and how they might be using the proposed product. There should be walkthroughs of the proposed design. It’s important to get UX professionals involved, if at all possible. I think developers have advanced to the point that they are less likely to get it horribly wrong, but actually getting it right, and delivering good UX is asking too much. For that I think you need the professionals.

I do think that Agile is much better suited to producing good UX than traditional methods, but there are still dangers. A big one is that many Agile developers are understandably sceptical about anything that smells of Big Up-Front Analysis and Design. It’s possible to strike a balance and learn about your users and their needs without committing to detailed functional requirements and design.

How can usability relate to the notion of testable hypothesis that can lead to better software?

Usability and testability go together naturally. They’re also consistent with good development practice. I’ve worked on, or closely observed, many applications where the design had been fixed and the build had been completed before anyone realised that there were serious usability problems, or that it would be extremely difficult to detect and isolate defects, or that there would be serious performance issues arising from the architectural choices that had been made.

We need to learn from work that’s been done with complexity theory and organisation theory. Developing software is mostly a complex activity, in the sense that there are rarely predictable causes and effects. Good outcomes emerge from trialling possible solutions. These possibilities aren’t just guesswork. They’re based on experience, skill, knowledge of the users. But that initial knowledge can’t tell you the solution, because trying different options changes your understanding of the problem. Indeed it changes the problem. The trials give you more knowledge about what will work. So you have to create further opportunities that will allow you to exploit that knowledge. It’s a delusion that you can get it right first time just by running through a sequential process. It would help if people thought of good software as being grown rather than built.

Why do UX evangelists give up?

People often dismiss Twitter for being a pointless waste of time, but the other day I had a chat that illustrated how useful and interesting Twitter can be if you’re prepared to use it thoughtfully.

Charlotte Lewis tweeted a link to a comment I’d made on Craig Tomlin’s blog. “UX evangelist as step 1 of ux-ing an org, huh…See @James_Christie comment”.

Craig had argued in his blog that there are five levels of UX maturity.

Level 1 – IT has responsibility for UX.

Level 2 – Operations (i.e. the department that administers the organisation) has responsibility for UX.

Level 3 – Marketing has responsibility for UX.

Level 4 – UX is independent and equal to IT, Operations and Marketing.

Level 5 – UX is independent and has authority over IT, Operations and Marketing.

I had suggested in my comment that there is an important lower, earlier stage in the evolution of UX within an organisation. That is when an enthusiastic and well informed individual has identified that the organisation’s neglect of UX is a problem and starts lobbying for changes.

I was curious about Charlotte’s tweet and responded, asking why she’d dug out my comment. She said that a feed she follows had highlighted it, and she felt that it was still valid. We then had a chat about factors allowing an organisation to move from the initial level 0 that I’d identified to level 1.

I thought that the question was framed the wrong way round. If there is an enthusiastic and well-informed advocate of UX within the organisation then progress should follow. I thought it would be more interesting, and useful, to look at the factors that block UX evangelists, hence this blog.

What makes a UX evangelist give up?

I’d never considered the problem in such explicit terms before. I had thought about the reasons that organisations find themselves in what I called the usability death zone in my comment in Craig’s blog. The death zone is where organisations aren’t even aware that there is a problem. Usability and UX simply aren’t on their radar.

There are three groups of reasons for organisations staying in the death zone: the way that software engineering matured, the isolation of usability professionals, and the culture of organisations.

I wrote my Masters dissertation on the reasons why usability hadn’t penetrated software engineering. That really looked at the first two sets of reasons.

My conclusions were that traditional development methods, structured techniques and procurement practices had made it extremely difficult for software engineers to incorporate usability. On the other hand usability professionals had isolated themselves because of their lack of understanding of how commercial software was developed, and their willingness to stand aside from the development mainstream and carry out usability testing when it was too late to shape the application.

The whole dissertation is available online.

Cultures that suffocate

What I didn’t consider in my dissertation were the cultural factors that block people who’ve identified the problem.

The first group of problems I mentioned are historical features of software engineering. Likewise, the second set of problems arise from the way the UX profession developed. However, the cultural factors are much more widespread, and stop many types of organisations from improving.

The common theme with these cultural factors is that the organisation has become rigid and has stopped learning, learning from its customers, its employees and its rivals.

Large organisations can become so complex that many employees are totally cut off from the real purpose of the organisation, i.e. selling products or providing a service to customers. For these employees the purpose of the organisation has become the smooth running of the organisation.

Large numbers of experienced, capable employees work hard, follow processes, calculate metrics and swap vast numbers of emails, “fyi”. They are subject to annual performance reviews in which targets are set, achievement is measured and rewards flow to those whose performance conforms to those idealised targets.

Compliance with processes and standards becomes the professional and responsible way to approach work. Indeed, professionalism and conformity can become indistinguishable from each other. Improvements are possible, but they are strictly marginal increments. The organisation learns how to carry on working the same way, but slightly more efficiently and effectively.

Sure, in software development CMMI can lead to greater rigour and to steady improvements, but they are unlikely to be radical. As Ben Simo put it when he contributed to an article I wrote on testing standards; “if an organisation is at a level less than the intent of level 5, CMM seems to often lock in ignorance that existed when the process was created”.

Seemingly radical change certainly does occur, but it is usually a case of managers being replaced, organisation structures being put through the blender, or whole departments being outsourced. All this chaos and disruption is predicated on the need to produce a more polished and refined version of the past, not to do anything radically different or better.

Such maturity models effectively discourage people from saying, “hang on, we’re on the wrong track here”. People who do try to speak out and to highlight fundamental flaws in working practices are liable to be seen as rebels and troublemakers, not team players.

Improvement is tied to metrics. The underlying assumption on which this paradigm rests is that measurement and metrics provide an objective insight into reality. The only acceptable insights are objective, and there is a hugely damaging and unchallenged false assumption that objectivity requires numbers. If there is a problem it will be detected in the numbers. Unfortunately these numbers are the product of the existing worldview and simply affirm that worldview.

Potential usability evangelists can see the problem, and understand its cause, but working within such a constrained and blinkered environment they might as well be trying to make their colleagues understand an alternative universe in which the laws of physics don’t apply.

The metrics were designed only to illuminate the problems that were imagined. Those problems that were never conceived lack the numerical evidence that justifies reform. Without such evidence the concerns of individuals are dismissed as subjective, as a matter of opinion rather than objective fact.

As I said, such a culture inhibits all sorts of reform, but I believe that it would be particularly dispiriting for budding usability evangelists. Faced with rejection and crushing disapproval they would either learn to conform or they would leave for a more enlightened employer.

Is the picture I’ve painted accurate? I’ve no figures or hard evidence, but my argument ties in with my own experience of how a company’s culture can suffocate innovation. Certainly, too much focus on measurement can be damaging. That is well established. See Robert Austin’s book, “Measuring and Managing Performance in Organizations”. Can anyone shed light on this? What is it that stops organisations moving out of the usability death zone? What makes the UX evangelists give up? I’d love to hear what people think about this problem.

Prototyping – making informed decisions at the right time

The day after I wrote my last blog about taking decisions I went to a talk arranged by the Scottish Usability Professionals Association, “An Introduction to Prototyping”.

The speaker was Neil Allison of the University of Edinburgh’s Website Development Project.

There’s no need for me to discuss Neil’s talk at length because he’s posted it here.

Neil used a phrase that leapt out at me. Prototyping “helps us take informed decisions at the right time”.

That was exactly what I was thinking about when I wrote in my last blog about taking decisions at the right time, the last responsible moment in Lean terms.

Neil’s phrase summed up the appeal of prototyping perfectly for me. We can take informed decisions at the right time; not take poor, arbitrary or ill-informed decisions when we’ve simply run out of options and have to take whatever default option we’re left with.

In a later email Neil made the point that I keep on trying to make; that usability testing should shape the design, rather than evaluate it.

“When I do usability testing, it’s to check the requirements and the ease of use of a preferred solution. Usually before development begins in earnest as it’s too time consuming and/or costly to backpedal later. (Almost) pointless doing usability testing if you don’t have the resources to take action on the findings”.

Prototyping may have clear benefits but Neil is still trying to raise awareness within the university, spreading the word and trying to widen the use of the technique.

Testers should never see their role as being defined by a rigid process. They should always be looking for better ways to do their job, and be prepared to lobby and educate others.

Please have a look at Neil’s presentation, which contains advice on useful books and tools.

Usability and external suppliers – part 2

In my last post I talked about how the use of external suppliers can cause problems with the usability of applications. I promised to return to the topic and explain what the solutions are. I’m not sure I’d have bothered if I hadn’t stated that I would do it. The solutions are pretty basic and should be well known.

However, I’ve got to admit that there’s considerable reluctance to acknowledge that the problems are real, and that the solutions are relevant and practical. They won’t guarantee success, but since the traditional alternative loads the odds heavily against success they’ve got to be an improvement.

The predictability paradox

There are two uncomfortable truths you have to acknowledge; the general problem and the specific one. In general, you can’t specify requirements accurately and completely up front, and any attempt to do so is doomed. The specific problem is that usability requirements are particularly difficult to specify.

Managers crave certainty and predictability. That’s understandable. What’s dangerous is the delusion that they exist when the truth is that we cannot be certain at the start of a development and we cannot predict the future with the level of confidence that we’d like.

It’s hard to stand up and say “we don’t know – and at this stage we couldn’t responsibly say that we do know”, even though that might be the truthful, honest and responsible thing to say.

It’s always been hugely tempting to pretend that you can set aside a certain amount of time during which you will get the requirements right in a single pass. It makes procurement, planning and management simpler.

Sadly any attempt to do so is like trying to build a house on shifting sand. The paradox is that striving for predictability pushes back the point when you truly can find out what is required and possible.

Mary Poppendieck made the point very well a few years back in an excellent article “Lean Development and the Predictability Paradox” (PDF, opens in new window).

I tried to argue much the same point in a 2008 article, “The Dangerous and Seductive V Model”, which has proved scarily popular on my website for the last couple of years. I guess many people still need to learn these lessons.

Unrighteous contracts?

Trying to create predictability by tying suppliers into tight contracts is a result of the delusion that predictability is available at an early stage. This can have perverse results.

Tying suppliers into tight, tough contracts right at the start of a project is an understandable attempt to contain risk, or to transfer it to the supplier.

Sadly, clients never truly transfer risk to the supplier if the risk is better handled internally. If the requirements are complex and poorly understood at the start then attempting to transfer the risk to the supplier is misguided.

Awarding a contract for the full development creates the danger that it will be awarded to the most optimistic, inexperienced or irresponsible supplier. Those suppliers with a shrewd understanding of what is really likely to be involved will tender cautiously and be undercut by the clueless, reckless or unprincipled.

When the wheels come off the project the client may be able to hold the supplier to the contract, but the cost will be high. Changes will be fought over, and the supplier will try to minimise the work – regardless of the impact on quality.

There seem to be three options about how to handle the contractual relationship with an external supplier if you want a suitable focus on quality, and especially usability.

The orthodox approach is to split the contract in two. The first contract would be to establish, at a high level, what is required, and ideally to deliver an early prototype. The second would be to build the full application.

This approach was recommended way back in 1987 by the US Defense Science Board Task Force On Military Software (PDF – opens in new window), and its motives were exactly the same as I’ve outlined.

The Agile approach would probably be to write contracts that have a budget for time and cost, and allow the scope to vary, possibly giving the client the right to cancel at set points, possibly after every iteration.

Martin Fowler summarises that approach well in this article, “The New Methodology”.

The third, more radical, approach was mooted by Mary Poppendieck in her article “Righteous Contracts”.

These “righteous contracts” would be written after the development, not at the start. I’m not convinced. It doesn’t seem all that different from simply dispensing with a contract and relying on trust, but I urge you to read her piece. She analyses the problems well and has some challenging ideas. I’d like to see them work, but they do seem rather idealistic.

Righteous contracts would work only if the client and supplier trust each other and are committed to a long term relationship. However, I’ve got to concede that without that trust and commitment the client is going to have problems with their suppliers regardless of the contractual style. You can’t create a productive partnership with a contract, but you can destroy it if the contract is poorly written and conceived.

Enough of the contracts! What about UX?

This piece was really prompted by my concerns about the effects of the contractual relationship between clients and suppliers, so not surprisingly I’ve been focussing on contracts.

However, I must admit I’ve found that rather dispiriting. So in order to feel a bit more positive I’m going to provide some pointers about how to try and handle usability better.

I discussed this in an article in 2009. It’s all about Agile and UX, but that’s where the interesting work is being done, and honestly, I’ve nothing positive to say about how traditional, linear techniques have handled usability. It might be possible to smuggle in techniques like Nielsen’s Discount Usability Engineering, or more ambitiously, Constantine & Lockwood’s Collaborative Usability Inspections. However, remember we’re talking about external suppliers and the contractual relationship. These techniques require iteration, and if the contract doesn’t make explicit allowance for iterative development then the odds are stacked hopelessly against you.

So Agile really seems to be the best hope for taking account of usability when dealing with an external supplier. Rather embarrassingly I came across this excellent article by Jeff Patton only a few days ago. It dates from 2008 and I really wish I’d found it earlier. It lists 12 “best practice” patterns that Patton has seen people using to incorporate UX into agile development. There’s some great stuff there.

A somewhat downbeat conclusion

Perhaps I shouldn’t have talked about “solutions” to the usability problems caused by using external suppliers. Maybe I should have used a phrase like “a better way”. I believe that flexible, staged contracts are vital. They are necessary, but they’re not sufficient. A commitment to usability is required regardless of the approach that is taken. It’s just that the traditional fixed price contract for a linear development torpedoes any hope of getting usability right. The damage is far more widespread than that, but usability is the issue that gets me particularly passionate.

I doubt if I’ve set off a light bulb in anyone’s head and they’re now thinking, “that’s what we’ve been doing wrong and I know what to do now”. However, I hope I’ve pointed some people in a useful direction to help them read, think and understand a bit more about what we do wrong.

I don’t know all the answers. Testers never do, and the big decisions that will affect usability and quality are a long way out of our remit. But we can, and should, be aware of the pitfalls of following particular approaches. We’ve a duty to speak out and tell management what can go wrong, and that just might tilt the odds back in our favour.

Usability and external suppliers – part 1

In my time I’ve worked as an in-house developer and as an external supplier. I’ve worked on fast developments that followed the Nike Methodology (just do it – get coding and find out what you can do) and formal, rigid monsters which had big parties when we successfully negotiated our way through the gauntlet of each quality gate.

One thing that intrigues me is the effect on usability of the differing approaches, and in particular the implications of using external suppliers. It’s not just that different approaches have different consequences. There’s nothing surprising in that. The intriguing thing is that, in this case, the implications are not widely acknowledged. It’s an inconvenient truth.

The more distant, formal and contractual the relationship between developers and users, the less likely it is that usability principles will be incorporated and that the end users will get a product they enjoy using.

This isn’t a problem that arose with e-commerce and web developments. It dates way back to the days when all software developments were applications for employees to use. Academics spotted the problems, but their work didn’t really percolate through to the consciousness of the practitioners.

Lewis & Rieman, Holmlid & Artman (both PDFs, open in new windows), Artman on his own, and Grudin all made similar points regarding procurement, external suppliers and contracts.

Unfortunately only the Holmlid and Artman article is free, though the Lewis and Rieman e-book has a modest suggested donation.

The conclusion of these writers was that using external suppliers was likely to have a damaging impact on the usability of the application. With external contracts there is more pressure and temptation to go for a traditional linear approach.

Effective usability engineering entails iteration, which in turn requires flexible contracts with estimates and costs being revised repeatedly. This is a frightening prospect for both sides; with the supplier scared of being committed to a job which cannot be sized effectively, and the client equally scared of being tied into a contract with no cost cap, and no easy exit without writing off the work already paid for.

In 1995 a workshop was held by the ACM’s Special Interest Group on Computer-Human Interaction to consider the challenges of introducing usability to US government developments. In addition to the general problems faced by all IT projects the participants concluded that very few invitations to tender mentioned usability beyond vague and subjective aspirations. Suppliers naturally didn’t build into their costing any features that were not explicitly mandated, and even after winning the contract were reluctant to provide them lest they get a reputation for cost over-runs.

The research is dated, but I don’t think that anything has essentially changed. The problems weren’t rooted in the technology, or development techniques. The problems were a human reaction to the pressures and incentives of a particular style of procurement. People haven’t evolved to a nobler, more altruistic level since the 90s, and people will still react the same way when confronted with the same approach to contracts and procurement.

Effective contracts have to be clear and objective. It is hard enough to specify usability requirements clearly and objectively. Stating them in any useful way at the contractual level is even harder.

Of course none of this necessarily means that external suppliers will do a bad job; far from it. What it does mean is that the external supplier relationship contains problems that must be explicitly acknowledged and addressed. Too often the implicit assumption is that using an external supplier is a low risk option, provided that they can be tied down to a tight contract. That sets up an adversarial relationship that is poison to the flexible approach that is essential if the relationship is really going to work.

No plan survives contact with the enemy. No prototype design survives exposure to the users unchanged. Why pretend that it can?

What are the implications for testers? Well, we are not mere checkers. It is our job to tell it like it is, and shine a light on those awkward facts and truths that others might prefer to ignore.

If your management is keen to follow a route that will have worrying implications, make sure they know what these are so they take the decision with the best information available. Who better than a tester to give them the news that management might not want, but really need to know?

I’ve tried to keep this fairly brief, so I’m not talking about how to make the situation better. That’s for next time.

The O2 registration series – wrapping up

This is the fourth, and last, entry in a series about how testers should remember usability when they’re testing websites, using the O2 registration process as an example.

The previous posts can be found at “Usability and O2 registration”, “O2 website usability: testing, secrets and answers” and “O2 website usability: beating the user up”.

If you remember, I only started out on this expedition through O2’s registration process because I’d linked my personal and business phones. My business phone then disappeared from view. There was no sign of it on my personal account and the business account ID was crippled. I couldn’t log into it, and so I decided to re-register the business phone and plodded my painful way through O2’s registration process.

The mystery of the missing phone account – solved!

[Image: the “My O2” page]
After successfully re-registering my business phone I logged back into the personal account and stared at the “My O2” page, wondering why the original business account had vanished.

I decided to go ahead and link the new business account to the personal one. Maybe I should pretend I was trying to replicate a bug, but who am I trying to kid? There was no-one to report it to. Just like a little boy, I broke it once, then wanted to do the same thing again to see if it would break again.

Whatever I was doing when I clicked again on the button under “Link my accounts”, I was not looking for my business phone. Why would I look there? But there it was.

[Image: the “Link/View my other accounts” screen]

This is the screen that faced me. I’ve masked the IDs and my personal number. My original business phone account hadn’t disappeared after all. It had just been remarkably well hidden.

Information architecture – it matters

I’ve seen information architecture derided as a trendy and pretentious field – but only by people who don’t know what they’re talking about. The O2 site is a good example of why it’s needed. Nielsen and Loranger have this to say in their book “Prioritizing Web Usability”.

“Chaotic design leads to dead ends and wasted effort. Hastily thrown-up web sites without effective information schemes prevent users from getting to the information they seek. When this happens, they may give up, or even worse, go to a different site.

A well-structured site gives users what they want when they want it. One of the biggest compliments a site can get is when people don’t comment on its structure in user testing. It’s the designers’ job to worry about the site’s structure, not the users’”.

Precisely; a well-structured site gives users what they want, when they want it. They shouldn’t have to puzzle it out. They shouldn’t have to think, “now what do I do?”. They should just be able to do it. It’s that simple, but companies keep on screwing it up and wasting our time. Steve Krug has written an excellent short book on the subject with the very apt title “Don’t Make Me Think!”.

When I originally linked my accounts I should have been able to look at the “My O2” page and instantly know where I’d have to go to see my business phone account. The site works functionally, but burying the account details of the other phones beneath “Link my accounts … Find out more” was guaranteed to confuse users.

Actually, on second thoughts, this is maybe not such a great example of the need for information architecture. The problem should have been so obvious that anyone could have spotted it early. You shouldn’t have to be any sort of specialist to see this one. The testers should have called it out, unless of course they were sticking rigidly to scripts intended to establish whether the functionality was working according to the documented requirements.

Did no-one put their hands up and say, “this is daft”? Surely that is part of a tester’s job?

You wouldn’t even need to re-visit the site structure to address it. Simply changing the labels and text would have made the page much less confusing. It might not have been elegant, but the change could have been made very late and it would have worked.

Ideally, however, the problem would have been detected early enough for the site structure to be changed easily, i.e. during design itself. The same applies to the other O2 usability problems I discussed in previous blogs. If usability defects are found only during the user testing at the end of the project then it’s usually too late. The implementation express is at full steam, and it’s stopping for no-one. The project manager has got the team shovelling coal into the engine as fast as they can, the destination’s in sight, and if some testers or users start whining about usability then they need to be reminded of the big picture. “Ach, it’s just a cosmetic problem and, even if it isn’t, there’s a workaround”.

Early testing – what it should mean

If usability testing is to be effective it has to be built into the design process. Under the V Model testers learn the “test early” mantra. In practice it often amounts to little more than signing off massive specifications, without enough time to read them properly, never mind actually inspect them carefully. The testers then get to work churning out vast numbers of test scripts, the more the better. The more test scripts, the more thorough the testing? Please don’t lift that quote out of context. I value my reputation!

No. Early testing has to be formative, early enough to shape the requirements and the design. We have a lot to learn from User Experience (UX) professionals. The UX people are not called in as often as they should be. There is no end of techniques that can be used, yet they don’t seem to be part of the testers’ normal repertoire.

Nielsen’s Discount Usability Engineering and Steve Krug’s “lost our lease” usability testing in particular are worth checking out. This article I wrote for Testing Experience has a bit more on the subject, and detailed references. See the section “Tactical improvements testers can make”.

What this series has been all about

There are other problems I could have discussed. In particular, there’s evidence of a lack of basic data analysis. It looks like the designers didn’t clarify exactly what the data meant. I’ve never seen a defect report stating that the data analysis was flawed, yet that would be the root cause; the defects actually raised would probably be symptoms of that deeper problem. The point is that proactive user involvement at an early stage could have caught such root problems.

The theme of this series of articles about O2 isn’t how awful that company is. They’re no worse than many others. They just happened to come to my attention at the right time. It’s only partly about the importance of usability, and incorporating that into our testing. That is certainly important, and much neglected, but the real message is that testers have to be involved early and asking challenging questions of the whole team throughout the development.

Testers should never be constrained by scripts. They should never be expected to derive all their tests from documented requirements. Taking either approach means having your head down, focussing on one little bit after another. We should have our heads up, making sure that people understand and remember the big picture.

“What are we doing? What will the users want to do? Does this particular feature help? Is it a hindrance? Why are we doing it this way? Could it be done better?”

Questions like that should be constantly running through our minds. As Lisa Crispin put it in a tweeted comment on one of the earlier O2 posts, we shouldn’t be testing slavishly to the requirements. We should be thinking about the real user experience, and using our own business knowledge. Exactly!