Bridging the gap – Agile and the troubled relationship between UX and software engineering (2009)

This article appeared as the cover story for the September 2009 edition of TEST magazine.

I’m moving the article onto my blog from my website, which will be decommissioned soon. If you choose to read the article please bear in mind that it was written in August 2009 and is therefore inevitably dated in some respects, though I think that much of it is still valid, especially where I am discussing the historical problems.

The article

For most of its history software engineering has had great difficulty building applications that users found enjoyable. Far too many applications were frustrating and wasted users’ time.

That has slowly changed with the arrival of web developments, and I expect the spread of Agile development to improve matters further.

I’m going to explain why I think developers have had difficulty building usable applications and why user interaction designers have struggled to get their ideas across. For simplicity I’ll just refer to these designers and psychologists as “UX”. There are a host of labels and acronyms I could have used, and it really wouldn’t have helped in a short article.

Why software engineering didn’t get UX

Software engineering’s problems with usability go back to its roots, when geeks produced applications for fellow geeks. Gradually applications spread from the labs to commerce and government, but crucially users were wage slaves who had no say in whether they would use the application. If they hated it they could always find another job!

Gradually applications spread into the general population until the arrival of the internet meant that anyone might be a user. Now it really mattered if users hated your application. They would be gone for good, into the arms of the competition.

Software engineering had great difficulty coming to terms with this. The methods it had traditionally used were poison to usability. The Waterfall lifecycle was particularly damaging.

The Waterfall had two massive flaws. At its root was the implicit assumption that you can tackle the requirements and design up front, before the build starts. This led to the second problem: iteration was effectively discouraged.

Users cannot know what they want till they’ve seen what is possible and what can work. In particular, UX needs iteration to let analysts and users build on their understanding of what is required.

The Waterfall meant that users could not see and feel what an application was like until acceptance testing at the end, when it was too late to correct defects that could be dismissed as cosmetic.

The Waterfall was a project management approach to development, a means of keeping control, not building good products. This made perfect sense to organisations who wanted tight contracts and low costs.

The desire to keep control and make development a more predictable process explained the damaging attempt to turn software engineering into a profession akin to civil engineering.

So developers were sentenced to 20 years hard labour with structured methodologies, painstakingly creating an avalanche of documentation; moving from the current physical system, through the current logical system to a future logical system and finally a future physical system.

However, the guilty secret of software engineering was that translating requirements into a design wasn’t just a difficult task that required a methodical approach; it was, and remains, a fundamental problem for developers. It is not a problem specific to usability requirements, and traditional techniques never resolved it.

The mass of documentation obscured the fact that crucial design decisions weren’t flowing predictably and objectively from the requirements, but were made intuitively by the developers – people who by temperament and training were polar opposites of typical users.

Why UX didn’t get software development

Not surprisingly, given the massive documentation overhead of traditional techniques, and developers’ propensity to pragmatically tailor and trim formal methods, the full process was seldom followed. What actually happened was more informal and opaque to outsiders.

The UX profession understandably had great difficulty working out what was happening. Sadly they didn’t even realise that they didn’t get it. They were hampered by their naivety, their misplaced sense of the importance of UX and their counter-productive instinct to retain their independence from developers.

If developers had traditionally viewed functionality as a matter of what the organisation required, UX went to the other extreme and saw functionality as being about the individual user. Developers ignored the human aspect, but UX ignored commercial realities – always a fast track to irrelevance.

UX took software engineering at face value, tailoring techniques to fit what they thought should be happening rather than the reality. They blithely accepted the damaging concept that usability was all about the interface; that the interface was separate from the application.

This separability concept was flawed on three grounds. Conceptually it was wrong. It ignored the fact that the user experience depends on how the whole application works, not just the interface.

Architecturally it was wrong. Detailed technical design decisions can have a huge impact on the users. Finally, separability was wrong organisationally. It left the UX profession stranded on the margins, in a ghetto, available for consultation at the end of the project, and then ignored when their findings were inconvenient.

An astonishing amount of research and effort went into justifying this fallacy, but the real reason UX bought the idea was that it seemed to liberate them from software engineering. Developers could work on the boring guts of the application while UX designed a nice interface that could be bolted on at the end, ready for testing. However, this illusory freedom actually meant isolation and impotence.

The fallacy of separability encouraged reliance on usability testing at the end of the project, on summative testing to reveal defects that wouldn’t be fixed, rather than formative testing to stop these defects being built in the first place.

There’s an argument that there’s no such thing as effective usability testing. If it takes place at the end it’s too late to be effective. If it’s effective then it’s done iteratively during design, and it’s just good design rather than testing.

So UX must be hooked into the development process. It must take place early enough to allow alternative designs to be evaluated. Users must therefore be involved early and often. Many people in UX accept this completely, though the separability fallacy is still alive and well. However, its days are surely numbered. I believe, and hope, that the Agile movement will finally kill it.

Agile and UX – the perfect marriage?

The mutual attraction of Agile and UX isn’t simply a case of “my enemy’s enemy is my friend”. Certainly they do have a common enemy in the Waterfall, but each discipline really does need the other.

Agile gives UX the chance to hook into development, at the points where it needs to be involved to be effective. Sure, with the Waterfall it is possible to smuggle effective UX techniques into a project, but they go against the grain. It takes strong, clear-sighted project managers and test managers to make them work. The schedule and political pressures on these managers to stop wasting time iterating and to sign off each stage are huge.

If UX needs Agile, then equally Agile needs UX if it is to deliver high quality applications. The opening principle of the Agile Manifesto states that “our highest priority is to satisfy the customer through early and continuous delivery of valuable software”.

There is nothing in Agile that necessarily guarantees better usability, but if practitioners believe in that principle then they have to take UX seriously and use UX professionals to interpret users’ real needs and desires. This is particularly important with web applications when developers don’t have direct access to the end users.

There was considerable mutual suspicion between the two communities when Agile first appeared. Agile was wary of UX’s detailed analyses of the users, and suspected that this was a Waterfall style big, up-front requirements phase.

UX meanwhile saw Agile as a technical solution to a social, organisational problem and was rightly sceptical of claims that manual testing would be a thing of the past. Automated testing of the human interaction with the application is a contradiction.

Both sides have taken note of these criticisms. Many in UX now see the value in speeding up the user analysis and focussing on the most important user groups, and Agile has recognised the value of up-front analysis to help them understand the users.

The Agile community is also taking a more rounded view of testing, and how UX can fit into the development process. In particular, check out Brian Marick’s four Agile testing quadrants.

UX straddles quadrants two and three. Q2 contains the up-front analysis to shape the product, and Q3 contains the evaluation of the results. Both require manual testing, and use such classic UX tools as personas (fictional representative users) and wireframes (sketches of webpages).

Other people who’re working on the boundary of Agile and UX are Jeff Patton, Jared Spool and Anders Ramsay. They’ve come up with some great ideas, and it’s well worth checking them out. Microsoft have also developed an interesting technique called the RITE method.

This is an exciting field. Agile changes the role of testers and requires them to learn new skills. The integration of UX into Agile will make testing even more varied and interesting.

There will still be tension between UX and software professionals who’ve been used to working remotely from the users. However, Agile should mean that this is a creative tension, with each group supporting and restraining the others.

This is a great opportunity. Testers will get the chance to help create great applications, rather than great documentation!

What happens to usability when development goes offshore? (2009)

This article appeared in the March 2009 edition of Testing Experience magazine, which is no longer published. I’m moving the article onto my blog from my website, which will be decommissioned soon.

If you choose to read the article please bear in mind that it was written in January 2009 and is therefore inevitably dated in some respects, though I think that much of it is still valid.

The references in the article were all structured for a paper magazine. There are no hyperlinks, and I have not tried to recreate them or to check whether they still work.

The article

Two of the most important trends in software development over the last 20 years have been the increasing number of companies sending development work to cheaper labour markets, and the increasing attention that is paid to the usability of applications.

Developers in Europe and North America cannot have failed to notice the trend to offshore development work, and they worry about the long-term implications.

Usability, however, has had a mixed history. Many organizations and IT professionals have been deeply influenced by the need to ensure that their products and applications are usable; many more are only vaguely aware of this trend and do not take it seriously.

As a result, many developers and testers have missed the significant implications that offshoring has for building usable applications, and underestimate the problems of testing for usability. I will try to explain these problems, and suggest possible remedial actions that testers can take if they find themselves working with offshore developers. I will be looking mainly at India, the largest offshoring destination, because information has been more readily available. However, the problems and lessons apply equally to other offshoring countries.

According to NASSCOM, the main trade body representing the Indian IT industry [1], the number of IT professionals employed in India rose from 430,000 in 2001 to 2,010,000 in 2008. The number employed by offshore IT companies rose tenfold, from 70,000 to 700,000.

It is hard to say how many of these are usability professionals. Certainly at the start of the decade there were only a handful in India. Now, according to Jhumkee Iyengar, of User in Design, “a guesstimate would be around 5,000 to 8,000”. At most that’s about 0.4% of the people working in IT. Even if they were all involved in offshore work they would represent no more than 1% of the total.

Does that matter? Jakob Nielsen, the leading usability expert, would argue that it does. His rule of thumb [2] is that “10% of project resources must be devoted to usability to ensure a good quality product”.

Clearly India is nowhere near capable of meeting that figure. To be fair, the same can be said of the rest of the world given that 10% represents Nielsen’s idea of good practice, and most organizations have not reached that stage. Further, India traditionally provided development of “back-office” applications, which are less likely to have significant user interaction.

Nevertheless, the shortage of usability professionals in the offshoring destinations does matter. Increasingly offshore developments have involved greater levels of user interaction, and any shortage of usability expertise in India will damage the quality of products.

Sending development work offshore always introduces management and communication problems. Outsourcing development, even within the same country, poses problems for usability. When the development is both offshored and outsourced, the difficulties in producing a usable application multiply. If there are no usability professionals on hand, the danger is that the developers will not only fail to resolve those problems – they will probably not even recognize that they exist.

Why can outsourcing be a problem for usability?

External software developers are subject to different pressures from internal developers, and this can lead to poorer usability. I believe that external suppliers are less likely to be able to respond to the users’ real needs, and research supports this. [3, 4, 5, 6, 7]

Obviously external suppliers have different goals from internal developers. Their job is to deliver an application that meets the terms of the contract and makes a profit in doing so. Requirements that are missed or are vague are unlikely to be met, and usability requirements all too often fall into one of these categories. This is not simply a matter of a lack of awareness. Usability is a subjective matter, and it is difficult to specify precise, objective, measurable and testable requirements. Indeed, trying too hard to do so can be counter-productive if the resulting requirements are too prescriptive and constrain the design.

A further problem is that the formal nature of contractual relationships tends to push clients towards more traditional, less responsive and less iterative development processes, with damaging results for usability. If users and developers are based in different offices, working for different employers, then rapid informal feedback becomes difficult.

Some of the studies that found these problems date back to the mid 90s. However, they contain lessons that remain relevant now. Many organizations have still not taken these lessons on board, and they are therefore facing the same problems that others confronted 10 or even 20 years ago.

How can offshoring make usability problems worse?

So, if simple outsourcing to a supplier in the same country can be fraught with difficulty, what are the usability problems that organizations face when they offshore?

Much of the discussion of this harks back to an article by Jakob Nielsen in 2002 [2]. Nielsen stirred up plenty of debate about the problem, much of it critical.

“Offshore design raises the deeper problem of separating interaction designers and usability professionals from the users. User-centered design requires frequent access to users: the more frequent the better.”

If the usability professionals need to be close to the users, can they stay onshore and concentrate on the design while the developers build offshore? Nielsen was emphatic on that point.

“It is obviously not a solution to separate design and implementation since all experience shows that design, usability, and implementers need to proceed in tight co-ordination. Even separating teams across different floors in the same building lowers the quality of the resulting product (for example, because there will be less informal discussions about interpretations of the design spec).”

So, according to Nielsen, the usability professionals have to follow the developers offshore. However, as we’ve seen, the offshore countries have nowhere near enough trained professionals to cover the work. Numbers are increasing, but not by enough to keep pace with the growth in offshore development, never mind the demands of local commerce.

This apparent conundrum has been dismissed by many people who have pointed out, correctly, that offshoring is not an “all or nothing” choice. Usability does not have to follow development. If usability is a concern, then user design work can be retained onshore, and usability expertise can be deployed in both countries. This is true, but it is a rather unfair criticism of Nielsen’s arguments. The problem he describes is real enough. The fact that it can be mitigated by careful planning certainly does not mean the problem can be ignored.

User-centred design assumes that developers, usability analysts and users will be working closely together. Offshoring the developers forces organizations to make a choice between two unattractive options: separating usability professionals from the users, or separating them from the developers.

It is important that organizations acknowledge this dilemma and make the choice explicitly, based on their needs and their market. Every responsible usability professional would be keenly aware that their geographical separation from their users was a problem, and so those organizations that hire usability expertise offshore are at least implicitly acknowledging the problems caused by offshoring. My concern is for those organizations who keep all the usability professionals onshore and either ignore the problems, or assume that they don’t apply in their case.

How not to tackle the problems

Jhumkee Iyengar has studied the responses of organizations wanting to ensure that offshore development will give them usable applications [8]. Typically they have tried to do so without offshore usability expertise. They have used two techniques sadly familiar to those who have studied usability problems: defining the user interaction requirements up-front and sending a final, frozen specification to the offshore developers, or adopting the flawed and fallacious layering approach.

Attempting to define detailed up-front requirements is anathema to good user-centred design. It is consistent with the Waterfall approach and is attractive because it is neat and easy to manage (as I discussed in my article on the V Model in Testing Experience, issue 4).

Building a usable application that allows users and customers to achieve their personal and corporate goals painlessly and efficiently requires iteration, prototyping and user involvement that is both early in the lifecycle and repeated throughout it.

The layering approach was based on the fallacy that the user interface could be separated from the functionality of the application, and that each could be developed separately. This fallacy was very popular in the 80s and 90s. Its influence has persisted, not because it is valid, but because it lends an air of spurious respectability to what people want to do anyway.

Academics expended a huge amount of effort trying to justify this separability. Their arguments, their motives and the consequences of their mistake are worth a full article in their own right. I’ll restrict myself to saying that the notion of separability was flawed on three counts.

  • It was flawed conceptually because usability is a product of the experience of the user with the whole application, not just the interface.
  • It was flawed architecturally, because design decisions taken by system architects can have a huge impact on the user experience.
  • Finally, it was flawed in political and organizational terms because it encouraged usability professionals to retreat into a ghetto, isolated from the hubbub of the developers, where they would work away on an interface that could be bolted onto the application in time for user testing.

Lewis & Rieman [3] memorably savaged the idea that usability professionals could hold themselves aloof from the application design, calling it “the peanut butter theory of usability”

“Usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. … If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take.”

If the usability professionals stay onshore, and adopt either the separability or the peanut butter approach, the almost inescapable result is that they will be delegating decisions about usability to the offshore developers.

Developers are just about the worst people to take these decisions. They are too close to the application, and instinctively see workarounds to problems that might appear insoluble to real users.

Developers also have a different mindset when approaching technology. Even if they understand the business context of the applications they can’t unlearn their technical knowledge and experience to see the application as a user would; and that is when developers and users are in the same country. The cultural differences are magnified if they are based in different continents.

The relative lack of maturity of usability in the offshoring destinations means that developers often have an even less sophisticated approach than developers in the client countries. User interaction is regarded as an aesthetic matter restricted to the interface, with the developers more interested in the guts of the application.

Pradeep Henry reported in 2003 that most user interfaces at Indian companies were being designed by programmers, and that in his opinion they had great difficulty switching from their normal technical, system-focused mindset to that of a user. [9]

They also had very little knowledge of user centered design techniques. This is partly a matter of education, but there is more to it. In explaining the shortage of usability expertise in India, Jhumkee Iyengar told me that she believes important factors are the “phenomenal success of Indian IT industry, which leads people to question the need for usability, and the offshore culture, which has traditionally been a ‘back office culture’ not conducive to usability”.

The situation is, however, certainly improving. Although the explosive growth in development work in India, China and Eastern Europe has left the usability profession struggling to keep up, the number of usability experts has grown enormously over the last 10 years. There are nowhere near enough, but there are firms offering this specialist service keen to work with offshoring clients.

This trend is certain to continue because usability is a high value service. It is a hugely attractive route to follow for these offshore destinations, complementing and enhancing the traditional offshore development service.

Testers must warn of the dangers

The significance of all this from the perspective of testers is that even though usability faces serious threats when development is offshored, there are ways to reduce the dangers and the problems. They cannot be removed entirely, but offshoring offers such cost savings that it will continue to grow, so it is important that testers working for client companies understand these problems and can anticipate them.

Testers may not always, or often, be in a position to influence whether usability expertise is hired locally or offshore. However, they can flag up the risks of whatever approach is used, and adopt an appropriate test strategy.

The most obvious danger is if an application has significant interaction with the user and there is no specialist usability expertise on the project. As I said earlier, this could mean that the project abdicates responsibility for crucial usability decisions to the offshore developers.

Testers should try to prevent a scenario where the interface and user interaction are pieced together offshore, and thrown “over the wall” to the users back in the client’s country for acceptance testing when it may be too late to fix even serious usability defects.

Is it outside the traditional role of a tester to lobby project management to try and change the structure of the project? Possibly, but if testers can see that the project is going to be run in a way that makes it hard to do their job effectively then I believe they have a responsibility to speak out.

I’m not aware of any studies looking at whether outsourcing contracts (or managed service agreements) are written in such prescriptive detail that they restrict the ability of test managers to tailor their testing strategy to the risks they identify. However, going by my experience and the anecdotal evidence I’ve heard, this is not an issue. Testing is not usually covered in detail in contracts, thus leaving considerable scope to test managers who are prepared to take the initiative.

Although I’ve expressed concern about the dangers of relying on a detailed up-front specification, there is no doubt that if the build is being carried out by offshore developers then they have to be given clear, detailed, unambiguous instructions.

The test manager should therefore set a test strategy that allows for significant static testing of the requirements documents. These should be shaped by walkthroughs and inspections to check that the usability requirements are present, complete, stated in sufficient detail to be testable, yet not defined so precisely that they constrain the design and rule out what might have been perfectly acceptable solutions to the requirements.
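
Most of this static testing is judgement work done in walkthroughs and inspections, but a small slice of it can be mechanised. The sketch below is a minimal, hypothetical Python check that flags requirement statements containing vague usability wording with no measurable criterion attached; the word lists, patterns and example requirements are all invented for illustration, not drawn from any real project or standard.

```python
import re

# Words that often signal an untestable usability requirement (illustrative list only).
VAGUE_TERMS = ["user-friendly", "intuitive", "easy to use", "fast", "simple", "seamless"]

# A crude signal that a statement contains something measurable (again, illustrative).
MEASURABLE = re.compile(r"\d+\s*(%|percent|seconds?|minutes?|clicks?|errors?)", re.IGNORECASE)

def review(requirements):
    """Return (id, warning) pairs for statements that look vague and untestable."""
    findings = []
    for req_id, text in requirements.items():
        vague = [term for term in VAGUE_TERMS if term in text.lower()]
        if vague and not MEASURABLE.search(text):
            findings.append((req_id, f"vague terms {vague} with no measurable criterion"))
    return findings

# Invented example requirements, purely to show the shape of the output.
requirements = {
    "UR-01": "The quotation screen must be user-friendly.",
    "UR-02": "New users must complete a quotation in under 4 minutes with no more than 1 error.",
}

for req_id, warning in review(requirements):
    print(req_id, "-", warning)
```

A check like this is no substitute for the walkthroughs; it simply helps reviewers spend their time on the statements most likely to cause trouble offshore.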

Once the offshore developers have been set to work on the specification it is important that there is constant communication with them and ongoing static testing as the design is fleshed out.

Hienadz Drahun leads an offshore interaction design team in Belarus. He stresses the importance of communication. He told me that “communication becomes a crucial point. You need to maintain frequent and direct communication with your development team.”

Dave Cronin of the US Cooper usability design consultancy wrote an interesting article about this in 2004 [10].

“We already know that spending the time to holistically define and design a software product dramatically increases the likelihood that you will deliver an effective and pleasurable experience to your customers, and that communication is one of the critical ingredients to this design process. All this appears to be even more true if you decide to have the product built in India or Eastern Europe.

To be absolutely clear, to successfully outsource product development, it is crucial that every aspect of the product be specifically defined, designed and documented. The kind of hand-waving you may be accustomed to when working with familiar and well-informed developers will no longer suffice.”

Significantly Cronin did not mention testing anywhere in his article, though he does mention “feedback” during the design process.

The limits of usability testing

One of the classic usability mistakes is to place too much reliance on usability testing. In fact, I’ve heard it argued that there is no such thing as usability testing. It’s a provocative argument, but it has some merit.

If usability is dependent only on testing, then it will be left to the end of the development, and serious defects will be discovered too late in the project for them to be fixed.

“They’re only usability problems, the users can work around them” is the cry from managers under pressure to implement on time. Usability must be incorporated into the design stages, with possible solutions being evaluated and refined. Usability is therefore produced not by testing, but by good design practices.

Pradeep Henry called his new team “Usability Lab” when he introduced usability to Cognizant, the offshore outsourcing company, in India. However, the name and the sight of the testing equipment in the lab encouraged people to equate usability with testing. As Henry explained:

“Unfortunately, equating usability with testing leads people to believe that programmers or graphic designers should continue to design the user interface and that usability specialists should be consulted only later for testing.”

Henry renamed his team the Cognizant Usability Group (now the Cognizant Usability Center of Excellence). [9]

Tactical improvements testers can make

So if usability evaluation has to be integrated into the whole development process then what can testers actually do in the absence of usability professionals? Obviously it will be difficult, but if iteration is possible during design, and if you can persuade management that there is a real threat to quality then you can certainly make the situation a lot better.

There is a lot of material readily available to guide you. I would suggest the following.

Firstly, Jakob Nielsen’s Discount Usability Engineering [11] consists of cheap prototyping (maybe just paper based), heuristic (rule based) evaluation and getting users to talk their way through the application, thinking out loud as they work their way through a scenario.

Steve Krug’s “lost our lease” usability testing basically says that any usability testing is better than none, and that quick and crude testing can be both cheap and effective. Krug’s focus is more on the management of this approach rather than the testing techniques themselves, so it fits with Nielsen’s DUE, rather than being an alternative in my opinion. It’s all in his book “Don’t make me think”. [12]

Lockwood & Constantine’s Collaborative Usability Inspections offer a far more formal and rigorous approach, though you may be stretching yourself to take this on without usability professionals. It entails setting up formal walk-throughs of the proposed design, then iteration to remove defects and improve the product. [13, 14, 15]

On a lighter note, Alan Cooper’s book “The inmates are running the asylum” [15] is an entertaining rant on the subject. Cooper’s solution to the problem is his Interaction Design approach. The essence of this is that software development must include a form of functional analysis that seeks to understand the business problem from the perspective of the users, based on their personal and corporate goals, working through scenarios to understand what they will want to do.

Cooper’s Interaction Design strikes a balance between the old, flawed extremes of structured methods (which ignored the individual) and traditional usability (which often paid insufficient attention to the needs of the organization). I recommend this book not because I think that a novice could take this technique on board and make it work, but because it is very readable and might make you question your preconceptions and think about what is desirable and possible.
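
Of the approaches above, Nielsen’s heuristic evaluation is probably the cheapest for a test team to try first. The sketch below shows one possible way of recording and prioritising findings against named heuristics; it is a minimal illustration rather than a prescribed format, and the findings, locations and severity threshold are invented.

```python
from collections import namedtuple

# A finding from a heuristic evaluation session: which heuristic was breached, where,
# and how severe it is on a 0 (not a problem) to 4 (catastrophic) rating scale.
Finding = namedtuple("Finding", "heuristic location severity note")

def summarise(findings, threshold=3):
    """Print the most severe findings first, flagging those at or above the threshold."""
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        flag = " <- fix before release" if f.severity >= threshold else ""
        print(f"[{f.severity}] {f.heuristic} @ {f.location}: {f.note}{flag}")

# Invented findings, just to show how the record might look.
findings = [
    Finding("Visibility of system status", "payment page", 4,
            "No feedback after 'Submit' is pressed; users click repeatedly."),
    Finding("Error prevention", "date of birth field", 2,
            "Free-text date field accepts impossible values."),
]

summarise(findings)
```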

Longer term improvements

Of course it’s possible that you are working for a company that is still in the process of offshoring and where it is still possible to influence the outcome. It is important that the invitation to tender includes a requirement that suppliers can prove expertise and experience in usability engineering. Additionally, the client should expect potential suppliers to show they can satisfy the following three criteria.

  • The supplier should have a process or lifecycle model that not only has usability engineering embedded within it but that also demonstrates how the onshore and offshore teams will work together to achieve usability. The process must involve both sides.

    Offshore suppliers have put considerable effort into developing such frameworks. Three examples are Cognizant’s “End-to-End UI Process”, HFI’s “Schaffer-Weinschenk Method™” and Persistent’s “Overlap Usability”.

  • Secondly, suppliers should carry out user testing with users from the country where the application will be used. The cultural differences are too great to use people who happen to be easily available to the offshore developers.

    Remote testing entails usability experts based in one location conducting tests with users based elsewhere, even on another continent. It would probably not be the first choice of most usability professionals, but it is becoming increasingly important. As Jhumkee Iyengar told me, it “may not be the best, but it works and we have had good results. A far cry above no testing.”

  • Finally, suppliers should be willing to send usability professionals to the onshore country for the requirements gathering. This is partly a matter of ensuring that the requirements gathering takes full account of usability principles. It is also necessary so that these usability experts can fully understand the client’s business problem and can communicate it to the developers when they return offshore.

It’s possible that there may still be people in your organization who are sceptical about the value of usability. There has been a lot of work done on the return on investment that user centered design can bring. It’s too big a topic for this article, but a simple internet search on “usability” and “roi” will give you plenty of material.

What about the future?

There seems no reason to expect any significant changes in the trends we’ve seen over the last 10 years. The offshoring countries will continue to produce large numbers of well-educated, technically expert IT professionals. The cost advantage of developing in these countries will continue to attract work there.

Proactive test managers can head off some of the usability problems associated with offshoring. They can help bring about higher quality products even if their employers have not allowed for usability expertise on their projects. However, we should not have unrealistic expectations about what they can achieve. High quality, usable products can only be delivered consistently by organizations that have a commitment to usability and who integrate usability throughout the design process.

Offshoring suppliers will have a huge incentive to keep advancing into user centered design and usability consultancy. The increase in offshore development work creates the need for such services, whilst the specialist and advanced nature of the work gives these suppliers a highly attractive opportunity to move up the value chain, selling more expensive services on the basis of quality rather than cost.

The techniques I have suggested are certainly worthwhile, but they may prove no more than damage limitation. As Hienadz Drahun put it to me; “to get high design quality you need to have both a good initial design and a good amount of iterative usability evaluation. Iterative testing alone is not able to turn a bad product design into a good one. You need both.” Testers alone cannot build usability into an application, any more than they can build in quality.

Testers in the client countries will increasingly have to cope with the problems of working with offshore development. It is important that they learn how to work successfully with offshoring and adapt to it.

They will therefore have to be vigilant about the risks to usability of offshoring, and advise their employers and clients how testing can best mitigate these risks, both on a short term tactical level, i.e. on specific projects where there is no established usability framework, and in the longer term, where there is the opportunity to shape the contracts signed with offshore suppliers.

There will always be a need for test management expertise based in client countries, working with the offshore teams, but it will not be the same job we knew in the 90s.

References

[1] NASSCOM (2009). “Industry Trends, IT-BPO Sector-Overview”.

[2] Nielsen, J. (2002). “Offshore Usability”. Jakob Nielsen’s website.

[3] Lewis, C., Rieman, J. (1994). “Task-Centered User Interface Design: A Practical Introduction”. University of Colorado e-book.

[4] Artman, H. (2002). “Procurer Usability Requirements: Negotiations in Contract Development”. Proceedings of the second Nordic conference on Human-Computer Interaction (NordiCHI) 2002.

[5] Holmlid, S. Artman, H. (2003). “A Tentative Model for Procuring Usable Systems” (NB PDF download). 10th International Conference on Human-Computer Interaction, 2003.

[6] Grudin, J. (1991). “Interactive Systems: Bridging the Gaps Between Developers and Users”. Computer, April 1991, Vol. 24, No. 4 (subscription required).

[7] Grudin, J. (1996). “The Organizational Contexts of Development and Use”. ACM Computing Surveys. Vol 28, issue 1, March 1996, pp 169-171 (subscription required).

[8] Iyengar, J. (2007). “Usability Issues in Offshore Development: an Indian Perspective”. Usability Professionals Association Conference, 2007 (UPA membership required).

[9] Henry, P. (2003). “Advancing UCD While Facing Challenges Working from Offshore”. ACM Interactions, March/April 2003 (subscription required).

[10] Cronin D. (2004). “Designing for Offshore Development”. Cooper Journal blog.

[11] Nielsen, J. (1994). “Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier”, Jakob Nielsen’s website.

[12] Krug, S. (2006). “Don’t Make Me Think!: A Common Sense Approach to Web Usability” 2nd edition. New Riders.

[13] Constantine, L. Lockwood, L. (1999). “Software for Use”. Addison Wesley.

[14] Lockwood, L. (1999). “Collaborative Usability Inspecting – more usable software and sites” (NB PDF download), Constantine and Lockwood website.

[15] Cooper, A. (2004). “The Inmates Are Running the Asylum: Why High-tech Products Drive Us Crazy and How to Restore the Sanity”. Sams.

The dragons of the unknown; part 7 – resilience requires people

Introduction

This is the seventh post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This was the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “facing the dragons part 1 – corporate bureaucracies”. Part 2 was about the nature of complex systems. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “I don’t know what’s going on”.

Part 4 “a brief history of accident models”, looked at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

The fifth post, “accident investigations and treating people fairly”, looked at weaknesses in the way that we have traditionally investigated accidents and failures, assuming neat linearity with clear cause and effect. In particular, our use of root cause analysis and our willingness to blame people for accidents are hard to justify.

Part six “Safety II, a new way of looking at safety” looks at the response of the safety critical community to such problems and the necessary trade offs that a practical response requires. The result, Safety II, is intriguing and has important lessons for software testers.

This post is about the importance of system resilience and the vital role that people play in keeping systems going.

Robustness versus resilience

The idea of resilience is where Safety II and Cynefin come together in a practical way for software development. Safety critical professionals have become closely involved in the field of resilience engineering. Dave Snowden, Cynefin’s creator, places great emphasis on the need for systems in complex environments to be resilient.

First, I’d like to make an important distinction, between robustness and resilience. The example Snowden uses is that a seawall is robust but a salt marsh is resilient. A seawall is a barrier to large waves and storms. It protects the land behind, but if it fails it does so catastrophically. A salt marsh protects inland areas by acting as a buffer, absorbing storm waves rather than repelling them. It might deteriorate over time but it won’t fail suddenly and disastrously.

Designing for robustness entails trying to prevent failure. Designing for resilience recognises that failure is inevitable in some form but tries to make that failure containable. Resilience means that recovery should be swift and relatively easy when failure does occur, and, crucially, that failure can be detected quickly, or even anticipated, so that operators have a chance to respond.

What struck me about the resilience engineering approach is that it matches the way that we managed the highly complex insurance financial applications I mentioned in “part 2 – crucial features of complex systems”. We had never heard of resilience engineering, and the site standards were of limited use. We had to feel our way, finding an appropriate response as the rapid pace of development created terrifying new complexity on top of a raft of ancient legacy applications.

The need for efficient processing of the massive batch runs had to be balanced against the need to detect the constant flow of small failures early, to stop them turning into major problems, and also against the pressing need to facilitate recovery when we inevitably hit serious failure. We also had to think about what “failure” really meant in a context where 100% (or even 98%) accuracy was an unrealistic dream that would distract us from providing flawed but valuable systems to our users within the timescales that were dictated by commercial pressures.

An increasing challenge for testers will be to look for information about how systems fail, and to test for resilience rather than robustness. Liz Keogh, in this talk on “Safe-to-Fail”, makes a similar point.

“Testers are really, really good at spotting failure scenarios… they are awesomely imaginative at calamity… Devs are problem solvers. They spot patterns. Testers spot holes in patterns… I have a theory that other people who are in critical positions, like compliance and governance people are also really good at this.”

Developing for resilience means that tolerance for failure becomes more important than a vain attempt to prevent failure altogether. This tolerance often requires greater redundancy. Stripping out redundancy and maximizing the efficiency of systems has a downside. Greater efficiency can make applications brittle and inflexible. When problems hit they hit hard and recovery can be difficult.
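
In software terms, one common way of expressing that tolerance is to assume a dependency will sometimes fail and to degrade gracefully instead of stopping dead. The sketch below is a minimal, hypothetical Python illustration of the idea; the service names, retry counts and cached values are invented, and real code would catch specific exceptions and log rather than print.

```python
import time

def call_with_fallback(primary, fallback, attempts=3, delay=0.5):
    """Try the primary service a few times, then degrade gracefully to a fallback.

    Failure is contained and made visible rather than prevented outright.
    """
    for attempt in range(1, attempts + 1):
        try:
            return primary()
        except Exception as exc:  # illustrative; catch specific exceptions in real code
            print(f"attempt {attempt} failed: {exc}")
            time.sleep(delay)
    print("primary exhausted, using degraded fallback")
    return fallback()

# Invented services, purely to show the shape of the pattern.
def live_quote():
    raise TimeoutError("pricing service not responding")

def cached_quote():
    return {"premium": 250.0, "source": "yesterday's cached rates"}

print(call_with_fallback(live_quote, cached_quote))
```

Even a pattern as small as this introduces extra paths and state to reason about, which leads to the next point.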

However, redundancy itself adds to the complexity of systems and can create unexpected ways for them to fail. In our massively complex insurance finance systems a constant threat was that the safeguards we introduced to make the systems resilient might result in the processing runs failing to complete in time and disrupting other essential applications.

The ETTO principle (see part 6, “Safety II – learning from what goes right”) describes the dilemmas we were constantly having to deal with. But the problems we faced were more complex than a simple trade off; sacrificing efficiency would not necessarily lead to greater effectiveness. Poorly thought out safeguards could harm both efficiency and effectiveness.

We had to nurse those systems carefully. That is a crucial idea to understand. Complex systems require constant attention by skilled people and these people are an indispensable means of controlling the systems.

Ashby’s Law of Requisite Variety

Ashby’s Law of Requisite Variety is also known as The First Law of Cybernetics.

“The complexity of a control system must be equal to or greater than the complexity of the system it controls.”

A stable system needs as much variety in the control mechanisms as there is in the system itself. This does not mean as much variety as the external reality that the system is attempting to manage – a thermostat is just on or off; it isn’t directly controlling the temperature, just whether the heating is on or off.
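
A bang-bang thermostat is simple enough to sketch in a few lines, which makes the point concrete: the controller only needs as much variety as the thing it actually controls, the heating switch, not as much as the weather outside. This is a minimal, hypothetical Python sketch; the setpoint, dead band and readings are invented.

```python
def thermostat(temperature, setpoint=20.0, band=0.5, heating_on=False):
    """A two-state controller: its only 'variety' is heating on or heating off.

    It does not model the weather, the building or the occupants; it only has to
    match the variety of what it controls - the switch.
    """
    if temperature < setpoint - band:
        return True      # switch the heating on
    if temperature > setpoint + band:
        return False     # switch the heating off
    return heating_on    # inside the dead band, leave things as they are

# Invented readings, just to show the behaviour.
state = False
for reading in [18.0, 19.4, 20.2, 20.8, 19.6]:
    state = thermostat(reading, heating_on=state)
    print(f"{reading} C -> heating {'on' if state else 'off'}")
```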

The implication for complex socio-technical systems is that the controlling mechanism must include humans if it is to be stable, precisely because the system includes humans. The control mechanism has to be as complex and sophisticated as the system itself. It’s one of those “laws” that looks trivially obvious when it is unpacked, but whose implications can easily be missed unless we turn our minds to the problem. We must therefore trust expertise, trust the expert operators, and learn what they have to do to keep the system running.

I like the analogy of an orchestra’s conductor. It’s a flawed analogy (all models are flawed, though some are useful). The point is that you need a flexible, experienced human to make sense of the complexity and constantly adjust the system to keep it working and useful.

Really know the users

I have learned that it is crucially important to build a deep understanding of the user representatives and the world they work in. This is often not possible, but when I have been able to do it the effort has always paid off. If you can find good contacts in the user community you can learn a huge amount from them. Respect deep expertise and try to acquire it yourself if possible.

When I moved into the world of insurance finance systems I had very bright, enthusiastic, young (but experienced) users who took the time to immerse me in their world. I was responsible for the development, not just the testing. The users wanted me to understand them, their motivation, the pressures on them, where they wanted to get to, the risks they worried about, what kept them awake at night. It wasn’t about record-keeping. It was all about understanding risks and exposures. They wanted to set prices accurately, to compete aggressively using cheap prices for good risks and high prices for the poor risks.

That much was obvious, but I hadn’t understood the deep technical problems and complexities of unpacking the risk factors and the associated profits and losses. Understanding those problems and the concerns of my users was essential to delivering something valuable. The time spent learning from them allowed me to understand not only why imperfection was acceptable and chasing perfection was dangerous, but also what sort of imperfection was acceptable.

Building good, lasting relations with my users was perhaps the best thing I ever did for my employers and it paid huge dividends over the next few years.

We shouldn’t be thinking only about the deep domain experts though. It’s also vital to look at what happens at the sharp end with operational users, perhaps lowly and stressed, carrying out the daily routine. If we don’t understand these users, the pressures and distractions they face, and how they have to respond then we don’t understand the system that matters, the wider complex, socio-technical system.

Testers should be trying to learn more from experts working on human factors and ergonomics and user experience. I’ll finish this section with just a couple of examples of the level of thought and detail that such experts put into the design of aeroplane cockpits.

Boeing is extremely concerned about the danger of overloading cockpit crew with so much information that they pay insufficient attention to the most urgent warnings. The designers therefore only use the colour red in cockpits when the pilot has to take urgent action to keep the plane safe. Red appears only for events like engine fires and worse. Less urgent alerts use other colours and are less dramatic. Pilots know that if they ever see a red light or red text then they have to act. [The original version of this was written before the Boeing 737 MAX crashes. These have raised concerns about Boeing’s practices that I will return to later.]

A second and less obvious example of the level of detailed thought that goes into flight deck designs is that analog speed dials are widely considered safer than digital displays. Pilots can glance at the dial and see that the airspeed is in the right zone given all the other factors (e.g. height, weight and distance to landing) at the same time as they are processing a blizzard of other information.

A digital display isn’t as valuable. (See Edwin Hutchins’ “How a cockpit remembers its speeds“, Cognitive Science 19, 1995.) It might offer more precise information, but it is less useful to pilots when they really need to know about the aircraft’s speed during a landing, a time when they have to deal with many other demands. In a highly complex environment it is more important to be useful than accurate. Safe is more important than precise.

The speed dial that I have used as an illustration is also a good example both of beneficial user variations and of the perils of piling in extra features. The tabs surrounding the dial are known as speed bugs. Originally pilots improvised with grease pencils or tape to mark the higher and lower limits of the airspeed that would be safe for landing that flight. Designers picked up on that and added movable plastic tabs. Unfortunately, they went too far and added tabs for every eventuality, thus bringing visual clutter into what had been a simple solution. (See Donald Norman’s “Turn signals are the facial expressions of automobiles“, chapter 16, “Coffee cups in the cockpit”, Basic Books, 1993.)

We need a better understanding of what will help people make the system work, and what is likely to trip them up. That entails respect for the users and their expertise. We must not only trust them; we must also never lose our own humility about what we can realistically know.

As Jens Rasmussen put it (in a much quoted talk at the IEEE Standards Workshop on Human Factors and Nuclear Safety in 1981 – I have not been able to track this down).

“The operator’s role is to make up for holes in designers’ work.”

Testers should be ready to explore and try to explain these holes, the gap between the designers’ limited knowledge and the reality that the users will deal with. We have to try to think about what the system as found will be like. We must not restrict ourselves to the system as imagined.

Lessons from resilience engineering

There is a huge amount to learn from resilience engineering. This community has a significant overlap with the safety critical community. The resilience engineering literature is vast and growing. However, for a quick flavour of what might be useful for testers it’s worth looking at the four principles of Erik Hollnagel’s Functional Resonance Analysis Method (FRAM). FRAM tries to provide a way to model complex socio-technical systems so that we can gain a better understanding of likely outcomes.

  • Success and failure are equivalent. They can happen in similar ways.

    It is dangerously misleading to assume that the system is bimodal, that it is either working or broken. Any factor that is present in a failure can equally be present in success.

  • Success, failure and normal outcomes are all emergent qualities of the whole system.

    We cannot learn about what will happen in a complex system by observing only the individual components.

  • People must constantly make small adjustments to keep the system running.

    These changes are both essential for the successful operation of the system and a contributory cause of failure. Changes are usually approximate adjustments, based on experience, rather than precise, calculated changes. An intrinsic feature of complex systems is that small changes can have a dramatic effect on the overall system. A change to one variable or function will always affect others.

  • “Functional resonance” is the detectable result of unexpected interaction of normal variations.

Functional resonance is a particularly interesting concept. Resonance is the engineering term for the effect we get when different things vibrate with the same frequency. If an object is struck or moved suddenly it will vibrate at its natural frequency. If the object producing the force is also vibrating at the same frequency the result is resonance, and the effect of the impact can be amplified dramatically.

Resonance is the effect you see if you push a child on a swing. If your pushes match the motion of the swing you quickly amplify the motion. If your timing is wrong you dampen the swing’s motion. Resonance can produce unpredictable results. A famous example is the danger that marching troops can bring a bridge down if the rhythm of their marching coincides with the natural frequency at which the bridge vibrates.
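
A quick numerical sketch (mine, not from the safety literature) makes the amplification concrete. It integrates a lightly damped oscillator driven by small periodic pushes and compares the peak swing when the pushes match the natural frequency with the peak when they do not; all the parameters are arbitrary illustrative values.

```python
import math

def peak_amplitude(drive_freq, natural_freq=1.0, damping=0.05, dt=0.001, steps=60000):
    """Crude numerical integration of a driven, lightly damped oscillator."""
    x, v, peak = 0.0, 0.0, 0.0
    omega_sq = (2 * math.pi * natural_freq) ** 2
    for i in range(steps):
        t = i * dt
        push = math.sin(2 * math.pi * drive_freq * t)   # the small periodic pushes
        a = push - omega_sq * x - damping * v           # restoring force plus damping
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Pushing in time with the swing versus pushing at the wrong rhythm.
print("at resonance:  ", round(peak_amplitude(1.0), 3))
print("off resonance: ", round(peak_amplitude(1.6), 3))
```

The same size of push produces a vastly larger displacement when the timing lines up, which is the kind of disproportionate, combination-dependent effect that functional resonance is trying to capture.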

Learning about functional resonance means learning about the way that different variables combine to amplify or dampen the effect that each has, producing outcomes that would have been entirely unpredictable from looking at their behaviour individually.

Small changes can lead to drastically different outcomes at different times depending on what else is happening. The different variables in the system will be coupled in potentially significant ways the designers did not understand. These variables can reinforce, or play off each other, unpredictably.

Safety is a control problem – a matter of controlling these interactions, which means we have to understand them first. But, as we have seen, the answer can’t be to keep adding controls to try and achieve greater safety. Safety is not only a control problem, it is also an emergent and therefore unpredictable property (see appendix). That’s not a comfortable combination for the safety critical community.

Although it is impossible to predict emergent behaviour in a complex system it is possible to learn about the sort of impact that changes and user actions might have. FRAM is not a model for testers. However, it does provide a useful illustration of the approach being taken by safety experts who are desperate to learn and gain a better understanding of how systems might work.

Good testers are surely well placed to reach out and offer their skills and experience. It is, after all, the job of testers to learn about systems and tell a “compelling story” (as Messrs Bach & Bolton put it) to the people who need to know. They need the feedback that we can provide, but if it is to be useful we all have to accept that it cannot be exact.

Lotfi Zadeh, a US mathematician, computer scientist and engineer, introduced the idea of fuzzy logic. He made this deeply insightful observation, quoted in Daniel McNeill and Paul Freiberger’s book “Fuzzy Logic”.

“As complexity rises, precise statements lose meaning, and meaningful statements lose precision.”

Zadeh’s maxim has come to be known as the Law of Incompatibility. If we are dealing with complex socio-technical systems we can be meaningful or we can be precise. We cannot be both; they are incompatible in such a context. It might be hard to admit we can say nothing with certainty, but the truth is that meaningful statements cannot be precise. If we say “yes, we know” then we are misleading the people who are looking for guidance. To pretend otherwise is bullshitting.

In the eighth post of this series, “How we look at complex systems”, I will talk about the way we choose to look at complex systems, the mental models that we build to try and understand them, and the relevance of Devops.

Appendix – is safety an emergent property?

In this series I have repeatedly referred to safety as being an emergent property of complex adaptive systems. For beginners trying to get their heads round this subject it is an important point to take on board.

However, the nature of safety is rather more nuanced. Erik Hollnagel argues that safety is a state of the whole system, rather than one of the system’s properties. Further, we consciously work towards that state of safety, trying to manipulate the system to achieve the desired state. Therefore safety is not emergent; it is a resultant state, a deliberate result. On the other hand, a lack of safety is an emergent property because it arises from unpredictable and undesirable adaptations of the system and its users.

Other safety experts differ and regard safety as being emergent. For the purpose of this blog I will stick with the idea that it is emergent. However, it is worth bearing Hollnagel’s argument in mind. I am quite happy to think of safety as being a state of a system because my training and experience lead me to think of states as being more transitory than properties, but I don’t feel sufficiently strongly to stop referring to safety as being an emergent property.

The dragons of the unknown; part 6 – Safety II, a new way of looking at safety

Introduction

This is the sixth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This was the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “Facing the dragons part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

The fourth post, “part 4 – a brief history of accident models”, looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them.

The fifth post, “part 5 – accident investigations and treating people fairly”, looks at weaknesses of the way that we have traditionally investigated accidents and failures, assuming neat linearity with clear cause and effect. In particular, our use of root cause analysis, and willingness to blame people for accidents is hard to justify.

This post looks at the response of the safety critical community to such problems and the necessary trade offs that a practical response requires. The result, Safety II, is intriguing and has important lessons for software testers.

More safety means less feedback

In 2017 nobody was killed on a scheduled passenger flight (sadly that won’t be the case in 2018). That prompted the South China Morning Post to produce this striking graphic, which I’ve reproduced here in butchered form. Please, please look at the original. My version is just a crude taster.

Increasing safety is obviously good news, but it poses a problem for safety professionals. If you rely on accidents for feedback then reducing accidents will choke off the feedback you need to keep improving, to keep safe. The safer that systems become the less data is available. Remember what William Langewiesche said (see part 4).

“What can go wrong usually goes right – and then people draw the wrong conclusions.”

If accidents have become rare, but are extremely serious when they do occur, then it will be highly counter-productive if investigators pick out people’s actions that deviated from, or adapted, the procedures that management or designers assumed were being followed.

These deviations are always present in complex socio-technical systems that are running successfully and it is misleading to focus on them as if they were a necessary and sufficient cause when there is an accident. The deviations may have been a necessary cause of that particular accident, but in a complex system they were almost certainly not sufficient. These very deviations may have previously ensured the overall system would work. Removing the deviation will not necessarily make the system safer.

There might be fewer opportunities to learn from things going wrong, but there’s a huge amount to learn from all the cases that go right, provided we look. We need to try and understand the patterns, the constraints and the factors that are likely to amplify desired emergent behaviour and those that will dampen the undesirable or dangerous. In order to create a better understanding of how complex socio-technical systems can work safely we have to look at how people are using them when everything works, not just when there are accidents.

Safety II – learning from what goes right

Complex systems and accidents might be beyond our comprehension but that doesn’t mean we should just accept that “shit happens”. That is too flippant and fatalistic, two words that you can never apply to the safety critical people.

Safety I is shorthand for the old safety world view, which focused on failure. Its utility has been hindered by the relative lack of feedback from things going wrong, and the danger that paying insufficient attention to how and why things normally go right will lead to the wrong lessons being learned from the failures that do occur.

Safety I assumed linear cause and effect with root causes (see part 5). It was therefore prone to reaching a dangerously simplistic verdict of human error.

This diagram illustrates the focus of Safety I on the unusual, on the bad outcomes. I have copied, and slightly adapted, the Safety I and Safety II diagrams from a document produced by Eurocontrol (the European Organisation for the Safety of Air Navigation), “From Safety-I to Safety-II: A White Paper” (PDF, opens in new tab).

Incidentally, I don’t know why Safety I and Safety II are routinely illustrated using a normal distribution with the Safety I focus kicking in at two standard deviations. I haven’t been able to find a satisfactory explanation for that. I assume that this is simply for illustrative purposes.
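For what it is worth, the two standard deviation cut-off does at least put a rough number on how unusual the events in the Safety I zone are being portrayed as. A quick back-of-the-envelope calculation (mine, not Eurocontrol’s) gives the share of a normal distribution lying beyond that point.

```python
import math

def upper_tail_probability(z):
    # P(Z > z) for a standard normal distribution, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"Beyond +2 standard deviations: {upper_tail_probability(2):.4f}")                  # about 0.023
print(f"Beyond 2 standard deviations, either tail: {2 * upper_tail_probability(2):.4f}")  # about 0.046
```

So the diagrams are implicitly saying that only a few per cent of outcomes fall into the region Safety I concentrates on, which is consistent with the curve being purely illustrative.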

If Safety I aims to prevent bad outcomes, Safety II, in contrast, looks at how good outcomes are reached. Safety II is rooted in a more realistic understanding of complex systems than Safety I and extends the focus to what goes right in systems. That entails a detailed examination of what people are doing with the system in the real world to keep it running. Instead of people being regarded as a weakness and a source of danger, Safety II assumes that people, and the adaptations they introduce to systems and processes, are the very reasons we usually get good, safe outcomes.

If we’ve been involved in the development of the system we might think that we have a good understanding of how the system should be working, but users will always, and rightly, be introducing variations that designers and testers had never envisaged. The old, Safety I, way of thinking regarded these variations as mistakes, but they are needed to keep the systems safe and efficient. We expect systems to be both, which leads on to the next point.

There’s a principle in safety critical systems called ETTO, the Efficiency Thoroughness Trade Off. It was devised by Erik Hollnagel, though it might be more accurate to say he made it explicit and popularised the idea. The idea should be very familiar to people who have worked with complex systems. Hollnagel argues that it is impossible to maximise both efficiency and thoroughness. I’m usually reluctant to cite Wikipedia as a source, but its article on ETTO explains it more succinctly than Hollnagel himself did.

“There is a trade-off between efficiency or effectiveness on one hand, and thoroughness (such as safety assurance and human reliability) on the other. In accordance with this principle, demands for productivity tend to reduce thoroughness while demands for safety reduce efficiency.”

Making the system more efficient makes it less likely that it will achieve its important goals. Chasing these goals comes at the expense of efficiency. That has huge implications for safety critical systems. Safety requires some redundancy, duplication and fallbacks. These are inefficient. Efficiencies eliminate margins of error, with potentially dangerous results.

ETTO recognises the tension between organisations’ need to deliver a safe, reliable product or service, and the pressure to do so at the lowest cost possible. In practice, the conflict in goals is usually fully resolved only at the sharp end, where people do the real work and run the systems.

As an example, an airline might offer a punctuality bonus to staff. For an airline safety obviously has the highest priority, but if it was an absolute priority, the only consideration, then it could not contemplate any incentive that would encourage crews to speed up turnarounds on the ground, or to persevere with a landing when prudence would dictate a “go around”. In truth, if safety were an absolute priority, with no element of risk being tolerated, would planes ever take off?

People are under pressure to make the systems efficient, but they are expected to keep the system safe, which inevitably introduces inefficiencies. This tension results in a constant, shifting, pattern of trade-offs and compromises. The danger, as “drift into failure” predicts (see part 4), is that this can lead to a gradual erosion of safety margins.

The old view of safety was to constrain people, reducing variability in the way they use systems. Variability was a human weakness. In Safety II variability in the way that people use the system is seen as a way to ensure the system adapts to stay effective. Humans aren’t seen as a potential weakness, but as a source of flexibility and resilience. Instead of saying “they didn’t follow the set process therefore that caused the accident”, the Safety II approach means asking “why would that have seemed like the right thing to do at the time? Was that normally a safe action?”. Investigations need to learn through asking questions, not making judgments – a lesson it was vital I learned as an inexperienced auditor.

Emergence means that the behaviour of a complex system can’t be predicted from the behaviour of its components. Testers therefore have to think very carefully about when we should apply simple pass or fail criteria. The safety critical community explicitly reject the idea of pass/fail, or the bimodal principle as they call it (see part 4). A flawed component can still be useful. A component working exactly as the designers, and even the users, intended can still contribute to disaster. It all depends on the context, what is happening elsewhere in the system, and testers need to explore the relationships between components and try to learn how people will respond.

Safety is an emergent property of the system. It’s not possible to design it into a system, to build it, or implement it. The system’s rules, controls and constraints might prevent safety from emerging, but they cannot create it; at best they can only enable it. They can create the potential for people to keep the system safe but they cannot guarantee it. Safety depends on user responses and adaptations.

Adaptation means the system is constantly changing as the problem changes, as the environment changes, and as the operators respond to change with their own variations. People manage safety with countless small adjustments.

There is a popular internet meme, “we don’t make mistakes – we do variations”. It is particularly relevant to the safety critical community, who have picked up on it because it neatly encapsulates their thinking, e.g. this article by Steven Shorrock, “Safety-II and Just Culture: Where Now?”. Shorrock, in line with others in the safety critical community, argues that if the corporate culture is to be just and treat people fairly then it is important that the variations that users introduce are understood, rather than being used as evidence to punish them when there is an accident. Pinning the blame on people is not only an abdication of responsibility, it is unjust. As I’ve already argued (see part 5), it’s an ethical issue.

Operator adjustments are vital to keep systems working and safe, which brings us to the idea of trust. A well-designed system has to trust the users to adapt appropriately as the problem changes. The designers and testers can’t know the problems the users will face in the wild. They have to confront the fact that dangerous dragons are lurking in the unknown, and the system has to trust the users with the freedom to stay in the safe zone, clear of the dragons, and out of the disastrous tail of the bell curve that illustrates Safety II.

Safety II and Cynefin

If you’re familiar with Cynefin then you might wonder about Safety II moving away from a focus on the tail of the distribution. Cynefin helps us understand that the tail is where we can find opportunities as well as threats. It’s worth stressing that Safety II does encompass Safety I and the dangerous tail of the distribution. It must not be a binary choice of focusing on either the tail or the body. We have to try to understand not only what happens in the tail, how people and systems can inadvertently end up there, but also what operators do to keep out of the tail.

The Cynefin framework and Safety II share a similar perspective on complexity and the need to allow for, and encourage, variation. I have written about Cynefin elsewhere, e.g. in two articles I wrote for the Association for Software Testing, and there isn’t room to repeat that here. However, I do strongly recommend that testers familiarise themselves with the framework.

To sum it up very briefly, Cynefin helps us to make sense of problems by assigning them to one of four different categories, the obvious, the complicated (the obvious and complicated being related in that problems have predictable causes and resolutions), the complex and the chaotic. Depending on the category different approaches are required. In the case of software development the challenge is to learn more about the problem in order to turn it from a complex activity into a complicated one that we can manage more easily.

Applying Cynefin would result in more emphasis on what’s happening in the tails of the distribution, because that’s where we will find the threats to be avoided and the opportunities to be exploited. Nevertheless, Cynefin isn’t like the old Safety I just because they both focus on the tails. They embody totally different worldviews.

Safety II is an alternative way of looking at accidents, failure and safety. It is not THE definitive way, that renders all others dated, false and heretical. The Safety I approach still has its place, but it’s important to remember its limitations.

Everything flows and nothing abides

Thinking about linear cause and effect, and decomposing components are still vital in helping us understand how different parts of the system work, but they offer only a very limited and incomplete view of what we should be trying to learn. They provide a way of starting to build our understanding, but we mustn’t stop there.

We also have to venture out into the realms of the unknown and often unknowable, to try to understand more about what might happen when the components combine with each other and with humans in complex socio-technical systems. This is when objects become processes, when static elements become part of a flow that is apparent only when we zoom out to take in a bigger picture in time and space.

The idea of understanding objects by stepping back and looking at how they flow and mutate over time has a long, philosophical and scientific history. 2,500 years ago Heraclitus wrote.

“Everything flows and nothing abides. Everything gives way and nothing stays fixed.”

Professor Michael McIntyre (Professor of Atmospheric Dynamics, Cambridge University) put it well in a fascinating BBC documentary, “The secret life of waves”.

“If we want to understand things in depth we usually need to think of them both as objects and as dynamic processes and see how it all fits together. Understanding means being able to see something from more than one viewpoint.”

In my next post “part 7 – resilience requires people” I will discuss some of the implications for software testing of the issues I have raised here, in particular how people keep systems going, and dealing with the inevitability of failure. That will lead us to resilience engineering.

The dragons of the unknown; part 5 – accident investigations and treating people fairly

Introduction

This is the fifth post in a series about problems that fascinate me, that I think are important and interesting. The series draws on important work from the fields of safety critical systems and from the study of complexity, specifically complex socio-technical systems. This was the theme of my keynote at EuroSTAR in The Hague (November 12th-15th 2018).

The first post was a reflection, based on personal experience, on the corporate preference for building bureaucracy rather than dealing with complex reality, “Facing the dragons part 1 – corporate bureaucracies”. The second post was about the nature of complex systems, “part 2 – crucial features of complex systems”. The third followed on from part 2, and talked about the impossibility of knowing exactly how complex socio-technical systems will behave with the result that it is impossible to specify them precisely, “part 3 – I don’t know what’s going on”.

The fourth post “part 4 – a brief history of accident models” looks at accident models, i.e. the way that safety experts mentally frame accidents when they try to work out what caused them. This post looks at weaknesses of the way that we have traditionally investigated accidents and failures, assuming neat linearity with clear cause and effect. In particular, our use of root cause analysis, and willingness to blame people for accidents is hard to justify.

The limitations of root cause analysis

root cause (fishbone) diagram

Once you accept that complex systems can’t have clear and neat links between causes and effects then the idea of root cause analysis becomes impossible to sustain. “Fishbone” cause and effect diagrams (like those used in Six Sigma) illustrate traditional thinking, that it is possible to track back from an adverse event to find a root cause that was both necessary and sufficient to bring it about.

The assumption of linearity with tidy causes and effects is no more than wishful thinking. Like the Domino Model (see “part 4 – a brief history of accident models”) it encourages people to think there is a single cause, and to stop looking when they’ve found it. It doesn’t even offer the insight of the Swiss Cheese Model (also see part 4) that there can be multiple contributory causes, all of them necessary but none of them sufficient to produce an accident. That is a key idea. When complex systems go wrong there is rarely a single cause; causes are necessary, but not sufficient.

complex airline system

Here is a more realistic depiction of what a complex socio-technical system looks like. It is a representation of the operations control system for an airline. The specifics don’t matter. It is simply a good illustration of how messy a real, complex system looks when we try to depict it.

This is actually very similar to the insurance finance applications diagram I drew up for Y2K (see “part 1 – corporate bureaucracies”). There was no neat linearity. My diagram looked just like this, with a similar number of nodes, or systems, most of which had multiple two-way interfaces with others. And that was just at the level of applications. There was some intimidating complexity within these systems.

As there is no single cause of failure the search for a root cause can be counter-productive. There are always flaws, bugs, problems, deviances from process, variations. So you can always fix on something that has gone wrong. But it’s not really a meaningful single cause. It’s arbitrary.

The root cause is just where you decide to stop looking. The cause is not something you discover. It’s something you choose and construct. The search for a root cause can mean attention will focus on something that is not inherently dangerous, something that had previously “failed” repeatedly but without any accident. The response might prevent that particular failure and therefore ensure there’s no recurrence of an identical accident. However, introducing a change, even if it’s a fix, to one part of a complex system affects the system in unpredictable ways. The change therefore creates new possibilities for failure that are unknown, even unknowable.

It’s always been hard, even counter-intuitive, to accept that we can have accidents & disasters without any new failure of a component, or even without any technical failure that investigators can identify and without external factors interfering with the system and its operators. We can still have air crashes for which no cause is ever found. The pressure to find an answer, any plausible answer, means there has always been an overwhelming temptation to fix the blame on people, on human error.

Human error – it’s the result of a problem, not the cause

If there’s an accident you can always find someone who screwed up, or who didn’t follow the rules, the standard, or the official process. One problem with that is the same applies when everything goes well. Something that troubled me in audit was realising that every project had problems, every application had bugs when it went live, and there were always deviations from the standards. But the reason smart people were deviating wasn’t that they were irresponsible. They were doing what they had to do to deliver the project. Variation was a sign of success as much as failure. Beating people up didn’t tell us anything useful, and it was appallingly unfair.

One of the rewarding aspects of working as an IT auditor was conducting post-implementation reviews and being able to defend developers who were being blamed unfairly for problem projects. The business would give them impossible jobs, complacently assuming the developers would pick up all the flak for the inevitable problems. When auditors, like me, called them out for being cynical and irresponsible they hated it. They used to say it was because I had a developer background and was angling for my next job. I didn’t care because I was right. Working in a good audit department requires you to build up a thick skin, and some healthy arrogance.

There were always some deviations from standards, and the tougher the challenge the more obvious they would be, but these allegedly deviant developers were the only reason anything was delivered at all, albeit by cutting a few corners.

It’s an ethical issue. Saying the cause of an accident is that people screwed up is opting for an easy answer that doesn’t offer any useful insights for the future and just pushes problems down the line.

Sidney Dekker used a colourful analogy. Dumping the blame on an individual after an accident is “peeing in your pants management” (PDF, opens in new tab).

“You feel relieved, but only for a short while… you start to feel cold and clammy and nasty. And you start stinking. And, oh by the way, you look like a fool.”

Putting the blame on human error doesn’t just stink. It obscures the deeper reasons for failure. It is the result of a problem, not the cause. It also encourages organisations to push for greater automation, in the vain hope that will produce greater safety and predictability, and fewer accidents.

The ironies of automation

An important part of the motivation to automate systems is that humans are seen as unreliable & inefficient. So they are replaced by automation, but that leaves the humans with jobs that are even more complex and even more vulnerable to errors. The attempt to remove errors creates fresh possibilities for even worse errors. As Lisanne Bainbridge wrote in a 1983 paper “The ironies of automation”;

“The more advanced a control system is… the more crucial may be the contribution of the human operator.”

There are all sorts of twists to this. Automation can mean the technology does all the work and operators have to watch a machine that’s in a steady-state, with nothing to respond to. That means they can lose attention & not intervene when they need to. If intervention is required the danger is that vital alerts will be lost if the system is throwing too much information at operators. There is a difficult balance to be struck between denying operators feedback, and thus lulling them into a sense that everything is fine, and swamping them with information. Further, if the technology is doing deeply complicated processing, are the operators really equipped to intervene? Will the system allow operators to override? Bainbridge makes the further point;

“The designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.”

This is a vital point. Systems are becoming more complex and the tasks left to the humans become ever more demanding. System designers have only a very limited understanding of what people will do with their systems. They don’t know. The only certainty is that people will respond and do things that are hard, or impossible, to predict. That is bound to deviate from formal processes, which have been defined in advance, but these deviations, or variations, will be necessary to make the systems work.

Acting on the assumption that these deviations are necessarily errors and “the cause” when a complex socio-technical system fails is ethically wrong. However, there is a further twist to the problem, summed up by the Law of Stretched Systems.

Stretched systems

Lawrence Hirschhorn’s Law of Stretched Systems is similar to the Fundamental Law of Traffic Congestion. New roads create more demand to use them, so new roads generate more traffic. Likewise, improvements to systems result in demands that the system, and the people, must do more. Hirschhorn seems to have come up with the law informally, but it has been popularised by the safety critical community, especially by David Woods and Richard Cook.

“Every system operates always at its capacity. As soon as there is some improvement, some new technology, we stretch it.”

And the corollary, furnished by Woods and Cook.

“Under resource pressure, the benefits of change are taken in increased productivity, pushing the system back to the edge of the performance envelope.”

Every change and improvement merely adds to the stress that operators are coping with. The obvious response is to place more emphasis on ergonomics and human factors, to try and ensure that the systems are tailored to the users’ needs and as easy to use as possible. That might be important, but it hardly resolves the problem. These improvements are themselves subject to the Law of Stretched Systems.

This was all first noticed in the 1990s after the First Gulf War. The US Army hadn’t been in serious combat for 18 years. Technology had advanced massively. Throughout the 1980s the army reorganised, putting more emphasis on technology and training. The intention was that the technology should ease the strain on users, reduce fatigue and be as simple to operate as possible. It didn’t pan out that way when the new army went to war. Anthony H. Cordesman and Abraham Wagner analysed in depth the lessons of the conflict. They were particularly interested in how the technology had been used.

“Virtually every advance in ergonomics was exploited to ask military personnel to do more, do it faster, and do it in more complex ways… New tactics and technology simply result in altering the pattern of human stress to achieve a new intensity and tempo of combat.”

Improvements in technology create greater demands on the technology – and the people who operate it. Competitive pressures push companies towards the limits of the system. If you introduce an enhancement to ease the strain on users then managers, or senior officers, will insist on exploiting the change. Complex socio-technical systems always operate at the limits.

This applies not only to soldiers operating high tech equipment. It applies also to the ordinary infantry soldier. In 1860 the British army was worried that troops had to carry 27kg into combat (PDF, opens in new tab). The load has now risen to 58kg. US soldiers have to carry almost 9kg of batteries alone. The Taliban called NATO troops “donkeys”.

These issues don’t apply only to the military. They’ve prompted a huge amount of new thinking in safety critical industries, in particular healthcare and air transport.

The overdose – system behaviour is not explained by the behaviour of its component technology

Remember the traditional argument that any system that was not deterministic was inherently buggy and badly designed? See “part 2 – crucial features of complex systems”.

In reality that applies only to individual components, and even then complexity & thus bugginess can be inescapable. When you’re looking at the whole socio-technical system it just doesn’t stand up.

Introducing new controls, alerts and warnings doesn’t just increase the complexity of the technology, as I mentioned earlier with the MIG jet designers (see part 4). These new features add to the burden on the people. Alerts and error messages can swamp users of complex systems, so that they miss the information they really need to know.

I can’t recommend strongly enough the story told by Bob Wachter in “The overdose: harm in a wired hospital”.

A patient at a hospital in California received an overdose of 38½ times the correct amount. Investigation showed that the technology worked fine. All the individual systems and components performed as designed. They flagged up potential errors before they happened. So someone obviously screwed up. That would have been the traditional verdict. However, the hospital allowed Wachter to interview everyone involved in each of the steps. He observed how the systems were used in real conditions, not in a demonstration or test environment. Over five articles he told a compelling story that will force any fair reader to admit “yes, I’d have probably made the same error in those circumstances”.

Happily the patient survived the overdose. The hospital staff involved were not disciplined and were allowed to return to work. The hospital had to think long and hard about how it would try to prevent such mistakes recurring. The uncomfortable truth they had to confront was that there were no simple answers. Blaming human error was a cop out. Adding more alerts would compound the problems staff were already facing; one of the causes of the mistake was the volume of alerts swamping staff making it hard, or impossible, to sift out the vital warnings from the important and the merely useful.

One of the hard lessons was that focussing on making individual components more reliable had harmed the overall system. The story is an important illustration of the maxim in the safety critical community that trying to make systems safer can make them less safe.

Some system changes were required and made, but the hospital realised that the deeper problem was organisational and cultural. They made the brave decision to allow Wachter to publicise his investigation and his series of articles is well worth reading.

The response of the safety critical community to such problems, and the necessary trade offs that a practical response requires, is intriguing, with important lessons for software testers. I shall turn to this in my next post, “part 6 – Safety II, a new way of looking at safety”.

Games & scripts – stopping at red lights

I’ve been reading about game development lately. It’s fascinating. I’m sure there is much that conventional developers and testers could learn from the games industry. Designers need to know their users, how they think and what they expect. That’s obvious, but we’ve often only paid lip service to these matters. The best game designers have thought much more seriously and deeply about the users than most software developers.

Game designers have to know what their customers want from games. They need to understand why some approaches work and others fail. Developing the sort of games that players love is expensive. If the designers get it wrong then the financial consequences are ruinous. Inevitably the more thoughtful designers stray over into anthropology.

That is a subject I want to write about in more depth, but in the meantime this is a short essay, in slightly amended form, that I wrote for EuroSTAR’s Test Huddle, prompted by a blog post by the game designer Chris Bateman. Bateman was talking about the nature of play and games, and an example he used made me think about how testing has traditionally been conducted.

“Ramon Romero of Microsoft’s Games User Research showed footage of various random people playing games for the first time. I was particular touched by the middle aged man who drove around in Midtown Madness as if he was playing a driving simulator. ‘This is a great game’, he said, as he stopped at a red light and waited for it to change.”

That’s ridiculous, isn’t it? Treating a maniacal race game as a driving simulator? Maybe, but I’m not sure. That user was enjoying himself, playing the game entirely in line with his expectations of what the game should be doing.

The story reminded me of testers who embark on their testing armed with beliefs that are just as wildly misplaced about what the game, sorry, the application should be doing. They might have exhaustive scripts generated from requirements documents that tell them exactly what the expected behaviour should be, and they could be dangerously deluded.

Most of my experience has been with financial applications. They have been highly complicated, and the great concern was always about the unexpected behaviour, the hidden functionality that could allow users to steal money or screw up the data.

Focusing on the documented requirements, happy paths and the expected errors is tackling an application like that Midtown Madness player; driving carefully around the city, stopping at red lights and scrupulously obeying the law. Then, when the application is released the real users rampage around discovering what it can really do.

Was that cheerfully naïve Midtown Madness player really ridiculous? He was just having fun his way. He wasn’t paid a good salary to play the game. The ones who are truly ridiculous are the testers who are paid to find out what applications can do and naively think they can do so by sticking to their scripts. Perhaps test scripts are rather like traffic regulations. Both say what someone thinks should be going on, but how much does that actually tell you about what is really happening out there, on the streets, in the wild where the real users play?

Service Virtualization interview about usability

This interview with Service Virtualization appeared in January 2015. Initially when George Lawton approached me I wasn’t enthusiastic. I didn’t think I would have much to say. However, the questions set me thinking, and I felt they were relevant to my experience so I was happy to take part. It gave me something to do while I was waiting to fly back from EuroSTAR in Dublin!

How does usability relate to the notion of the purpose of a software project?

When I started in IT over 30 years ago I never heard the word usability. It was “user friendliness”, but that was just a nice thing to have. It was nice if your manager was friendly, but that was incidental to whether he was actually good at the job. Likewise, user friendliness was incidental. If everything else was ok then you could worry about that, but no-one was going to spend time or money, or sacrifice any functionality just to make the application user friendly. And what did “user friendly” mean anyway? “Who knows? Who cares? We’ve got serious work to do. Forget about that touchy feely stuff.”

The purpose of software development was to save money by automating clerical routines. Any online part of the system was a mildly anomalous relic of the past. It was just a way of getting the data into the system so the real work could be done. Ok, that’s an over-simplification, but I think there’s enough truth in it to illustrate why developers just didn’t much care about the users and their experience. Development moved on from that to changing the business, rather than merely changing the business’s bureaucracy, but it took a long time for these attitudes to shift.

The internet revolution turned everything upside down. Users are no longer employees who have to put up with whatever they’re given. They are more likely to be customers. They are ruthless and rightly so. Is your website confusing? Too slow to load? Your customers have gone to your rivals before you’ve even got anywhere near their credit card number.

The lesson that’s been getting hammered into the heads of software engineers over the last decade or so is that usability isn’t an extra. I hate the way that we traditionally called it a “non-functional requirement”, or one of the “quality criteria”. Usability is so important and integral to every product that telling developers that they’ve got to remember it is like telling drivers they’ve got to remember to use the steering wheel and the brakes. If they’re not doing these things as a matter of course they shouldn’t be allowed out in public. Usability has to be designed in from the very start. It can’t be considered separately.

What are the main problems in specifying for and designing for software usability?

Well, who’s using the application? Where are they? What is the platform? What else are they doing? Why are they using the application? Do they have an alternative to using your application, and if so, how do you keep them with yours? All these things can affect decisions you take that are going to have a massive impact on usability.

It’s payback time for software engineering. In the olden days it would have been easy to answer these questions, but we didn’t care. Now we have to care, and it’s all got horribly difficult.

These questions require serious research plus the experience and nous to make sound judgements with imperfect evidence.

In what ways do organisations lose track of the usability across the software development lifecycle?

I’ve already hinted at a major reason. Treating usability as a non-functional requirement or quality criterion is the wrong approach. That segregates the issue. It’s treated as being like the other quality criteria, the “…ities” like security, maintainability, portability, reliability. It creates the delusion that the core function is of primary importance and the other criteria can be tackled separately, even bolted on afterwards.

Lewis & Rieman came out with a great phrase fully 20 years ago to describe that mindset. They called it the peanut butter theory of usability. You built the application, and then at the end you smeared a nice interface over the top, like a layer of peanut butter (PDF, opens in new tab).

“Usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. … If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take.”

Of course they were talking specifically about the idea that usability was a matter of getting the interface right, and that it could be developed separately from the main application. However, this was an incredibly damaging fallacy amongst usability specialists in the 80s and 90s. There was a huge effort to try to justify this idea by experts like Hartson & Hix, Edmonds, and Green. Perhaps the arrival of Object Oriented technology contributed towards the confusion. A low level of coupling so that different parts of the system are independent of each other is a good thing. I wonder if that lured usability professionals into believing what they wanted to believe, that they could be independent from the grubby developers.

Usability professionals tried to persuade themselves that they could operate a separate development lifecycle that would liberate them from the constraints and compromises that would be inevitable if they were fully integrated into development projects. The fallacy was flawed conceptually and architecturally. However, it was also a politically disastrous approach. The usability people made themselves even less visible, and were ignored at a time when they really needed to be getting more involved at the heart of the development process.

As I’ve explained, the developers were only too happy to ignore the usability people. They were following methods and lifecycles that couldn’t easily accommodate usability.

How can organisations incorporate the idea of usability engineering into the software development and testing process?

There aren’t any right answers, certainly none that will guarantee success. However, there are plenty of wrong answers. Historically in software development we’ve kidded ourselves thinking that the next fad, whether Structured Methods, Agile, CMMi or whatever, will transform us into rigorous, respected professionals who can craft high quality applications. Now some (like Structured Methods) suck, while others (like Agile) are far more positive, but the uncomfortable truth is that it’s all hard and the most important thing is our attitude. We have to acknowledge that development is inherently very difficult. Providing good UX is even harder and it’s not going to happen organically as a by-product of some over-arching transformation of the way we develop. We have to consciously work at it.

Whatever the answer is for any particular organisation it has to incorporate UX at the very heart of the process, from the start. Iteration and prototyping are both crucial. One of the few fundamental truths of development is that users can’t know what they want and like till they’ve seen what is possible and what might be provided.

Even before the first build there should have been some attempt to understand the users and how they might be using the proposed product. There should be walkthroughs of the proposed design. It’s important to get UX professionals involved, if at all possible. I think developers have advanced to the point that they are less likely to get it horribly wrong, but actually getting it right, and delivering good UX is asking too much. For that I think you need the professionals.

I do think that Agile is much better suited to producing good UX than traditional methods, but there are still dangers. A big one is that many Agile developers are understandably sceptical about anything that smells of Big Up-Front Analysis and Design. It’s possible to strike a balance and learn about your users and their needs without committing to detailed functional requirements and design.

How can usability relate to the notion of testable hypothesis that can lead to better software?

Usability and testability go together naturally. They’re also consistent with good development practice. I’ve worked on, or closely observed, many applications where the design had been fixed and the build had been completed before anyone realised that there were serious usability problems, or that it would be extremely difficult to detect and isolate defects, or that there would be serious performance issues arising from the architectural choices that had been made.

We need to learn from work that’s been done with complexity theory and organisation theory. Developing software is mostly a complex activity, in the sense that there are rarely predictable causes and effects. Good outcomes emerge from trialling possible solutions. These possibilities aren’t just guesswork. They’re based on experience, skill, knowledge of the users. But that initial knowledge can’t tell you the solution, because trying different options changes your understanding of the problem. Indeed it changes the problem. The trials give you more knowledge about what will work. So you have to create further opportunities that will allow you to exploit that knowledge. It’s a delusion that you can get it right first time just by running through a sequential process. It would help if people thought of good software as being grown rather than built.

Why do UX evangelists give up?

People often dismiss Twitter for being a pointless waste of time, but the other day I had a chat that illustrated how useful and interesting Twitter can be if you’re prepared to use it thoughtfully.

Charlotte Lewis tweeted a link to a comment I’d made on Craig Tomlin’s blog. “UX evangelist as step 1 of ux-ing an org, huh…See @James_Christie comment http://www.usefulusability.com/the-5-models-of-corporate-user-experience-culture/”.

Charlotte Lewis's tweet

Craig had argued in his blog that there are five levels of UX maturity.

Level 1 – IT has responsibility for UX.

Level 2 – Operations (i.e. the department that administers the organisation) has responsibility for UX.

Level 3 – Marketing has responsibility for UX.

Level 4 – UX is independent and equal to IT, Operations and Marketing.

Level 5 – UX is independent and has authority over IT, Operations and Marketing.

I had suggested in my comment that there is an important lower, earlier stage in the evolution of UX within an organisation. That is when an enthusiastic and well-informed individual has identified that the organisation’s neglect of UX is a problem and starts lobbying for changes.

I was curious about Charlotte’s tweet and responded, asking why she’d dug out my comment. She said that a feed she follows had highlighted it, and she felt that it was still valid. We then had a chat about factors allowing an organisation to move from the initial level 0 that I’d identified to level 1.

I thought that the question was framed the wrong way round. If there is an enthusiastic and well-informed advocate of UX within the organisation then progress should follow. I thought it would be more interesting, and useful, to look at the factors that block UX evangelists, hence this blog.

What makes a UX evangelist give up?

I’d never considered the problem in such explicit terms before. I had thought about the reasons that organisations find themselves in what I called the usability death zone in my comment in Craig’s blog. The death zone is where organisations aren’t even aware that there is a problem. Usability and UX simply aren’t on their radar.

There are three groups of reasons for organisations staying in the death zone; the way that software engineering matured, the isolation of usability professionals, and the culture of organisations.

I wrote my Masters dissertation on the reasons why usability hadn’t penetrated software engineering. That really looked at the first two sets of reasons.

My conclusions were that traditional development methods, structured techniques and procurement practices had made it extremely difficult for software engineers to incorporate usability. On the other hand usability professionals had isolated themselves because of their lack of understanding of how commercial software was developed, and their willingness to stand aside from the development mainstream and carry out usability testing when it was too late to shape the application.

The whole dissertation is available on line.

Cultures that suffocate

What I didn’t consider in my dissertation were the cultural factors that block people who’ve identified the problem.

The first group of problems I mentioned are historical features of software engineering. Likewise, the second set of problems arise from the way the UX profession developed. However, the cultural factors are much more widespread, and stop many types of organisations from improving.

The common theme with these cultural factors is that the organisation has become rigid and has stopped learning, learning from its customers, its employees and its rivals.

Large organisations can grow to the point that they become so complex that many employees are totally cut off from the real purpose of the organisation, i.e. selling products or providing a service to customers. For these employees the purpose of the organisation has become the smooth running of the organisation.

Large numbers of experienced, capable employees work hard, follow processes, calculate metrics and swap vast numbers of emails, “fyi”. They are subject to annual performance reviews in which targets are set, achievement is measured and rewards flow to those whose performance conforms to those idealised targets.

Compliance with processes and standards becomes the professional and responsible way to approach work. Indeed, professionalism and conformity can become indistinguishable from each other. Improvements are possible, but they are strictly marginal increments. The organisation learns how to carry on working the same way, but slightly more efficiently and effectively.

Sure, in software development CMMI can lead to greater rigour and to steady improvements, but they are unlikely to be radical. As Ben Simo put it when he contributed to an article I wrote on testing standards; “if an organisation is at a level less than the intent of level 5, CMM seems to often lock in ignorance that existed when the process was created”.

Seemingly radical change certainly does occur, but it is usually a case of managers being replaced, organisation structures being put through the blender, or whole departments being outsourced. All this chaos and disruption is predicated on the need to produce a more polished and refined version of the past, not to do anything radically different or better.

Such maturity models effectively discourage people from saying, “hang on, we’re on the wrong track here”. People who do try to speak out and to highlight fundamental flaws in working practices are liable to be seen as rebels and troublemakers, not team players.

Improvement is tied to metrics. The underlying assumption on which this paradigm rests is that measurement and metrics provide an objective insight into reality. The only acceptable insights are objective, and there is a hugely damaging and unchallenged false assumption that objectivity requires numbers. If there is a problem it will be detected in the numbers. Unfortunately these numbers are the product of the existing worldview and simply affirm that worldview.

Potential usability evangelists can see the problem, and understand its cause, but working within such a constrained and blinkered environment they might as well be trying to make their colleagues understand an alternative universe in which the laws of physics don’t apply.

The metrics were designed only to illuminate the problems that were imagined. Those problems that were never conceived lack the numerical evidence that justifies reform. Without such evidence the concerns of individuals are dismissed as subjective, as a matter of opinion rather than objective fact.

As I said, such a culture inhibits all sorts of reform, but I believe that it would be particularly dispiriting for budding usability evangelists. Faced with rejection and crushing disapproval they would either learn to conform or they would leave for a more enlightened employer.

Is the picture I’ve painted accurate? I’ve no figures or hard evidence, but my argument ties in with my own experience of how a company’s culture can suffocate innovation. Certainly, too much focus on measurement can be damaging. That is well established. See Robert Austin’s book, “Measuring and Managing Performance in Organizations”. Can anyone shed light on this? What is it that stops organisations moving out of the usability death zone? What makes the UX evangelists give up? I’d love to hear what people think about this problem.

Prototyping – making informed decisions at the right time

The day after I wrote my last blog about taking decisions I went to a talk arranged by the Scottish Usability Professionals Association, “An Introduction to Prototyping”.

The speaker was Neil Allison of the University of Edinburgh’s Website Development Project.

There’s no need for me to discuss Neil’s talk at length because he’s posted it here.

Neil used a phrase that leapt out at me. Prototyping “helps us take informed decisions at the right time”.

That was exactly what I was thinking about when I wrote in my last blog about taking decisions at the right time, the last responsible moment in Lean terms.

Neil’s phrase summed up the appeal of prototyping perfectly for me. We can take informed decisions at the right time; not take poor, arbitrary or ill-informed decisions when we’ve simply run out of options and have to take whatever default option we’re left with.

In a later email Neil made the point that I keep on trying to make; that usability testing should shape the design, rather than evaluate it.

“When I do usability testing, it’s to check the requirements and the ease of use of a preferred solution. Usually before development begins in earnest as it’s too time consuming and/or costly to backpedal later. (Almost) pointless doing usability testing if you don’t have the resources to take action on the findings”.

Prototyping may have clear benefits but Neil is still trying to raise awareness within the university, spreading the word and trying to widen the use of the technique.

Testers should never see their role as being defined by a rigid process. They should always be looking for better ways to do their job, and be prepared to lobby and educate others.

Please have a look at Neil’s presentation, which contains advice on useful books and tools.

Usability and external suppliers – part 2

In my last post I talked about how the use of external suppliers can cause problems with the usability of applications. I promised to return to the topic and explain what the solutions are. I’m not sure I’d have bothered if I hadn’t stated that I would do it. The solutions are pretty basic and should be well known.

However, I’ve got to admit that there’s considerable reluctance to acknowledge that the problems are real, and that the solutions are relevant and practical. They won’t guarantee success, but since the traditional alternative loads the odds heavily against success they’ve got to be an improvement.

The predictability paradox

There are two uncomfortable truths you have to acknowledge; the general problem and the specific one. In general, you can’t specify requirements accurately and completely up front, and any attempt to do so is doomed. The specific problem is that usability requirements are particularly difficult to specify.

Managers crave certainty and predictability. That’s understandable. What’s dangerous is the delusion that they exist when the truth is that we cannot be certain at the start of a development and we cannot predict the future with the level of confidence that we’d like.

It’s hard to stand up and say “we don’t know – and at this stage we couldn’t responsibly say that we do know”, even though that might be the truthful, honest and responsible thing to say.

It’s always been hugely tempting to pretend that you can set aside a certain amount of time during which you will get the requirements right in a single pass. It makes procurement, planning and management simpler.

Sadly any attempt to do so is like trying to build a house on shifting sand. The paradox is that striving for predictability pushes back the point when you truly can find out what is required and possible.

Mary Poppendieck made the point very well a few years back in an excellent article “Lean Development and the Predictability Paradox” (PDF, opens in new window).

I tried to argue much the same point in an article in 2008, “The Dangerous and Seductive V Model”, an article that has proved scarily popular on my website for the last couple of years. I guess many people still need to learn these lessons.

Unrighteous contracts?

Trying to create predictability by tying suppliers into tight contracts is a result of the delusion that predictability is available at an early stage. This can have perverse results.

Tight and tough contracts committing suppliers right at the start of a project are an understandable attempt to contain risk, or to transfer it to the supplier.

Sadly, clients can never truly transfer risk to the supplier when the risk would be better handled internally. If the requirements are complex and poorly understood at the start, then attempting to transfer the risk to the supplier is misguided.

Awarding a contract for the full development creates the danger that it will be awarded to the most optimistic, inexperienced or irresponsible supplier. Those suppliers with a shrewd understanding of what is really likely to be involved will tender cautiously and be undercut by the clueless, reckless or unprincipled.

When the wheels come off the project the client may be able to hold the supplier to the contract, but the cost will be high. Changes will be fought over, and the supplier will try to minimise the work – regardless of the impact on quality.

There seem to be three options for handling the contractual relationship with an external supplier if you want a suitable focus on quality, and especially on usability.

The orthodox approach is to split the contract in two. The first contract would be to establish, at a high level, what is required, and ideally to deliver an early prototype. The second would be to build the full application.

This approach was recommended way back in 1987 by the US Defense Science Board Task Force on Military Software (PDF – opens in new window), and its motives were exactly those I’ve outlined.

The Agile approach would probably be to write contracts that set a budget for time and cost but allow the scope to vary, perhaps giving the client the right to cancel at set points, possibly after every iteration.

Martin Fowler summarises that approach well in this article, “The New Methodology”.

The third, more radical, approach was mooted by Mary Poppendieck in her article “Righteous Contracts”.

These “righteous contracts” would be written after the development, not at the start. I’m not convinced. It doesn’t seem all that different from simply dispensing with a contract and relying on trust, but I urge you to read her piece. She analyses the problems well and has some challenging ideas. I’d like to see them work, but they do seem rather idealistic.

Righteous contracts would work only if the client and supplier trust each other and are committed to a long-term relationship. However, I’ve got to concede that without that trust and commitment the client is going to have problems with their suppliers regardless of the contractual style. You can’t create a productive partnership with a contract, but you can destroy one if the contract is poorly conceived and written.

Enough of the contracts! What about UX?

This piece was really prompted by my concerns about the effects of the contractual relationship between clients and suppliers, so not surprisingly I’ve been focussing on contracts.

However, I must admit I’ve found that rather dispiriting. So in order to feel a bit more positive I’m going to provide some pointers about how to try and handle usability better.

I discussed this in an article in 2009. It’s all about Agile and UX, but that’s where the interesting work is being done, and honestly, I’ve nothing positive to say about how traditional, linear techniques have handled usability. It might be possible to smuggle in techniques like Nielsen’s Discount Usability Engineering, or more ambitiously, Constantine & Lockwood’s Collaborative Usability Inspections. However, remember we’re talking about external suppliers and the contractual relationship. These techniques require iteration, and if the contract doesn’t make explicit allowance for iterative development then the odds are stacked hopelessly against you.

So Agile really seems to be the best hope for taking account of usability when dealing with an external supplier. Rather embarrassingly, I came across this excellent article by Jeff Patton only a few days ago. It dates from 2008 and I really wish I’d found it earlier. It lists 12 “best practice” patterns that Patton has seen people using to incorporate UX into Agile development. There’s some great stuff there.

A somewhat downbeat conclusion

Perhaps I shouldn’t have talked about “solutions” to the usability problems caused by using external suppliers. Maybe I should have used a phrase like “a better way”. I believe that flexible, staged contracts are vital. They are necessary, but they’re not sufficient. A commitment to usability is required regardless of the approach that is taken. It’s just that the traditional fixed price contract for a linear development torpedoes any hope of getting usability right. The damage is far more widespread than that, but usability is the issue that gets me particularly passionate.

I doubt if I’ve set off a light bulb in anyone’s head and they’re now thinking, “that’s what we’ve been doing wrong and I know what to do now”. However, I hope I’ve pointed some people in a useful direction to help them read, think and understand a bit more about what we do wrong.

I don’t know all the answers. Testers never do, and the big decisions that will affect usability and quality are a long way out of our remit. But we can, and should, be aware of the pitfalls of following particular approaches. We’ve a duty to speak out and tell management what can go wrong, and that just might tilt the odds back in our favour.