Before concentrating on software testing consultancy I worked at various times as an IT auditor and an information security manager. I also wrote an MSc dissertation on usability and software testing. I’ve therefore had plenty of opportunities to think about usability and security.
Usability and security: a trade-off?
If you search the internet for “usability” and “security” you’ll find plenty of hits. Nearly all of them are concerned with the trade-off, the tension between usability and security.
That’s obviously important, and it’s an interesting topic, but it’s not what I want to talk about here. I’d better stress that I’m not really talking about technical security testing of the infrastructure, i.e. penetration testing, or attacks on the network. This article is about the security of the application design, not the infrastructure.
I’m fascinated by the link between usability and security, by the similarities rather than the contrast that people usually focus on.
Functionality – the organisational view
Traditionally software development concentrated on the function of the application; that is, the function as seen by the organisation, not the user. Traditional techniques, especially structured analysis and design, treated people as objects, mere extensions of the machine (and the idea that software systems are machines was always misconceived). The user was just a self-propelled component of the system.
The system designer may have decided that the users should perform the different elements of a task in the sequence A, B, C and D. The natural sequence for the user might have been A, C, D, finishing up with B. That sequence may have been more intuitive, and the official sequence may have been confusing and time-wasting for the user, but that was just too bad.
The official sequence may have been chosen for good reasons, or it may have been entirely arbitrary, but that was the way it had to be.
Is traditional functional testing one-dimensional?
When it came to the functional testing the testers would probably have been working with a previously prepared test script along the lines of: “do A, B, C then D – does it work?”. After some tweaking it would no doubt work, and the application would be released.
The users then try to do the task in the sequence A, C, D, B. Maybe it works fine, in which case they settle down to that routine. Maybe it crashes the system. After all, the testers were pushed for time. They tested what the users were supposed to do. They had enough on their plate grinding through hundreds of laboriously documented test scripts. There was no time to see what would happen if the users didn’t follow the rules.
Or maybe the users’ sequence of A, C, D, B just traps them in a dead end because the system doesn’t permit it. They’ll learn the lesson, and get used to it. Their work may take significantly longer than it needs to, and they may hate the system, but those substantial costs are hidden and long-term. The relatively small cost savings of ignoring the users’ experience are more immediate and visible.
Another possibility is that the system will accept the unofficial sequence, but with unpredictable results. This is where security becomes relevant. If applications are designed to do what the organisation wants, without enough consideration of what is forbidden and unacceptable, then applications will be launched with all sorts of hidden, latent functionality.
As well as doing what they are supposed to, applications will allow users to do unpredictable things; some will be damaging to the company, some may be illegal.
The sequence A, C, D, B may allow a user to bypass some important control. Or there may be no genuine control. The application’s integrity may depend on nothing more substantial than users following prescribed sequences and routes through the system.
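To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The step names A–D come from the example above, but the `OrderWorkflow` guard and its prerequisite rules are my illustration, not any real system: a genuine control records which steps have actually completed on the server side and rejects a step whose prerequisites are missing, rather than trusting that the user followed the prescribed screen sequence.

```python
class WorkflowError(Exception):
    pass


class OrderWorkflow:
    """Hypothetical four-step task with a server-side control:
    step D (say, 'approve') requires B and C to have completed,
    regardless of the order in which the UI offers the screens."""

    # Which steps must be complete before each step may run.
    PREREQUISITES = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}

    def __init__(self):
        self.completed = set()

    def perform(self, step):
        missing = self.PREREQUISITES.get(step, set()) - self.completed
        if missing:
            raise WorkflowError(f"{step} blocked: missing {sorted(missing)}")
        self.completed.add(step)


wf = OrderWorkflow()
for step in ["A", "C"]:
    wf.perform(step)       # A then C is fine: C only requires A
try:
    wf.perform("D")        # D without B: the control rejects it
except WorkflowError as e:
    print(e)               # D blocked: missing ['B']
```

If the application’s integrity rests only on the official screen order, the equivalent of that `raise` simply isn’t there, and the unofficial sequence slips straight past the “control”.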
Testers have only a limited time to perform their functional testing, which in consequence can look at the application in a very blinkered, one-dimensional way. If the testing is purely slanted towards providing evidence that the application does what it is supposed to do, then it will be the users who find out the true functionality when it goes live. If there are weaknesses that will allow frauds then users have all the time they want to find them and see how the system can be abused.
Thinking about real user behaviour
The old world of big corporate internal IT development meant users had to like it or lump it when the new applications arrived. That doesn’t work with web applications used by the public. If users hate an application they won’t use it. Companies have to ensure that they test to find out how the users will actually use the application, or at the very least they have to be able to react quickly and refine applications when they see how they’re being used.
Usability and security therefore have a great deal in common, and both stand in contrast to the traditional emphasis on the corporate view of “function”.
In a sense, usability and security are just different ways of looking at the same problem.
How will our respectable users actually use the application?
How will the bad guys actually abuse the application?
Both need to be allowed for in testing, and some of the techniques overlap.
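One simple technique that serves both views is permutation testing: instead of scripting only the official sequence, generate every ordering of the steps and check that each one either completes legitimately or is rejected cleanly. This is a hedged sketch, assuming a hypothetical rule set of step prerequisites standing in for a real application interface:

```python
from itertools import permutations

# Hypothetical prerequisite rules: which steps each step depends on.
PREREQS = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}


def is_accepted(order):
    """True if every step's prerequisites completed before it ran."""
    done = set()
    for step in order:
        if PREREQS[step] - done:
            return False
        done.add(step)
    return True


# Try all 24 orderings of the four steps, not just the scripted one.
accepted = ["".join(p) for p in permutations("ABCD") if is_accepted(p)]
print(accepted)   # ['ABCD', 'ACBD'] - only 2 of 24 orderings pass
```

The point of the exercise is the rejected orderings: each one is a path a real user, respectable or otherwise, might take, and each needs a deliberate answer rather than whatever the code happens to do.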
Usability and security as integral parts of functional testing
Usability testing, if it is to be effective, has to be an integral part of the design process. Perhaps it’s so much a part of design that it’s hardly really testing at all. A standard technique is to use personas, characters that are fleshed out to help development teams understand who they are developing for.
A possible limitation of these personas is that they will be stock, stereotypical characters with bland, “good” characteristics who behave in neat, predictable ways. And if they’re going to be predictable, why bother with personas at all? Surely you have to think more deeply than that about your users and what motivates them.
If testers are looking only at what the application is supposed to be doing, and not at what it must not be doing, then their knowledge of the application will be hopelessly superficial compared to the deep understanding that real users will acquire over time.
It’s impossible for testers to acquire that deep knowledge in a limited time, but if they use well-judged personas, unleash their imagination and start looking at the application in a different and more cynical way then they can expose serious weaknesses.
Usability and security? It’s not as simple as saying there’s a trade-off. They’re just complementary ways of thinking about real users.
I discussed the sort of negative testing that can help find control weaknesses in an article that appeared in 2009 in Testing Experience and Security Acts magazines. You can find a copy here.