Business logic security testing (2009)

This article appeared in the June 2009 edition of Testing Experience magazine and the October 2009 edition of Security Acts magazine.

If you choose to read the article please bear in mind that it was written in January 2009 and is therefore inevitably dated in some respects. In particular, ISACA has restructured COBIT, but it remains a useful source. Overall I think the arguments I made in this article are still valid.

The references in the article were all structured for a paper magazine. They were not set up as hyperlinks and I have not tried to recreate them and check whether they still work.

The article

When I started in IT in the 80s the company for which I worked had a closed network restricted to about 100 company locations with no external connections.

Security was divided neatly into physical security, concerned with the protection of the physical assets, and logical security, concerned with the protection of data and applications from abuse or loss.

When applications were built the focus of security was on internal application security. The arrangements for physical security were a given, and didn’t affect individual applications.

There were no outsiders to worry about who might gain access, and so long as the common access controls software was working there was no need for analysts or designers to worry about unauthorized internal access.

Security for the developers was therefore a matter of ensuring that the application reflected the rules of the business; rules such as segregation of responsibilities, appropriate authorization levels, dual authorization of high value payments, reconciliation of financial data.
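Business rules like these translate directly into testable conditions. As a minimal sketch (the function name, threshold and operator names are all invented for illustration), a dual-authorization rule with segregation of responsibilities might look like this:

```python
# Sketch of a dual-authorization business rule. The £10,000
# threshold and all names are hypothetical examples.

DUAL_AUTH_THRESHOLD = 10_000

def payment_allowed(amount, requester, approvers):
    """A high-value payment needs two independent approvers.

    Segregation of responsibilities: the requester must never be
    an approver of their own payment.
    """
    if requester in approvers:
        return False  # self-approval is never acceptable
    if amount >= DUAL_AUTH_THRESHOLD:
        return len(approvers) >= 2  # dual authorization
    return len(approvers) >= 1

# Negative tests matter as much as positive ones:
assert payment_allowed(500, "alice", ["bob"])
assert not payment_allowed(15_000, "alice", ["bob"])           # one approver only
assert not payment_allowed(15_000, "alice", ["alice", "bob"])  # self-approval
assert payment_allowed(15_000, "alice", ["bob", "carol"])
```

The point of the sketch is the negative cases: they are the "must not" half of the rule, which is exactly the half that tends to go unstated.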

The world quickly changed and relatively simple, private networks isolated from the rest of the world gave way to more open networks with multiple external connections and to web applications.

Security consequently acquired much greater focus. However, it began to seem increasingly detached from the work of developers. Security management and testing became specialisms in their own right, and not just an aspect of technical management and support.

We developers and testers continued to build our applications, comforted by the thought that the technical security experts were ensuring that the network perimeter was secure.

Nominally security testing was a part of non-functional testing. In reality, it had become somewhat detached from conventional testing.

According to the glossary of the British Computer Society’s Special Interest Group in Software Testing (BCS SIGIST) [1], security testing is determining whether the application meets the specified security requirements.

SIGIST also says that security entails the preservation of confidentiality, integrity and availability of information. Availability means ensuring that authorized users have access to information and associated assets when required. Integrity means safeguarding the accuracy and completeness of information and processing methods. Confidentiality means ensuring that information is accessible only to those authorized to have access.

Penetration testing, and testing the security of the network and infrastructure, are all obviously important, but if you look at security in the round, bearing in mind wider definitions of security (such as SIGIST’s), then these activities can’t be the whole of security testing.

Some security testing has to consist of routine functional testing that is purely a matter of how the internals of the application work. Security testing that is considered and managed as an exercise external to the development, an exercise that follows the main testing, is necessarily limited. It cannot detect defects that are within the application rather than on the boundary.

Within the application, insecure design features or insecure coding might be detected without any deep understanding of the application’s business role. However, like any class of requirements, security requirements will vary from one application to another, depending on the job the application has to do.

If there are control failures that reflect poorly applied or misunderstood business logic, or business rules, then will we as functional testers detect that? Testers test at the boundaries. Usually we think in terms of boundary values for the data, the boundary of the application or the network boundary with the outside world.

Do we pay enough attention to the boundary of what is permissible user behavior? Do we worry enough about abuse by authorized users, employees or outsiders who have passed legitimately through the network and attempt to subvert the application, using it in ways never envisaged by the developers?

I suspect that we do not, and this must be a matter for concern. A Gartner report of 2005 [2] claimed that 75% of attacks are at the application level, not the network level. The types of threats listed in the report all arise from technical vulnerabilities, such as command injection and buffer overflows.

Such application layer vulnerabilities are obviously serious, and must be addressed. However, I suspect too much attention has been given to them at the expense of vulnerabilities arising from failure to implement business logic correctly.

This is my main concern in this article. Such failures can offer great scope for abuse and fraud. Security testing has to be about both the technology and the business.

Problem of fraud and insider abuse

It is difficult to come up with reliable figures about fraud because of its very nature. According to PricewaterhouseCoopers in 2007 [3] the average loss to fraud by companies worldwide over the two years from 2005 was $2.4 million (their survey being biased towards larger companies). This is based on reported fraud, and PwC increased the figure to $3.2 million to allow for unreported frauds.

In addition to the direct costs there were average indirect costs in the form of management time of $550,000 and substantial unquantifiable costs in terms of damage to the brand, staff morale, reduced share prices and problems with regulators.

PWC stated that 76% of their respondents reported the involvement of an outside party, implying that 24% were purely internal. However, when companies were asked for details on one or two frauds, half of the perpetrators were internal and half external.

It would be interesting to know the relative proportions of frauds (by number and value) which exploited internal applications and customer facing web applications but I have not seen any statistics for these.

The U.S. Secret Service and CERT Coordination Center have produced an interesting series of reports on “illicit cyber activity”. In their 2004 report on crimes in the US banking and finance sector [4] they reported that in 70% of the cases the insiders had exploited weaknesses in applications, processes or procedures (such as authorized overrides). 78% of the time the perpetrators were authorized users with active accounts, and in 43% of cases they were using their own account and password.

The enduring problem with fraud statistics is that many frauds are not reported, and many more are not even detected. A successful fraud may run for many years without being detected, and may never be detected. A shrewd fraudster will not steal enough money in one go to draw attention to the loss.

I worked on the investigation of an internal fraud at a UK insurance company that had lasted 8 years, as far back as we were able to analyze the data and produce evidence for the police. The perpetrator had raised 555 fraudulent payments, all for less than £5,000, and had stolen £1.1 million by the time that we received an anonymous tip off.

The control weaknesses related to an abuse of the authorization process, and a failure of the application to deal appropriately with third party claims payments, which were extremely vulnerable to fraud. These weaknesses would have been present in the original manual process, but the users and developers had not taken the opportunities that a new computer application had offered to introduce more sophisticated controls.

No-one had been negligent or even careless in the design of the application and the surrounding procedures. The trouble was that the requirements had focused on the positive functions of the application, and on replicating the functionality of the previous application, which in turn had been based on the original manual process. There had not been sufficient analysis of how the application could be exploited.

Problem of requirements and negative requirements

Earlier I was careful to talk about failure to implement business logic correctly, rather than implementing requirements. Business logic and requirements will not necessarily be the same.

The requirements are usually written as “the application must do” rather than “the application must not…”. Sometimes the “must not” is obvious to the business. It “goes without saying” – that dangerous phrase!

However, the developers often lack the deep understanding of business logic that users have, and they design and code only the “must do”, not even being aware of the implicit corollary, the “must not”.

As a computer auditor I reviewed a sales application which had a control to ensure that debts couldn’t be written off without review by a manager. At the end of each day a report was run to highlight debts that had been cleared without a payment being received. Any discrepancies were highlighted for management action.

I noticed that it was possible to overwrite the default of today’s date when clearing a debt. Inserting a date in the past meant that the money I’d written off wouldn’t appear on any control report. The report for that date had been run already.
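A hypothetical reconstruction makes the weakness concrete. If the daily control report selects write-offs by their (user-editable) clearance date, then a backdated write-off lands on a report that has already been run, and is never reported anywhere. This sketch assumes that design; the account identifiers and amounts are invented.

```python
from datetime import date, timedelta

# Hypothetical reconstruction of the control weakness: the daily
# report selects write-offs by their user-editable clearance date.

write_offs = []

def write_off_debt(account, amount, cleared_on):
    # The flaw: the operator can over-key the default of today's date.
    write_offs.append({"account": account, "amount": amount,
                       "cleared_on": cleared_on})

def daily_control_report(report_date):
    return [w for w in write_offs if w["cleared_on"] == report_date]

today = date.today()
write_off_debt("A123", 4_999, cleared_on=today)
write_off_debt("B456", 4_999, cleared_on=today - timedelta(days=3))

# Only the honestly dated write-off appears on today's report.
# The backdated one falls on a report that has already run.
assert len(daily_control_report(today)) == 1
```

Reporting on the date the write-off was *entered*, rather than the date keyed by the operator, would close the gap.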

When I mentioned this to the users and the teams who built and tested the application the initial reaction was “but you’re not supposed to do that”, and then they all tried blaming each other. There was a prolonged discussion about the nature of requirements.

The developers were adamant that they’d done nothing wrong because they’d built the application exactly as specified, and the users were responsible for the requirements.

The testers said they’d tested according to the requirements, and it wasn’t their fault.

The users were infuriated at the suggestion that they should have to specify every last little thing that should be obvious – obvious to them anyway.

The reason I was looking at the application, and looking for that particular problem, was because we knew that a close commercial rival had suffered a large fraud when a customer we had in common had bribed an employee of our rival to manipulate the sales control application. As it happened there was no evidence that the same had happened to us, but clearly we were vulnerable.

Testers should be aware of missing or unspoken requirements, implicit assumptions that have to be challenged and tested. Such assumptions and requirements are a particular problem with security requirements, which is why the simple SIGIST definition of security testing I gave above isn’t sufficient – security testing cannot be only about testing the formal security requirements.

However, testers, like developers, are working to tight schedules and budgets. We’re always up against the clock. Often there is barely enough time to carry out all the positive testing that is required, never mind thinking through all the negative testing that would be required to prove that missing or unspoken negative requirements have been met.

Fraudsters, on the other hand, have almost unlimited time to get to know the application and see where the weaknesses are. Dishonest users also have the motivation to work out the weaknesses. Even people who are usually honest can be tempted when they realize that there is scope for fraud.

If we don’t have enough time to do adequate negative testing to see what weaknesses could be exploited then at least we should be doing a quick informal evaluation of the financial sensitivity of the application and alerting management, and the internal computer auditors, that there is an element of unquantifiable risk. How comfortable are they with that?

If we can persuade project managers and users that we need enough time to test properly, then what can we do?

CobiT and OWASP

If there is time, there are various techniques that testers can adopt to try and detect potential weaknesses or which we can encourage the developers and users to follow to prevent such weaknesses.

I’d like to concentrate on the CobiT (Control Objectives for Information and related Technology) guidelines for developing and testing secure applications (CobiT 4.1 2007 [5]), and the CobiT IT Assurance Guide [6], and the OWASP (Open Web Application Security Project) Testing Guide [7].

Together, CobiT and OWASP cover the whole range of security testing. They can be used together, CobiT being more concerned with what applications do, and OWASP with how applications work.

They both give useful advice about the internal application controls and functionality that developers and users can follow. They can also be used to provide testers with guidance about test conditions. If the developers and users know that the testers will be consulting these guides then they have an incentive to ensure that the requirements and build reflect this advice.

CobiT implicitly assumes a traditional, big up-front design, Waterfall approach. Nevertheless, it’s still potentially useful for Agile practitioners, and it is possible to map from CobiT to Agile techniques, see Gupta [8].

The two most relevant parts are in the CobiT IT Assurance Guide [6]. This is organized into domains, the most directly relevant being “Acquire and Implement” the solution. This is really for auditors, guiding them through a traditional development, explaining the controls and checks they should be looking for at each stage.

It’s interesting as a source of ideas, and as an alternative way of looking at the development, but unless your organization has mandated the developers to follow CobiT there’s no point trying to graft this onto your project.

Of much greater interest are the six CobiT application controls. Whereas the domains are functionally separate and sequential activities, a life-cycle in effect, the application controls are statements of intent that apply to the business area and the application itself. They can be used at any stage of the development. They are:

AC1 Source Data Preparation and Authorization

AC2 Source Data Collection and Entry

AC3 Accuracy, Completeness and Authenticity Checks

AC4 Processing Integrity and Validity

AC5 Output Review, Reconciliation and Error Handling

AC6 Transaction Authentication and Integrity

Each of these controls has stated objectives, and tests that can be made against the requirements, the proposed design and then on the built application. Clearly these are generic statements potentially applicable to any application, but they can serve as a valuable prompt to testers who are willing to adapt them to their own application. They are also a useful introduction for testers to the wider field of business controls.
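One practical way to adapt the controls is to use them as a prompt list for deriving application-specific test conditions. In this sketch the six control names come from CobiT 4.1 as listed above; the example test conditions are invented for illustration and would differ for every application.

```python
# The six CobiT application controls used as prompts for test
# conditions. Control names are from CobiT 4.1; the mapped
# conditions are hypothetical examples for a payments system.

APPLICATION_CONTROLS = {
    "AC1": "Source Data Preparation and Authorization",
    "AC2": "Source Data Collection and Entry",
    "AC3": "Accuracy, Completeness and Authenticity Checks",
    "AC4": "Processing Integrity and Validity",
    "AC5": "Output Review, Reconciliation and Error Handling",
    "AC6": "Transaction Authentication and Integrity",
}

test_conditions = {
    "AC1": ["Can a payment be raised by someone without authority?"],
    "AC3": ["Are duplicate payment requests rejected?"],
    "AC5": ["Does every write-off appear on exactly one control report?"],
}

for ac in sorted(test_conditions):
    print(f"{ac} {APPLICATION_CONTROLS[ac]}")
    for condition in test_conditions[ac]:
        print(f"  - {condition}")
```

Controls with no conditions mapped against them (AC2, AC4 and AC6 here) are themselves a useful finding: either the control doesn't apply, or there is a gap in the test coverage.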

CobiT rather skates over the question of how the business requirements are defined, but these application controls can serve as a useful basis for validating the requirements.

Unfortunately the CobiT IT Assurance Guide can be downloaded for free only by members of ISACA (Information Systems Audit and Control Association) and costs $165 for non-members to buy. Try your friendly neighborhood Internal Audit department! If they don’t have a copy, well maybe they should.

If you are looking for a more constructive and proactive approach to the requirements then I recommend the Open Web Application Security Project (OWASP) Testing Guide [7]. This is an excellent, accessible document covering the whole range of application security, both technical vulnerabilities and business logic flaws.

It offers good, practical guidance to testers. It also offers a testing framework that is basic, and all the better for that, being simple and practical.

The OWASP testing framework demands early involvement of the testers, and runs from before the start of the project to reviews and testing of live applications.

Phase 1: Before development begins

1A: Review policies and standards

1B: Develop measurement and metrics criteria (ensure traceability)

Phase 2: During definition and design

2A: Review security requirements

2B: Review design and architecture

2C: Create and review UML models

2D: Create and review threat models

Phase 3: During development

3A: Code walkthroughs

3B: Code reviews

Phase 4: During deployment

4A: Application penetration testing

4B: Configuration management testing

Phase 5: Maintenance and operations

5A: Conduct operational management reviews

5B: Conduct periodic health checks

5C: Ensure change verification

OWASP suggests four test techniques for security testing: manual inspections and reviews, code reviews, threat modeling and penetration testing. The manual inspections are reviews of design, processes, policies, documentation and even interviewing people; everything except the source code, which is covered by the code reviews.

A feature of OWASP I find particularly interesting is its fairly explicit admission that the security requirements may be missing or inadequate. This is unquestionably a realistic approach, but usually testing models blithely assume that the requirements need tweaking at most.

The response of OWASP is to carry out what looks rather like reverse engineering of the design into the requirements. After the design has been completed testers should perform UML modeling to derive use cases that “describe how the application works. In some cases, these may already be available”.

Obviously in many cases these will not be available, but the clear implication is that even if they are available they are unlikely to offer enough information to carry out threat modeling.

The feature most likely to be missing is the misuse case. These are the dark side of use cases! As envisaged by OWASP the misuse cases shadow the use cases, threatening them, then being mitigated by subsequent use cases.
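The shadowing relationship can be captured very simply. This sketch models the use case / misuse case / mitigation pattern described above; all the case names are invented examples, drawn from the debt write-off story earlier in the article.

```python
# Sketch of the use case / misuse case / mitigation pattern:
# each misuse case shadows a use case and is answered by a
# mitigating use case. All names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UseCase:
    name: str

@dataclass
class MisuseCase:
    name: str
    threatens: UseCase
    mitigated_by: Optional[UseCase] = None  # None marks an open gap

write_off = UseCase("Write off a debt")
backdate = MisuseCase(
    "Backdate a write-off to dodge the daily control report",
    threatens=write_off,
)
backdate.mitigated_by = UseCase(
    "Report write-offs by entry date, not keyed clearance date"
)

# Any misuse case left without a mitigation is a finding in itself.
model = [backdate]
gaps = [m.name for m in model if m.mitigated_by is None]
assert gaps == []
```

Walking the model for unmitigated misuse cases gives the tester a ready-made list of negative test charters.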

The OWASP framework is not designed to be a checklist, to be followed blindly. The important point about using UML is that it permits the tester to decompose and understand the proposed application to the level of detail required for threat modeling, but also with the perspective that threat modeling requires; i.e. what can go wrong? what must we prevent? what could the bad guys get up to?

UML is simply a means to that end, and was probably chosen largely because that is what most developers are likely to be familiar with, and therefore UML diagrams are more likely to be available than other forms of documentation. There was certainly some debate in the OWASP community about what the best means of decomposition might be.

Personally, I have found IDEF0 a valuable means of decomposing applications while working as a computer auditor. Full details of this technique can be found at [9].

It entails decomposing an application using a hierarchical series of diagrams, each of which has between three and six functions. Each function has inputs, which are transformed into outputs, depending on controls and mechanisms.

Is IDEF0 as rigorous and effective as UML? No, I wouldn’t argue that. When using IDEF0 we did not define the application in anything like the detail that UML would entail. Its value was in allowing us to develop a quick understanding of the crucial functions and issues, and then ask pertinent questions.

Given that certain inputs must be transformed into certain outputs, what are the controls and mechanisms required to ensure that the right outputs are produced?

In working out what the controls were, or ought to be, we’d run through the mantra that the output had to be accurate, complete, authorized, and timely. “Accurate” and “complete” are obvious. “Authorized” meant that the output must have been created or approved by people with the appropriate level of authority. “Timely” meant that the output must not only arrive in the right place, but at the right time. One could also use the six CobiT application controls as prompts.
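Combining the IDEF0 decomposition with the mantra gives a simple coverage check: for each function, has every output been verified as accurate, complete, authorized and timely? This sketch assumes that shape; the function and output names echo the write-off example but are otherwise illustrative.

```python
# Sketch of an IDEF0-style function box: inputs transformed into
# outputs, governed by controls and mechanisms, with a record of
# which mantra attributes have been verified per output.

from dataclasses import dataclass, field

MANTRA = ("accurate", "complete", "authorized", "timely")

@dataclass
class Idef0Function:
    name: str
    inputs: list
    outputs: list
    controls: list = field(default_factory=list)
    mechanisms: list = field(default_factory=list)
    checked: dict = field(default_factory=dict)  # output -> verified attributes

    def unverified(self):
        """Outputs with at least one mantra attribute still unchecked."""
        return {o: [m for m in MANTRA if m not in self.checked.get(o, set())]
                for o in self.outputs
                if set(self.checked.get(o, set())) != set(MANTRA)}

fn = Idef0Function(
    name="Write off a debt",
    inputs=["write-off request"],
    outputs=["cancelled debts"],
    controls=["daily management review report"],
    mechanisms=["sales application"],
)
fn.checked["cancelled debts"] = {"accurate", "authorized"}

# "complete" and "timely" remain unverified - the two attributes
# the write-off example turned on.
assert fn.unverified() == {"cancelled debts": ["complete", "timely"]}
```

The value is not in the data structure but in the prompt: every output that fails the check is a question to put to the developers and users.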

In the example I gave above of the debt being written off I had worked down to the level of detail of “write off a debt” and looked at the controls required to produce the right output, “cancelled debts”. I focused on “authorized”, “complete” and “timely”.

Any sales operator could cancel a debt, but that raised the item for management review. That was fine. The problem was with “complete” and “timely”. All write-offs had to be collected for the control report, which was run daily. Was it possible to ensure some write-offs would not appear? Was it possible to over-key the default of the current date? It was possible. If I did so, would the write-off appear on another report? No. The control failure therefore meant that the control report could be easily bypassed.

The testing that I was carrying out had nothing to do with the original requirements. They were of interest, but not really relevant to what I was trying to do. I was trying to think like a dishonest employee, looking for a weakness I could exploit.

The decomposition of the application is the essential first step of threat modeling. Following that, one should analyze the assets for importance, explore possible vulnerabilities and threats, and create mitigation strategies.

I don’t want to discuss these in depth. There is plenty of material about threat modeling available. OWASP offers good guidance, [10] and [11]. Microsoft provides some useful advice [12], but its focus is on technical security, whereas OWASP looks at the business logic too. The OWASP testing guide [7] has a section devoted to business logic that serves as a useful introduction.

OWASP’s inclusion of mitigation strategies in the version of threat modeling that it advocates for testers is interesting. This is not normally a tester’s responsibility. However, considering such strategies is a useful way of planning the testing. What controls or protections should we be testing for? I think it also implicitly acknowledges that the requirements and design may well be flawed, and that threat modeling might not have been carried out in circumstances where it really should have been.

This perception is reinforced by OWASP’s advice that testers should ensure that threat models are created as early as possible in the project, and should then be revisited as the application evolves.

What I think is particularly valuable about the application control advice in CobiT and OWASP is that they help us to focus on security as an attribute that can, and must, be built into applications. Security testing then becomes a normal part of functional testing, as well as a specialist technical exercise. Testers must not regard security as an audit concern, with the testing being carried out by quasi-auditors, external to the development.

Getting the auditors on our side

I’ve had a fairly unusual career in that I’ve spent several years in each of software development, IT audit, IT security management, project management and test management. I think that gives me a good understanding of each of these roles, and a sympathetic understanding of the problems and pressures associated with them. It’s also taught me how they can work together constructively.

In most cases this is obvious, but the odd one out is the IT auditor. They have the reputation of being the hard-nosed suits from head office who come in to bayonet the wounded after a disaster! If that is what they do then they are being unprofessional and irresponsible. Good auditors should be pro-active and constructive. They will be happy to work with developers, users and testers to help them anticipate and prevent problems.

Auditors will not do your job for you, and they will rarely be able to give you all the answers. They usually have to spread themselves thinly across an organization, inevitably concentrating on the areas with problems and which pose the greatest risk.

They should not be dictating the controls, but good auditors can provide useful advice. They can act as a valuable sounding board, for bouncing ideas off. They can also be used as reinforcements if the testers are coming under irresponsible pressure to restrict the scope of security testing. Good auditors should be the friend of testers, not our enemy. At least you may be able to get access to some useful, but expensive, CobiT material.

Auditors can give you a different perspective and help you ask the right questions, and being able to ask the right questions is much more important than any particular tool or method for testers.

This article tells you something about CobiT and OWASP, and about possible new techniques for approaching testing of security. However, I think the most important lesson is that security testing cannot be a completely separate specialism, and that security testing must also include the exploration of the application’s functionality in a skeptical and inquisitive manner, asking the right questions.

Validating the security requirements is important, but so is exposing the unspoken requirements and disproving the invalid assumptions. It is about letting management see what the true state of the application is – just like the rest of testing.


[1] British Computer Society’s Special Interest Group in Software Testing (BCS SIGIST) Glossary.

[2] Gartner Inc. “Now Is the Time for Security at the Application Level” (NB PDF download), 2005.

[3] PricewaterhouseCoopers. “Economic crime: people, culture and controls. The 4th biennial Global Economic Crime Survey”.

[4] US Secret Service. “Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector”.

[5] IT Governance Institute. CobiT 4.1, 2007.

[6] IT Governance Institute. CobiT IT Assurance Guide (not free), 2007.

[7] Open Web Application Security Project. OWASP Testing Guide, V3.0, 2008.

[8] Gupta, S. “SOX Compliant Agile Processes”, Agile Alliance Conference, Agile 2008.

[9] IDEF0 Function Modeling Method.

[10] Open Web Application Security Project. OWASP Threat Modeling, 2007.

[11] Open Web Application Security Project. OWASP Code Review Guide “Application Threat Modeling”, 2009.

[12] Microsoft. “Improving Web Application Security: Threats and Countermeasures”, 2003.

Testers are like auditors

No, this isn’t another “life is like a box of chocolates”. It’s not a tortured metaphor to try and make a point. I mean it straight up. Testers and auditors are very similar, or they should be.

In May I went to hear Michael Bolton speaking in Edinburgh. He touched on the distinction between testing and checking. If you’re not familiar with this then go and check out this article now. Sure I want you to read my article, but Michael’s argument is important and you really need to understand it.

I talked to Michael for a few minutes afterwards, and I said that the distinction between testing and checking applied also to auditors and that I’d had arguments about that very point, about the need for good auditors to find out what’s really going on rather than to work their way down a checklist.

I mentioned other similarities between testing and auditing and Michael agreed. Usually people are surprised, or think the connection is tenuous. Recruitment consultants have been the most frustrating. They see my dual experience in testing and auditing as an oddity, not an asset.

I said I’d been meaning to write about the link between testing and auditing, especially the audit of applications. Michael told me to do it! So here it is.

It’s about using your brain – not a script

If you don’t know what you’re doing then a script to follow is handy. However, that doesn’t mean that unskilled people with prepared scripts, or checklists, are adequate substitutes for skilled people who know what they’re doing.

As Michael said in his talk, scripts focus our attention on outputs rather than outcomes. If you’re just doing a binary yes/no check then all you need to know is whether the auditees complied. You don’t need to trouble your brain with tricky questions like: “does it matter?”, “how significant is it?”.

I’ve seen auditors whose job was simply to check whether people and applications were complying with the prescribed controls. They asked “have you done x to y” and would accept only the answers “yes” or “no”. I thought that was unprofessional and potentially dangerous. It encourages managers to take decisions based on the fear of getting beaten up for “failing” an audit rather than decisions based on commercial realities.

However, that style of auditing is easier and cheaper than doing a thorough, professional job. At least it’s cheaper if your aim is to go through the motions and do a low quality job at the lowest possible cost. You can fulfil your regulatory obligations to have an internal audit department without having to pay top dollar for skilled staff. The real costs of that cheapskate approach can be massive and long term.

What’s the risk?

Good auditing, like good testing, has to be risk based. You concentrate on the areas that matter, where the risk is greatest. You then assess your findings in the light of the risk and the business context of the application.

The questions I asked myself when I approached the audit of an application were:

“What must the application do, and how will I know that it’s doing it?”

“What must the application prevent, and how will I know that it really will stop it happening?”

“What controls are in place to make sure that all processing is on time, complete, accurate and authorised by the right people?”

The original requirements might be of interest, but if they were missing or hopelessly out of date it was no big deal. What mattered was the relationship between the current business environment and the application. Who cared if the application perfectly reflected the requirements if the world had moved on? What really mattered were the context and the risk.

Auditing with exploratory testing

Application audits were invariably time boxed. There was no question of documenting our planned tests in detailed scripts. We worked in a highly structured manner, but documentation was light. Once we understood the application’s context, and the functions that were crucial, we’d identify the controls that were needed to make sure the right things happened, and the wrong things didn’t.

I’d then go in and see if it worked out that way. Did the right things always happen? Could we force through transactions that should be stopped? Could we evade controls? Could we break the application? I’d approach each piece of testing with a particular mindset, out to prove a particular point.

It’s maybe pushing the point to call it true exploratory testing, but that’s only because we’d never even heard of it. It was just an approach that worked for us. However, in principle I think it was no different from exploratory testing and we could have done our job much better if we had been trained.

Apart from the fun and adrenalin rush of catching criminals on fraud investigations (no use denying it – that really was good fun) this was the best part of auditing. You’d built up a good knowledge of the application and its context, then you went in and tried to see how it really worked. It was always fascinating, and it was nothing like the stereotype of the checklist-driven compliance auditor. It was an engrossing intellectual exercise, learning more and applying each new piece of understanding to learn still more.

“It’s your decision – but this is what’s going on”

I have a horror of organisations that are in hock to their internal auditors. Such bodies are deeply dysfunctional. The job of good auditors is to shine a light on what’s really going on, not to beat people up for breaking the rules. It’s the responsibility of management to act on the findings and recommendations of auditors. It should never be the responsibility of auditors to tell people what they must do. It happens though, and corporations that allow it are sick. It means that auditors effectively take commercial decisions for which they are neither trained nor accountable.

It’s just like letting testers make the decision to ship, without being responsible for the commercial consequences.

Both testers and auditors are responsible for letting the right people have the best available information to make the right decisions. The context varies, and the emphasis is different, but many of the techniques are interchangeable. Of course auditors look at things that testers don’t, but these differences can still offer the chance to learn from each other.

For instance, auditors will pay close attention to the controls and procedures surrounding an application. How else could they judge the effectiveness of the application’s controls? They have to understand how the controls fit into the wider business if they are to assess the impact and risk of weaknesses. Maybe the broader vision is something testers could work on?

Why not try cultivating your internal audit department? If they’re any good you could learn something. If they’re no good then they could learn a lot from you!

O2 website usability: beating the user up

I’m continuing my discussion of the O2 registration process, and showing how it’s a good example of how testers can bring a more creative approach to testing than just ticking off the test cases. See the previous two posts from which this one follows on; “Usability and O2 Registration” and “O2 website usability: testing, secrets and answers”.

The next problem with the registration form was clearing valid input fields when an error was detected.

This is something that really bugs me. Ok, if I’ve entered something invalid then I deserve to be punished by the application. I’ll take it like a man. Just give me the snotty error message, and clear that field. But clearing other fields that are fine? Well that’s just mean.

Now I’m straying outside my comfort zone here and if this were a real project I’d be asking questions that coders might consider really dumb. On the other hand, they might be shifting uncomfortably in their seats. Fortunately I spent a few years as a computer auditor asking exactly those sorts of questions. It’s not that I developed a thick skin (though I did). It’s more that I realised you can get a pretty high hit rate with questions starting “sorry if I’m being a bit dim here but …”.

Server side validation?

Wiping correctly completed fields looks rather like developers are relying on server side validation. If there’s a problem they then have to regenerate the form with an error message.
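Even if the form really must be validated and regenerated at the server, the valid fields don’t have to be thrown away. A minimal sketch of the idea (the field names and rules are my invention, not O2’s code) would validate each field, then repopulate everything that passed, deliberately dropping only the sensitive password fields:

```python
# Sketch of server-side form validation that preserves valid input.
# Field names and validation rules are illustrative, not O2's actual code.

SENSITIVE_FIELDS = {"password", "password_confirm"}

def validate_registration(fields):
    """Return (errors, repopulate): per-field error messages, plus the
    values that are safe to send back when the form is regenerated."""
    errors = {}
    if not fields.get("username"):
        errors["username"] = "Please choose a user name"
    answer = fields.get("security_answer", "")
    if not 1 <= len(answer) <= 50:
        errors["security_answer"] = "Answer must be 1-50 characters long"
    # Repopulate every field the user got right, except sensitive ones.
    repopulate = {name: value for name, value in fields.items()
                  if name not in errors and name not in SENSITIVE_FIELDS}
    return errors, repopulate
```

The passwords are excluded on purpose (echoing credentials back in a regenerated page is the one case where wiping a field is defensible); everything else the user typed correctly survives the round trip.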


Does server side validation mean that they’d have to send the password and validation code back from the server to rebuild the form? Is that considered a security exposure? I don’t see why, but I’m not sufficiently technically aware to know for sure. Please someone, enlighten me.

If that is a potential weakness, why are they doing all the validation at the server? Sure, the definitive validation has to be at the server, or you’re leaving yourself wide open, but really, all of the validation? I don’t know the answers. I’m just asking. “Maybe I’m being dumb, but …”.

Server side validation causes a further problem when users choose an account name. O2 have over 20 million customers, so the chances are that a user’s first choice will already have been taken. Each unsuccessful request for an account name wipes the validation code and passwords from the form.

This is quite a common problem and there are solutions. You don’t have to make your users suffer. You can use Ajax to communicate with the server in the background without having to refresh the form. Applications can therefore check proposed account names as the user is typing, and provide immediate feedback. Maybe there’s a good reason why Ajax hasn’t been used, maybe not.

The point is to ask, and force developers to justify whacking the users over the head. You mustn’t do it in an aggressive or sarcastic way. People often do things in certain ways because it’s convenient, without questioning their motives. It’s helpful to be forced to sit back and think just why we’re doing it that way, what the implications are, and if that route really is the best.

It’s also vital that you don’t come across as telling the developers that their technical solution is wrong and that you know better. Almost certainly you don’t. You’ve just got a different perspective. If the developers think you’re fronting up to them in a technical showdown then they’ll humiliate you, and frankly that would serve you right!

Do we really need all of these fields?

Next, why are O2 asking for my name and address? At first that seems like a daft question. I’m registering. Of course they want these details. It’s a no brainer. That seems to be the extent of the thought O2 gave to the matter.

However, I’ve got contracts for both phones. O2 already know my name and my address. There’s no validation of the name and address against the details that are already held. Will O2 treat my input as a change to my account details? Actually, they won’t. I know that from experimenting with the site.

So why bother capturing the details? Why bother risking confusion later when they realise they have two addresses for the same account, or when the customer is puzzled that O2 aren’t using the new address? These shouldn’t be rhetorical questions. Testers should be asking questions like that.

Something else that puzzles me is the large number of text boxes to capture the address. Why do they need the address pre-formatted like this? Is this just for the convenience of the coders?

The postcode should be validated, and should be captured in a separate box. At a pinch, capturing the house number or name separately can be justified, especially if the correct postal address is being generated. I’ve seen only two half-way convincing arguments for multiple address boxes.
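Postcode validation is well-trodden ground. A simplified sketch (the real rules for UK postcodes have more exceptions than this pattern covers, so treat it as illustrative) might normalise and check that one field on its own:

```python
import re

# Simplified UK postcode pattern: outward code (area + district), space,
# inward code (sector + unit). Real postcodes have extra special cases
# (e.g. GIR 0AA) that this sketch deliberately ignores.
POSTCODE = re.compile(r"^[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][A-Z]{2}$")

def normalise_postcode(raw):
    """Return the postcode in canonical form, or None if it looks invalid."""
    cleaned = re.sub(r"\s+", "", raw.upper())
    if len(cleaned) < 5:
        return None
    # The inward code is always the last three characters.
    candidate = cleaned[:-3] + " " + cleaned[-3:]
    return candidate if POSTCODE.match(candidate) else None
```

Note that it normalises rather than rejects: “eh1 1aa” and “EH11AA” both come back as “EH1 1AA”, instead of punishing the user for the spacing.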

Firstly, there’s the “users are idiots” argument. They will enter garbage in the address field given half a chance. Secondly, fraudsters love to mangle addresses to make investigation more difficult.

The first argument has some merit, but multiple input boxes don’t help much. Users can still enter rubbish, and that’s why the second argument doesn’t really stand up. I’ve worked on many fraud investigations, and separate input boxes are no handicap to a fraudster.

Insisting that users input their address in a certain way risks annoying them. O2 are not guilty of it here, but some companies use drop-down menus with errors in them. My address on eBay and PayPal is wrong because they’ve made it impossible to enter my correct address. They don’t understand how the system of Scottish postal addresses works. Stuff still gets here, but it annoys me every time I see it.

Splitting the whole address up into a series of separate boxes looks very much like inertia. That’s the way it’s always been done – no need to question it.

Did anyone question why O2 need to capture all the other mandatory fields? Are these genuinely essential, or just nice to have? It’s not as if users have to provide the information. They might react by cancelling the registration process, and denying O2 any benefit at all. Or when they realise they can’t continue without entering something, that’s just what they’ll give you: anything! What value does management information have if you’ve goaded users into entering false data?

I recommend this blog piece by David Hamill on the subject. His blog has lots of other good ideas and discussions.

Forms should not be designed by throwing every possible field that could be captured at a wireframe and then telling the coders to crack on. Each question should have to be justified because each extra question will tip some users over the edge.

There’s a trade-off between capturing information and making things easy and pleasant for customers. If your customers think you’ve gone too far in the wrong direction then they’ll punish you.

What’s the big picture?

Specialists focus on their own area of expertise, in which they are supremely confident. They expect others to defer to them, and in turn they defer to other experts in different areas.

Sometimes the whole process needs someone to ask the pertinent questions that will open everything up and help people to see the big picture. Testers can do that.

This look at the O2 registration process has been more interesting than I expected, and I’ve had more to say than I ever thought at the start. However, I promise to pull it all together with my last piece, “The O2 registration series – wrapping up”.

O2 website usability: testing, secrets and answers

I left off my previous blog about the usability of the O2 site’s registration process when I’d got the validation code, which was texted to my phone, and I was about to move on to the registration form itself.

This form is a gem if you’re wanting to look at poor usability and sloppy testing, but if you’re an O2 customer then it’s a charmless mess.

  • It applies inappropriate validation, and compounds the problem with an inaccurate error message.
  • It seems too reliant on server side validation.
  • Valid input is cleared if an error is detected in another field.
  • It has too many text input boxes, and too many of them are mandatory.
  • The form as a whole, and the process it supports, don’t seem to have been thought through.

I’m going to deal with only the first issue in this post because it’s not a straightforward matter of usability. It highlights the security weakness of secret questions and answers.


Do companies allow coders to write error messages for users?

I entered the six digit validation code and all the other details, including choosing a user name and password.

I had to choose a security question from the usual set, i.e. mother’s maiden name, first school etc. I chose my mother’s maiden name, then entered the name, “O’Neill”. By the way, that is neither the question I chose, nor the value I entered, but it serves my point.

I got the following error message.

“Security answer must contain letters and numbers and be 1-50 characters long”

That seemed a bit odd, but I stuck a couple of integers on the end of the name.

It then became clear that the validation code and passwords (initial entry and confirmation) had been removed from the refreshed screen that had come back with the error message. It wasn’t immediately obvious that the validation code had been removed because I was below the fold and the message was out of sight.

So I re-entered all the details and tried to submit again. It still didn’t accept my mother’s name, and the validation code and passwords had gone again.

It then dawned on me that when they said that the answer must have letters and numbers they didn’t actually mean that. Maybe they meant that it couldn’t have special characters? So I tried removing the apostrophe from O’Neill.

Yes, that was the problem! They’d created a freeform text input field with validation to stop special characters being entered, but the error message didn’t actually say so. Oh dear!

However, I didn’t have the chance to enjoy my success. Now the user name I’d chosen was flagged up as being unavailable. And yes, the passwords and validation codes had been wiped.

I re-entered everything and tried another user name. No joy, and of course I’d lost my data again. Next time I just asked them to select a user name for me.

Success at last! I’d registered.

Filling in the form had taken far longer than it needed to, and had left me exasperated because O2 had ignored some basic usability rules.

The validation didn’t make sense for the input that was requested. If you’re asking for freeform text then you should allow for special characters. If you do decide that they are unacceptable then you should make that clear before users input their data, and you should ensure that your error messages are also clear on the point.
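If, for whatever reason, some characters really must be rejected, the validation and the message should at least agree. A sketch of a validator (the rules here are mine, not O2’s) that allows apostrophes and whose error message describes exactly the rule the code enforces:

```python
import re

# Letters, digits, spaces, apostrophes and hyphens; 1-50 characters.
# These rules are illustrative - the point is that the error message
# below matches precisely what the pattern checks.
ANSWER_PATTERN = re.compile(r"^[A-Za-z0-9 '\-]{1,50}$")

def validate_answer(answer):
    """Return an error message, or None if the answer is acceptable."""
    if ANSWER_PATTERN.match(answer):
        return None
    return ("Security answer must be 1-50 characters, using only "
            "letters, numbers, spaces, apostrophes and hyphens")
```

With this, “O’Neill” sails through, and a user who does trip the rule is told exactly what to change.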

Why ban special characters?

The only special characters on my keyboard that were acceptable were hyphens, commas and full stops.

I wish I were more technical and could identify with confidence what O2 were doing, but it looks suspiciously like a very clumsy defence against SQL injection attacks. As far as I know it’s not necessary to ban special characters from free-form text input fields. Programmers should be sanitising the input to deal with potential attacks, or using bound database variables so user input is strictly segregated from the executable code. They should, shouldn’t they? Help me out here!
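Bound variables are the standard defence, and they make the apostrophe in “O’Neill” harmless. A small sqlite3 sketch (the table and columns are invented purely for illustration):

```python
import sqlite3

# Illustrative table - the point is the parameter placeholder, which
# keeps user input strictly as data, never as executable SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, secret_answer TEXT)")

def save_answer(username, answer):
    # The ? placeholders bind the values; no quoting or escaping needed.
    # "O'Neill" - or even "x'; DROP TABLE accounts;--" - is just text.
    conn.execute(
        "INSERT INTO accounts (username, secret_answer) VALUES (?, ?)",
        (username, answer))

save_answer("jdoe", "O'Neill")
row = conn.execute(
    "SELECT secret_answer FROM accounts WHERE username = ?",
    ("jdoe",)).fetchone()
```

The apostrophe is stored and retrieved intact, and an injection attempt simply ends up as an odd-looking secret answer rather than executed SQL.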

Anyway, even if banning characters that could result in user input being treated as executable code is a reasonable precaution (which I doubt), surely only the dangerous characters should be banned?

Beware of not very secret secrets!

Secret questions and answers are a notorious security weakness. They can be ridiculously easy to guess, especially if you know the customer. For a quick introduction to the subject, check out this recent paper from the University of Cambridge Computer Laboratory.

Some people choose to use special characters in their secret answers to make them harder to crack. It hardly makes sense to stop them.

If you are going to use them you should really allow the users to choose their own question and answer. If you really must insist on giving the user no choice then don’t use the same old obvious ones that O2 have.

  • Mother’s maiden name
  • Name of first school attended
  • Name of your pet
  • Favourite sports team
  • Favourite animal
  • Place of birth

That’s a dreadful set of questions. Any number of people outside my immediate family would either know the answer to most of these, or be able to take an informed guess.

To make it even worse O2 have suggested you set up a user name using “the name of a favourite pet, footie team, house, school, street or town in combination with a number such as your date of birth, house number or mobile number”.

That’s enough for now

The poor error message, the dubious validation and the rather naïve use of secret questions all combine to give a poor impression of the site. If your approach to testing is based on deriving scripts from requirements, then I doubt if you’ll detect such problems. Rather, you may see them, but you won’t see them as being problems. Even if you do think that they are a problem it might be difficult to persuade the developers.

I’ll return in a day or so to discuss this further in “O2 website usability: beating the user up”, and talk about how testers can and should try to prevent these problems occurring, rather than just complaining about them when it’s too late to make a difference. Basically, it’s a matter of being able to ask awkward questions at the right time!

Usability and Security – an alternative view

Before concentrating on software testing consultancy I worked at various times as an IT auditor and an information security manager. I also wrote an MSc dissertation on usability and software testing. I’ve therefore had plenty of opportunities to think about usability and security.

Usability and security: a trade-off?

If you search the internet for “usability” and “security” you’ll find plenty of hits. Nearly all of them are concerned with the trade-off, the tension between usability and security.

That’s obviously important, and it’s an interesting topic, but it’s not what I want to talk about here. I’d better stress that I’m not really talking about technical security testing of the infrastructure, i.e. penetration testing, or attacks on the network. This article is about the security of the application design, not the infrastructure.

I’m fascinated by the link between usability and security, by the similarities rather than the contrast that people usually focus on.

Functionality – the organisational view

Traditionally software development concentrated on the function of the application; that is, the function as seen by the organisation, not the user. Traditional techniques, especially structured analysis and design, treated people as objects, mere extensions of the machine. The user was just a self-propelled component of the system.

The system designer may have decided that the users should perform the different elements of a task in the sequence A, B, C & D. The natural sequence for the user might be A, C, D and finishing up with B. That sequence may have been more intuitive, and the official sequence may have been confusing and time-wasting for the user, but that was just too bad.

The official sequence may have been chosen for good reasons, or it may have been entirely arbitrary, but that was the way it had to be.

Is traditional functional testing one-dimensional?

When it came to the functional testing the testers would probably have been working with a previously prepared test script along the lines of “do A, B, C then D – does it work?”. After some tweaking, no doubt it would work, and the application would be released.

The users then try to do the task in the sequence A, C, D, B. Maybe it works fine, in which case they settle down to that routine. Maybe it crashes the system. After all the testers were pushed for time. They tested for what the users were supposed to do. They had enough on their plate grinding through hundreds of laboriously documented test scripts. There was no time to see what would happen if the users didn’t follow the rules.

Or maybe the users’ sequence of A, C, D, B just traps users in a dead end because the system doesn’t permit it. They’ll learn the lesson, and get used to it. Their work may take significantly longer than it needs to, and they may hate the system, but the substantial costs are hidden and long term. The relatively small cost savings of ignoring the users’ experience are more immediate and visible.

Another possibility is that the system will accept the unofficial sequence, but with unpredictable results. This is where security becomes relevant. If applications are designed to do what the organisation wants, without enough consideration of what is forbidden and unacceptable, then applications will be launched with all sorts of hidden, latent functionality.

As well as doing what they are supposed to, applications will allow users to do unpredictable things; some will be damaging to the company, some may be illegal.

The sequence A, C, D, B may allow a user to bypass some important control. Or there may be no genuine control. The application’s integrity may depend on nothing more substantial than users following prescribed sequences and routes through the system.

Testers have only a limited time to perform their functional testing, which in consequence can look at the application in a very blinkered, one-dimensional way. If the testing is purely slanted towards providing evidence that the application does what it is supposed to do, then it will be the users who find out the true functionality when it goes live. If there are weaknesses that will allow frauds then users have all the time they want to find them and see how the system can be abused.

Thinking about real user behaviour

The old world of big corporate internal IT development meant users had to like it or lump it when the new applications arrived. That doesn’t work with web applications used by the public. If users hate an application they won’t use it. Companies have to ensure that they test to find out how the users will actually use the application, or at the very least they have to be able to react quickly and refine applications when they see how they’re being used.

Usability and security therefore have a great deal in common, and both stand in contrast to the traditional emphasis on the corporate view of “function”.

In a sense, usability and security are just different ways of looking at the same problem.

How will our respectable users actually use the application?

How will the bad guys actually abuse the application?

Both need to be allowed for in testing, and some of the techniques overlap.

Usability and security as integral parts of functional testing

Usability testing, if it is to be effective, has to be an integral part of the design process. Perhaps it’s so much part of design that it’s hardly really testing at all. A standard technique is to use personas, characters that are fleshed out to help development teams understand who they are developing for.

A possible limitation of these personas is that they will be stock, stereotypical personas that have bland, “good” characteristics and who will behave in neat, predictable ways. And if they’re going to be predictable, why bother with personas at all? Surely you have to think a bit more deeply than that about your users and what motivates them.

If testers are looking only at what the application is supposed to be doing, and not at what it must not be doing, then their knowledge of the application will be hopelessly superficial compared to the deep understanding that real users will acquire over time.

It’s impossible for testers to acquire that deep knowledge in a limited time, but if they use well-judged personas, unleash their imagination and start looking at the application in a different and more cynical way then they can expose serious weaknesses.

Usability and security? It’s not as simple as saying there’s a trade-off. They’re just complementary ways of thinking about real users.

I discussed the sort of negative testing that can help find control weaknesses in an article that appeared in 2009 in Testing Experience and Security Acts magazines. You can find a copy here, on my business site.