A more optimistic conclusion?

This is the final post in a series about how and why so many corporations became embroiled in a bureaucratic mess in which social and political skills are more important than competence.

In my first post “Sick bureaucracies and mere technicians” I talked about Edward Giblin’s analysis back in the early 1980s of the way senior managers had become detached from the real work of many corporations. Not only did this problem persist, but it has become far worse.

In my second post, “Digital Taylorism & the drive for standardisation“, I explained how globalisation and technical advances gave impetus to digital Taylorism and the mass standardisation of knowledge work. It was widely recognised that Taylorism damaged creativity, a particularly serious concern with knowledge work. However, that concern was largely ignored, swamped by the trends I discussed in my third post, “Permission to think“.

In this post I will try to offer a more constructive conclusion after three essays of unremitting bleakness!

Deskilling – a chilling future for testing?

When it comes to standardisation of testing the “talented managers” (see “Permission to think“) will tell themselves that they are looking at a bigger picture than the awkward squad (ok, I mean context driven testers here) who complain that this is disastrous for software testing.

Many large corporations are hooked into a paradigm that requires them to simultaneously improve quality and reduce costs, and to do so by de-skilling jobs below the elite level. Of course other tactics are deployed, but deskilling is what concerns me here. The underlying assumption is that standardisation and detailed processes will not only improve quality, but also reduce costs, either directly by outsourcing, or indirectly by permitting benchmarking against outsourcing suppliers.

In the case of testing that doesn’t work. You can do it, but at the expense of the quality of testing. Testing is either a thinking, reflective activity, or it is done badly. However, testing is a mere pawn; it’s very difficult for corporate bureaucrats to make an exception for testing. If they were to do that it would undermine the whole paradigm. If testing is exempt then how could decision makers hold the line when faced with special pleading on behalf of other roles they don’t understand? No, if the quality of testing has to be sacrificed then so be it.

The drive for higher quality at reduced cost is so powerful that its underlying assumption is unchallengeable. Standardisation produces simplicity which allows higher quality and lower costs. That is corporate dogma, and anyone who wants to take a more nuanced approach is in danger of being branded a heretic and denied a hearing. It is easier to fudge the issue and ignore evidence that applying this strategy to testing increases costs and reduces quality.

Small is beautiful

Perhaps my whole story has been unnecessarily bleak. I have been talking about corporations and organisations. I really mean large bodies. The gloomy, even chilling, picture that I’ve been painting doesn’t apply to smaller, more nimble firms. Start-ups, technology focused firms, and specialist testing services providers (or the good ones at least) have a clearer idea of what the company is trying to do. They’re far less likely to sink into a bureaucratic swamp. For one thing it would kill them quickly. Also, to hark back to my first post in this series, “Sick bureaucracies and mere technicians“, such firms are more likely to be task dependent, i.e. the more senior people will probably have a deep understanding of the core business. It is their job to apply that knowledge in the interests of the company, rather than merely to run the corporate bureaucracy.

My advice to testers who want to do good work would be to head for the smaller outfits, the task dependent companies. As a heuristic I’d want to work for a company that was small enough for me to speak to anyone, at any time, who had the power to change things. Then, I’d know that if I saw possible improvements I’d have the chance to sell my ideas to someone who could make a difference. One of the most dispiriting things I ever heard was a senior manager in the global corporation where I worked saying “you’re quite right – but you’d be appalled at how high you could go and find people who’d agree with you, but say that they couldn’t change anything”.

What’s to be done?

Nevertheless, many good testers are working for big corporations, and struggling to make things better. They’re not all going to walk out the door, and they shouldn’t just give up in despair. What can they do? Well, plain speaking will have a limited impact – except on their careers. Senior managers don’t like being told “we’re doing rubbish work and you’re acting like an idiot if you refuse to admit that”.

Corporate managers are under pressure to make the bureaucracy more efficient by standardising working practices and processes. In order to do so they have to redefine what constitutes simple, routine work. Testers have to understand that pressure and respond by lobbying to be allowed to carry out that redefinition themselves. Testing has to be defined by those who understand and respect it so that the thoughtful, reflective, non-routine elements are recognised. Testing must be defined in such a way that it can handle complex problems, and not just simple, ordered problems (see Cynefin).

That takes us back to the segmentation of knowledge workers described by Brown, Lauder and Ashton in The Global Auction (see my previous post “Permission to think“). The workforce is increasingly segmented into developers (those responsible for corporate development, not software developers!), who are given “permission to think”, demonstrators who apply processes, and drones who basically follow a script without being required to engage their brains. If testers have to follow a prescriptive, documentation driven standard like ISO 29119 they are implicitly assigned to the status of drones.

Testers must argue their case so they are represented in the class of developers who are allowed to shape the way the corporation works. The arguments are essentially the same as those that have been deployed against ISO 29119, and can be summed up in the phrase I used at the top; testing is either a thinking, reflective activity, or it is done badly. Testing is an activity that provides crucial information to the corporate elite, the “developers”. As such testers must be given the responsibility to think, or else senior management will be choking off the flow of vital information about applications and products.

That is a tough task, and I’m sceptical about the chances of testers persuading their corporations to buck a powerful trend. I doubt if many will be successful, but perhaps some brave, persuasive and persistent souls will succeed. They deserve respect and support from the rest of the testing profession.

If large corporations won’t admit their approach is damaging to testing then ultimately I fear that their in-house test teams are doomed. They will be sucked into a vicious circle of commoditised testing that will lead to the work being outsourced to cheaper suppliers. If you’re not doing anything worthwhile there is always someone who can do it cheaper. Iain McCowatt wrote a great blog about this.

Where might hope lie?

Perhaps outsourcing might offer some hope for testing after all. A major motive for adopting standards is to facilitate outsourcing. The service that is being outsourced is standard, neatly defined, and open to benchmarking. Suppliers who can demonstrate they comply with standards have a competitive advantage. That is one of the reasons ISO 29119 is so pernicious. Good testing suppliers will have to ignore that market and make it clear that they are not competing to provide cheaper drones, but highly capable, thinking consultants who can provide valuable insights about products and applications.

The more imaginative, context-driven (and smaller?) suppliers can certainly compete effectively in this fashion. After all they are following an approach that is both more efficient and more effective. Their focus is on testing rather than documentation and compliance with an irrelevant standard. However, I suspect that is exactly why many large corporations are suspicious of such an approach. The corporate bureaucrat is reassured by visible documents and compliance with an ISO standard.

A new framework?

Perhaps there is room for an alternative approach. I don’t mean an alternative standard, but a framework that shows how good context driven testing is responsible testing that can keep regulators happy. It could tie together the requirements of regulators, auditors and governance professionals with context driven techniques, perhaps a particular context driven approach. The framework could demonstrate links between governance needs and specific context driven techniques. This has been lurking at the back of my mind for a couple of years, but I haven’t yet committed serious effort to the idea. My reading and thinking around the subject of corporate bureaucracy for this series of blog posts has helped shape my understanding of why such an alternative framework might be needed, and why it might work.

An alternative framework in the form of a set of specific, practical, actionable guidelines would ironically be more consistent with ISO’s definition of a standard than ISO 29119 itself is.

A standard is a document that provides requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose.

Taking the relevant parts of the definition, the framework would provide guidelines that can be used consistently to ensure that testing services are fit for their purpose.
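
As a purely hypothetical sketch (nothing here is a worked-out proposal), such guidelines might take the form of a simple mapping from governance and audit concerns to context driven practices that could satisfy them. Every entry below is illustrative, invented for this post, rather than part of any real framework.

```python
# Hypothetical sketch only: mapping governance/audit concerns to context
# driven testing practices that might satisfy them. All entries are
# illustrative examples, not recommendations from an actual framework.
governance_to_practice = {
    "evidence that testing was planned": "charters agreed before each test session",
    "evidence of what was actually tested": "session sheets and annotated coverage outlines",
    "evidence of results and follow-up": "debrief notes linking findings to bug reports",
    "repeatability of key checks": "automated checks for high-risk regression areas",
    "traceability to business risk": "a risk catalogue referenced by each charter",
}

for concern, practice in governance_to_practice.items():
    print(f"{concern} -> {practice}")
```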

Could this give corporations the quality of testing they require without having to abandon their worldview? Internal testers might still be defined as drones (with a few, senior testers allowed to be demonstrators). External testers can be treated as consultants and allowed to think.

When discussing ISO 29119, and the approach to testing that it embodies, we should always bear in mind that the standard does not exist to provide better testing. It was developed because it fits a corporate mindset that wants to see as many activities as possible defined as simple and routine. Testers who have a vision of better testing, and a better future for testing, have to understand that mindset and deal with it, rather than just kicking ISO 29119 for being a useless piece of verbiage. The standard really is useless, but perhaps we need a more sophisticated approach than just calling it like it is.

Permission to think

This is the third post, or essay, in a series about how and why so many corporations became embroiled in a bureaucratic mess in which social and political skills are more important than competence.

In my first post “Sick bureaucracies and mere technicians” I talked about Edward Giblin’s analysis back in the early 1980s of the way senior managers had become detached from the real work of many corporations. Not only did this problem persist, but it has become far worse.

In my second post, “Digital Taylorism & the drive for standardisation“, I explained how globalisation and technical advances gave impetus to digital Taylorism and the mass standardisation of knowledge work. It was widely recognised that Taylorism damaged creativity, a particularly serious concern with knowledge work. However, that concern was largely ignored, swamped by the trends I will discuss here.

The problems of digital Taylorism and excessive standardisation were exacerbated by an unhealthy veneration of the most talented employees (without any coherent explanation of what “talent” means in the corporate context), and a heightened regard for social skills at the expense of technical experience and competence. That brings us back to the concerns of Giblin in the early 1980s.

10,000 times more valuable

Corporations started to believe in the mantra that talent is rare and it is the key to success. There is an alleged quote from Bill Gates that takes the point to the extreme.

A great lathe operator commands several times the wage of an average lathe operator, but a great writer of software code is worth 10,000 times the price of an average software writer.

Also, this is from “The Global Auction” (the book I credited in my last post for inspiring much of this series).

… being good is no longer good enough because much of the value within organizations is believed to be contributed by a minority of employees. John Chambers, CEO of Cisco, is reported as saying, “a world-class engineer with five peers can out-produce 200 regular engineers“. A Corporate Executive Board (CEB) study also found that the best computer programmers are at least 12 times as productive as the average.

These claims raise many questions about the meaning of terms such as “value”, “great”, “worth”, “world-class”, “out-produce” and “productive”. That’s before you get to questions about the evidence on which such claims are based. 10,000 times more valuable? 33 times? 12 times?

I was unable to track down any studies supporting these claims. However, Laurent Bossavit has persevered with the pursuit of similar claims about programmer productivity in his book “The Leprechauns of Software Engineering”. Laurent’s conclusion is that such claims are usually either anecdotal or unsubstantiated assertions made in secondary sources. In the few genuine studies the evidence offered invariably failed to support the claim of huge variations in programmer productivity.

The War for Talent

The CEB study referred to above, which claimed that the best programmers are at least 12 times as productive as the average, was reminiscent of one study that did define “productive”.

The top 3% of programmers produce 1,200% more lines of code than the average; the top 20% produce 320% more lines of code than the average.

I’m not sure there’s a more worthless and easily gamed measure of productivity than “lines of code”. No-one who knows anything about writing code would take it seriously as a measure of productivity. Any study that uses it deserves merciless ridicule. If you search for the quote you will find it appearing in many places, usually alongside the claims you can see in this image.
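
To illustrate how easily a lines of code measure can be gamed, here is a minimal, hypothetical sketch: the two functions below do exactly the same thing, yet a study counting lines of code would rate the author of the second one several times more “productive”.

```python
# Two functionally identical functions. A lines-of-code metric scores the
# padded version as several times more "productive" than the concise one.

def sum_evens_concise(numbers):
    # Two lines, including the def.
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_padded(numbers):
    # The same logic, padded out purely to inflate the line count.
    total = 0
    for index in range(len(numbers)):
        value = numbers[index]
        remainder = value % 2
        if remainder == 0:
            total = total + value
        else:
            total = total + 0  # pointless, but it still counts as a line
    return total

assert sum_evens_concise([1, 2, 3, 4]) == sum_evens_padded([1, 2, 3, 4]) == 6
```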

I have lifted that image from a book about hiring “top talent”, “Topgrading (how leading companies win by hiring, coaching and keeping the best people)”. The original source for all these claims is the work that the famous consulting firm of McKinsey & Co carried out in the late 1990s. McKinsey’s efforts were turned into an influential, and much cited, book, “The War for Talent”.

The War for Talent argued that there are five “imperatives of talent management”: believing that corporate success is a product of the individual brilliance of “A players”, creating a culture in which superstars will flourish, doing whatever it takes to hire the superstars of the future, accelerating their development, and ruthless differentiation between the superstars and the other employees. The stars should be rewarded lavishly. Of the rest, the also-rans, the “B players”, should receive modest rewards, and the poorer performers, the “C players”, should be dismissed.

Not surprisingly McKinsey’s imperatives have been controversial. The widespread influence of the “War for Talent” attracted unfavourable critical interest. Brown, Lauder and Ashton were politely sceptical about its merits in “The Global Auction”. However, they were more interested in its influence than its reliability. Malcolm Gladwell savaged The War for Talent in a long, persuasive article in the New Yorker (which incidentally is well worth reading just for its discussion of anti-submarine tactics in the Second World War). Gladwell made much of the fact that a prize exemplar of the McKinsey approach was one of its most prominent clients, Enron. The freedom and reckless over-promotion that Enron awarded “the smartest guys in the room” were significant factors in Enron’s collapse. The thrust of Gladwell’s argument resonated with my experience of big corporations; when they thrive it is not because of untrammelled individual brilliance, but because they create the right environment for all their employees to work together effectively.

Andrew Munro of AM Azure Consulting (a British HR consultancy) went much further than Gladwell. In “What happened to The War For Talent exemplars?” (PDF, opens in new tab) he analysed the companies cited in The War for Talent and their subsequent experience. Not only did they seem to have been selected initially for no reason other than being McKinsey clients; the more praise they received in the book for their “talent management”, the less likely they were to succeed over the following decade.

Munro went into considerable detail. To summarise, he argued that the McKinsey authors started with flawed premises, adopted a dodgy method, went looking for confirmation of their preconceptions, then argued that their findings were best practice and generally applicable when in reality they were the opposite.

The five imperatives don’t even seem applicable to the original sample of U.S. firms. Not only has this approach failed to work as a generic strategy; it looks like it may have had counter-productive consequences

Again, as with the idea that tacit knowledge can be codified, the credibility of the War for Talent is perhaps of secondary importance. What really matters is the influence that it has had. Not even its association with the Enron disaster has tainted it. Nevertheless, it is worth stressing that the most poorly evidenced and indeed damaging strategies can be adopted enthusiastically if they suit powerful people who will personally benefit.

This takes us back to Giblin in the early 1980s. He argued that senior managers were increasingly detached from the real work of the organisation, which they disparaged, because they were more reliant on their social skills than on knowledge of what that real work entailed. As I shall show, the dubious War for Talent, in conjunction with digital Taylorism, made a serious problem worse.

Permission to think

A natural consequence of digital Taylorism and a lauding of the most “talented” employees is that corporations are likely to segment their workforce. In the Global Auction, Brown, Lauder and Ashton saw three types of knowledge worker emerging: developers, demonstrators, and drones.

Developers include the high potentials and top performers… They represent no more than 10–15 percent of an organisation’s workforce given “permission to think” and include senior researchers, managers, and professionals.

Demonstrators are assigned to implement or execute existing knowledge, procedures, or management techniques, often through the aid of software. Much of the knowledge used by consultants, managers, teachers, nurses, technicians, and so forth is standardised or prepackaged. Indeed, although demonstrator roles may include well-qualified people, much of the focus is on effective communication with colleagues and customers.

Drones are involved in monotonous work, and they are not expected to engage their brains. Many call center or data entry jobs are classic examples, where virtually everything that one utters to customers is pre-scripted in software packages. Many of these jobs are also highly mobile as they can be standardized and digitalized.

“Permission to think”? That is an incendiary phrase to bring into a discussion of knowledge work, especially when the authors claim that only 10-15 percent of employees would be allowed such a privilege. Nevertheless, Brown, Lauder and Ashton do argue their case convincingly. This nightmarish scenario follows naturally if corporations are increasingly run by managers who see themselves as a talented elite, and they are under pressure to cut costs by outsourcing and offshoring. That requires the work to be simplified (or at least made to appear simpler) and standardised – and that is going to apply to every activity that can be packaged up, regardless of whether its skilled practitioners actually need to think. Where would testers fit into this? As demonstrators at best, in the case of test managers. The rest? They would be drones.

Empathy over competence

I could have finished this essay on the depressing possibility of testers being denied permission to think. However, digital Taylorism has another feature, or result, that reinforces the trend, with worrying implications for good testing.

As corporations attempted to digitise more knowledge and package work up into standardised processes, the value of such knowledge and work diminished. Or rather, the value that corporations placed on the people with that knowledge and experience fell. Corporations have been attaching more value to those with strong social skills than to those with traditional technical skills. Brown, Lauder and Ashton quoted at length in The Global Auction the head of global HR at a major bank.

If you are really going to allow people to work compressed hours, work from home, then work needs to be unitised and standardised; otherwise, it can’t be. And as we keep pace with careers, we want to change; we don’t want to stay in the same job for more than 2 years max. They want to move around, have different experiences, grow their skills base so they’re more marketable. So if you’re moving people regularly, they have to be easily able to move into another role. If it’s going to take 6 months to bring them up to speed, then the business is going to suffer. So you need to be able to step into a new role and function. And our approach to that is to deeply understand the profile that you need for the role — the person profile, not the skills profile. What does this person need to have in their profile? If we look at our branch network and the individuals working at the front line with our customers, what do we need there?

We need high-end empathy; we need people who can actually step into the customers’ shoes and understand what that feels like. We need people who enjoy solving problems… so now when we recruit, we look for that high-end empathy and look for that desire to solve problems, that desire to complete things in our profiles…. we can’t teach people to be more flexible, to be more empathetic… but we can teach them the basics of banking. We’ve got core products, core processes; we can teach that quite easily. So we are recruiting against more of the behavioural stuff and teaching the skills stuff, the hard knowledge that you need for the role.

Good social skills are always helpful, indeed they are often vital. I don’t want to denigrate such skills or the people who are good at working with other people. However, there has to be a balance. Corporations require people with both, and it worries me to see them focussing on one and dismissing the other. There is a paradox here. Staff must be more empathetic, but they have to use standardised processes that do their thinking for them; they can’t act in a way that recognises that clients have special, non-standard needs. Perhaps the unspoken idea is that the good soft skills are needed to handle the clients who are getting a poor service?

I recognise this phenomenon. When I worked for a large services company I was sometimes in a position with a client where I lacked the specific skills and experience I should really have had. A manager once reassured me, “don’t worry – just use our intellectual capital, stay one day ahead of the client, and don’t let them see the material you’re relying on”. I had the reputation for giving a reassuring impression to clients. We seemed to get away with it, but I wonder how good a job we were really doing.

If empathy is more valuable than competence then that affects people throughout the corporation, regardless of how highly they are rated. Even those who are allowed to think are more likely to be hired on the basis of their social skills. They will never have to acquire deep technical skills or experience, and what matters more is how they fit in.

In their search for the most talented graduates, corporations focus on the elite universities. They say that this is the most cost-effective way of finding the best people. Effectively, they are outsourcing recruitment to universities who tend to select from a relatively narrow social class. Lauren Rivera, in her 2015 book “Pedigree: How Elite Students Get Elite Jobs”, quotes a banker who explains the priority in recruiting.

A lot of this job is attitude, not aptitude… Fit is really important. You know, you will see more of your co-workers than your wife, your kids, your friends, and even your family. So you can be the smartest guy ever, but I don’t care. I need to be comfortable working every day with you, then getting stuck in an airport with you, and then going for a beer after. You need chemistry. Not only that the person is smart, but that you like him.

Rivera’s book is reviewed in informative detail by the Economist, in an article worth reading in its own right.

Unsurprisingly, recruiters tend to go for people like themselves, to hire people with “looking-glass merit” as Rivera puts it. The result, in my opinion, is a self-perpetuating cadre of managerial clones. Managers who have benefited from the dubious hero-worship of the most talented, and who have built careers in an atmosphere of digital Taylorism are unlikely to question a culture which they were hired to perpetuate, and which has subsequently enriched them.

Giblin revisited

In my first post in this series, “Sick bureaucracies and mere technicians“, I talked about Edward Giblin’s concerns in 1981 about how managers had become more concerned with running the corporate bureaucracy whilst losing technical competence, and respect for competence. I summarised his recommendations as follows.

Giblin had several suggestions about how organisations could improve matters. These focused on simplifying the management hierarchy and communication, re-establishing the link between managers and the real work and pushing more decision making further down the hierarchy. In particular, he advocated career development for managers that ensured they acquired a solid grounding in the corporation’s business before they moved into management.

34 years on we have made no progress. The trend that concerned Giblin has been reinforced by wider trends and there seems no prospect of these being reversed, at least in the foreseeable future. In the final post in this series I will discuss the implications for testing and try to offer a more optimistic conclusion.

Digital Taylorism & the drive for standardisation

In my last post “Sick bureaucracies and mere technicians” I talked about Edward Giblin’s analysis back in the early 1980s of the way senior managers had become detached from the real work of many corporations. Not only did this problem persist, but it has become far worse. The reasons are many and complex, but I believe that major reasons are the linked trends of globalisation, a drive towards standardisation through an updating of the Scientific Management theories of Frederick Taylor for the digital age, and an obsession with hiring and rewarding star employees at the expense of most employees. In this essay I’ll look at how globalisation has given fresh life to Taylor’s theories. I will return to the star employee problem in my next post.

The impetus for standardisation

Over the last 20 years the global economy has been transformed by advances in technology and communication combined with rapid improvements in education in developing countries. Non-manufacturing services that were previously performed in Europe and North America can now be readily transferred around the world. There is no shortage of capable, well educated knowledge workers (ie those who can work with information) in countries where wage levels are far lower than in developed countries.

There is therefore huge economic pressure for corporations to take advantage of these trends. The challenge is to exploit “wage arbitrage” (ie shift the work around to follow lower wages) whilst improving, or at least maintaining, existing levels of quality. In principle the challenge is simple. In practice it is extremely difficult to move knowledge work. You have to know exactly what you are moving, and it is notoriously difficult to pin down what knowledge workers do. You can ask them, but even if they try to help they will find it impossible to specify what they do.

Dave Snowden summed up this problem well with his Seven Principles of Knowledge Management. Principles 1, 2, 6 and 7 are particularly relevant to the problem of outsourcing or offshoring knowledge work.

  1. Knowledge can only be volunteered, it cannot be conscripted.
  2. We only know what we know when we need to know it (i.e. knowledge is dependent on the context of its application; we can call on the full range of our knowledge only when we have to apply it).
  3. In the context of real need few people will withhold their knowledge.
  4. Everything is fragmented.
  5. Tolerated failure imprints learning better than success.
  6. The way we know things is not the way we report we know things.
  7. We always know more than we can say, and we always say more than we can write down.

Tacit knowledge is far too large and complex a subject to deal with in this blog. For a brief overview and some pointers to further reading see the section on lessons from the social sciences in my “Personal statement of opposition to ISO 29119 based on principle“.

So corporations were under economic pressure to transfer work overseas, but found it difficult to transfer the knowledge that the work required. They seem to have responded in two ways: attempting to codify what tacit knowledge they could extract from their existing workforce (and by implication assuming that the rest wasn’t relevant), and redefining the work so that it was no longer dependent on tacit knowledge.

Here is an extract from an interview that Mike Daniels, head of IBM Global Technology Services, gave to the Financial Times in 2009.

IBM set out to standardise the way it “manufactures” services, so that exactly the same processes determined how an assignment was carried out in Egypt as in the Philippines. “The real scale comes out of doing the work in a codified way”, says Mr Daniels. “The key breakthrough was to ask ‘How do you do the work at the lowest-level components?’”

The clear implication of defining the work at a base level and codifying it is to remove the dependence on tacit knowledge. I was led to this Financial Times article by “The Global Auction”, a fascinating and enlightening book by Phillip Brown, Hugh Lauder, and David Ashton. This 2013 paper summarises the book’s arguments. The book was a major inspiration for this series of articles and I’ve drawn heavily on it. This striking passage is worth quoting at length.

…Suresh Gupta from Capco Consulting foresees the arrival of the “financial services factory” because as soon as banks or insurance companies begin to break tasks into a series of procedures or components that can be digitalized, it gives companies more sourcing options such as offshoring. If these trends continue, “tomorrow’s banks would look and behave no differently to a factory”.

This is part of a new vocabulary of digital Taylorism that includes components, modules, and competencies. The way these are combined to create a new model of the modular corporation was revealed to us in an interview with the female head of global human resources for a major bank with operations in 85 countries. Until 2000, the bank adopted a country-based approach with little attempt to integrate its operations globally. It then set up a completely separate business to manage its high-volume, low-value transactions using operations in China, India, Malaysia, and the Philippines. She commented, “So what we were doing is arbitraging the wage costs”, but this initial approach to offshoring based on “lift and shift” did not go according to plan. “We had errors, we had customer dissatisfaction, all sorts of bad stuff”.

She recalled that it took time to realize it is not easy to shift a process that has been done in the same place and in the same way for a long time. When people are asked to document what they have been doing for many years, there are inevitably going to be blind spots because they know it so well. As a result, “The semi-documented process gets shunted off while the process itself is dependent on long-term memory that is suddenly gone, so it really doesn’t work”.

Thus, the bank recognized the need to simplify and standardise as much of the company’s operations as possible before deciding what could be offshored. “So we go through that thinking process first, which means mapping these processes, changing these processes”. She also thought that this new detailing of the corporate division of labor was in its infancy “because you need the simplicity that comes with standardisation to succeed in today’s world”.

“You need the simplicity that comes with standardisation to succeed in today’s world”. Just roll that thought around in your mind. Standardisation brings simplicity which brings success. Is that right? Is it possible to strip out tacit knowledge from people and codify it? Again, that’s a big subject I haven’t got space for here, but in a sense it’s irrelevant. I’m very sceptical, but the point is that many big corporations do believe it. What’s more, senior managers in these corporations know that success does indeed come through following that road. Whether or not that makes the corporation more successful in the longer run it does unquestionably enhance the careers of managers who are incentivised to chase shorter term savings.

There is an interesting phrase in the extract I just quoted from “The Global Auction”: digital Taylorism. It is an umbrella term covering this drive for standardisation. Digital Taylorism describes a range of fashions and fads that go a long way to explaining why the advice of people such as Edward Giblin was ignored and why so many corporations are trapped in a state of bureaupathology.

Digital Taylorism

In the late 19th and early 20th centuries Frederick Winslow Taylor developed the idea of Scientific Management, or Taylorism as it is often called. That entailed analysing and measuring jobs, decomposing them into their simplest elements, removing discretion from workers, enforcing standardisation and best practice. It also advocated differential treatment for workers; the most productive should receive bonuses, and the least productive sacked. Taylorism was initially reserved for manufacturing jobs, but in recent decades technology has allowed it to be extended to services and knowledge jobs. Digital Taylorism had arrived and it was closely linked with the drive towards standardisation I discussed above.

Taylorism was successful at increasing manufacturing productivity, though at great cost as factories became dehumanised. Workers were not allowed to use their initiative and their creativity was stifled. That might have made sense with routine manufacturing work, but the justification was always suspect with knowledge work, where the loss of creativity is a far more serious problem. Was the drive towards standardisation strong enough to overcome such concerns? Perhaps not on its own. There may have been more discernment in identifying which jobs might be suitable for such treatment, and which should be exempt because the loss of creativity would be too damaging. However, the trend towards Taylorised standardisation was greatly reinforced by a related trend, a feature of Taylorism that took on a life of its own. From the 1990s corporations became increasingly obsessed with the need to attract and lavishly reward the most talented employees, and then to treat them very differently from the mass workforce. I shall return to this in my next post, “Permission to think”, and discuss how this swamped concerns about a loss of creativity.

Sick bureaucracies and mere technicians

Why do organisations do things that appear to be downright irrational? The question is of vital importance to testing because many corporations undertake testing in a way that is ineffective and expensive, following techniques (and standards) that lack any basis in evidence, and indeed which are proven to be lousy. If we are to try and understand what often seems to be senseless we have to think about why organisations behave the way that they do. It’s pointless to dismiss such behaviour as stupid. Big organisations are usually run by very intelligent people, and senior managers are generally acting rationally given the incentives and constraints that they see facing them.

In order to make sense of their behaviour we have to understand how they see the world in which they are competing. Unless you are working for a specialist testing services firm you are engaged in a niche activity as far as senior management is concerned. If they think about testing at all it’s an offshoot of software development, which itself is probably not a core activity.

Whatever wider trends, economic forces or fads are affecting corporations, as perceived by their management, will have a knock-on impact all the way through the organisation, including testing. If these wider factors are making an organisation think about adopting testing standards then it is extremely difficult to argue against them using evidence and logic. It’s hard to win an argument that way if you’re dealing with people who are starting from a completely different set of premises.

I’ve looked at this sort of topic before, concentrating more on individuals and the way they behave in organisations. See “Why do we think we’re different?” and “Teddy bear methods”. Here I want to develop an argument that looks at a bigger picture, at the way big corporations are structured and how managers see their role.

Bureaupathology and the denigration of competence

I recently came across this fascinating paper, “Bureaupathology and the denigration of competence”, by Edward Giblin from way back in 1981. As soon as I saw the title I couldn’t wait to read it. I wasn’t disappointed. Giblin identified problems that I’m sure many of you will recognise. His argument was that big corporations often act in ways that are inconsistent with their supposed goals, and that a major reason is the way the hierarchy is structured and managers are rewarded.

Giblin drew on the work of the German sociologist Claus Offe, who argued against the widely accepted belief that managers are rewarded according to their performance. The more senior the manager the less relevant is specific job knowledge and the more important are social skills. These social skills and the way they are applied are harder to assess than straightforward task skills.

Offe distinguished between “task-continuous” organisations and those which are “task-discontinuous”. In a task-continuous organisation the hierarchy, and therefore seniority, is aligned with the core task. As you look further up the hierarchy you find deeper expertise. In a task-discontinuous organisation you find quite distinct, separate skills at the higher levels.

Good examples of task-continuous organisations would be small family businesses, or businesses run by experienced experts, with skilled workers and apprentices underneath them. Task-discontinuous organisations are far more common nowadays. Modern corporations are almost invariably run by specialist managers who have only a loose grip on the core activity, if they have any proficiency at all. I think this distinction between the two types of organisation has big implications for software testing, and I’ll return to that in my next post.

Social skills trump technical skills

The perceived ability of managers in large corporations is mainly dependent on their social skills, how good they are at persuading other people to do what they want. Such abilities are hard to measure, and managers are more likely to be rewarded for their ability to play the corporate game than for driving towards corporate objectives. Conventional performance evaluation and compensation techniques lack credibility; they don’t effectively link performance with rewards in a way that helps the business. Back in the early 1960s Victor Thompson coined the term bureaupathology to describe organisations in which the smooth running of the bureaucracy took precedence over its real work. In such organisations the bureaucracy is no longer a means to an end. It has become the end. In bureaupathic organisations managers have the freedom and incentive to pursue personal goals that are at variance with the official corporate goals.

With senior managers pursuing their own agenda Giblin argued that twin systems of authority emerge in large organisations. There is the official system, exercised by the generalist managers, and there is an unofficial system utilised by experienced staff relying on “surreptitious behaviour to accomplish the work of the organisation”. With much of the real work being carried out informally, even invisibly, the senior managers’ perspective naturally becomes warped.

“As many generalists either didn’t have the technical knowledge to begin with, or allowed it to atrophy over time, they depend solely on the power of their position to maintain their status. Their insecurity… leads to needs that seem irrational from the standpoint of organisational goals, because these needs do not advance these goals. This also leads them, defensively, to denigrate the importance of the technical competence they don’t have.”

Giblin cited the example of a consultancy firm in which the senior partners spent their time on administration and supervision of the professional consultants, with minimal client contact.

“(It) became fashionable among the administrators to refer to outstanding consultants as ‘mere technicians’. This despite the fact that such ‘mere technicians’ were the firm’s only revenue-producing asset.”

Even though these observations and arguments were being made several decades ago the results will be familiar to many today. I certainly recognise these problems; a lack of respect for key practitioners, and the informal networks and processes that are deployed to make the organisation work in spite of itself.

Big corporations charge further down the wrong road

Giblin had several suggestions about how organisations could improve matters. These focused on simplifying the management hierarchy and communication, re-establishing the link between managers and the real work and pushing more decision making further down the hierarchy. In particular, he advocated career development for managers that ensured they acquired a solid grounding in the corporation’s business before they moved into management.

The reason I found Giblin’s paper so interesting is that it described and explained damaging behaviour with which I was very familiar and it came up with practical, relevant solutions that have been totally ignored. Over the following decades the problems identified by Giblin grew worse and corporations did the exact opposite of what he recommended. This didn’t happen by accident. It was the result of global trends that swamped any attempts individuals might have made to adopt a more rational approach. These trends and their repercussions have had a significant impact on testing, and they will continue to do so. I will develop this in my next post, “Digital Taylorism & the drive for standardisation“.

Principles for Testing?

In June (2015) I gave a talk, “Principles for Testing?” at a one-day conference of SIGIST (the British Computer Society’s Specialist Group in Software Testing) in London. I intended to work my talk up into an article, but haven’t got round to it (yet). In the meantime I’m posting a PDF of the slides (opens in new tab). Unusually for my talks these slides, along with the notes, carry enough of the story to make some sense on their own.

This was the abstract for the talk.

“There has been much debate in recent years about the balance between principles and rules when regulation is framed. Software development and testing are complex activities that do not lend themselves to fixed rules or prescriptive “best practice”. If stakeholders are to be confident that testers will provide value then perhaps we need clear principles against which testing can be evaluated. Testing lacks principles intended to guide and shape behaviour. I will show how this has contributed to some of the confusion and disagreement arising from ISO 29119 and the Stop 29119 campaign. I will also argue that we can learn from the “rules based versus principles based” debate and I will initiate a discussion about what form principles might take for testing, and where we should look for sources for these principles.”

Plenty of people contributed to the discussion, which was interesting but inconclusive. This is something I will have to persevere with. Please get in touch if you want anything clarified, or if you want to discuss these issues.

I used a brief clip from Apollo 13, which doesn’t appear in the PDF. It was an example of a complex problem requiring experimentation, the sort of situation where vague principles rather than fixed rules are more helpful. Here is a slightly longer version of that clip.

Audit and Agile (part 2)

This is the second part of my article about the training day I attended on October 8th, organised by the Scottish Chapter of ISACA and presented by Christopher Wright.

In the first part I set the scene and explained why good auditors have no problem in principle with agile development. However, it does pose problems in practice, especially for the inexperienced auditors, who can find agile terrifyingly vague.

In this second part I’ll talk about how auditors should get involved in agile projects and about testing. The emphasis was very much on Scrum, but the points apply in general to all agile development.

The importance of working together constructively

Christopher emphasised that auditors should be proactive. They should get involved in developments as early as possible and encourage developers to speak to them. Developers are naturally suspicious of auditors. They think auditors “want us to stop having fun”. These are messages I’ve been harping on about ever since I started in audit.

Developers, and testers, make assumptions about what auditors will do, and what they will expect. These assumptions can shape behaviour, for the worse, if they are not discussed with the auditors. That can waste a huge amount of time.

Christopher developed an argument that I have also often made. Auditors can see a bigger picture than developers. They will often have wider experience of what can go wrong, and what controls should be in place to protect the company. Auditors can be a useful source of misuse stories. They can even usefully embed themselves in an agile development team writing stories and tests that should help make the application more robust.

Auditors have to go native to a certain extent, accepting agile on its terms and adapting to the culture. Christopher advised the audience to conform to the developers’ dress code; “lose the tie” and remove any unnecessary barriers between the auditors and the developers. The final tip was one that will resonate with context driven testers. “Focus on people and product – not paperwork”.

Testing

Discussion of testing comprised just one, relatively small, part of the day. Obviously I would have preferred more time. However, the general guidance throughout the day about working with agile provided a good guide for auditing agile testing. Auditors who have absorbed these general lessons should be able to handle an audit of agile testing.

Christopher did have a couple of specific criticisms of agile testing. He thinks the standard of testing on agile projects is generally fairly poor, though he did not offer any comparisons with more traditional testing. He also expects to see testing that is repeatable, and wants to see more automated testing where possible. Christopher observed that too few projects develop repeatable, automated tests for regression testing. He is probably right on that point. I’m not sure that this is really just an agile problem.

Traditional projects were often planned and costed in a way that gave the test manager little incentive to make an investment for the future by automating tests. The difference under an agile regime is that a failure to invest in appropriate automation is likely to create problems for the current project rather than leave them lurking for the future support team.

In addition to his comments on automation Christopher’s key points were that auditors should look for evidence of appropriate planning and preparation, and evidence of the test results. There might not be a single, standard, documented agile development method, but each organisation should be clear and consistent about how it does agile.

Christopher did use the word “scripts” a lot, but he made it clear that auditors should not expect an agile test script to be as detailed and prescriptive as a traditional script; it shouldn’t go down to the level of specifying the values to go into every field. Together with the results the script should allow testing to be recreated. The auditor should be able to see exactly what was planned, what was tested and what was found.
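
Christopher did not show any code, but as a hypothetical illustration a lightweight automated check can act as exactly this kind of “script”: the parameter table records what was planned, the assertions record what was checked, and the test runner’s report provides the evidence of what was found, without prescribing every keystroke. The function and business rule below are invented purely for the example.

```python
# Hypothetical illustration of a lightweight, repeatable agile "script"
# using pytest. The discount rule and function are invented for the example.
import pytest

def apply_discount(order_total, customer_type):
    # Toy rule under test: loyal customers get 10% off orders over 100.
    if customer_type == "loyal" and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

@pytest.mark.parametrize(
    "order_total, customer_type, expected",
    [
        (150.00, "loyal", 135.00),   # discount applies
        (150.00, "new", 150.00),     # no discount for new customers
        (100.00, "loyal", 100.00),   # boundary: not strictly over 100
    ],
)
def test_discount_rules(order_total, customer_type, expected):
    # Each run re-executes the same planned checks; the pytest report is the
    # evidence of what was tested and what was found.
    assert apply_discount(order_total, customer_type) == expected
```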

Conclusion

The day was interesting and very worthwhile. It was reassuring to see auditors being encouraged to engage with agile in an open minded and constructive manner. It was also encouraging to see auditors responding positively to the message, even if the reality of dealing with agile is rather frightening for many auditors. Good auditors are accustomed to the challenge of being scared by the prospect of difficult jobs. It is a yardstick of good auditors that they cope with that challenge.

I don’t have a great deal of sympathy with auditors who shy away from auditing agile projects because it’s too difficult, or with those who bring inappropriate prejudices or assumptions to the audit. Auditing is like testing; it isn’t meant to be easy, it’s meant to be valuable.

Internal auditors should not be aliens who beam down to a project to beat up the developers and testers, then shoot off to their next assignment leaving bruised and hurt victims. I’m afraid that is how some auditors have worked, but good auditors are broadly on the same side as developers and testers. They are trying to achieve the same ends, but they have a different perspective. They should have wider knowledge of the context in which the project is working, and they should have a bleaker view of reality and of human nature. They should know what can go wrong, how that can happen, and what might be effective and efficient ways of preventing that.

Following the happy path is a foreign concept to good, experienced auditors. Their path is a narrow one. They strive to warn of the unseen dangers lurking all around while also providing constructive input, all the time maintaining sufficient independence so that they can comment dispassionately and constructively on what they see. As I’ve said, it’s not easy.

Auditors and testers should resist any attempts to redefine their difficult jobs to try and make them appear easier. Such attempts require a refusal to deal with reality, and a pretence, a delusion, that we can do something worthwhile if we refuse to engage with complex and messy reality.

Testing and auditing are both jobs that it is possible to fake, going through the motions in a plausible manner, while producing nothing of value. That approach is easier in the short run, but it is deeply short sighted and irresponsible. It trashes the credibility and reputation of practitioners, it short-changes people who expect to receive valuable information, and it leaves both testers and auditors open to being replaced by semi-skilled competition. If you’re doing a lousy job and focusing on cost, there is always someone who can do it cheaper.

Audit and Agile (part 1)

Training in stuff I know

Last Thursday, October 8th, I went to a training day organised by the Scottish Chapter of ISACA, the international body representing IT governance, control, security and audit professionals. The event was entitled “Audit and Control of Agile Projects”. It was presented by Christopher Wright, who has over thirty years’ experience in IT audit and risk management.

This is a topic about which I am already very well informed but I was keen to go along for a mixture of reasons. When discussing auditing and the expectations of auditors we are not dealing with absolutes. There is no hard and fast right answer. We can’t say “this interpretation is correct and that one is wrong”. Well, we could, but that ignores the possibility that others might disagree, and they might be auditors who are conducting audits on a basis that we believe to be flawed.

It is therefore important to think deeply about, and understand, the approach that we believe to be appropriate, but also to consider alternative opinions, who holds them and why. Consultants offering advice in this area have to know the job, but they must also be able to advise clients about other approaches. We have to stay in touch with what other people are saying and doing, regardless of whether we agree with them.

I was keen to hear what Christopher Wright had to say, and also to talk to other attendees, who work for a wide range of employers. Happily the event was free to members, which made the decision to attend even easier!

As it transpired I didn’t hear anything from Christopher that was really new to me, but that was reassuring. His message was very much aligned with the advice I’ve been pushing over the last few years to clients, in my writing and in training tutorials. What was also reassuring was that the attendees were receptive to Christopher’s message and there was no sign of lingering attachment to older, traditional ways of working.

I don’t want to cover everything Christopher said. I will just focus on a few points that I thought were key. I will split them into two posts. I will try to make it clear when I am offering my own opinion rather than reporting Christopher’s views.

Agile is potentially an appropriate response to the problems of software development

Firstly, and crucially, agile can be entirely consistent with good governance. The “can be” is an important qualification. You can do agile well or badly. It is not a cop out by organisations that couldn’t make traditional methods work. In many situations it is the most appropriate response. Christopher ran through a quick discussion of the factors involved in making a decision whether to go with an agile or a traditional, waterfall approach. His views reflected orthodox, current software development thinking rather than a grudging auditor’s acceptance of what has been happening in development circles.

Put simply, Christopher argued that a waterfall approach is fine when the problem is stable and the solution is well understood and predictable. If we know at the outset, with justified confidence, what we are going to do then there’s no problem with waterfall, indeed we might as well use it because it is simpler to manage. We are dealing with a well defined problem and we are not going to be swamped by a succession of changes late in the project. I would argue that that is usually not the case when we are developing new software intended to provide a new solution to a problem.

If the problem is not trivial then it is unlikely that we can understand it fully at the start of the project, and we can only build our understanding once the project is well underway. Agile is appropriate in these circumstances. We need to learn what is needed through a process of iteration, experimentation and discovery.

We had a brief discussion about predictable solutions. I would happily have spent much longer mulling over the idea of predictability. My view is that software development has been plagued throughout its history by a yearning for unattainable predictability. That has been damaging because we have pretended that we could predict solutions and project outcomes when the reality has been that we didn’t know, and couldn’t have known at the time. The paradox is that if we try to define a predictable outcome too early that pushes back the point when we truly can predict with confidence what is required and possible. Mary Poppendieck made the point very well back in 2003 with an excellent article “Lean Development and the Predictability Paradox” (PDF, opens in new window).

Agile is scary for auditors

Christopher argued some important points about why many traditional auditors are suspicious of agile, and I was very much in agreement with him. There are many agile methods, and none of them are rigidly defined or standardised. Auditors can’t march into a project with a checklist and say “you’ve not produced a burndown chart in the form prescribed by… ”.

The auditors have to base their work on the Agile Manifesto, but they need to read and understand it carefully. It is not a matter of choosing either working software or comprehensive documentation, but valuing the former over the latter. Auditors have to satisfy themselves that an appropriate balance is being struck, and this requires judgment and experience. The Agile Manifesto can guide them to ask useful questions, but it can’t provide them with answers.

Crucially, auditors have to ask open questions when they are auditing an agile project. This is one of my hobby horses, and I have often written and spoken about it. Auditors must have highly developed questioning skills. They need the soft skills that will allow them to draw out the information they need from auditees. They must not rely on checklists that encourage them to ask simplistic questions that invite simple, binary answers; yes or no.

Christopher told a sadly plausible story of an audit team that reviewed an agile project and produced a report listing 50 detailed problems. The auditors did not have experience with agile and had conducted the audit using a Prince2 checklist that was designed for waterfall projects.

This might seem a ludicrously extreme example, but I’ve seen similar appallingly unprofessional and incompetent behaviour by auditors. I believe that when this happens the best outcome is that the auditors have wasted everyone’s time, failed to do anything useful and trashed their credibility.

What is far more damaging, in my opinion, is a situation where the auditors have real power, regardless of their competence, and auditees start to tailor their work to fit the prejudices and assumptions of the auditors. Managers start to do things they know are wrong, wasteful and damaging because they fear criticism from auditors. Clients become infuriated because the supplier staff are ignoring their needs and focusing on appeasing the auditors.

In this scenario the auditors are taking commercial decisions, but are not accountable for the consequences. They are probably not even aware that this is what they are doing. They lack the experience, knowledge and insight to understand the damage they are doing.

I therefore agree with Christopher that auditing IT projects is no job for the inexperienced. Closed questions are dangerous, and should be used only to confirm information that has already been elicited. The auditor must not ask “do you have a detailed stage plan?”, but ask “can you show me how you plan the work?”. The former question might simply produce a “no”. The latter question will allow the auditor to see, and assess, how the planning is being performed. Auditees are naturally suspicious of auditors and many people will offer as little information as they can. Allowing them to answer with a curt yes or no helps nobody.

Of course, the problem for inexperienced auditors is that if they ask open ended questions they have to understand and interpret the answers then vary their follow up depending on what they are told. That can be difficult. Well, tough. Auditing isn’t meant to be easy. Like testing it should offer valuable information, and redefining the job to make it easy but pointless is deeply unprofessional.

And in part two…

I am splitting this article into two parts and will post the second one as soon as possible. You could easily write a book about this topic, as indeed Christopher has already done. It’s called “Agile Governance and Audit”.

In part two I cover how auditors can work constructively with audit teams, and also discuss testing. I wanted to set the scene in part one before moving on to testing. I don’t think it’s worth treating testing in isolation, and it’s useful to understand first where modern auditors are coming from. The way they should approach testing follows on naturally from the way that they engage with agile in general.