Challenging the culture

Fiona Charles wrote an interesting and challenging piece on Sticky Minds this week that addressed a point I’ve been thinking about.

Fiona argued that we must all be brave enough to tell uncomfortable truths, and not deploy the rather cowardly defence that we have to look after our own security and keep our heads down.

This made me think again about a topic I’m interested in: how the culture of an organisation shapes the way that testing is done, and the role of the tester in challenging that culture.

We’re all rational, aren’t we?

Before I entered the workplace I took it for granted that companies and government bodies were rational organisations. If they weren’t they wouldn’t survive. Survival and success required a basic level of competence on the part of the management, and competence was more or less synonymous with rational decision making.

Rationality might not be a sufficient condition for success or survival, but it was definitely necessary.

When I started working for large companies I gradually realised that the reality was more complex. Sometimes rationality was clearly subordinate to vision, ambition or pride, but most people were rational most of the time.

However, individuals acting rationally don’t always add up to the whole organisation acting rationally. Behaviour that is rational for the individual can still damage the organisation.

That might seem like nonsense, but an organisation can act rationally only if its culture is aligned with its goals.

The processes, rewards and incentives must be aligned with the goals, and crucially the decision makers must have accurate information about what is going on.

Regardless of the official goals, people’s behaviour will be dictated by their knowledge of what will be rewarded and what will be punished.

However rational the senior managers are, if their decisions are informed by inaccurate information or incomplete knowledge, then the results will be poor.

Culture isn’t necessarily good!

Organisations often strive for a positive culture. That’s good, but it frequently mutates into a culture in which people fear that messengers bearing bad news will be shot.

If senior managers don’t hear bad news then they lose the vital feedback they need to correct their attempts to steer the company.

Failure to pass bad news down the line also damages the organisation because staff realise that the only way they will hear bad news is by rumour, thus giving credibility to every piece of scare-mongering gossip. However, that is not my point.

My real concern is that testers can, and should, be willing to challenge the culture and pass on the information that is required to permit rational decisions. That can be information about a particular product, or wider information about how the whole culture is affecting quality.

Yesterday I was listening to Michael Bolton give a webinar presentation of “Testers: Get Out of the Quality Assurance Business”. His point, boiled down to one sentence, is that testers can’t assure quality; they provide the information that the people responsible for quality need.

This raises all sorts of interesting issues about what form that information takes. Michael argues that testers have similarities to many other professionals, who have skills and perspectives that we can learn from.

One such field is anthropology. We need to understand how organisations work, and why some ideas take hold in some places and fail in others.

That’s something that fascinates me. Ideas don’t necessarily win through on merit. Good ideas can flunk. Lousy ones can thrive.

Ideas don’t have to be smart to thrive

Ideas can spread and take hold even if they’re fundamentally wrong, or inferior to rival ideas. That’s not because ideas have some intrinsic force that allows them to spread from host to host. I’m not a fan of the concept of memes, except as a metaphor.

The success of ideas is entirely down to decisions made by individuals, possibly under heavy outside influence, with consequences that can be unpredictable and surprising when taken en masse.

Sometimes these decisions are taken irrationally. However, in software development, and business in general, I think the problem is less that people are irrational and more that we often fail to analyse and understand people’s motivation. A decision might appear irrational to the detached and objective observer, but it might actually be perfectly rational given the available information and the incentives, or pressures, driving the individual.

Often the problem isn’t a lack of rationality, but a lack of transparency and honesty in decision making. An example that worries me is the almost wilful confusion of quality with control. It is usually difficult for managers to say: “forget quality, we can live with an application that is a crock of crap provided we hit our dates and budget”.

What is acceptable is to set up a tough regime of project governance that looks like it will help ensure you get the dates and costs right, whilst assuming incorrectly that the rigid structure will produce a quality product.

I don’t want to decry governance or control, but their relationship to quality is similar to the relationship between the criminal justice system and honesty. The justice system doesn’t make people honest. It supports a society in which honest people can live good lives, and it ensures that transgressors are punished.

Likewise, quality doesn’t come from rules; it comes from good people with a high level of skill and pride, working in a supportive culture. Governance supports that culture, but doesn’t create it.

I’ve seen dreadful process structures with slightly Orwellian names like “Quality Management System” that mean the exact opposite of what they promise. They drive quality down because good people are producing shelfware rather than software, and the iteration that good software requires is discouraged in order to permit a neat, structured project plan.

When the quality of the final product is disappointing there is a pretence that someone screwed up, or events conspired to defeat us, or that we were unlucky. There’s no admission that the approach was flawed, and that the consequences were an inevitable result of our decisions. Next time it will be OK because there’s no reason to assume we’ll be as unlucky again. Is this a case of cognitive dissonance: a determination to find reasons for failure that don’t challenge our view of the world?

People who work like that are taking rational decisions at each stage. They might be wildly mistaken but that’s because the premises on which they base their rational decisions are flawed. They know that they’re judged by time and cost, and they’ve been assured that governance equals quality, so they proceed on that basis. Each individual decision might seem plausibly rational at the time, in the circumstances, but when you step back and look at the bigger picture you see the insanity. Rational individuals don’t necessarily add up to a rational organisation.

Testers should understand and challenge the culture

As testers I think it is one of our challenges to be the counter-cultural iconoclasts; the ones who see that the emperor has no clothes, and say so. We need to spot the unspoken assumptions, to shine a light on the hidden little secrets, and force everyone to acknowledge the embarrassing truth: that this organisation doesn’t really care about quality, and doesn’t have the guts to admit it. However, you’d better phrase it a bit more diplomatically than that, perhaps by pointing out that the processes and goals aren’t aligned, and that the organisation hasn’t really followed through on its commitment to quality.

Ideas might spread virally, but that’s just a metaphor. They are not cultural viruses. They can be discussed and challenged. They can be blocked or promoted by the conscious intervention of shrewd operators; maybe not at the macro level of a whole society, but at the micro level of a single organisation.

Whether, and how, we promote them or fight them depends on our own motivation, our beliefs, and crucially on our reading of others. If we fail to understand why people are acting in a seemingly irrational way then we’re not going to change their minds.

In the example above, we won’t get far implying that an individual manager is doing a bad job if the senior management have bought into the spiel of some long-gone consultant that governance equals quality. So long as they believe that, and set in place a structure of rewards and incentives based on that fallacy, good people are going to do the wrong things.

If we’re to change that then there isn’t an easy route. Sometimes it’s just not possible, and if that’s the case then maybe it’s best to get out. If it is possible, it requires us to understand the politics and culture of the organisation; to know who’s pulling the levers, and which levers produce which results. Software people often shy away from “office politics”. That can be a naïve abdication of the field to the manipulators, who’re simply playing the game to win and aren’t interested in great software or delighted customers.

It can take time, and maybe the timing isn’t right. In the 90s, whilst I was working as a computer auditor, I was banging on about the need for more formal testing: for testing as a separate discipline, with its own methods and tools.

At that time the big company I worked for regarded testing as something that the developers winged their way through once they’d built the application. I was shot down and ignored, but I had my say, and then moved on to other things.

Within a few years everything I’d argued for came about. No-one thanked me, of course, but my efforts may have shifted things a little way along. You have to understand the influential people and what motivates them, and you have to work away at it. One clever PowerPoint presentation isn’t going to do the trick.

Sure it’s all difficult, but it’s fascinating: trying to understand why good people are doing seemingly daft things, trying to understand why an organisation is dysfunctional, trying to open minds to the great ideas that are always out there, whilst trying to understand and resist the dangerous, possibly fashionable nonsense.

I’d love to have a better understanding of anthropology and psychology. If I have time I’ll follow that up in my reading. In the meantime I’ll keep using that priceless skill: persisting in asking “why” in response to answers that are only partly true. I can’t recommend it highly enough!

Please use the comments to let me know your thoughts. It’s always good to have feedback and get discussions going.