Decisions about decisions

Last week I was sounding off about the poor way we often handle problems. There’s another topic that’s been bugging me: the poor way we deal with decisions, applying a decision-making style suited to one environment in a completely inappropriate context.

People often have a strange attitude towards decisions. Their instinct is that important decisions require a lot of time and effort. They also have an irrational gut feeling that decisions become important, and justify lots of our time, simply because they are complex. We can get rattled by trivial decisions where surprising complexities make it difficult to give a definitive right answer.

I’ve been given two pieces of invaluable advice about decision making. The first was from my father when I was still at school.

“Very often the worst decision is no decision. Often the difference between two options matters less than the consequences of hanging around, wasting time and making no decision at all. Just make your decision and get on with it.”

The second piece of advice was from my manager just after I left university. He went on to become a famous and wealthy businessman. He also had a somewhat cavalier approach to business ethics, which ultimately attracted some very unfavourable press, so I won’t name him. I don’t want to give the impression I endorse his views in general. We disagreed about the way to conduct business, but I’ve found his advice about decision making valuable.

He’d asked me to research a company that we might invest in. I took a couple of days to produce a detailed and thorough analysis explaining all the pros and cons of investing.

He read it, then told me that it was a good, thorough piece, but that I’d wasted time by delving into too much detail.

“There are probably no more than about three factors you have to take account of before you make a decision. The rest are either unimportant or irrelevant. The trick is being able to recognise the key points and not waste time on stuff that won’t actually affect the decision.”

Often we get intimidated by the thought of taking responsibility and taking decisions. So we retreat into time-wasting detail. It shows we’re taking our responsibilities seriously and helpfully defers the point when we have to make our decision.

So, yes, my father and my manager were right. An unwillingness to take appropriately quick decisions can have a paralysing effect, and we do have a tendency to plunge into suffocating detail rather than focussing on what really matters.

However, the advice from my father and my manager didn’t amount to rigid rules. They offered guidelines, and guidelines shouldn’t be applied inflexibly in every situation.

During the Second World War my father served in the airborne forces, a branch of the army in which making quick decisions and then getting on with it wasn’t a management style, it was a matter of life and death. Later he was a successful salesman and senior sales manager. That career encouraged and rewarded his decisiveness.

My manager was leading a fast-moving investment team, buying and selling shares every day. Again, cool and quick thinking were crucial.

The context is crucial when it comes to selecting a style of decision making. In software development we’ve been hopeless at taking decisions, largely because we haven’t recognised that our context is different from the environments in which my father and my manager thrived.

Some organisations have a culture that assumes sound decisions require time and detail: if they throw enough time and analytical effort at a problem, they can confidently set their decisions in stone. That’s how Waterfall and other traditional development methods operated.

Other places favour bold, early decisions, imposing arbitrary implementation dates on projects before anyone knows how much work is involved, or settling on outline design solutions before the requirements are understood (or instead of the requirements, see my blog on Tom Gilb’s thoughts on requirements).

Mostly we’ve mixed up our decision making styles in a way that defies logic and business sense and has given us the worst of both worlds.

Senior executives have applied the advice of my father and my manager recklessly, going for ill-informed decisions, while project teams have got lost in detail, ignoring big, simple signals that could have let them take important decisions earlier.

The paradox for such teams is that although they might take months to reach irreversible decisions, those decisions were often taken too early in the logical sequence of the project: the confusion of design and requirements (Gilb again), for instance, or the pretence that a viable user interface can be confirmed before the users have tried it.

So, my experience in IT has taught me a third maxim.

The right initial decision is usually to decide what sort of decision we’re looking at.

Should we just do it? Take a quick decision and move on. Either it doesn’t matter, or the right answer is clear now.

Should we park it till later and not waste time agonising over it now, when we don’t have the right supporting information?

Or should we make a provisional decision, act on it and revisit it when we know more?

The trick is to know what sort of problem we’re looking at.

That takes us on to Lean, which has a very pragmatic and helpful attitude towards decision making. It incorporates the principle of taking decisions at the “last responsible moment”.

That doesn’t mean the last possible moment. It’s not a recipe for indecision and procrastination. The last responsible moment could be very early, depending on the circumstances.

This is one of the aspects of Lean that appeals to me. It not only makes sense in software development terms, it also recognises the general point that context is crucial rather than pretending that a single, inflexible approach to decision making applies in every circumstance.

As Mary Poppendieck put it in “The Predictability Paradox” (sadly not available online):

“You need to develop a sense of when decisions must be made and then make them when their time has come. It is equally important to develop a sense of what is critically important in the domain and make sure these areas are not overlooked in the decision-making process.”

Mary and Tom Poppendieck expanded on this in their book “Lean Software Development: An Agile Toolkit”.

So it’s useful to keep running mental checks to make sure you’re treating the decision appropriately: not taking a decision just because you’re feeling gung-ho, and not deferring one because you want more and more information that won’t really affect the final decision.

It’s also important to have confidence in your own judgement and experience. Point out to critics that a quick decision isn’t necessarily a hasty one, and a deferred decision doesn’t mean indecisiveness. It all depends on the context.

Finally, a plea: if you know me, please don’t inflict on me any more meetings that run on for an hour after everyone has reached agreement, in which people feel they have to drag in more and more ammunition to support the big decision. One day I may just go berserk!

The problem is not the problem

There are two issues that have been bugging me for a while now and I mean to get them off my chest this week.

They’re not software testing topics, but as with so many psychological, organisational or social problems, they have a very real relevance for testers. We don’t exist in a technical or procedural bubble that insulates us from the blizzard of conflicting and confusing influences swirling around every organisation.

The first issue is that of problems. Very often the problem is not the problem. The problem we see is not necessarily the real, possibly very damaging, problem.

As testers this shouldn’t be news. We’re used to root cause analysis, delving behind the curtain to find out why things have gone wrong and not being deceived by what we initially see. We should also be familiar with the “five whys”, or some variant on it: the discipline of repeatedly challenging answers until we burrow down to the heart of the problem.

That’s not what I want to talk about here. Things go wrong, in life, at work, and especially in software development. That’s the way of it.

What really matters, certainly in the long run, is often not the problem itself but the way we deal with it.

We can deal with the problem in a brisk and efficient manner, resolving the immediate concerns, learning the right lessons and moving on. Or we can blunder into any one of several traps that could hurt us far more than the original problem did.

The first is the cover-up. Watergate is the great historical example. The original crime might have been handled without damaging Richard Nixon, but it was the cover-up that ended his presidency.

When we, as software developers, hit problems, the worst thing we can do is attempt to cover up. Honesty with customers and users isn’t simply honourable and ethical; it also buys you trust and support.

Own up on the smaller problems and it may just buy you the credibility and goodwill to save your skin when you hit big, career threatening problems.

Denial is the sibling of the cover-up. It might not be a vigorous, outspoken denial. It may just be a matter of kidding ourselves that there’s nothing really wrong, that we can carry on as before once we’ve sorted the problem. It was just bad luck, or a bit of carelessness. No big deal. Just move on, folks. Nothing to learn here.

The next trap is the one you plunge into if you head off in the opposite direction: over-reaction. Losing your temper and savaging staff is obviously damaging and is no longer tolerated in healthy organisations, but a more insidious variant is still too prevalent: the tendency to blame individuals.

Someone must be to blame, somebody must have screwed up and be held to account. I once worked somewhere with an appalling blame culture. Problems were something to run away from, not something to fix. No-one wanted to jump in and proactively resolve problems in case they got implicated in the management investigation and fallout.

A more respectable form of over-reaction is the assumption that the problem justifies sweeping changes or drastic action rather than careful tweaking. Instead of a careful analysis of what actually went wrong and what the cause was, there is the presumption that it was confirmation of a trend, or deeper problem, that was previously suspected.

That leads me on to the fourth trap, confirmation bias. We convince ourselves the world should work a certain way. Any and every problem can be interpreted as supporting evidence.

The glaring example in the history of software development was the conviction that software development should be essentially the same as construction engineering. Early experts saw practitioners struggling to manage projects with a more or less structured and orderly linear process. They concluded that the answer to the problem was to try and make the process even more rigidly structured, more ordered and more linear. We got Structured Methods (shudder!).

After all, nobody builds a bridge incrementally, opening it bit by bit. No-one erects a building in several iterations. So software engineering shouldn’t be any different. That was more or less the line of thought. Life might be simpler if software development were a highly predictable and controllable process, but the truth is that development is not like that.

I referred to this problem on my blog earlier this year when I explained the predictability paradox.

The correct response would have been to go with the grain of the problem: embracing the uncertainty, and building iteration and incremental development into the process, rather than pretending that we understand the business problem sufficiently well to proceed down a rigid, linear route.

The problem was assumed to be that software engineering was not sufficiently like construction engineering. Divergence between the two activities was seen not as evidence of a genuine difference, but as proof that the software developers were out of line. The truth was that software development is a quite separate discipline and the real problem was that developers were forced to contort their practices to fit a wildly inappropriate paradigm.

The real problem wasn’t the problem the profession saw. It was their response to the problem.

None of this is new to marketing people. Apparently BMW owners who have a problem with their car are more likely to buy another BMW than those who have no problems. Why? It’s because of the way that BMW deal with problems. Is it true? Well, I don’t have access to the statistics, but that’s BMW’s reputation. Problems have become an asset, rather than remaining a problem.

What does this mean for any of us? It’s not easy to draw a neat solution to this problem of problems. All I can pass on is something I have found useful.

Often we plunge into one of the traps because we choose to respond in a way that suits our instincts or prejudices, or which is just the easy, comfortable response.

Take a breath before committing yourself and ask yourself whether your response is what ought to happen. Or is it the response that you’d really like to be the right one? Is it the easy one? Are you maybe dodging the real issue, going for short term expediency rather than long term results?

Run through the traps I’ve discussed. Do any of these make you wonder about your response?

Is your reaction to the problem going to be seen as the real problem further down the line?

After I wrote this I checked the phrase “the problem is not the problem” to see if anyone had used it before. Initially I was a bit disappointed to find that Tom Peters had, but then I cheered up when I realised he’s a pretty respectable ally! Watch the video in the link or, if you’re in the office, check out the transcript (PDF, opens in a new window). It’s very much in line with my argument, but he puts a bit of extra spin on it.

The other thing I want to get off my chest this week is about making decisions. I’ll be back in a day or so.

Please add any comments. I do like to see what people think.