Dysfunction and fanatics

A game developer starts putting bugs in his code deliberately.  A teacher helps their students to cheat on an exam.  Another teacher throws a student’s exam paper in the bin rather than submitting it.  A shoe factory starts producing nothing but size 7, left foot shoes.  A customer service representative hangs up the phone on a customer without warning or apparent reason.  A CEO drives their company into the ground with short-sighted decisions.  A cop spends all their time on trivial offences, ignoring murders and other serious crimes in their area.  An NFL cornerback gambles on an interception when all his team needs him to do is prevent a deep pass.  A farmer lets food rot in his field while people elsewhere in his country go hungry.

You’ll probably have seen or read about many stories like these (although the shoe factory may be apocryphal).  Why would someone who loves what they do work so directly and deliberately against the goals of their position?

In fact, there’s a single explanation behind all these examples: measurement dysfunction.  Here’s a classic way for it to arise:

  1. Management are not observing every aspect of a worker’s performance
  2. Management believe they can observe enough aspects to have a complete and accurate picture of that performance
  3. Management put in place a programme to measure these observable aspects, and then reward workers, either implicitly or explicitly, based on these measurements

Since managers are blissfully unaware of point 1, this looks to them like a perfectly logical way to reward good performance and motivate employees.  How does the worker see the situation?  To get the rewards, they have to do well on the measurements.  Assuming the measurements are carefully designed and well-intentioned, they can initially do this by working a bit harder, or a bit better.  But at some point they reach a plateau and can work no harder or better.  From then on, whether consciously or not, they achieve artificial improvements on the measurements by neglecting the aspects of the job that aren’t measured – performing the job suboptimally in order to boost the metrics without working harder or better.  All of the examples make sense in this light.
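If it helps, the incentive can be put in miniature.  Here’s a toy sketch in Python – made-up numbers, nothing like a rigorous model – of a worker who splits a fixed amount of effort between a measured aspect of the job and an unmeasured one:

    # A toy model of measurement dysfunction (made-up numbers). A worker splits a
    # fixed amount of effort between a measured aspect of the job (say, bugs closed)
    # and an unmeasured aspect (say, code quality). Real value to the organisation
    # needs both; the reward depends only on what gets measured.

    TOTAL_EFFORT = 10.0

    def real_value(measured_effort, unmeasured_effort):
        # The organisation suffers if either aspect is neglected.
        return 2 * min(measured_effort, unmeasured_effort)

    def reward(measured_effort):
        # Management can only see, and so only pay for, the measured aspect.
        return measured_effort

    balanced = (TOTAL_EFFORT / 2, TOTAL_EFFORT / 2)   # before the measurement programme
    gamed = (TOTAL_EFFORT, 0.0)                       # after learning to play the metric

    for label, (m, u) in [("balanced", balanced), ("metric-maximising", gamed)]:
        print(f"{label}: reward={reward(m):.0f}, real value={real_value(m, u):.0f}")

    # balanced: reward=5, real value=10
    # metric-maximising: reward=10, real value=0

The worker isn’t being irrational or malicious: given what gets rewarded, gaming the metric is the sensible move.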

The game developer example is real, coming from a friend of a friend.  This fellow went off to work for a well-known mega-publisher; toiling away as a tiny cog in a sea of cubicles, he was hauled up one day for not having enough bugs in his code.  What?  Technically, he was reprimanded for not fixing enough bugs in his code, but he couldn’t help that – his code was great and there just weren’t enough bugs in it for him to keep them happy.  Hence the deliberate bugs.  Additionally, he was seen as lazy because he went home on time most nights.  So he began staying late to play WoW at work, and stopped shaving so he looked more tired.  Proper, real-life Dilbert.


This pattern is repeated all over the world, every day, and as an engineer moving into management, you have to face up to the issue of measurement at some point.

Losing control

Maybe you’re drawn to the idea of measurement because the task of managing other people’s performance makes you uncomfortable.  Moving from doing the work to managing the work can feel like a frightening loss of direct control over results.  For some, measurements are tempting: they give an impression (illusion?) of control, and they appear highly objective, letting you sidestep all that fuzzy people-skills stuff.

Unfortunately, software development is ready-made for dysfunction.  It’s clearly impossible to observe a programmer’s performance precisely; the best you can manage are incredibly crude proxy measures such as bug counts.  And just because you’re not linking measurements to salary doesn’t mean there isn’t an incentive to do well on them.  Even purely informational measurements can lead developers to try to do better on them, if you’re not very careful: the simple act of observing people will often change their behaviour.  Measuring people is not like measuring simple physical quantities.

Under pressure

A more common situation is that you feel pressured by others to apply measurement, and are struggling to express your natural doubts on the topic.  The pressure could come from someone at work, or just from advice in books and articles.  I recommend counteracting it by reading Robert Austin’s Measuring and Managing Performance in Organizations.

It will present you with well-reasoned counter-arguments, and help you to express those doubts more precisely.  You need this, because the pro-metrics camp is quite simply fanatical.

Here are some of the techniques and arguments they use:

  • How else can you possibly tell when you’re succeeding?  Or, if you want to improve at what you’re doing, how can you tell whether you’re improving?  Without an objective, quantifiable assessment, they claim you can’t tell reliably.
  • The massive success of science in our modern world has put something of a spell over people, whereby anything that looks quantitative and scientific is seen as a better way of doing things.  Never forget that simply having numbers and analysing them doesn’t make what you’re doing scientific (they might also use the term ‘factual’).  Don’t be ashamed of subjective assessments.
  • Bad analogies – from the awfully cheesy (as in, running a project is like flying a plane … you need those instruments in your cockpit!) to bad comparisons with other industries, such as manufacturing (neglecting key differences such as how feasible it is to observe performance accurately).
  • Any failures of a measurement programme can, of course, always be put down to “doing it wrong”.  Perhaps you measured the wrong things: activities rather than outcomes, or effects rather than causes.  Perhaps you presented or communicated your measurements poorly.  Perhaps you didn’t explain the goals of your measurement programme to your staff.  They’ll make any excuse, but never face the most basic question: why measure at all?  What’s really interesting about Austin’s book is that he shows dysfunction to be inevitable under the right conditions – in which case trying to “do it right” is futile.
  • My favourite is when they suggest that you’re afraid of having measurements because they might show you up – they might prove, quantifiably, that you’re not doing a good job.

Finally, there are the success stories, which typically take this kind of form:

A real estate management software developer stamped out several “software runaways” by getting management indicators for all projects under way. The organization has put size metrics into place for client-server and OO projects built in languages such as Visual C++ and Visual Basic. Independent assessment of all critical projects characterizes whether they are in a “red light,” “yellow light,” or “green light” condition. Actions are taken on all projects that are yellow or red.

I’m sorry, could we have any less information, or any less evidence that measurement did any good here?  We’re meant to believe that simply by counting lines of code, they made vaguely ominous-sounding management decisions (“stamped out”), and that that was somehow a good thing?

Setting goals

I was given this book to read on my management course:

Making fun of this is just too easy (what is it with management theorists needing to invent jargon … it makes a big point of calling itself a “mindbook/workbook”, which is just a stupid way of saying that each chapter has some theory followed by some exercises), so I’ll try to refrain.  It’s all about setting performance goals and making them measurable.  The “logic” goes: a goal is no use unless you can later assess whether or not you’ve achieved it (fair enough), and therefore it’s best to have an objective, quantifiable measurement of it (hold on a minute!).  To be fair, the author does point out that some goals can be more subjective, but he really doesn’t dwell on that for long.

Now, having goals is good (I’m not talking about anything phoney here: all software teams have goals – at the very least, you’re probably trying to ship a product!).  Goals help people pull in the same direction rather than in random directions, achieving them feels good, and if your goals aren’t clear, it’s hard to make good decisions.  Hurray for goals!

Moving from the technical ranks to management, you may well feel uncomfortable setting goals with your team.  I certainly did, which is no doubt why I was given this book to read!  Just don’t be afraid to have goals that fit the nature of the work you’re doing.  One of the goals we ended up focussing on as a team was to get better at releasing on time.  While you could quantify this fairly easily, why bother?  We know how well we’ve done at this.  Recognising that we needed to get better, coming up with ideas for improvement, implementing them and then talking about it afterwards was enough.  We didn’t need arbitrary statistical formulae – which could well have tempted us into ‘cheating’.

The metrics movement

Software metrics is a significant field in its own right, with consultants, books and conferences dedicated to the subject. Usually, an explanation of software metrics starts out sounding quite reasonable: the idea being that in order to estimate and track project progress, you need to have some kind of quantitative information.  I don’t think anyone in their right mind would argue with that.  Even a super-lightweight agile process is going to provide you with something: perhaps lists of user stories completed, or burndown charts of progress within an iteration, and so on.
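To give a sense of just how lightweight that can be, here’s a minimal sketch in Python (the stories and field names are made up, not any particular tool’s format) that prints a crude burndown from a list of stories and the day each one was finished:

    # A minimal burndown sketch (made-up stories): for each day of a ten-day
    # iteration, count the story points not yet completed and print a crude
    # text chart.
    stories = [
        {"name": "login page", "points": 5, "done_on_day": 3},
        {"name": "search", "points": 8, "done_on_day": 6},
        {"name": "export to CSV", "points": 3, "done_on_day": 7},
        {"name": "billing report", "points": 13, "done_on_day": None},  # not finished yet
    ]

    ITERATION_DAYS = 10

    for day in range(1, ITERATION_DAYS + 1):
        remaining = sum(
            s["points"]
            for s in stories
            if s["done_on_day"] is None or s["done_on_day"] > day
        )
        print(f"day {day:2}: {'#' * remaining} ({remaining} points remaining)")

Something that simple already answers “are we on track this iteration?”, which is about as far as the reasonable version of the argument goes.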

The problem is that it all turns ugly pretty quickly.  The type of people who love metrics just can’t help themselves.  In this overview of the metrics scene, after explaining that the main motivation is to see whether critical projects are on track, to make decisions and to assess estimates, they go and tack this on:

Some other process-related questions include:

  • What is our current capability?
  • How do we compare in terms of speed, efficiency, and quality?
  • Is our productivity improving?

Just stop!  And of course, if a few simple metrics are a good idea, they say, then you should formalise it all and have a “metrics programme”.  If you hear anyone saying this, run away fast.  Thanks to NASA, they even have a role model for you!  According to their measurement guidebook:

The guidebook also clarifies the role that measurement can and must play in the goal of continual, sustained improvement for all software production and maintenance efforts … these measurement activities have generated over 200,000 data collection forms, are reflected in an online database, and have resulted in more than 200 reports and papers. More significantly, they have been used to generate software engineering models and relationships that have been the basis for the software engineering policies, standards, and procedures used in the development of flight dynamics software.

I have heaps of respect for the stuff they build, and I certainly know very little about mission-critical engineering, but this is surely fanaticism talking.  I mean, all software?  Really?  And have you noticed the touch of pride in how many “data collection forms” they have?

Make your own mind up.  Don’t just assume that NASA’s recommendations apply to everything you do.


One Response to Dysfunction and fanatics

  1. Weeble says:

    Surely you missed an example? City bankers bring their banks and the global economy to ruin chasing after massive bonuses.😉
