Wednesday, May 30, 2012

The dilemmas of measuring

A recurring theme in many of the jobs I've done over the years is how to measure different things: how to measure the impact of a project, how to measure progress, how to measure this and that. A cynical viewpoint would note that in many cases the measurements are irrelevant. Too often the measurements are merely used to justify one thing or another: to motivate funders to give more money, or to ensure that the project doesn't get killed off. The objective has already been decided, and the purpose of measuring is to ensure that the measurements support the objective. At best these are naive metrics and at worst it's blatant cherry-picking.

One specific problem relates to developing metrics. Often metrics are developed partly or entirely outside the context of whatever is ultimately being measured, e.g. a project or a program. Some external person devises a plan for how to assess the project and what to measure. Or someone calls up the project manager and tells them that the quality of the software needs to be measured. And because developing metrics is fundamentally difficult, compromises are made and software quality ends up being assessed verbally and qualitatively, with biases and personal agendas in place. Or traffic lights are simply used to indicate status.

Once the metrics have been defined one way or another, things don't get any rosier. There's the problem of changing the metrics as the project learns more. Then there's the problem that metrics are often not automated, which means a relatively large amount of resources is spent on compiling measurements and aggregating them. And because manual labor is involved and biases and agendas persist, each step carries a risk of the data being altered, consciously or unconsciously, so that by the time it reaches the decision makers there's no guarantee of what they're actually seeing and basing their decisions on.

Then there is the issue of the object of the measurements not necessarily being too interested in measuring things or in being measured. This always spawns existential questions: should the project be canned or not, and was taxpayer money wasted or not? This also links to accountability: when the shit hits the fan, who was responsible and who gets axed? So it's not difficult to see why there may be a certain amount of resistance to measuring things and why biases and agendas emerge.

Ideally, though, measuring progress is a very good thing indeed. But it has to come from within, and the object of measurement should inherently see the value too. As a fairly naive example, I identified a few years back that I wasn't in very good shape. This wasn't necessarily a problem per se at the time, but I was afraid that it might have negative impacts down the line. Not being in good shape could negatively affect my performance at work and in life. It might limit my degrees of freedom in doing things or in taking up hobbies. It might also have adverse effects on my physical appearance, and so on. Once I had identified that something needed to be done, it was obvious that the changes had to be made at the level of routines, and not exercising was one fundamental problem.

One of the easier and still fairly relevant measurements was simply keeping track of the hours spent exercising. The point of this was to reduce the barriers to exercising: jogging is difficult if you don't know where your shoes are, what the good jogging routes are, or how much time you're going to spend running a certain route. Static friction is larger than kinetic friction, so ensuring basic movement helps things further down the line.

Once the static friction was beaten, the next thing was to ensure that the exercise was heterogeneous enough that I didn't get trapped into practicing a small subset of exercises, and instead strove towards a holistic enough approach. This was tracked by reviewing the types of sports I had done, which I had already been recording while keeping track of exercise hours. Running and gym (with a certain routine) were relatively OK, but I was, for instance, neglecting flexibility and crossfit-type exercises. So obviously that needed to be fixed.
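To give a rough idea of how lightweight this kind of tracking can be, here is a minimal sketch in Python. The log entries, sport names and the 15% threshold are made-up examples rather than my actual data, but summing the recorded hours per type of exercise is already enough to flag the neglected categories:

```python
from collections import defaultdict

# Hypothetical exercise log: (date, type of exercise, hours spent).
log = [
    ("2012-05-02", "running",    1.0),
    ("2012-05-04", "gym",        1.5),
    ("2012-05-07", "running",    0.5),
    ("2012-05-09", "gym",        1.0),
    ("2012-05-12", "stretching", 0.25),
]

# Sum the hours per type of exercise.
hours_by_type = defaultdict(float)
for _, kind, hours in log:
    hours_by_type[kind] += hours

total = sum(hours_by_type.values())
print(f"Total: {total:.2f} h")
for kind, hours in sorted(hours_by_type.items(), key=lambda kv: -kv[1]):
    share = hours / total
    flag = "  <- neglected?" if share < 0.15 else ""
    print(f"  {kind:12s} {hours:5.2f} h ({share:.0%}){flag}")
```

The exact threshold matters less than the habit of glancing at the breakdown every now and then.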

Then came the issue of ensuring that I was eating enough and the right things, and of observing the correlation between training volumes and types on one hand and injuries and time spent unable to train on the other. The next step is then to start measuring outcomes, i.e. my strength and ability to do a portfolio of different things. I've already been running half marathons at least yearly, so some data points are available with respect to cardio and endurance, but a more holistic approach to tracking progress might be good, at least in a toolbox fashion that would allow me to test myself every once in a while. Another thing running parallel to this has been a purely subjective assessment, on a scale of 1-5 after each exercise, of how good it felt.
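The subjective rating is the easiest of these to fold into the same log. As a sketch, again with made-up entries rather than my real numbers, each session simply gains a 1-5 "how did it feel" field, and the average can be eyeballed overall or per type of exercise:

```python
from statistics import mean

# The same hypothetical log, extended with a subjective 1-5 rating per session.
sessions = [
    ("2012-05-02", "running",    1.0,  3),
    ("2012-05-04", "gym",        1.5,  4),
    ("2012-05-07", "running",    0.5,  2),
    ("2012-05-09", "gym",        1.0,  5),
    ("2012-05-12", "stretching", 0.25, 4),
]

def average_feel(entries, kind=None):
    """Average subjective rating, optionally restricted to one type of exercise."""
    ratings = [rating for _, k, _, rating in entries if kind is None or k == kind]
    return mean(ratings) if ratings else None

print("overall:", average_feel(sessions))             # 3.6
print("running:", average_feel(sessions, "running"))  # 2.5
```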

The point here is that the metrics and their relevance have evolved over time as progress happens and the environment changes. The will to measure is also intrinsic, and the measurements have been very lightweight, with the heavier measurements (e.g. diet and calorie amounts) being done every once in a while on a temporary basis. The measurements are also reflected against the overall goal of becoming more fit, across a multitude of dimensions.

An interesting question, however, is whether hiring a personal trainer would bring better results or otherwise improve the situation.
