Damn lies, metrics and statistics
This thought piece was inspired by a tweet from @jeremywaite via @csterwa reminding me of a Dilbert cartoon on the use of metrics by ‘management’.
One constant in my career has been the need to measure. While working in a sawmilling technology research team at Scion in the late ’90s, my catch-cry to a very traditional industry became “You can’t fix it if you don’t measure it”. The same sentiment applies to our software industry: if a change is made to software without the means to measure its effect, then all you can do is hope that everything works as intended.
If you’re living in the land of the hopeful and don’t know where to start, I suggest looking at something simple, direct and, most importantly, easy to automate. There’s no point in starting out on this endeavour by creating something that is onerous to perform and maintain, as it just won’t last.
For instance, code coverage of unit tests is built into many systems. Set up a continuous integration system to build the code, run the unit tests on every commit, and publish the coverage somewhere visible. Even better, publish the trend over many commits. With this simple metric, you now know which pieces of your code need the most tender loving care.
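To make the trend idea concrete, here is a minimal sketch in Python of recording a coverage figure per commit and reading the trend back. It assumes the coverage percentage has already been extracted from your coverage tool’s report; the function names and the CSV file are illustrative, not any particular CI system’s API.

```python
import csv
from pathlib import Path


def record_coverage(history_file: Path, commit: str, coverage_pct: float) -> list[tuple[str, float]]:
    """Append this commit's coverage to a simple CSV history and return the full history.

    In a real pipeline, a CI step would call this after the test run and
    publish the resulting series somewhere visible (a chart on a dashboard).
    """
    with history_file.open("a", newline="") as f:
        csv.writer(f).writerow([commit, coverage_pct])
    with history_file.open() as f:
        return [(row[0], float(row[1])) for row in csv.reader(f)]


def trend(history: list[tuple[str, float]]) -> float:
    """Change in coverage between the first and the latest recorded commit.

    Positive means coverage is improving over time; negative means it is slipping.
    """
    return history[-1][1] - history[0][1]
```

The point is not the storage format: any append-only record per commit, plotted over time, turns a single number into a trend the whole team can see.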
Code coverage is usually used most effectively as an alert. Low coverage for an application or module alerts the team that there isn’t much proof the logic is correct, whereas high coverage doesn’t really prove anything, because other quality questions, such as dynamic interactions, become more interesting.
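The alert use can be sketched in a few lines: flag only the modules that fall below a floor, and say nothing about the rest. The threshold here is an illustrative choice, not a recommendation.

```python
def coverage_alerts(module_coverage: dict[str, float], threshold: float = 60.0) -> list[str]:
    """Return the modules whose coverage falls below the threshold.

    This treats coverage as an alert, as described above: low numbers point
    at code with little proof of correctness, while high numbers are
    deliberately not reported as a pass mark.
    """
    return sorted(module for module, pct in module_coverage.items() if pct < threshold)
```

A CI job could fail, or simply notify the team, when this list is non-empty; what it should never do is celebrate the modules that clear the bar.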
My final comment is that metrics and trends exist to remind us what we should and shouldn’t be doing. But if a metric becomes a target, it becomes an end in itself rather than a means to improving quality.