Saturday, September 05, 2009

Metrics

The last couple of weeks I've been spending a little effort on getting some code metrics up and running. I've blogged about measuring in software development before, and I think code is definitely one of the easiest and most important things to measure.

The most important thing about metrics is that you monitor them. I am of the firm belief that if you run a static code-analysis report manually, it will give you very little. Your team will say "wow, we have 3000 FindBugs warnings", fix a couple of them, and then forget about it. Then the next time you run the report, you'll find new bugs that have crept in, with no idea of when they appeared or who introduced them.

There are two ways to track these kinds of metrics:

  1. IDE warnings
  2. Continuous integration reports

IDE warnings

Most of you have probably enabled the built-in warning system inside Eclipse (unused method, potential null-pointer, etc.). Some of you strive to minimize the number of warnings; some of you have perhaps even turned on more than the default warnings in Eclipse. The key practice which I recommend you all adopt is the zero-warning policy.

(I couldn't find a good place to quote, so I'll come up with my own definition:)

The zero-warning policy is a programming practice which states that at any time, the number of warnings in a code base should be zero. With this in place, it is easy to discover and take care of any violations immediately, before committing. Gradually, new warning rules are introduced to the team, and the existing violations are promptly removed, making sure the level is back to zero before every commit.

After some time, you may find that your team has exhausted the IDE's built-in warnings. At this point you have to introduce plugins to the IDE to provide more warnings. This is where it gets a bit tricky, because every team member needs to have the same IDE configuration.

Continuous integration reports

This approach works fine in combination with the IDE warnings. The trick is to implement the warning policies in your CI server. Some teams practice a zero-warning policy here as well; the strictest variant is to have any violation actually break the build. This is a good way to enforce zero warnings, but it can be a bit too strict for most teams.
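
To make this concrete, here is a rough sketch of what breaking the build on violations can look like with the PMD Ant task (the target, property names and ruleset choices are just placeholders to adapt to your own project):

    <taskdef name="pmd"
             classname="net.sourceforge.pmd.ant.PMDTask"
             classpathref="pmd.classpath"/>

    <target name="pmd">
      <!-- failOnRuleViolation stops the Ant build if any rule is violated -->
      <pmd rulesetfiles="rulesets/basic.xml,rulesets/unusedcode.xml"
           failOnRuleViolation="true">
        <formatter type="xml" toFile="${reports.dir}/pmd.xml"/>
        <fileset dir="${src.dir}">
          <include name="**/*.java"/>
        </fileset>
      </pmd>
    </target>

With failOnRuleViolation set to false you get the softer variant: the XML report is still produced for the CI server to pick up, but the build stays green.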

There are two strengths of CI reports: Firstly, they are centralized, so there's no need for the team members to set up any configuration or plugins. Secondly, they can track metrics historically, so you can see from day to day whether the number of violations is decreasing or increasing.

Historical data is a good substitute for the zero-warning policy. You may have 1000 violations in your code base, but the key is to discover when new ones are introduced, and catch the violating developer in the act of producing bad code.

It is also interesting to track metrics over time to see whether coding practices have any effect on the quality of the code. Did introducing pair-programming/commit-mails/TDD lead to fewer violations? Now you have proof.


The landscape of metrics

Now that we have some motivation for putting code metrics into play, which tools should we use? There are a number of alternatives, both commercial and open source. Googling around will probably point you in the right direction, but here are my experiences:

Which reporting tools?

I like to divide the tools into families of metrics: code size and complexity (JavaNCSS), static analysis of bug patterns and style (FindBugs, PMD), and test coverage.
We played around with these tools, and found that the best combination for us for now was JavaNCSS, FindBugs and PMD. We still haven't got enough test coverage to even bother measuring how low it is, unfortunately.
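
To give you an idea of what hooking one of these up looks like, here is a minimal FindBugs Ant target that writes an XML report for the CI server to pick up (again, the property names are placeholders, and findbugs.home must point at your FindBugs installation):

    <taskdef name="findbugs"
             classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

    <target name="findbugs" depends="compile">
      <findbugs home="${findbugs.home}" output="xml"
                outputFile="${reports.dir}/findbugs.xml">
        <!-- FindBugs analyzes bytecode, so point it at the compiled classes -->
        <class location="${build.classes.dir}"/>
        <auxClasspath refid="compile.classpath"/>
        <sourcePath path="${src.dir}"/>
      </findbugs>
    </target>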

Which IDE?

Both Eclipse and IDEA have a good number of built-in warning policies, and most of the tools above come as plugins for both.

Ant or Maven?

Most of the tools above come as either Maven reports or Ant tasks. Even though I'm a big Maven fan, I found it quite easy to set up all the tools with Ant. The Panopticode project, which aims to bundle a number of metrics tools into one easy setup, helped me a lot in figuring out how to structure this in our existing Ant project. If you've got a Maven project, introducing a report should be as easy as five lines of XML in your pom.xml.
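
For example, adding the PMD report to a Maven 2 build is roughly a matter of listing the plugin in the reporting section of pom.xml and running mvn site (check the plugin documentation for the current coordinates and any configuration you want):

    <reporting>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>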

Which CI-server?

I've been using Hudson for so long now that I didn't bother trying out anything else, but I'm guessing that none of the other products come close to Hudson in the number of tools supported by plugins. If I had the money for Atlassian's Bamboo, I would definitely give the Bamboo + Clover mix a run.

Run it nightly

Finally, I found it best not to run these metrics after every commit. They take too long, and nobody inspects the change in the number of warnings from build to build. Instead, we run the metrics nightly, so every morning we can see that there are X more/fewer violations in the code base. I'm really happy with how well this ended up working, and I hope it gives you some inspiration to try the same.
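
In Hudson this is just a matter of ticking "Build periodically" on the metrics job and giving it a cron-style schedule. The corresponding fragment of the job's config.xml ends up looking roughly like this (element names may vary a little between Hudson versions, so treat it as a sketch):

    <triggers class="vector">
      <!-- run the metrics build every night at 02:00 -->
      <hudson.triggers.TimerTrigger>
        <spec>0 2 * * *</spec>
      </hudson.triggers.TimerTrigger>
    </triggers>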

Happy measuring!