
Metrics

The last couple of weeks I've been spending a little effort on getting some code metrics up and running. I've blogged about measuring in software development before, and I think code is definitely one of the easiest and most important things to measure.

The most important thing about metrics is that you monitor them. I am of the firm belief that if you run a static code-analysis report manually, it will give you very little. Your team will say "wow, we have 3000 FindBugs warnings", fix a couple of them, and then forget about it. Then the next time you run the report, you'll find new bugs that have crept in, with no idea of when they appeared or who introduced them.

There are two ways to track these kinds of metrics:

  1. IDE warnings
  2. Continuous integration reports

IDE warnings

Most of you have probably enabled the built-in warning system inside Eclipse (unused method, potential null-pointer, etc.). Some of you strive to minimize the number of warnings, and some of you have perhaps even turned on more than the default warnings in Eclipse. The key practice which I recommend you all adopt is the zero-warning-policy.

(I couldn't find a good source to quote, so I'll come up with my own definition:)

The zero-warning-policy is a programming practice which states that at any time, the number of warnings in a code base should be zero. With this in place, it is easy to discover and take care of any violations immediately, before commit. Gradually, new warning rules are introduced to the team, and the existing violations are promptly removed, making sure the level is back to zero before every commit.

After some time, you may find that your team has exhausted the IDE's built-in warnings. At this point you have to introduce plugins to the IDE to get more warnings. Now it gets a bit tricky, because every team member needs to have the same IDE configuration.

Continuous integration reports

This approach works fine in combination with the IDE warnings. The key is to implement the warning policies on your CI server. Some teams practice zero-warning-policies here as well; the strictest variant is that any violation actually breaks the build. This is a good way to enforce zero warnings, but can be a bit too strict for most teams.
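As a sketch of what build-breaking could look like on the Maven side, the maven-pmd-plugin's check goal can be bound into the build so that rule violations fail it. Treat this as an illustration rather than a recipe; the failOnViolation parameter is taken from the plugin's documentation, so verify it against the plugin version you actually use:

    <!-- Sketch: fail the build when PMD finds rule violations (assumes maven-pmd-plugin) -->
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>check</goal> <!-- pmd:check fails the build on violations -->
              </goals>
            </execution>
          </executions>
          <configuration>
            <failOnViolation>true</failOnViolation>
          </configuration>
        </plugin>
      </plugins>
    </build>

A softer variant is to generate the report without failing on violations, and let the CI server just track the numbers over time.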

There are two strengths of CI reports: Firstly, they are centralized, so there's no need for the team members to set up any configuration or plugins. Secondly, they can track metrics historically, so you can see from day to day whether the number of violations is decreasing or increasing.

Historical data is a good substitute for the zero-warning policy. You may have 1000 violations in your code-base, but the key is to discover when new ones are introduced, and catch the violating developer in the act of producing bad code.

It is also interesting to track metrics over time to see whether coding practices have any effect on the quality of the code. Did introducing pair-programming/commit-mails/TDD lead to fewer violations? Now you have proof.


The landscape of metrics

Now that we have some motivation for putting code metrics into play, which tools should we use? There are a number of alternatives, both commercial and open source. Googling around will probably set you in the right direction, but here are my experiences:

Which reporting tools?

I like to divide the tools into families of metrics. We played around with a number of them, and found that the best combination for us right now was JavaNCSS, FindBugs and PMD. We still haven't got enough test coverage to even bother measuring how low it is, unfortunately.

Which IDE?

Both Eclipse and IDEA have a good number of built-in warning policies, and most of the tools above come as plugins for both.

Ant or Maven?

Most of the tools above come as either Maven reports or Ant tasks. Even though I'm a big Maven fan, I found it quite easy to set up all the tools with Ant. The Panopticode project, which aims to bundle a number of metrics into one easy setup, helped me a lot in figuring out how to structure this in our existing Ant project. If you've got a Maven project, introducing a report should be as easy as five lines of XML in your pom.xml.
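To give a feel for the Ant route, here is a rough sketch of how FindBugs and PMD can be run as Ant tasks. The task classnames are the ones documented by the tools themselves, but the jar paths, report locations and ruleset name are made up for illustration and vary between versions:

    <!-- Sketch: FindBugs and PMD as Ant tasks. Jar paths and report locations are illustrative. -->
    <taskdef name="findbugs" classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="lib/findbugs/lib/findbugs-ant.jar"/>
    <taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask"
             classpath="lib/pmd/lib/pmd.jar"/>

    <target name="metrics" depends="compile">
      <findbugs home="lib/findbugs" output="xml" outputFile="reports/findbugs.xml">
        <class location="build/classes"/>
        <sourcePath path="src"/>
      </findbugs>
      <pmd rulesetfiles="rulesets/basic.xml"> <!-- ruleset path depends on your PMD version -->
        <formatter type="xml" toFile="reports/pmd.xml"/>
        <fileset dir="src" includes="**/*.java"/>
      </pmd>
    </target>

And on the Maven side, introducing a report really is just a matter of adding the plugin to the reporting section of your pom.xml, something like this (again a sketch, assuming the standard maven-pmd-plugin):

    <!-- Sketch: a PMD report in the Maven-generated site (assumes maven-pmd-plugin) -->
    <reporting>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>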

Which CI-server?

I've been using Hudson for so long now that I didn't bother with trying out anything else, but I'm guessing that none of the other products come close to Hudson in the number of tools supported by plugins. If I did have the money for Atlassian's Bamboo, I would definitely give the Bamboo + Clover mix a run.

Run it nightly

Finally, I found it best not to run these metrics after every commit. They take too long, and nobody inspects the changes in the number of warnings from build to build. Instead, we run the metrics nightly, so every morning we can see that there are X more/fewer violations in the code base. I'm really happy with how well it ended up working, and I hope this inspires you to try the same.
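In Hudson, a nightly run just means giving the metrics job a cron-style schedule under "Build periodically". In the job's config.xml, that ends up as a timer trigger along these lines; the element names here are my assumption of how Hudson serializes the trigger, so double-check against an actual job config:

    <!-- Sketch: nightly trigger in a Hudson job's config.xml (element names assumed) -->
    <triggers class="vector">
      <hudson.triggers.TimerTrigger>
        <spec>0 2 * * *</spec> <!-- run the metrics build at 02:00 every night -->
      </hudson.triggers.TimerTrigger>
    </triggers>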

Happy measuring!

Comments

  1. Thanks for a nice post Thomas!

    A zero-warning policy on the CI for an already-legacy system might, as you say, not be a positive thing. It will probably demotivate people rather than inspire them.

    I would check the metrics and take it up with a team member one on one, during lunch or some other time when you can catch a developer alone. I would ask that person for help with the problem. Maybe this person will find some way of helping you without you bringing it up in front of everyone. Maybe you could ask more than one person as well? Have another coffee later that day with another person?

    Having a goal is also important. Why is it being measured? What do we achieve by having a zero-warning policy? And are we measuring what we gain by it?

    Thanks again for a nice post :)

    Thommy

  2. Thanks for the comment, Thommy.

    I don't think ZWP is particularly demotivating or the opposite. It's more of a focus thing. If you mandate ZWP, you tell the team we're gonna have a strict no-new-warnings policy, and then introduce new warnings one by one. ZWP means turning off *most* of the warnings in the system so you get down to zero. It's easy and clean, but then again you shut your eyes to all the warnings that are not enabled (until you one day enable them again, but that will take a while).

    We haven't defined any goals around the metrics yet, except for (a) we want higher test coverage, and (b) we want to avoid FindBugs bugs. That's just general advice, not hard goals.

    I realize that static code analysis is a bit dangerous. It's a very one-dimensional way to criticize code, and I think if you have a problem with high average complexity per method, for instance, you're not gonna get anywhere by bitching about the complexity-per-method number. You're better off realizing that perhaps there is a problem here: that people don't dare to extract methods and write more object-oriented code.

