
Metrics

For the last couple of weeks I've been spending some effort on getting code metrics up and running. I've blogged about measuring in software development before, and I think code is one of the easiest and most important things to measure.

The most important thing about metrics is that you monitor them. I firmly believe that running a static code-analysis report manually will give you very little. Your team will say "wow, we have 3000 FindBugs warnings", fix a couple of them, and then forget about it. The next time you run the report, you'll find new bugs that have crept in, with no idea of when they appeared or who introduced them.

There are two ways to track these kinds of metrics:

  1. IDE warnings
  2. Continuous integration reports

IDE warnings

Most of you have probably enabled the built-in warning system in Eclipse (unused method, potential null pointer, etc.). Some of you strive to minimize the number of warnings, and some of you have perhaps even turned on more than the default warnings. The key practice I recommend you all adopt is the zero-warning policy.

(I couldn't find a good definition to quote, so I'll come up with my own:)

The zero-warning policy is a programming practice which states that at any time, the number of warnings in a code base should be zero. With this in place, it is easy to discover and take care of any violations immediately, before committing. Gradually, new warning rules are introduced to the team, and the existing violations are promptly removed, making sure the level is back to zero before every commit.

After some time, you may find that your team has exhausted the IDE's built-in warnings. At this point you have to introduce plugins to the IDE to provide more warnings. Now it gets a bit tricky, because every team member needs to have the same IDE configuration.
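One way to keep that configuration in sync is to commit Eclipse's project-specific compiler settings to version control. Eclipse keeps these in .settings/org.eclipse.jdt.core.prefs inside the project. As a small sketch (the exact preference keys depend on your Eclipse version, so treat these as examples), the file could contain entries like:

    # .settings/org.eclipse.jdt.core.prefs -- checked in and shared by the whole team
    eclipse.preferences.version=1
    org.eclipse.jdt.core.compiler.problem.unusedImport=warning
    org.eclipse.jdt.core.compiler.problem.unusedLocal=warning
    org.eclipse.jdt.core.compiler.problem.unusedPrivateMember=warning
    org.eclipse.jdt.core.compiler.problem.potentialNullReference=warning

With the file checked in, everyone who imports the project gets the same warning levels.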

Continuous integration reports

This approach works fine in combination with the IDE warnings. The idea is to implement the warning policies in your CI server. Some teams practice a zero-warning policy here as well; the strictest variant is that any violation actually breaks the build. This is a good way to enforce zero warnings, but it can be a bit too strict for most teams.
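As a sketch of the strict variant with Ant and FindBugs (the paths and the findbugs.home property are assumptions for this example): the FindBugs Ant task can set a property whenever it finds warnings, and a fail task can then break the build on that property.

    <!-- Define the FindBugs task from the FindBugs distribution -->
    <taskdef name="findbugs" classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

    <target name="findbugs" depends="compile">
      <!-- Sets the findbugs.warnings property if any warnings are found -->
      <findbugs home="${findbugs.home}" output="xml" outputFile="findbugs.xml"
                warningsProperty="findbugs.warnings">
        <class location="build/classes"/>
        <sourcePath path="src"/>
      </findbugs>
      <fail if="findbugs.warnings"
            message="FindBugs found warnings -- fix them before the build goes green again."/>
    </target>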

There are two strengths of CI reports: Firstly, they are centralized, so there's no need for the team members to set up any configuration or plugins. Secondly, they can track metrics historically, so you can see from day to day whether the number of violations is decreasing or increasing.

Historical data is a good substitute for the zero-warning policy. You may have 1000 violations in your code-base, but the key is to discover when new ones are introduced, and catch the violating developer in the act of producing bad code.

It is also interesting to track metrics over time to see whether coding practices have any effect on the quality of the code. Did introducing pair-programming/commit-mails/TDD lead to fewer violations? Now you have proof.


The landscape of metrics

Now that we have some motivation for putting code metrics into play, which tools should we use? There are a number of alternatives, both commercial and open source. Googling around will probably point you in the right direction, but here are my experiences:

Which reporting tools?

I like to divide the tools into a few families of metrics: size and complexity, potential bugs, code style, and test coverage. We played around with tools from each family, and found that the best combination for us right now was JavaNCSS, FindBugs and PMD. Unfortunately, we still don't have enough test coverage to even bother measuring how low it is.

Which IDE?

Both Eclipse and IDEA have a good number of built-in warnings, and most of the tools above are available as plugins for both.

Ant or Maven?

Most of the tools above come as either Maven reports or Ant tasks. Even though I'm a big Maven fan, I found it quite easy to set up all the tools with Ant. The Panopticode project, which aims to combine a number of metrics into one easy setup, helped me a lot in finding a way to structure this in our existing Ant project. If you've got a Maven project, introducing a report should be as easy as five lines of XML in your pom.xml.
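As an illustration of the Maven route, here is roughly what a reporting section with PMD and FindBugs could look like in pom.xml (plugin versions omitted; this is a sketch of the standard Maven 2 reporting setup, not the exact configuration from our project):

    <reporting>
      <plugins>
        <!-- PMD report (also includes the CPD copy/paste report) -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
        </plugin>
        <!-- FindBugs report -->
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>

Running mvn site will then generate the reports along with the rest of the project documentation.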

Which CI-server?

I've been using Hudson for so long now that I didn't bother trying out anything else, but I'm guessing that none of the other products come close to Hudson in the number of tools supported by plugins. If I had the money for Atlassian's Bamboo, I would definitely give the Bamboo + Clover mix a run.

Run it nightly

Finally, I found it best not to run these metrics after every commit. They take too long, and nobody inspects the changes in the number of warnings from build to build. Instead, we run the metrics nightly, so every morning we can see that there are X more/fewer violations in the code base. I'm really happy with how well it ended up working, and I hope this gives you some inspiration to try the same.
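In Hudson, nightly runs are just a matter of using the "Build periodically" trigger with a cron-style schedule. In the job's config.xml it ends up looking something like this (the exact XML may vary between Hudson versions, and 2 AM is just an example):

    <triggers class="vector">
      <hudson.triggers.TimerTrigger>
        <!-- minute hour day-of-month month day-of-week -->
        <spec>0 2 * * *</spec>
      </hudson.triggers.TimerTrigger>
    </triggers>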

Happy measuring!

Comments

  1. Thanks for a nice post Thomas!

    A zero-warning policy on the CI server for an already existing legacy system might, as you say, not be a positive thing. It will probably demotivate people rather than inspire them.

    I would check the metrics and take it up with a team member one on one during a lunch or some other time that you can catch a developer alone. I would ask that person for help with the problem. Maybe this person will find some way of helping you without you bringing it up for everyone. Maybe you could ask more than one person as well? Have another coffee later that day with another person?

    Having a goal is also important. Why is it being measured? What do we achieve by having a zero-warning policy? And are we measuring what we gain by it?

    Thanks again for a nice post :)

    Thommy

  2. Thanks for the comment, Thommy.

    I don't think ZWP is particularly demotivating or the opposite. It's more of a focus thing. If you mandate ZWP, you tell the team we're gonna have a strict no-new-warnings policy, and then introduce new warnings one by one. ZWP means turning off *most* of the warnings in the system so you get down to zero. It's easy and clean, but then again you shut your eyes to all the warnings that are not enabled (until you one day enable them again, but that will take a while).

    We haven't defined any goals around the metrics yet, except that (a) we want higher test coverage, and (b) we want to avoid FindBugs bugs. This is just general advice, not hard goals.

    I realize that static code analysis is a bit dangerous. It's a very one-dimensional way to criticize code, and I think if you have a problem with high avg. complexity per method, for instance, you're not gonna get anywhere by bitching about the high complexity per method number. You're better off realizing that perhaps there is a problem here, that people don't dare to extract method and do more object-oriented code.


