
Metrics

Over the last couple of weeks I've been spending a little effort on getting some code metrics up and running. I've blogged about measuring in software development before, and I think code is definitely one of the easiest and most important things to measure.

The most important thing about metrics is that you monitor them. I am of the firm belief that if you run a static code-analysis report manually, it will give you very little. Your team will say "wow, we have 3000 FindBugs warnings", fix a couple of them, and then forget about it. Then the next time you run the report, you'll find new warnings that have crept in, with no idea of when they were introduced or by whom.

There are two ways to track these kinds of metrics:

  1. IDE warnings
  2. Continuous integration reports

IDE warnings

Most of you have probably enabled the built-in warning system in Eclipse (unused method, potential null-pointer, etc.). Some of you strive to minimize the number of warnings, and some of you have perhaps even turned on more than the default warnings in Eclipse. The key practice which I recommend you all adopt is the zero-warning policy.

(I couldn't find a good source to quote, so I'll come up with my own definition:)

The zero-warning policy is a programming practice which states that at any time, the number of warnings in a code base should be zero. With this in place, it is easy to discover and take care of any violations immediately, before commit. Gradually, new warning rules are introduced to the team, and the existing violations are promptly removed, making sure the level is back to zero before every commit.
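As an illustration, the same policy can be enforced at the compiler level. Here's a minimal sketch, assuming an Ant build (the target name and paths are made up for this example): it turns on all of javac's lint warnings and treats every one of them as a compile error.

    <!-- Hypothetical Ant target: fail compilation on any javac warning. -->
    <target name="compile-strict">
      <javac srcdir="src" destdir="build/classes" debug="true">
        <!-- Enable all lint warnings... -->
        <compilerarg value="-Xlint:all"/>
        <!-- ...and turn every warning into a compile error. -->
        <compilerarg value="-Werror"/>
      </javac>
    </target>

With this in the build, a warning can never survive past compilation, which is the zero-warning policy in its purest form.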

After some time, you may find that your team has exhausted the IDE's warning policies. At this point you have to introduce plugins to the IDE to facilitate more warnings. Now it gets a bit tricky, because every team member needs to have the same IDE configuration.

Continuous integration reports

This approach works fine in combination with the IDE warnings. The trick is to implement the warning policies in your CI-server. Some teams practice a zero-warning policy here as well; the strictest variety is that any violation will actually break the build. This is a good way to enforce zero warnings, but it can be a bit too strict for most teams.
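To make the strict variety concrete, here's a hedged sketch using PMD's Ant task (the ruleset and paths are placeholders, not a recommendation): failOnRuleViolation makes the CI build fail as soon as a single violation shows up.

    <!-- Hypothetical Ant target: break the build on any PMD violation. -->
    <taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask"
             classpathref="pmd.classpath"/>

    <target name="pmd-strict">
      <pmd rulesetfiles="rulesets/basic.xml" failOnRuleViolation="true">
        <formatter type="xml" toFile="reports/pmd.xml"/>
        <fileset dir="src">
          <include name="**/*.java"/>
        </fileset>
      </pmd>
    </target>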

There are two strengths of CI reports: Firstly, they are centralized, so there's no need for the team members to set up any configuration or plugins. Secondly, they can track metrics historically, so you can see from day to day whether the number of violations is decreasing or increasing.

Historical data is a good substitute for the zero-warning policy. You may have 1000 violations in your code-base, but the key is to discover when new ones are introduced, and catch the violating developer in the act of producing bad code.

It is also interesting to track metrics over time to see whether coding practices have any effect on the quality of the code. Did introducing pair-programming/commit-mails/TDD lead to fewer violations? Now you have proof.


The landscape of metrics

Now that we have some motivation for putting code metrics into play, which tools should we use? There are a number of alternatives, both commercial and open source. Googling around will probably point you in the right direction, but here are my experiences:

Which reporting tools?

I like to divide the tools into a few families of metrics: size and complexity (JavaNCSS), bug patterns (FindBugs), style and code smells (PMD), and test coverage (Clover and the like). We played around with these tools, and found that the best combination for us right now was JavaNCSS, FindBugs and PMD. We still haven't got enough test coverage to even bother measuring how low it is, unfortunately.
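As a taste of the Ant wiring for one of these, here is a hedged sketch of a FindBugs report target (findbugs.home and the paths are placeholders for wherever your installation and classes live):

    <!-- Hypothetical Ant target: generate a FindBugs XML report. -->
    <taskdef name="findbugs"
             classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

    <target name="findbugs-report" depends="compile">
      <findbugs home="${findbugs.home}" output="xml" outputFile="reports/findbugs.xml">
        <sourcePath path="src"/>
        <class location="build/classes"/>
      </findbugs>
    </target>

The XML output is what the CI-server's plugins pick up to draw trend graphs.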

Which IDE?

Both Eclipse and IDEA have a good number of built-in warning policies, and most of the tools above come as plugins for both.

Ant or Maven?

Most of the tools above come as either Maven reports or Ant tasks. Even though I'm a big Maven fan, I found it quite easy to set up all the tools with Ant. The Panopticode project, which aims to gather a number of metrics tools into one easy setup, helped me a lot in finding a way to structure this in our existing Ant project. If you've got a Maven project, introducing a report should be as easy as five lines of XML in your pom.xml.
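For instance, a PMD report would be roughly this much XML in the reporting section of the pom (the version number here is just illustrative; check for the current one):

    <!-- In pom.xml: add a PMD report to the generated site. -->
    <reporting>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <version>2.5</version>
        </plugin>
      </plugins>
    </reporting>

After that, mvn site will generate the PMD report along with the rest of the project documentation.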

Which CI-server?

I've been using Hudson for so long now that I didn't bother trying out anything else, but I'm guessing that none of the other products come close to Hudson in the number of tools supported by plugins. If I had the money for Atlassian's Bamboo, I would definitely give the Bamboo + Clover mix a run.

Run it nightly

Finally, I found it best not to run these metrics after every commit. They take too long, and nobody inspects the change in the number of warnings from build to build. Instead, we run the metrics nightly, so every morning we can see that there are X more/fewer violations in the code base. I'm really happy with how well it ended up working, and I hope this lends you some inspiration to try the same.
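In Hudson, this is just the "Build periodically" trigger with a cron-style schedule. In the job's config.xml it comes out as something like the following (the 3 AM schedule is an arbitrary pick):

    <!-- In the Hudson job's config.xml: run the metrics build every night at 03:00. -->
    <triggers>
      <hudson.triggers.TimerTrigger>
        <spec>0 3 * * *</spec>
      </hudson.triggers.TimerTrigger>
    </triggers>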

Happy measuring!
