The most important thing about metrics is that you monitor them. I am of the firm belief that running a static code-analysis report manually will give you very little. Your team will say "wow, we have 3000 FindBugs warnings", fix a couple of them, and then forget about it. The next time you run the report, you'll find new bugs that have crept in, with no idea of when they were introduced or by whom.
There are two ways to track these kinds of metrics:
- IDE warnings
- Continuous integration reports
IDE warnings
Most of you have probably enabled the built-in warning system in Eclipse (unused method, potential null pointer, etc.). Some of you strive to minimize the number of warnings, and some have perhaps even turned on more than the default warnings in Eclipse. The key practice I recommend you all adopt is the zero-warning policy.
(I couldn't find a good source to quote, so I'll offer my own definition:)
The zero-warning policy is a programming practice which states that at any time, the number of warnings in a code base should be zero. With this in place, it is easy to discover and take care of any violation immediately, before committing. Gradually, new warning rules are introduced to the team, and the existing violations are promptly removed, making sure the count is back to zero before every commit.
After some time, you may find that your team has exhausted the IDE's built-in warnings. At this point you have to introduce plugins to the IDE to get more warnings. Now it gets a bit tricky, because every team member needs to have the same IDE configuration.
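A common way around this is to keep the plugin's rules file in version control and have everyone point their IDE at it. As a minimal sketch, assuming the team uses the CheckStyle plugin, a shared checkstyle.xml could look something like this (the rule selection here is just an illustration, not our actual setup):

    <?xml version="1.0"?>
    <!DOCTYPE module PUBLIC
        "-//Puppy Crawl//DTD Check Configuration 1.2//EN"
        "http://www.puppycrawl.com/dtds/configuration_1_2.dtd">
    <!-- Shared rules file, checked into version control so every team
         member's CheckStyle plugin reports exactly the same warnings. -->
    <module name="Checker">
      <module name="TreeWalker">
        <module name="UnusedImports"/>
        <module name="EqualsHashCode"/>
        <module name="EmptyBlock"/>
      </module>
    </module>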
Continuous integration reports
This approach works fine in combination with the IDE warnings. The key is to implement the warning policies on your CI server. Some teams practice a zero-warning policy here as well; the strictest variant is that any violation actually breaks the build. This is a good way to enforce zero warnings, but it can be a bit too strict for most teams.
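As a sketch of the strict variant, the same rules file can be run by the CheckStyle Ant task on the CI server, with the build failing on any violation. The jar path and source layout below are assumptions about your project:

    <!-- Register the CheckStyle Ant task; the jar name and resource
         file vary with the CheckStyle version you have installed. -->
    <taskdef resource="checkstyletask.properties"
             classpath="lib/checkstyle-all.jar"/>

    <target name="checkstyle">
      <!-- failOnViolation="true" makes any violation break the build -->
      <checkstyle config="checkstyle.xml" failOnViolation="true">
        <fileset dir="src" includes="**/*.java"/>
        <formatter type="plain"/>
      </checkstyle>
    </target>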
There are two strengths of CI reports. Firstly, they are centralized, so there's no need for team members to set up any configuration or plugins. Secondly, they can track metrics historically, so you can see from day to day whether the number of violations is decreasing or increasing.
Historical data is a good substitute for the zero-warning policy. You may have 1000 violations in your code-base, but the key is to discover when new ones are introduced, and catch the violating developer in the act of producing bad code.
It is also interesting to track metrics over time to see whether coding practices have any effect on the quality of the code. Did introducing pair-programming/commit-mails/TDD lead to fewer violations? Now you have proof.
Which reporting tools?
I like to divide the tools into families of metrics (a sketch of wiring one of them into the build follows the list):
- Test coverage: Crap4J, Clover (commercial), Cobertura and Emma. These tools usually also include some notion of complexity (execution paths), since how much there is to cover is a function of the code's complexity.
- Code style: CheckStyle, PMD
- Duplication: Simian (commercial), PMD
- Static code analysis: FindBugs, JavaNCSS, JDepend, Complexian
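To give a feel for what wiring one of these into the build looks like, here is a rough sketch of the FindBugs Ant task; the paths and the findbugs.home property are assumptions, and the details vary with the FindBugs version:

    <!-- Register the FindBugs Ant task; findbugs.home points at the
         local FindBugs installation. -->
    <taskdef name="findbugs"
             classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
             classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

    <target name="findbugs">
      <!-- Analyse the compiled classes and write an XML report for the CI server -->
      <findbugs home="${findbugs.home}" output="xml"
                outputFile="reports/findbugs.xml">
        <class location="build/classes"/>
        <sourcePath path="src"/>
      </findbugs>
    </target>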
Which IDE?
Both Eclipse and IDEA have a good number of built-in warning policies, and most of the tools above come as plugins for both.
Ant or Maven?
Most of the tools above come as either Maven reports or Ant tasks. Even though I'm a big Maven fan, I found it quite easy to set up all the tools with Ant. The Panopticode project, which aims to bundle a number of metrics tools into one easy setup, helped me a lot in figuring out how to structure this in our existing Ant project. If you've got a Maven project, introducing a report should be as easy as five lines of XML in your pom.xml.
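For example, pulling a FindBugs report into a Maven 2 build is roughly this addition to pom.xml (the plugin version is left out here; you would normally pin it):

    <reporting>
      <plugins>
        <!-- Adds a FindBugs report to the generated Maven site -->
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
        </plugin>
      </plugins>
    </reporting>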
Run it nightly
Finally, I found it best not to run these metrics after every commit. They take too long, and nobody inspects the change in the number of warnings from build to build. Instead, we run the metrics nightly, so every morning we can see that there are X more or fewer violations in the code base. I'm really happy with how well this ended up working, and I hope it lends you some inspiration to try the same.
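In practice this can be as simple as one aggregate Ant target that the nightly CI job invokes. The individual report targets named here (checkstyle, findbugs, coverage) are assumptions about how your build file is organized:

    <!-- Aggregate target for the nightly CI job; runs all metrics reports in one go -->
    <target name="metrics"
            depends="checkstyle, findbugs, coverage"
            description="Generates all code metrics reports (run nightly)"/>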
Happy measuring!
Thanks for a nice post, Thomas!
A zero-warning policy on the CI for an already-legacy system might, as you say, not be a positive thing. It will probably demotivate people rather than inspire them.
I would check the metrics and take it up with a team member one on one, during lunch or some other time when you can catch a developer alone. I would ask that person for help with the problem. Maybe this person will find some way of helping you without you bringing it up in front of everyone. Maybe you could ask more than one person as well? Have another coffee later that day with another person?
Having a goal is also important. Why is it being measured? What do we achieve by having a zero-warning policy? And are we measuring what we gain by it?
Thanks again for a nice post :)
Thommy
Thanks for the comment, Thommy.
I don't think a ZWP is particularly demotivating, or the opposite. It's more of a focus thing. If you mandate a ZWP, you tell the team you're going to have a strict no-new-warnings policy, and then introduce new warning rules one by one. A ZWP means turning off *most* of the warnings in the system so you get down to zero. It's easy and clean, but then again you shut your eyes to all the warnings that are not enabled (until you one day enable them again, but that will take a while).
We haven't defined any goals around the metrics yet, except that (a) we want higher test coverage, and (b) we want to avoid FindBugs bugs. These are general guidelines rather than hard goals.
I realize that static code analysis is a bit dangerous. It's a very one-dimensional way to criticize code, and if you have a problem with a high average complexity per method, for instance, you're not going to get anywhere by complaining about that number. You're better off realizing that there may be an underlying problem: perhaps people don't dare to extract methods and write more object-oriented code.