
Getting More Agile - 2009 in Review

One of the reasons I like blogging is that writing forces me to gather insights on what I do. It's kind of like a personal retrospective. In my current situation, I'm at a bit of a standstill, where we (our team at work) have achieved what we set out to do last year, and now we are a bit unsure of what to do next.

So, before we move on to have a look at our future goals, let's do a little year-in-review, 2009:

I started working at IP Labs at the beginning of the year. Part of my role in the team was to do some agile coaching. Why did they need this? I think they were feeling some frustrations which are typical for growing teams. They came from a situation where they were small enough to master chaos, but over time, as the code-base and team grew, they needed some protocols, rituals and routines to help them keep on top of the mud. In short, agile practices.

There were times when I got really frustrated because things were moving too slowly, and the team did not adopt some practices as vigorously as I would have wanted. Now, more than a year later, I'm really proud of the team, and I think we've done really well and achieved a lot.

Below, I'll iterate through the practices we've been having a go at, and I'll link up to the Cantara agile wiki where appropriate. Here's a small TOC to start off with, as the post got a bit long:

  1. Continuous Integration
  2. Continuous Deployment
  3. Centralized artifact management
  4. Metrics
  5. Stand-ups
  6. Knowledge meetings and Retrospectives
  7. Team structure
  8. Agile methodologies
  9. Summary and wishes for the future

Continuous Integration


Before:

There was a nightly cron-job that built the whole shebang, running the unit-tests. If the build failed, the team received a mail with the console output of the build. Build breakage during the day was discovered by random SCM updates, followed by random office visits and team mails to get it fixed again. I also think there was a habit of only updating from SCM early in the morning.

Measures:

We replaced the cron-job with Hudson. We increased the frequency of the build to poll SCM every five minutes and extended the test-suite to include a full database teardown/setup, so we also discover inconsistencies in our schema (we used to have one centralized developer database, with the accompanying havoc, but now everyone can set up a local database in seconds). We also set up commit-mails, so developers get instant notification when someone does a commit.
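
To give a flavour of the database teardown/setup part, here's a rough sketch of an Ant target along those lines. Everything below (driver, URLs, credentials, paths and target names) is made up for this post; our real build is of course more involved.

    <!-- Sketch only: recreate a local test schema from version-controlled DDL scripts
         before the tests run. Names, credentials and paths are hypothetical. -->
    <target name="recreate-test-db">
      <sql driver="com.mysql.jdbc.Driver"
           url="jdbc:mysql://localhost/product_test"
           userid="test" password="test"
           classpathref="jdbc.driver.classpath"
           src="db/drop-schema.sql"/>
      <sql driver="com.mysql.jdbc.Driver"
           url="jdbc:mysql://localhost/product_test"
           userid="test" password="test"
           classpathref="jdbc.driver.classpath"
           src="db/create-schema.sql"/>
    </target>

    <!-- The test target depends on a fresh schema, so schema inconsistencies break the build. -->
    <target name="test" depends="compile, recreate-test-db">
      <junit haltonfailure="true" fork="true">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="build/test-reports">
          <fileset dir="build/test-classes" includes="**/*Test.class"/>
        </batchtest>
      </junit>
    </target>

The point is simply that the schema scripts live in version control, so a stale schema fails the build just like a broken test does.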

After:

We now discover build breakage within 30 minutes of a commit, along with a ready list of the commits that went into the build, making it easy to track down and fix the breakage. We added more machines to the Hudson cluster to increase build speed, so the different branches of our product can be built in parallel.

Continuous Deployment


Before:

We had a set of dedicated test servers in the various datacenters we have around the world. These servers were deployed to manually, on demand. In order to try out some new functionality or verify a bug, you had to get a developer to find an available test server, deploy to it manually, and then communicate back where the deployment could be found.

Measures:

We set up internal nightly deployment jobs in Hudson, one for each branch of our product. If the build is not successful, nothing is deployed. After a successful build and deployment, we run web-based integration tests to make sure things are up and running. We also set up a weekly build, which is deployed early Monday morning.
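
To illustrate, a deployment job boils down to something like the Ant targets below. This is only a sketch: the hostnames, paths and target names are invented for this post, and the real job has more steps.

    <!-- Sketch only: push the freshly built war to an internal test server.
         Uses Ant's optional scp task (needs jsch on the classpath); hosts and paths are hypothetical. -->
    <target name="deploy-internal" depends="package">
      <scp file="dist/product.war"
           todir="deploy@internal-test.example:/opt/tomcat/webapps/"
           keyfile="${user.home}/.ssh/id_rsa"
           passphrase=""
           trust="true"/>
    </target>

    <!-- The web-based integration tests then run against the freshly deployed instance. -->
    <target name="integration-test" depends="deploy-internal">
      <junit haltonfailure="true" fork="true">
        <sysproperty key="test.baseUrl" value="http://internal-test.example/product"/>
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="build/it-reports">
          <fileset dir="build/it-classes" includes="**/*IT.class"/>
        </batchtest>
      </junit>
    </target>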

After:

Each morning, QA, managers, designers, etc. can find the latest and greatest version of our product running on an internal server, with no effort needed from developers. The weekly deployment has proven helpful because it gives us an earlier version of our product to compare against if we find something wrong, and it also provides a somewhat more stable environment for our testers, etc.

Centralized artifact management (with Maven)


Before:

Every library was built with Ant and manually committed into our product's libs folder. I think that this led to less modularization, because it was a bit of a pain to externalize a module from the product. I've blogged extensively about this subject before.

Measures:

We started using Maven for building our libraries, and are currently undergoing an experiment to mavenize the build of our whole product. We set up Nexus as an artifact repository.
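
The Nexus part is mostly standard Maven plumbing, roughly along these lines (the URLs below are made up for illustration): dependency downloads get routed through Nexus via a mirror in settings.xml, and the library poms point back at it for publishing.

    <!-- In ~/.m2/settings.xml: route all dependency downloads through the internal Nexus instance. -->
    <settings>
      <mirrors>
        <mirror>
          <id>internal-nexus</id>
          <name>Internal Nexus</name>
          <url>http://nexus.example.internal/content/groups/public</url>
          <mirrorOf>*</mirrorOf>
        </mirror>
      </mirrors>
    </settings>

    <!-- In the library poms: artifacts get published back to Nexus on "mvn deploy". -->
    <distributionManagement>
      <repository>
        <id>internal-releases</id>
        <url>http://nexus.example.internal/content/repositories/releases</url>
      </repository>
      <snapshotRepository>
        <id>internal-snapshots</id>
        <url>http://nexus.example.internal/content/repositories/snapshots</url>
      </snapshotRepository>
    </distributionManagement>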

After:

We have gotten as far as making a parallel build system, so now we've got both our Maven build and our Ant build under continuous integration. It's a little bit of overhead maintaining both dependency sets, but we're hoping that after a few months we can convince the whole team to make the switch. We just need to get our whole build/deployment cycle working with Maven first, and make sure that the IDE experience does not suffer (we're testing m2eclipse extensively these days).

Metrics

Before:

We had a coding standard, including a shared set of Eclipse compiler warning settings. Introducing new warnings was a no-go (zero-warning policy). Beyond this, we didn't measure much in our code base.

Measures:

As part of the mavenization effort (see above), we hooked Sonar into our Hudson builds.
After:

With our mavenization efforts, we were recently able to make use of Sonar (which rocks, btw) instead of the old, more primitive (boring) Ant reports in Hudson. We're still not so good at making use of these reports, but perhaps putting some numbers on our complexity and test coverage will help make it clear to the team where the weak spots in our code base lie, and how they steadily get better or worse depending on our efforts.
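
The Sonar hookup itself is basically the standard Maven one: a settings.xml profile tells the Sonar Maven plugin where the server lives, and Hudson can then run "mvn sonar:sonar" as a build step. A sketch, with the server URL invented for this post:

    <!-- Sketch of a settings.xml profile pointing the Sonar Maven plugin at the Sonar server. -->
    <profiles>
      <profile>
        <id>sonar</id>
        <activation>
          <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
          <sonar.host.url>http://sonar.example.internal:9000</sonar.host.url>
        </properties>
      </profile>
    </profiles>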

Stand-ups


Before:

There were no stand-ups, only the odd rush meeting when needed.

Measures:

After creating our Java lounge, we started having stand-up meetings every morning. With 18 people, it quickly got boring; there were simply too many people to have a productive stand-up, in my opinion. We also had very little news worth communicating to everyone on a day-to-day basis, so after some weeks we reduced the frequency to twice a week: Monday morning (what will we do this week?) and Friday morning (what did we do this week?), the latter in combination with our weekly knowledge meeting (see below).

After:

Having stand-ups twice a week is much nicer, I think, although you have to support it with *something else*. Doing the Friday stand-up sitting down, as part of the knowledge meeting, gives much more room for productive discussion, and it's not the end of the world if we go beyond our designated 10-minute round.

Knowledge meetings and Retrospectives

Before:

Every Wednesday, we had a one-hour presentation by one of the team members, either about a module or feature in the product, or about some general technology (Spring, Maven, etc.). There were never any real retrospectives to speak of.

Measures:

We've kept up the great practice of knowledge meetings, although we've moved them to Friday. We've also started using them for release retrospectives and more internal discussions.

After:

We've only had one real retrospective so far, but it was really awesome: so much feedback and so many opinions from the team about how we can work differently. Having our knowledge meeting on Fridays also gives it a more relaxed, week-in-review kind of tone, I think. Wednesdays were also bad because they collided with our deployments. One outcome of our first retrospective at the end of last year was that we put some sub-structure into the team (see below).

Team structure

Before:

One big team of 18 developers. Individuals, or sometimes pairs, were appointed responsibility for a certain new feature or refactoring (sometimes even more people, depending on the size of the task). You knew (well, it's still a lot like this) that Mr. X is responsible for component Y.

Measures:

Based on said retrospective, we split the team into four sub-teams.

After:

We still do stand-ups together, but leave the sub-teams to do their own stand-ups, sit-ups, whatever. The sub-teams are completely self-organized, and can adopt whatever practices they like. My team tried Scrum for a while, but dropped it for Kanban, which two other teams have now picked up as well. The fourth team is really small and focused, so I think they manage fine without any method to speak of. Today, new features and tasks are assigned more on a sub-team basis and less on an individual basis. It's easier to bring things up for discussion in the sub-team, easier to find someone to pair-program with, and there's more ownership overall, I think.

Agile Methodologies

Before:

There was a kind of dynamic/chaotic flow in the team. People knew who did what, and who to turn to in order to get a feature done. The team manager tried to stay on top of what everyone was doing in relation to the various customers' and managers' demands.

Measures:

I tried hard to introduce Scrum to the team, but it never really took off. I now think that, management-wise, the team already had much of the agility it needed, and Scrum didn't add anything extra to the mix. The team was self-organized, and mostly well shielded from the business side. Even though our Scrum prototype project was successfully delivered, Scrum failed to charm my colleagues (it was more popular with the business/customer side). The daily stand-ups, the estimation, the planning, the burndown: it was all overkill for a team used to "just doing it".

After:

With the organization into sub-teams, we now leave the decision of whether or not to use Scrum to the sub-teams. Three of our teams have adopted Kanban boards to provide transparency of work in progress, although we still don't have any mechanisms in place for communicating progress to the business side.

Summary

Wow, that's actually a big list. Like I said, I'm really proud of us for how far we've come with getting more agile in just a year. Now this presents us with the question: what should we aim for next?

Some things I would like for us to achieve in the future:

  • More pair-programming. I've tried to do this, but I'm not so good at it myself, and it's not wildly popular with the team either (yet).
  • Coding dojos. I've wanted to introduce this for a long while, but hey, let's face it, programming in front of everybody is straight-out scary. Hope I can summon the courage some time in the next few months.
  • More test coverage. We're now at about 12% unit-test coverage (excluding the tests that use database). I'd like it to grow, say to 50% over the next couple of years.
  • More integration tests. We still depend on too much manual testing before we can make a release. I would like it if 99% of our bugs were discovered by automatic tests way ahead of the release date.
  • More transparency to the business side. I want them to be able to see what we're doing without having to ask us about it.
  • More frequent releases. Well, who doesn't want this? We currently do a bugfix release every two weeks, and a major release about every quarter. I would like to erase the difference between these two kinds of releases, increase the frequency, and automate more of the process.

That's it. Please comment with any ideas or feedback. Would be nice to hear how you have done it at your place :)

Comments

  1. Interesting. One thing needs to be clarified though. It is not really continuous deployment if you are deploying to a test environment. Real continuous deployment is about deploying to production.

  2. When it comes to communicating with the business side, I would say that the kanban boards are a good start, but if they're analog, they will incur an overhead to capture the information you want to communicate, unless the business people in question can read them directly. Digital kanban boards may be more suitable in that regard, since they are easier to aggregate and extract information from. The key concept is information radiators, so the info can be pulled on demand rather than pushed to the receiver.

    But I guess the real question is: what does the business want and need? And then take it from there and find the best tools for the job.

  3. Niklas, that is true. I stand corrected (but can't be bothered updating the post :)).

    Knut: Aye. The good thing is that we're all co-located, so we can get away with something analog, I'm certain of that. But first we have to get those real questions answered.

  4. Anonymous (22/2/10 22:59)

    How about an upper-tier kanban board that consolidates all teams and their tasks (where the in-development sub-tier is just another state)? We do this with the "MMF" features (features that are broad enough to firmly place business value on). For instance, track these features through the entire value stream (from idea, to analysis, prioritization, dev, QA, production, and other states).

    Other uses for overview boards are tracking backflow (features that move backwards, i.e. wrongly analysed features), speed lanes for bugs (as reported by the business side), and architectural/integration dependencies.

    That way the business side could join you in your twice-weekly stand-ups, e.g. to (re)prioritize the queue of features and to gain insight into your process and progress.

    Then it's just a matter of tracking this board for potential bottlenecks and setting limits on states to further optimize the flow between the sub-teams!

  5. Anonymous (22/2/10 23:10)

    Another extension you should try with your kanban boards is visualizing and measuring blockage in the flow (e.g. waiting for feedback from the business side, other teams, specialists, ...). This is very important in order to be able to limit the work in progress and optimize the entire system.

    The measuring part is easy as well: read up on using CFDs (cumulative flow diagrams) for recording/analysing lead and cycle time. You can easily gain (proper) statistical insight into the quality of the system process, far more applicable than velocity! The downside is that someone has to record it (with analog boards)!

  6. Ole Christian, thanks for your suggestions. I've already sketched out an overview board, but it has yet to be put to proper use (we have some organizational politics to figure out before we can gather proper interest from the business side).

    Also I'd like to measure more of the cumulative flow. Will try implementing your tips over the next few weeks.


