
Legacy Code, Broken Windows and Code Quality

It's been a long time since my last post. It's not that I have been particularly busy, I just haven't had anything *that* interesting to write about. Well, this might be interesting:

I know this legacy system. Well, it's not that much of a legacy system; it's barely a couple of years old. But it sure didn't take long to become difficult to maintain. Maybe you've seen similar scenarios... A big team of developers and consultants with lots of funding creates a big-bang super solution. Struggling to reach a deadline, at some point quality was left behind (or postponed). We're talking compiler warnings, copy/paste code, hacks, quick'n'dirty (quoting Uncle Bob: There is no quick'n'dirty. Dirty means slow.), bad object-oriented design, the lot.

Well, I have to say that 80% of the code was golden: agile methods, top-of-the-line modern open source lightweight technologies, test-driven development, code reviews, continuous integration, etc. But that last 20% of patchwork really was the start of things getting worse. We did have this great plan prepared for how we would get things right as soon as we reached the deadline. Oh yes, we were going to make everything right again.

So we made the deadline and the project was a huge success. And what happened? The business people were happy and shooed all the consultants out. One maintainer was left behind to do "critical bug fixes". The broken windows remained, and the make-things-right-again plan was buried.

So a year went by. The business decided the solution needed new features. The maintainers desperately tried to keep up with business demands, and again consultants were hired to satisfy the need for new functionality. This time, with developers unfamiliar with the existing codebase, more patchwork was done. The coding standards, patterns and technologies were unknown, unclear or inconsistent.

The existing broken-window code had a devastating effect on the work done by the new consultants, who of course were under the same pressure from the business people, if not more. At the same time the maintainers were trying to do bug fixes in parallel, so the code was branched for several months. Upon merging again, the codebase suffered from a lot of conflicts, but still the team didn't dare roll back to the stable codebase. The merge had to be used. It took several months of heroic effort from the maintainers (again, the consultants were shooed out prematurely) to get the codebase stable again, adding up to a total of 6 months without releases in a business-critical system. That's 12 sprints the way we counted them.

I suppose scenarios like these end with the system slowly decaying into the awful legacy system everyone loves to hate, until it is finally decided to replace it with an all-new big-bang system. And so the cycle continues...

But not this time! The business has, for some mind-perplexing reason, actually realised that there is money to be saved by restoring the quality of this system! There will be a series of iterations where the focus is to rid the system of its technical debt (man, I love that term). Of course some business value will be delivered in each iteration, but hopefully this time quality tasks will be top priority.

The moral of the story is two-fold:

1) Business people can be taught the value of software quality. They can understand broken windows; it just takes a lot of time to get it into their heads. Keep trying. Use good arguments, references and your experience (if you have any), and convince them that quality is the most pragmatic and profitable way. And if the project manager won't listen, go to the person above him and tell them so. If they won't listen, find another place to work.

2) As a professional programmer, it is your duty to exercise a respectable amount of common sense, ethics and software craftsmanship. Listen to the tiny voice inside your head saying "this code stinks" and never leave the code behind before that little voice says "this code is good enough".

I've had it with people who write lousy code, calling themselves "pragmatic". From now on, I want to be an idealist :)

I'll finish off with another Uncle Bob quote (or maybe it was Jeff Atwood, good blog btw):

Always leave the code better than when you found it.

PS: This post was actually going to be about how I wriggled around with Scala yesterday, getting my project to build Scala with Maven and still work on it from Eclipse (got it working somewhat, but not 100%). Apparently I have forgotten everything I learned about functional languages at uni (ML), but I have a feeling this approach *might* be the easiest path into the concurrency landscape to come for us Java devs.
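For the curious, the Maven part boils down to wiring the Scala compiler plugin into the build. Here's a minimal sketch of the relevant pom.xml pieces, assuming the org.scala-tools maven-scala-plugin and a Scala 2.7-era library version; your coordinates and versions may well differ:

```xml
<!-- Sketch only: plugin coordinates and versions are assumptions, not my exact setup. -->
<dependencies>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.7.2</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.scala-tools</groupId>
      <artifactId>maven-scala-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>     <!-- compiles src/main/scala -->
            <goal>testCompile</goal> <!-- compiles src/test/scala -->
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```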


  1. Very nice post, Thomas!

    Quality matters! It's never too late to make it better, even if it's just a tiny little bit.

    Uncle Bob also said: Never check in bad code!



