
Projects are bad for software.

This post is mildly inspired by discussions at this year's Smidig conference (Oslo's agile conference), sparked by Marcus Ahnve's talk about "totality outside the project", roughly translated. I didn't see the talk myself (I'm watching it right now, though), but when I heard the title of the open space announced, I immediately recognized the idea as something I've been circling for years without being able to identify properly.

In my last post, I wrote about how I learned about project management at university, and how it was taught as the way to build software. In my opinion, that is not true at all.

Over the handful of years I've been working with software, in both project and maintenance mode, I can just about sum up my experience in one sentence:

Projects are bad for software.

There is one exception (discussed below), but in all practical cases, I find this to be the reality, and if you think about it a little bit, I'm sure you'll agree. Here are some points that come to mind:

Projects create irregularities in software
Twice I've worked on larger projects that involved improving the functionality of existing software. Basically, the process involves boosting the maintenance team (0-2 people) with an additional team of 5-10 (typically consultants like myself). The super-team introduces lots of fancy new technologies and methods and grows new functionality into the codebase over some months. Unfortunately, the rushed nature of the project doesn't give the fresh team time to understand the existing domain model, and instead of morphing the code into doing the new stuff, they pack things on top of existing code that does things differently. Now we might have two different technologies that do the same thing, or two domain objects that are similar but have no notion of re-use. Broken windows, if you will.

Projects push people in, and afterwards yank them out
For some reason, after project delivery the team is cut down to the minimum required to keep the software operational. The super-team is pretty happy with the work they did, because they understand the new stuff perfectly. Unfortunately, the ones left maintaining the software probably weren't able to absorb the new domain bits and technologies in time, and are stuck maintaining something they don't understand. This is bad knowledge management.

Projects are meant to be temporary, but software is long-lasting
The majority of work on software is done after the initial development. It is maintained, monitored, bug-fix-patched, upgraded, expanded and extended. Throughout this phase, we spend a lot of time fighting bad stuff that was forced into the code-base during intensive project development. All the developers who contributed during the project, and who would be a great aid in cleaning up the mess, are long gone. Besides, the maintainers are too busy with bug fixes and small functionality changes from the business side, so there are no resources left to bring quality back to a respectable level.

Projects are based on the notion that software goes into production and lives happily ever after. This is not the case! Start measuring how many change requests come in from your IT department or business side in just a month.

Yet we have created an artificial boundary between software development and software maintenance, out of some twisted desire that maintaining software should be cheaper than developing it. Does anyone seriously believe that changing software that is running in production is cheaper than changing software in development? If so, they have probably never worked with a database, or with software at all.

So when is a project the right thing?

There is only one scenario that justifies the use of a project as a form of organization: termination of a system. You are migrating data away from a legacy database, shutting down a server, quitting the use of a service provider, etc. There is a clear deadline and date in sight for when the development will be finished, and after that, all the code that was written can be thrown out and forgotten.

(Oh, a second case: Cancelled projects.)

There are some scenarios like these taking place, but often they are part of something larger, like building a new system or moving to a new service provider, and these larger investments are meant for a longer life, and do not (or should not) use the project as a form of organization.


So how come we ended up doing projects?

I think it was a sort of panic reaction from leaders trying to control quickly escalating IT costs: time-boxing, money-boxing (budgeting). There's nothing wrong with budgets or time-boxes. They are perfect planning tools that ask the team the right questions.

But the business side wanted to be more efficient, so they put cheap labour on maintenance (because they think it is easy), and smart, expensive labour on development (because it is hard). The truth is, development and maintenance are equally hard. In fact, they are the same thing, and by separating them we are only doing one group of people a favour: consultants. It's an entire industry that feeds on the fact that its customers are unable to manage projects and maintenance on their own.

I could go on about how some large organizations I know of control and drive development projects through intensive consultant development, and afterwards dump the results on the maintenance team in the IT department. You've probably seen this yourself, and how much frustration and how many problems it creates.

Now let us do this instead
Instead of hiring ten project developers and one maintainer, hire three developer/maintainers. Believe me, they will deliver just as much functionality, maybe not in the first three months, but over time. A stable team of three developers will give you a higher velocity over a whole year than boosting the team to ten developers for the first three months.

It's cheaper too. Here's the math:

10 devs for three months + 1 maintainer for nine months = 39 work months. And a lot of these are probably expensive consultants as well.

3 devs for twelve months = 36 work months.
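
To make the comparison explicit, here's a minimal back-of-the-envelope sketch in Python. The team sizes and durations are the illustrative numbers from above, not data from a real project:

```python
# Back-of-the-envelope comparison of the two staffing models.
# Team sizes and durations are the illustrative numbers from this
# post, not measurements from a real project.

def work_months(phases):
    """Sum developers * months over a list of (developers, months) phases."""
    return sum(devs * months for devs, months in phases)

project_model = [(10, 3), (1, 9)]  # 10 devs for 3 months, then 1 maintainer for 9
stable_model = [(3, 12)]           # 3 developer/maintainers all year round

print(work_months(project_model))  # => 39
print(work_months(stable_model))   # => 36
```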

Besides, a stable team of three developers will give you a stable code base with better quality, one that grows at a steady rhythm and improves with the skills of the developers.

It's time we start making consultants worth their money
Don't use them as developers. Use them as advisors, teachers, coaches. Have them assist those three developers, not code for them. And when the consultants seem to run out of new things they can teach your developers, chuck them out.

I'm afraid this post crammed a bit too many thoughts into one, but it's still a field where I only have my own limited experience to base the ideas on. I'm sure there are many other, smarter people who have thoughts on this, and I would appreciate it if you could give me some pointers in their direction.

Comments

  1. I think you are on to something here, Thomas!

    In-house personnel are often used too little in projects. The knowledge leaves as soon as the project is finished and the initial installment has been made.

    I second your opinion on having the consultants work as support for the in-house developers instead of the other way around.

    There are situations a lot more complex than the one you describe, though.

  2. Anonymous, 25/1/09 16:33

    What you're referring to is bad projects/bad Project Management, and not projects/Project Management in general.

    Your math is correct (for the 39 vs 36 months), and it's a better way, but to argue that projects shouldn't really have an end date is unrealistic (in this day and age). I agree that projects put stress on the team (because of deadlines), but how else can you achieve things?

    Now, after a project is tested and accepted (both part of project closure; this is an article focusing on these subjects), you will enter the maintenance phase. A lot of projects out there have some of the people who originally created the software supporting the client/fixing the project during this ongoing phase (of course, if you need new functionality, you can fork another project/mini-project).

    You also probably missed a very important point in your article: people do quit their jobs. So even if you have 3 of the people that were involved from the start, there's no guarantee they will remain there.


