
The Sweet Spot of Docker

I just stumbled across the thread "Ask HN: What is the actual purpose of Docker?".

After using Docker more and more over the last few months, my answer has gradually changed. It used to be hype-driven, with "immutable infrastructure", "portability" and the like. Now it's more practical: I can say concretely what the benefits are for us.

My favorite answer comes down to Docker being a standardized way of deploying and running applications.

The old way of deploying our software was complex with a taste of chaos; with the introduction of Puppet (or your configuration management tool of choice) it became managed, but complicated. I'm hoping Docker will nudge it further towards the simple quadrant.

How we used to deploy (and still do) - most of these are done through home-made shell scripts we distribute using Puppet:

  • Installing Debian packages (mostly standard packages, sometimes from 3rd party repositories)
  • Dropping WAR files into Tomcat (application server)
  • Expanding tar.gz files with Java applications embedding Jetty (application server) and home-made init/service-scripts

On top of that, some extra configuration is again provided by Puppet.

Our scripts handle downloading artifacts from Maven repositories, restarting application servers, running the services, and managing PID files and log files. There are always some variations from application to application.

So in order to get an application running on a new server, we'd do this:

  1. Acquire the server
  2. Install OS and provision environment using Puppet
  3. Include deployment scripts for downloading and setting up the application
  4. Include service scripts for the application (start, stop)
  5. Run the deployment scripts and start the application
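A home-made deploy script of the kind described above might look roughly like this. This is a minimal sketch: the repository URL, Maven coordinates, paths and service name are made up for illustration, not our actual setup.

```shell
#!/bin/sh
# Sketch of a home-made deploy script (hypothetical names throughout).

# Build the download URL following the standard Maven repository layout:
# groupId dots become slashes, then artifactId/version/artifact-version.jar
maven_artifact_url() {
  repo="$1"; group="$2"; artifact="$3"; version="$4"
  group_path=$(echo "$group" | tr '.' '/')
  echo "$repo/$group_path/$artifact/$version/$artifact-$version.jar"
}

# Download the artifact, swap it into place, restart the service.
deploy() {
  url=$(maven_artifact_url "https://repo.example.com/releases" \
                           "com.example" "myapp" "1.2.3")
  curl -fsSL -o /opt/myapp/myapp.jar "$url"
  service myapp restart
}

maven_artifact_url "https://repo.example.com/releases" "com.example" "myapp" "1.2.3"
# → https://repo.example.com/releases/com/example/myapp/1.2.3/myapp-1.2.3.jar
```

Every application ends up with its own slight variation of a script like this, which is exactly the maintenance burden described above.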

With Docker, we do this:
  1. Acquire the server
  2. Install OS, set up Docker, and log into the Docker registry (using Puppet)
  3. docker pull our application image
  4. docker run our application image
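In concrete commands, steps 3 and 4 might look like this. The registry host, image name and tag are hypothetical:

```shell
docker pull registry.example.com/ourcompany/myapp:1.2.3
docker run -d --name myapp --restart=always \
  -p 8080:8080 \
  registry.example.com/ourcompany/myapp:1.2.3
```

The same two commands work for any application we build, which is the whole point.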

It looks similar, and it's not a drastic change. But we are saving a couple of steps:

  • We don't have to write and distribute the deploy script for the application.
  • We don't have to nurse the service scripts for the application.

Docker provides the above routines for us. And we can use the same routine whether it's a Java application built with appassembler, a Tomcat with a Grails application in it, a database, some simple executable or a cronjob. I always wanted something like appmgr for fixing this for my Java applications, but Docker solves it for everything.
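For a Java application embedding Jetty, for example, the image definition can be very small. This is a hedged sketch: the base image, paths and jar name are assumptions, not our actual setup.

```dockerfile
# Minimal sketch for a self-contained Java app embedding Jetty.
FROM openjdk:8-jre
COPY target/myapp.jar /opt/myapp/myapp.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

A Tomcat/Grails application, a database or a cronjob would have a different Dockerfile, but the resulting image is pulled and run in exactly the same way.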

We do have to provision the container's parameters/configuration, but at least this is a uniform step no matter what application we're talking about.
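Provisioning those parameters then becomes a matter of passing environment variables, ports and volumes to docker run, in the same way for every application. The values below are hypothetical:

```shell
docker run -d --name myapp \
  -e DATABASE_URL=jdbc:postgresql://db.internal/myapp \
  -e JAVA_OPTS="-Xmx512m" \
  -v /var/log/myapp:/opt/myapp/logs \
  -p 8080:8080 \
  registry.example.com/ourcompany/myapp:1.2.3
```

Whether the container is a Java service or a database, the configuration surface is the same set of flags.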

Of course, it's a lot of work to dockerize your infrastructure, and if it were only for the sake of the benefits above, it might not be worth it. As often mentioned in the HN discussion, Vagrant is a much more helpful tool for gaining these benefits from a developer's perspective, but Vagrant doesn't help you deploy software onto the real servers. So right now we've got Vagrant recipes that use Puppet to install Docker (see the routine above, replacing "acquire the server" with "vagrant up").
