
Git+SVN #5: Centralized Git-SVN Mirror

This post is part of a series on Git and Subversion. To see all the related posts, screencasts and other resources, please click here

Another episode on how to live with Git and Subversion in parallel:

Only a few days left till GearConf, where I will be repeating the exercise, adding all sorts of useful hints and tips on the way.

NOTE: At the end of the cast, I presented this little shell-script that I normally use for committing:

git update-ref refs/remotes/git-svn refs/remotes/origin/master
git svn dcommit

Some more background:

Normally, git svn dcommit updates the refs/remotes/git-svn ref by itself after committing to Subversion.

However, when I first do a git pull from the bare repo, fetching the new commits via the "pure" Git command, no SVN refs are updated. Example:

Let's say bob commits a change. John then updates his repo:

tfnico@flint:~/john/website/>git pull --rebase
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 6 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done.
From /Users/tfnico/john/../git-repos/website
884f657..1cb7f98 master -> origin/master
First, rewinding head to replay your work on top of it...
Fast-forwarded master to 1cb7f98dbcc6fd9351108021e3ab9aa29a6bcb6a.
tfnico@flint:~/john/website/>vim README.txt
tfnico@flint:~/john/website/>git commit -a -m "Fixed readme again."
tfnico@flint:~/john/website/>git svn dcommit
Committing to file:///Users/tfnico/svn-repos/company-repo/website ...
Transaction is out of date: File '/website/' is out of date at /opt/local/libexec/git-core/git-svn line 572

See? We can't push back to SVN, because the ref is out of date. This is where the update-ref comes into play:

tfnico@flint:~/john/website/>git update-ref refs/remotes/git-svn refs/remotes/origin/master
tfnico@flint:~/john/website/>git svn dcommit
Partial-rebuilding .git/svn/refs/remotes/git-svn/.rev_map.748a8128-3b48-42b3-854a-26eb1451c56d ...
Currently at 8 = fe775358fcec6db0cc130f2377549c1cc5668400
r9 = a72a3e8c37f3fa174c5ec7464ab97a8fddbf4652
r10 = 1cb7f98dbcc6fd9351108021e3ab9aa29a6bcb6a
Done rebuilding .git/svn/refs/remotes/git-svn/.rev_map.748a8128-3b48-42b3-854a-26eb1451c56d
Committing to file:///Users/tfnico/svn-repos/company-repo/website ...
Committed r11
r11 = c019fc06ad36b06ef644518e85085da653335fb9 (refs/remotes/git-svn)
No changes between current HEAD and refs/remotes/git-svn
Resetting to the latest refs/remotes/git-svn

The dcommit now succeeds. Remember: if you pull plain Git style, the SVN refs are not touched, even when the commits you pull originally came from SVN. Perhaps this is a shortcoming in git-pull, but for now we have to live with this workaround.

Another important note: John uses --rebase for his pull because he has local commits, and he wants to keep history linear for Subversion's sake, as we discussed in previous episodes. If you have local commits, always pull from the bare repo with --rebase.
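For day-to-day use, the whole round-trip above can be collected into one little script. This is just a sketch of the commands already shown in this post; it assumes the setup from this series, with the bare repo as the "origin" remote and master as the mirrored branch:

```shell
#!/bin/sh
# svn-commit.sh (sketch): commit local work back to Subversion.
# Assumes the setup from this series: a bare repo as "origin" and
# a git-svn ref at refs/remotes/git-svn tracking the SVN repository.
set -e

# Rebase local commits on top of whatever arrived in the bare repo,
# keeping history linear for Subversion's sake.
git pull --rebase

# git pull leaves the SVN refs untouched, so point the git-svn ref
# at the freshly pulled origin/master before dcommitting.
git update-ref refs/remotes/git-svn refs/remotes/origin/master

# Push the local commits back into Subversion.
git svn dcommit
```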


  1. Nice video as always.

    In your examples, you always have access to the SVN repo on the filesystem. Is it possible to use git-svn over a http svn connection?

    PS: Remember to turn off the autocompletion beep. There's a lot of "beep" going on as you complete commands on the command line.

  2. Hi Johannes, thanks for the comment!

    Yeah, I noticed the beeps got very annoying, but couldn't be bothered editing them out or re-casting. I'll turn it off now though.

    I usually use git-svn over https, so http should work fine, I believe.

  3. What is the advantage of the separation between the fetching repo and the bare repo?

    Why don't devA, ... devC use the fetching repository directly?

  4. Hi Christoph, thanks for your questions.

    When I started off with this, I didn't know why there was this separation. I just did it like that because it was outlined in the GitFAQ:

    That said, I haven't tried doing fetching and sharing in the same repo. But having this clean separation between them allowed me to make some major changes to the git-svn setup in the fetching repo without disturbing those who had already cloned the bare repo.

    Also, if someone accidentally pushes directly to the bare repo (instead of committing through SVN), they won't disturb the fetching process. I haven't seen this happen though, so I'm not sure how it would play out.

  5. I have another question because of the different recipes floating around on how to synchronize git and svn:

    Jon Loeliger describes in his book a similar setup. But he uses the fetch repository exclusively to communicate with SVN. The developers do not directly communicate with SVN.

    He suggests that one dcommits only on detached heads and describes how to do this.

    Unfortunately he is not very specific on how to synchronize the fetch repository and the bare repository.

    What is the advantage of your proposal compared to his?

  6. Hi again, thanks for the input. I came up with this setup (in desperation) because I couldn't find any other recipes on the net that tried to do something similar.

    Unfortunately I haven't read Jon's book, so I don't understand exactly how this setup works. If you have a how-to somewhere online, I would love to try it out and compare :)

  7. I do not know any online reference. You can however see a summary of it in the comment by Josh to this blog entry:

    Or if you stop by the Arithmeum and ask for me, I could hand you the relevant chapter.

  8. Aha,

    It seems this is a setup where the committers themselves are responsible for keeping the bare-repo up to date with the latest changes.

    In my setup, the bare repo is automatically updated via an SVN hook (which triggers a git svn fetch in the fetching repo and a push to the bare repo).

    So as far as I understand, Jon's setup is a bit more tedious for the developers, but necessary if you have no way to automatically update the central Git repos. We use SVN commit hooks, but maybe you don't have admin access to the SVN server? You could also use a polling process, but then you need a server to run it on.

    Jon or others might argue that my setup is flawed, because when the commit "comes back" to the committer, it would cause a conflict (or a merge commit), since the commit is already there. Luckily, Git recognizes that the local commit and the incoming one are the same, and things just work.

    I'll ask Josh about it in the comments over in that other post.
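    Such a post-commit hook boils down to something like this. Just a sketch: the gateway path below is a placeholder (borrowed from a later comment), and your SVN server's hook directory and branch names may differ:

    ```shell
    #!/bin/sh
    # SVN post-commit hook (sketch) -- the repo path is a placeholder.
    # SVN runs this after every commit; we use it to pull the new
    # revision into the fetching repo and push it on to the bare repo.
    GATEWAY=/var/git/git-svn-gateway   # hypothetical path to the fetching repo
    cd "$GATEWAY" || exit 1
    git svn fetch
    git push origin master
    ```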

  9. One thing to note with this setup is that you have to have a local Git branch with the *same name* as the one in origin to be able to dcommit, unless I'm doing something wrong. You can of course then make sub-branches in Git and merge them into that branch, which I guess is a recommended workflow anyway: having a local "master" for each remote branch.

    We will also setup an SVN hook later, but for now I simply have an alias in .bash_aliases to:

    ssh mygitserver 'cd /var/git/git-svn-gateway; git svn fetch; git push origin'
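    As an actual .bash_aliases entry, that could look like this (a sketch; the alias name is made up, the server name and path are the ones from the command above):

    ```shell
    # ~/.bash_aliases (sketch): sync the git-svn gateway on demand.
    alias svn-sync="ssh mygitserver 'cd /var/git/git-svn-gateway; git svn fetch; git push origin'"
    ```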

  10. Hi Jacob, thanks for your comment!

    I haven't used different branch names between local and remote (as far as I remember), so I can't advise there.

    We have the git svn fetch+push command running in a Hudson job which is triggered by an HTTP request. This makes it more transparent to everyone when the git svn fetch is run, and prevents "race conditions", such as multiple people ssh'ing in to sync the repos at the same time.

  11. Joerg Rosenkranz (12/9/11 17:21):

    We are using a very similar setup, modeled after the recipe from . Doing it that way, you don't need the update-ref and can git svn dcommit directly.

    The interesting part is to use the same name ("mirror" in this case) in
    git clone -o mirror ...
    git svn init --prefix=mirror/ ...

  12. Hi, Joerg. That link is really interesting! I'm a bit disappointed that I haven't found that page before.

    This prefix thing is a very nice touch. I wish I had understood that earlier. When/if I get the time, I'll try to incorporate it into my material.
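    For reference, Joerg's two commands fleshed out a bit, as I understand them. A sketch only: the Git URL is a placeholder, and the SVN URL is the one from the post:

    ```shell
    # Prefix-based mirror setup (sketch; the Git URL is a placeholder).
    # Clone the bare mirror, naming the remote "mirror" instead of "origin":
    git clone -o mirror git://mygitserver/website.git website
    cd website
    # Point git-svn at Subversion using the same prefix, so the SVN
    # tracking refs live under refs/remotes/mirror/ alongside the
    # clone's remote-tracking branches:
    git svn init --prefix=mirror/ file:///Users/tfnico/svn-repos/company-repo/website
    git svn fetch
    ```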

