
Git+SVN #5: Centralized Git-SVN Mirror

This post is part of a series on Git and Subversion. To see all the related posts, screencasts and other resources, please click here

Another episode on how to live with Git and Subversion in parallel:

Only a few days left till GearConf, where I will be repeating the exercise, adding all sorts of useful hints and tips on the way.

NOTE: At the end of the cast, I presented this little shell-script that I normally use for committing:

# realign git-svn's tracking ref with the freshly pulled mirror branch
git update-ref refs/remotes/git-svn refs/remotes/origin/master
# then push the local commits on to Subversion
git svn dcommit

Some more background:

Normally, git svn dcommit itself updates refs/remotes/git-svn after pushing to Subversion.

However, when I first do a git pull from the bare repo, getting the new commits via the "pure" git command, no svn refs are updated at all. Example:

Let's say bob commits a change. John then updates his repo:

tfnico@flint:~/john/website/>git pull --rebase
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 6 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done.
From /Users/tfnico/john/../git-repos/website
884f657..1cb7f98 master -> origin/master
First, rewinding head to replay your work on top of it...
Fast-forwarded master to 1cb7f98dbcc6fd9351108021e3ab9aa29a6bcb6a.
tfnico@flint:~/john/website/>vim README.txt
tfnico@flint:~/john/website/>git commit -a -m "Fixed readme again."
tfnico@flint:~/john/website/>git svn dcommit
Committing to file:///Users/tfnico/svn-repos/company-repo/website ...
Transaction is out of date: File '/website/' is out of date at /opt/local/libexec/git-core/git-svn line 572

See? We can't push back to SVN, because git-svn's ref is out of date. This is where the update-ref comes into play:

tfnico@flint:~/john/website/>git update-ref refs/remotes/git-svn refs/remotes/origin/master
tfnico@flint:~/john/website/>git svn dcommit
Partial-rebuilding .git/svn/refs/remotes/git-svn/.rev_map.748a8128-3b48-42b3-854a-26eb1451c56d ...
Currently at 8 = fe775358fcec6db0cc130f2377549c1cc5668400
r9 = a72a3e8c37f3fa174c5ec7464ab97a8fddbf4652
r10 = 1cb7f98dbcc6fd9351108021e3ab9aa29a6bcb6a
Done rebuilding .git/svn/refs/remotes/git-svn/.rev_map.748a8128-3b48-42b3-854a-26eb1451c56d
Committing to file:///Users/tfnico/svn-repos/company-repo/website ...
Committed r11
r11 = c019fc06ad36b06ef644518e85085da653335fb9 (refs/remotes/git-svn)
No changes between current HEAD and refs/remotes/git-svn
Resetting to the latest refs/remotes/git-svn

The dcommit now succeeded. Remember: if you pull normal git style, nothing happens to the svn refs, even when the commits you pull originally came from SVN. Perhaps this is a shortcoming of git-pull, but for now we have to use this workaround.
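Since the update-ref step is easy to forget, the whole pull-then-dcommit dance can be wrapped in one shell function. This is a sketch assuming the ref names from this post: origin/master as the mirror branch and refs/remotes/git-svn as git-svn's tracking ref; adjust if yours differ.

```shell
# Sketch: sync with the bare mirror, realign git-svn's ref, then dcommit.
# Assumes the ref names used in this post.
sync_and_dcommit() {
  git pull --rebase &&
  # git pull leaves the svn refs alone, so realign them by hand:
  git update-ref refs/remotes/git-svn refs/remotes/origin/master &&
  git svn dcommit
}
```

Drop it in your shell profile and run sync_and_dcommit instead of the two separate commands.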

Another important note: the reason John uses --rebase for his pull is that he has local commits, and he wants to keep history linear for Subversion's sake, as discussed in previous episodes. If you have local commits, always pull from the bare repo with --rebase.
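To make forgetting --rebase impossible, you can bake it into the clone's configuration (the equivalent one-liner would be git config branch.master.rebase true). A sketch of the relevant .git/config fragment:

```
[branch "master"]
    remote = origin
    merge = refs/heads/master
    rebase = true
```

With rebase = true set, a plain git pull on master behaves like git pull --rebase.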


  1. Nice video as always.

    In your examples, you always have access to the SVN repo on the filesystem. Is it possible to use git-svn over an HTTP SVN connection?

    PS: Remember to turn off the autocompletion beep. There's a lot of beeping going on as you complete commands on the command line.

  2. Hi Johannes, thanks for the comment!

    Yeah, I noticed the beeps got very annoying, but couldn't be bothered editing them out or re-casting. I'll turn it off now though.

    I usually use git-svn over https, so http should work fine, I believe.

  3. What is the advantage of the separation between the fetching repo and the bare repo?

    Why don't devA, ... devC directly use the fetching repository?

  4. Hi Christoph, thanks for your questions.

    As I started off with this, I didn't know why there was this separation. I just did it like that because it was outlined in the GitFAQ:

    That said, I haven't tried doing fetching and sharing in the same repo. But this clean separation allowed me to make some major changes to the git-svn setup in the fetching repo without disturbing those who had already cloned the bare repo.

    Also, if someone accidentally pushes directly to the bare repo (instead of committing through svn), they won't disturb the fetching process. I haven't seen this happen, though, so I'm not sure how it would play out.

  5. I have another question because of the different recipes floating around on how to synchronize git and svn:

    Jon Loeliger describes a similar setup in his book. But he uses the fetch repository exclusively to communicate with SVN; the developers do not talk to SVN directly.

    He suggests that one dcommits only on detached heads and describes how to do this.

    Unfortunately he is not very specific on how to synchronize the fetch repository and the bare repository.

    What is the advantage of your proposal compared to his?

  6. Hi again, thanks for the input. I came up with this setup (in desperation) because I couldn't find any other recipes on the net that tried to do something similar.

    Unfortunately I haven't read Jon's book, and I don't understand exactly how this setup works. If you have a how-to somewhere online, I would love to try it out and compare :)

  7. I do not know of any online reference. You can, however, see a summary of it in the comment by Josh on this blog entry:

    Or if you stop by the Arithmeum and ask for me, I could hand you the relevant chapter.

  8. Aha,

    It seems this is a setup where the committers themselves are responsible for keeping the bare-repo up to date with the latest changes.

    In my setup, the bare repo is automatically updated via an SVN hook (which triggers a git svn fetch in the fetching repo and a push to the bare repo).

    So as far as I understand, Jon's setup is a bit more tedious for the developers, but necessary if you have no way of automatically updating the central Git repos. We use SVN commit hooks, but maybe you don't have admin access to the SVN server? You could also use a polling process, but then you need a server to run it on.
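    For the curious, the hook is nothing fancy. Here is a sketch of it (the paths are from my setup and purely illustrative); the snippet writes the example to a temp file, since on the real server it would live in the SVN repository's hooks/post-commit:

```shell
# Sketch of the SVN post-commit hook (paths hypothetical).
cat > /tmp/post-commit.sketch <<'EOF'
#!/bin/sh
# run inside the fetching repo: pull the new revision(s) from SVN,
# then publish them to the bare repo the developers clone from
cd /var/git/fetching-repo &&
git svn fetch &&
git push origin master
EOF
chmod +x /tmp/post-commit.sketch
```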

    Jon or others might argue that my setup is flawed, because when the commit "comes back" to the committer, it will cause a conflict (or a merge commit), since the commit is already there. Luckily, git recognizes that the local commit and the incoming one are the same, and things just work.

    I'll ask Josh about it in the comments over in that other post.

  9. One thing to note with this setup is that you have to have a local git branch with the *same name* as the one in origin to be able to dcommit, unless I'm doing something wrong. You can of course then make sub-branches in git and merge them into that branch, which I guess is a recommended workflow anyway: having a local "master" for each remote branch.

    We will also set up an SVN hook later, but for now I simply have an alias in .bash_aliases for:

    ssh mygitserver 'cd /var/git/git-svn-gateway; git svn fetch; git push origin'

  10. Hi Jacob, thanks for your comment!

    I haven't used different branch names between local and remote (as far as I remember), so I can't advise there.

    We have the git svn fetch+push command running in a Hudson job which is triggered by an HTTP request. This makes it more transparent to everyone whenever the git svn fetch runs, and prevents "race conditions", such as multiple people ssh'ing in to sync the repos at the same time.

  11. Joerg Rosenkranz, 12/9/11 17:21:

    We are using a very similar setup modeled after the recipe from . Doing it that way, you don't need the update-ref and can run git svn dcommit directly.

    The interesting part is to use the same name ("mirror" in this case) in
    git clone -o mirror ...
    git svn init --prefix=mirror/ ...

  12. Hi, Joerg. That link is really interesting! I'm a bit disappointed that I hadn't found that page before.

    This prefix thing is a very nice touch. I wish I had understood that earlier. When/if I get the time, I'll try to incorporate it into my material.

