Spring TestContext Framework

I'm somewhat surprised that there hasn't been more noise in the blogosphere about Spring 2.1's TestContext Framework. I had the pleasure of working with Juergen Hoeller for a day after JavaZone. Among other things, I learned about the fancy @Autowired annotation that you can use to make your autowiring much safer. Autowiring is normally considered bad practice, but Juergen's advice is that it is now okay to use. In practice this means that all your service beans will still be declared in an applicationContext.xml file, while the dependencies and object graphs are annotated in the Java files (or at least where you see fit).
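To illustrate the split between XML bean declarations and annotated wiring, here's a minimal sketch. The OrderService and OrderRepository names are hypothetical; the point is that the bean is still declared in applicationContext.xml, while the dependency itself is expressed with @Autowired in the Java class:

```java
import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical service bean, declared as <bean id="orderService" .../>
// in applicationContext.xml. Only the wiring lives in the Java file.
public class OrderService {

    // Injected by type from the beans declared in applicationContext.xml.
    // Unlike classic autowiring, this fails fast at context startup if
    // no matching bean can be found, which is what makes it safer.
    @Autowired
    private OrderRepository orderRepository;

    public void placeOrder(Order order) {
        orderRepository.save(order);
    }
}
```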

After this I had a deeper look into the features of Spring 2.1. Note the point about "our next-generation Spring TestContext Framework with support for JUnit4". Now this is cool.

Why we need it (or something like it)

For a long while I (and many of my colleagues and peers) have been annoyed by the difficulty of managing tests as their number grows. There hasn't been any good framework around for doing any kind of categorization of tests. Each project would have to invent its own set of hacks to control its test suites, be it a TestFactory picking up system variables, or different Maven profiles and source folders for different categories of tests. In short, inventing some sort of convention for separating tests: putting them in different folders, or relying on naming conventions.

Have you ever been in a project where you and your team dropped automated testing because you couldn't manage the mass of tests? Well, tools are starting to appear now that can save your hiney.

TestNG solves this. Why don't you just use that?
The first solution that many have become aware of is TestNG. I really like its features, but according to Alex's presentation at JavaZone, Maven's Surefire plugin still has issues running TestNG. Also, Eclipse (our standard IDE) has built-in support for JUnit 4. And finally, Spring comes with a bunch of lovely standard test base classes that make your integration tests nicer. Oh, wait. You don't want your test to inherit from a Spring class? Read on to find out. Final reason: everyone knows JUnit, and not too many know TestNG (sorry, Alex..).

There's a new kid in town
The Spring TestContext Framework introduces a wide range of new annotations for testing, my absolute favourite being @IfProfileValue, which you can use to specify the testing environment (in other words, to implement the test categorization I spoke of above). There is only one ProfileValueSource implementation out of the box (the logic that decides which profile is active), which checks system properties, but it is easy to plug in your own ProfileValueSource class containing the logic for deciding which tests to run.
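Here's a rough sketch of what that categorization looks like in practice. The "test-group" key and "integration" value are made-up names for illustration; the custom ProfileValueSource here just delegates to system properties, like the default SystemProfileValueSource does, but you could read a config file or environment instead:

```java
import org.springframework.test.annotation.IfProfileValue;
import org.springframework.test.annotation.ProfileValueSource;
import org.springframework.test.annotation.ProfileValueSourceConfiguration;

// Custom source containing the logic for deciding which profile is active.
class MyProfileValueSource implements ProfileValueSource {
    public String get(String key) {
        // Trivial example: fall back to system properties, just like the
        // default SystemProfileValueSource. Replace with your own logic.
        return System.getProperty(key);
    }
}

@ProfileValueSourceConfiguration(MyProfileValueSource.class)
public class DatabaseIntegrationTest {

    // Only runs when the resolved value of "test-group" is "integration",
    // e.g. when the build is started with -Dtest-group=integration
    @IfProfileValue(name = "test-group", value = "integration")
    @org.junit.Test
    public void talksToRealDatabase() {
        // ...
    }
}
```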

As promised, here's how to avoid extending Spring's test base classes: use the @RunWith(SpringJUnit4ClassRunner.class) annotation instead.
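A minimal sketch of what that looks like (the OrderService bean and the XML file name are assumptions for illustration). The class extends nothing; the JUnit 4 runner takes care of loading the context and injecting the fixture:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// No Spring base class in sight: the runner drives context loading
// and dependency injection into the test instance.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "/applicationContext.xml")
public class OrderServiceIntegrationTest {

    @Autowired
    private OrderService orderService; // hypothetical bean under test

    @Test
    public void contextWiresTheService() {
        // the fixture is injected before each test method runs
        org.junit.Assert.assertNotNull(orderService);
    }
}
```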

You get the regular nifty annotations like @Repeat (guess what it does) and @Timed (fails if the test exceeds a time limit; different from JUnit 4's timeout attribute), plus a bunch of Spring annotations for controlling transactions and Spring contexts. You don't need @ExpectedException any more, because JUnit 4 gave us the @Test(expected=RuntimeException.class) functionality, and the Spring people recommend that you use JUnit's way of doing it.
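A quick sketch of those three in one test class (note that Spring's @Repeat and @Timed are only honored when the test runs under the Spring runner):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.Repeat;
import org.springframework.test.annotation.Timed;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
public class AnnotationShowcaseTest {

    // Runs the test method ten times in a row
    @Repeat(10)
    @Test
    public void possiblyFlakyOperation() { /* ... */ }

    // Fails if the total running time (repetitions included) exceeds
    // one second. JUnit's @Test(timeout=...) differs: it applies per
    // invocation and aborts the test mid-flight.
    @Timed(millis = 1000)
    @Test
    public void fastEnough() { /* ... */ }

    // Plain JUnit 4, preferred over Spring's @ExpectedException
    @Test(expected = IllegalStateException.class)
    public void failsAsExpected() {
        throw new IllegalStateException("boom");
    }
}
```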

"Caching between tests" and "injection of test fixtures". Sounds lovely, doesn't it? Sounds a lot like the pitch for TestNG, but hey, it's JUnit :)

There is also a pretty cool @ContextConfiguration annotation for declaring which Spring bean XML files you want to use for a particular test.
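The file names below are assumptions, but this sketches how @ContextConfiguration lets one test pull in several bean definition files. A nice side effect: test classes that declare the same configuration share one cached application context instead of rebuilding it per class:

```java
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// Hypothetical file names: combine the production context with a
// test-only override, e.g. an in-memory DataSource definition.
@ContextConfiguration(locations = {
    "/applicationContext.xml",
    "/test-datasource.xml"
})
public class MultiFileContextTest {
    // ...
}
```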

Actually, instead of me repeating the whole documentation, why don't you take a look for yourself. Note that I haven't actually played around with this properly yet (well, we got as far as getting the M4 build into our Maven repository) because I left my laptop at work. I'm gonna try setting up the machine here tonight and see if I can get something done before bedtime.

By the way, some shameless company promotion: My employer is doing a course on Spring 2 with Interface21 in Oslo in the beginning of November. Per is a really friendly guy, so don't hesitate to give him a call and ask about it if you're curious :)


  1. Anonymous (2/10/07 16:09)

    Hi Thomas,

    I think you are actually the first person in the community to blog about the new Spring TestContext Framework. You even beat me to it. ;)

    I'm glad you like it so far, and please do let me know how it goes once you finally get a chance to try it out for yourself.

    Also, since 2.1 M4, I've added base support classes for TestNG as well. Thus you can now use the TestContext Framework with JUnit 3.8, JUnit 4.4, or TestNG out of the box. In addition, note that a few other minor things have changed and/or improved since the 2.1 M4 release. So, have a look at one of the recent snapshots for 2.5 RC1.



    p.s. FYI: the link to SpringJUnit4ClassRunner should be:

  2. Anonymous (4/10/07 21:59)

    I have mixed feelings about how dependent Spring seems to be getting on annotations.

    I really appreciate having as much configuration as possible extracted to a central location. I hope all the new features they develop continue to support XML-based configuration.

