
Summer thoughts

Today I'm taking the train back down south for my summer vacation. Since it's been a while since my last post, I'll jot down some more thoughts here before I'm off. I'm planning to spend a good bit of the summer reading through Bob Boiko's CMS bible, as well as putting together a scheduler application with JSF, and perhaps gluing it together with Magnolia. I'll also be making some much-needed custom Magnolia templates.

I was reading the CMS bible the other night, and came across a passage about sorting content into binary and "normal" files. Binary files are "all 1's and 0's", like pictures and media files, but also files in a closed format: a Word file or an Excel workbook is a binary file. During the knowledge management course this spring, I learned that information whose contents are already at an atomic level (there is no easy way to aggregate elements out of a Word document) is what we call unstructured data. An email, whose content contains no tags outside the mail header (subject, recipient, sender, etc.), is also unstructured. Unstructured data is bad for content management because you can't tell from the outside what the content is about.

But back to proprietary formats like Word: it occurred to me that the risk of a file or document being unstructured is connected to whether or not it is based on an open standard. XML (depending on whether you use it right) is structured. PDF is structured.
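
Just to illustrate what I mean by structured: here's a minimal sketch (plain JAXP, with a made-up <article> snippet) of how any tool can pull the pieces it cares about out of an XML content item:

    import java.io.ByteArrayInputStream;

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;

    public class XmlPeek {
        public static void main(String[] args) throws Exception {
            // A made-up content item marked up with XML
            String xml = "<article><title>Summer thoughts</title>"
                       + "<body>Some rambling about content management.</body></article>";

            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

            // The format is open, so we can pull out exactly the element we care about
            String title = doc.getElementsByTagName("title").item(0).getTextContent();
            System.out.println("Title: " + title);
        }
    }

Try doing the same against a .doc file without a vendor SDK and you see the problem.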

I'm sure Word has some sort of structure as well, but it's hard for the outside world to get at and use this structure, since it is a closed format (there are some projects working out how to read MS's formats; I have personally been using Apache POI for reading Excel workbooks, and there's a small example further down). The POI team doesn't seem to be too pleased with reverse engineering and reading closed formats, judging by package names like
  • HSSF - Horrible Spreadsheet Format
  • HPSF - Horrible Property Set Format
  • DDF - Dreadful Drawing Format
Amusing, but it also paints a serious picture of what it is like to deal with closed formats. An MS engineer might have the right tools and frameworks available for accessing a Word file as structured data. The rest of the world, however, does not.
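
For the record, here's roughly how I've been reading a workbook with POI (a minimal sketch; the file name is made up, and exactly how cell values print depends a bit on your POI version):

    import java.io.FileInputStream;
    import java.util.Iterator;

    import org.apache.poi.hssf.usermodel.HSSFCell;
    import org.apache.poi.hssf.usermodel.HSSFRow;
    import org.apache.poi.hssf.usermodel.HSSFSheet;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;

    public class ExcelDump {
        public static void main(String[] args) throws Exception {
            // "workbook.xls" is just a placeholder for whatever .xls file you have lying around
            FileInputStream in = new FileInputStream("workbook.xls");
            HSSFWorkbook workbook = new HSSFWorkbook(in);
            HSSFSheet sheet = workbook.getSheetAt(0);

            // Walk every row and cell in the first sheet and dump the values
            for (Iterator rows = sheet.rowIterator(); rows.hasNext();) {
                HSSFRow row = (HSSFRow) rows.next();
                for (Iterator cells = row.cellIterator(); cells.hasNext();) {
                    HSSFCell cell = (HSSFCell) cells.next();
                    System.out.print(cell.toString() + "\t");
                }
                System.out.println();
            }
            in.close();
        }
    }

Not pretty, but it does get structured data out of a closed format, which is exactly the kind of heavy lifting you shouldn't have to do in the first place.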

Closed formats automatically fall into the bag of unstructured/binary data in a CMS.

Enjoy your summer!
