
Improving the problem description

This week is set aside for improving the chapter on the problem description. Every thesis has a "problemstilling" (Norwegian for "problem statement"): a problem which the thesis should solve, a goal, or a challenge. Currently the chapter looks a little something like this:

Challenges

What are the challenges that have pushed content management forward? What problems do IT departments suffer from today when it comes to web content? These are the issues of web management.

The issues of web content management

Content is not navigable. There is too much of it: too many web pages with too many attached documents. A corporation will often put a lot of resources into maintaining a site map and a navigation tree, but if these are made manually, it is a lot of work with no guarantee of correctness. Searching is a great shortcut for making all content available, but searching the right way is easier said than done. Does the search engine check whether the search word was spelled incorrectly? Are there synonyms of the search word which should be checked?
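
To make this concrete, here is a minimal sketch of how a search component might tolerate misspellings and expand synonyms. The indexed vocabulary and the synonym table are made up for the example, and difflib's fuzzy matching stands in for a real spell-checker:

```python
from difflib import get_close_matches

# Made-up vocabulary of indexed terms and a hand-written synonym table.
INDEXED_TERMS = ["management", "content", "navigation", "metadata"]
SYNONYMS = {"content": ["document", "asset"], "metadata": ["tags", "properties"]}

def expand_query(word):
    """Correct a likely misspelling, then add known synonyms."""
    matches = get_close_matches(word.lower(), INDEXED_TERMS, n=1, cutoff=0.8)
    corrected = matches[0] if matches else word.lower()
    return {corrected} | set(SYNONYMS.get(corrected, []))

print(expand_query("mangement"))  # {'management'}
print(expand_query("content"))    # {'content', 'document', 'asset'}
```

The point is that the engine, not the user, compensates for spelling and vocabulary.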


Content is useless. The web page is full of dead links. Many pages and documents are not linked to at all, and will therefore never be accessed. It is safe to say that content which is not accessed and used has no value.
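
Both symptoms can be detected mechanically. Here is a sketch that finds dead links and orphaned pages in a link graph; the hard-coded graph is a made-up example, where a real system would build it by crawling the site:

```python
# Made-up link graph: page -> set of pages it links to.
links = {
    "index.html":    {"about.html", "products.html"},
    "about.html":    {"index.html", "missing.html"},
    "products.html": set(),
    "orphan.html":   {"index.html"},  # exists, but nothing links to it
}

linked_to = {dst for targets in links.values() for dst in targets}
dead_links = [(src, dst) for src, targets in links.items()
              for dst in targets if dst not in links]
orphans = set(links) - linked_to - {"index.html"}  # the entry page is never an orphan

print("Dead links:", dead_links)  # [('about.html', 'missing.html')]
print("Orphans:", orphans)        # {'orphan.html'}
```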


Content is not automatically accessible. There is no XML export. Recently, many news sites have offered the option of subscribing via popular RSS feeds. By subscribing to these feeds in RSS readers or news aggregators, the process of collecting news from these sites is turned from a pull protocol (actively surfing around on news sites) into a push protocol (content is pushed to the reader, like mail to a recipient).
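
For illustration, here is a minimal sketch of the consuming side, reading the items out of an RSS 2.0 feed with nothing but the Python standard library (the feed URL is a placeholder):

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"  # placeholder URL

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 nests items under channel/item; each has a title and a link.
for item in tree.findall("./channel/item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

A CMS that exposes its content as structured XML is what makes this kind of machine consumption possible in the first place.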


Content has no meta information. There has been a noteworthy increase in the ability to tag or label various data objects with metadata, as in the header of an HTML page or in the properties of a Word document. But it is difficult to force users into actually using these features manually. If the title of this document is "Content Management", why should I write in its metadata that it is about the same topic? A possible solution to the meta problem lies in automatically tagging content [HP, 2004].
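
A crude version of automatic tagging can be as simple as frequency analysis. Here is a minimal sketch; the stop-word list and the pick-the-top-three rule are arbitrary choices for the example, and real approaches like the one cited above are far more sophisticated:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "is", "to", "and", "in", "it", "this", "about"}

def suggest_tags(text, n=3):
    """Suggest tags by picking the most frequent non-stop-words."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

doc = ("Content management is about managing content: creating content, "
       "reviewing content, and publishing content to the web.")
print(suggest_tags(doc))  # e.g. ['content', 'management', 'managing']
```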


Content is technically inaccessible. Dependency on specific software or platforms restricts the number of users.


-------

So I need to come up with something more to complete this chapter. A good CMS doesn't produce the problems mentioned above. CMSes like this already exist, I'm sure. And the goal of the thesis could indeed be to present a CMS that solves these problems through the use of open standards. To get the open source bit in, I should add something about functionality and customization (functionality is content too, like Boiko said!). An old, rusty CMS, or even a modern one (but not a tidy one), can be quite hard to extend, having components which are not reusable. Content is not reusable.

Interestingly, I'm not the only one who's been asking questions about metadata. Seth Cambridge is another blogger I just added to my Bloglines. But it remains a problem that so much of the CMS theory landscape consists of opinions in blogs and online articles, media not really appreciated by the people who will judge my thesis. I might have to get back to basics and read up on some ancient IT theory I can reuse in this context (but I haven't really got time to do that).
