The Web Content Challenges

Last Monday I finally presented my thesis, "The Use of Open Source and Open Standards in Web Content Management Systems". Present were my supervisor, a small gang of friends and colleagues, and the external examiner, who attended by way of tele-conference (or Skype, as it's called these days).

The presentation went fairly well, but as the examiner had audio communication only (plus a copy of my slides), my entire theatrical focus was directed into the monitor of my laptop. So the people present in the room weren't exactly flabbergasted by the presentation, but the examiner liked it, and that's what counts. I got some flak for not having spent much energy on the academic method, but overall he thought it was a great thesis. So that's the official end of my 17-year-long education!

Anyhow, here's another snippet about the reasons we developed WCMSes in the first place:

Web Content Challenges

Content management, as a concept, seeks to solve these challenges by delivering the right content. This goal is not easily reached, due to the following conditions.

Content is not navigable

The main problem with information is that there is too much of it [Goodwin, 2002]. There are too many web-pages with too many attached documents [McGovern, 2006b]. A company can invest resources in maintaining a site map and a navigation tree menu, but if these are constructed manually rather than generated automatically from the content structure, they will stagnate and become more of a nuisance than helpful tools [McGovern, 2006a]. Navigating by search is a great shortcut for making all content reachable, but searching the right way is easier said than done [Belam, 2006], and a search engine cannot substitute for conventional site navigation.
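The point about generated navigation can be made concrete: if the menu is rendered directly from the content structure, it cannot drift out of date. A sketch of such a rendering (the page names here are made up):

    <!-- a navigation tree rendered from the content structure; when a
         page is added, moved or deleted, the menu changes with it -->
    <ul id="navigation">
      <li><a href="/products/">Products</a>
        <ul>
          <li><a href="/products/widgets/">Widgets</a></li>
          <li><a href="/products/gadgets/">Gadgets</a></li>
        </ul>
      </li>
      <li><a href="/about/">About</a></li>
    </ul>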

Content is useless

Stagnant web-sites quickly accumulate dead links: references to other web-pages that have been moved or deleted. Many pages and documents may exist that are not hyper-linked at all, and will therefore never be accessed. As defined earlier, content which is not accessed and used has no value. Maintaining valueless content takes up resources which the content managers could have spent on more useful parts of the web-site. It also confuses the visitor by polluting the web-site, making the useful content harder to find.

Content is not automatically accessible

Two elements by which one can interpret a language are syntax (grammar) and semantics (meaning). A computer interpreting the content of a web-page first checks the syntax by parsing the page and verifying that the markup is valid. If the syntax is incorrect, the parsing is likely to break, depending on the fault-tolerance of the parser. But although incorrect use of markup causes annoyance among web developers, the main obstacle to accessing and reusing web content is the lack of semantics. A computer can automatically fetch a web-page and read it, but it cannot determine which paragraph is the title of an embedded article, which is the abstract and which is the main text, unless the same semantic standard is employed both in the web-page and in the program reading it.
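To make this concrete, compare a page that marks up its parts with an agreed-upon convention against one that does not. The class names below follow the hAtom microformat draft; any shared standard would do, and the text values are made up:

    <!-- no semantics: a program cannot tell title from abstract -->
    <p><b>Content Management</b></p>
    <p>A short summary of the article.</p>
    <p>The main text of the article.</p>

    <!-- with a shared semantic convention (hAtom-style class names),
         a program can pick out each part reliably -->
    <div class="hentry">
      <h1 class="entry-title">Content Management</h1>
      <p class="entry-summary">A short summary of the article.</p>
      <div class="entry-content">The main text of the article.</div>
    </div>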

Mixing content and design also reduces accessibility. A computer cannot tell whether a table is used to control the layout of a page, or whether the table carries semantic value.
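For instance, the following two tables look the same to a parser, even though only the second one represents actual tabular data (the figures are made up):

    <!-- layout table: rows and cells mean nothing in themselves -->
    <table>
      <tr>
        <td><!-- navigation menu --></td>
        <td><!-- page content --></td>
      </tr>
    </table>

    <!-- data table: the structure itself carries meaning -->
    <table summary="Revenue per division, in thousands">
      <tr><th>Division</th><th>Q1</th><th>Q2</th></tr>
      <tr><td>Web</td><td>120</td><td>140</td></tr>
    </table>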

Content is not structured

This grievance is tightly connected to the one above, though it is more apparent in traditional content management. Web content has the advantage of dealing mostly with HTML, which despite the criticism levelled at it is still a transparent, text-based standard, with XHTML as its stricter, XML-based reformulation. This transparency is lacking in binary files, such as multimedia assets, and in proprietary formats such as Microsoft Office documents and PDF-files [Martins, 2004].

Content has no meta information

There has been a noteworthy increase in the ability to tag or label various data objects with meta-data. Meta tags can be included in the header of an HTML-page, or in the properties of a Word-document. Getting users to actually use these features can, however, prove difficult. If the title of a document is "Content Management", it is quite tedious to also label the document with meta-data stating that the topic is "content management", along with similar keywords. A possible solution to the meta-problem lies in tagging content automatically [Staelin, 2004].
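For illustration, this is what such meta-data looks like in the header of an HTML-page (the values here are made up):

    <head>
      <title>Content Management</title>
      <!-- meta-data largely restating what the title already says,
           which is exactly the tedium users tend to skip -->
      <meta name="keywords" content="content management, WCMS, meta-data" />
      <meta name="description" content="An introduction to web content management." />
    </head>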

Content is not connected

There is bound to be digital content within the organization which could have been made available on its web-site. Databases, memos, product catalogs and other documents which do not violate corporate confidentiality by being published online are typical resources held back by their isolation from other content. Information systems are too often designed with a single purpose in mind, and it proves difficult to integrate them as services into the web-site. The worst scenario is when the organization has grown dependent on some specific proprietary software or platform which restricts how its content can be accessed.

Design is not consistent

A company will normally have one graphic profile, or a separate profile for each division of the company. The profile includes names, slogans, logos, a color-scheme, text styles, document headings, footers and layout. Periodically the profile will change, and typically all content produced up until then is stuck with the old one. It is expensive to have a clerk go through the HTML-documents and update each one by hand. As the profile keeps changing, the company web-site grows into a confusing mongrel of pages in the various looks designed throughout the lifetime of the site. As a result, the visitor gains little sense of the company's identity, and is left with the impression that the company is badly organized.
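This duplication is what separating content from presentation avoids: when every page references one shared stylesheet, a profile change becomes an edit to a single file instead of to every document. A minimal sketch (the file name and styles are made up):

    <!-- in every page: a reference to the shared stylesheet -->
    <link rel="stylesheet" type="text/css" href="/styles/profile.css" />

    /* /styles/profile.css: change the color-scheme and text styles
       here, and the whole site follows suit */
    body    { font-family: Verdana, sans-serif; color: #333; }
    h1      { color: #005a9c; }
    #footer { border-top: 1px solid #005a9c; }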



References:


Belam, M. 2006, "Fine Tuning Your Enterprise Search - How To Get The Best Results To Your Users" [http://www.currybet.net/articles/fine_tune/index.php] Retrieved 9. April, 2006

Goodwin, S., Vidgen, R. 2002, "Content, content, everywhere... time to stop and think? The process of web content management", Computing & Control Engineering Journal, vol. 13, no. 2, April 2002, p. 66-70

Martins, J. 2004, "The Structured-Unstructured Information Continuum" [http://www.dmgrc.com/dmg/weblog/index.php?itemid=25] Retrieved 10. April, 2006

McGovern, G. 2006, "Web navigation is about moving forward" [http://newsweaver.ie/gerrymcgovern/e_article000558500.cfm] Retrieved 9. April, 2006

McGovern, G. 2006, "Your website is for your most important customers" [http://newsweaver.ie/gerrymcgovern/e_article000562657.cfm] Retrieved 9. April, 2006

Staelin, C., Elad, M., Greig, D., Shmueli, O., Vans, M. 2004, "Biblio: Automatic meta-data extraction" [http://www.hpl.hp.com/techreports/2004/HPL-2004-190.html] Retrieved 25. August, 2005

