I was once in a meeting attended by a team of developers, requirement managers, test managers and project managers. The topic of discussion was how requirement changes should be handled.
The original requirements were described mostly in the form of use-case documents. The problem, and the reason for the meeting, was that the implementation was failing to follow what the use-cases specified.
We went around the table, each and every member pledging to read the use-cases more carefully, and the requirement manager promising to clearly highlight changes and references in the documents.
When the turn came around to me I said something along the lines of:
I have read the use-cases already. I'm not going to spend time scrolling through them looking for changes. There must be a better way to communicate changes to the implementation than fishing them out of some bloody Word document.
This didn't go down well with the project manager, and I was told, in effect, to stick with it and keep reading the use-cases like all the others.
Now I know use-cases are supposed to be light and user-story-like, but in the hands of document-riders and management they become bloated with design decisions, business rules and interface nit-picking.
It's hard labour to track changes in these documents. I suggested putting them into wiki format, but that proved to be too much hassle for the requirement manager. Pasting the Word documents into the wiki ended up looking really ugly (given the size of the use-case descriptions).
You can't modularize use-case steps, you can't re-use them, and you can't refactor them. All in all, they're a pretty crappy form of requirement specification, especially for us developers who hate Word documents.
Then along came Selenium and showed us web tests that are fun:
They're executable! We could use them as developers to speed up our own tests. We can even stuff them into our automated build/continuous integration system.
It's dead easy! It's so easy even the document riders fell in love with them. We even got the testers to report bugs with them (bug traces).
The test managers could use them for acceptance testing and performance testing. So the normally huge acceptance testing phase at the end of the project was reduced to a matter of days.
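Part of what makes these tests so approachable is that a Selenium (Selenese) test is just an HTML table of commands: one command per row, with a target and an optional value. As a rough sketch, a hypothetical login test might look like this (the page path and element locators are made up for illustration):

```html
<table>
  <tr><td colspan="3">Login works</td></tr>
  <!-- open the login page -->
  <tr><td>open</td>              <td>/login</td>          <td></td></tr>
  <!-- fill in the form fields -->
  <tr><td>type</td>              <td>username</td>        <td>alice</td></tr>
  <tr><td>type</td>              <td>password</td>        <td>secret</td></tr>
  <!-- submit and wait for the next page to load -->
  <tr><td>clickAndWait</td>      <td>loginButton</td>     <td></td></tr>
  <!-- assert that the welcome text appears -->
  <tr><td>verifyTextPresent</td> <td>Welcome, alice</td>  <td></td></tr>
</table>
```

Anyone who can read a table can read this, which is exactly why the non-developers took to them.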
Sure, Selenium tests break easily, but they're also easily repaired. And most of all, when they break you (as either a developer or a manager) are forced to recognize that a change has taken place, and you can decide whether this is a faulty test or a faulty implementation.
The flip side is that they're too easy to make. If the developers' test suite gets bloated with all the tests the managers want in, the suite runs too slowly for the patience of a test-driven developer.
They are not refactorable, other than in the search-and-replace sense of the word. If a change takes place across the entire web application, you might have some work ahead of you to get the suite green again.
It can be hard to visualise the steps that will run in a web test (but isn't it just as hard in a use-case?), so if you insist on creating the Selenium test before you do the implementation, you will need some imagination and knowledge of Selenese. One way to work around this is to do prototyping of the webapp with static HTML pages, and record Selenium tests based on these.
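Such a prototype can be a single static page with stable element names, enough for the Selenium recorder to click through before any backend exists. A minimal sketch (file names and IDs are hypothetical):

```html
<!-- login.html: static prototype, no backend behind it;
     the form just links to the next static page -->
<html>
  <body>
    <form action="welcome.html">
      <input type="text"     name="username" id="username" />
      <input type="password" name="password" id="password" />
      <input type="submit"   id="loginButton" value="Log in" />
    </form>
  </body>
</html>
```

Tests recorded against pages like this will keep working once the real implementation replaces the prototype, as long as the locators (form field names and IDs) stay the same.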