A couple of features in the blogosphere brought to mind (indirectly and directly) a couple of the folks you can meet at Lucene Revolution, in Boston October 7-8.

First, in a clean, well-lighted post about user experience design that is swimming with deliciously snappy comeback lines to people whose superficial acquaintance with the subject leads them to open their mouths too quickly:

Since everybody is a user, everybody has an opinion on how his experience should be. And many are very eager to utter their opinions really strongly. But that doesn’t mean that every user is a designer. Asking for salt doesn’t make you a cook. … You don’t need to be an engineer to find out that your car doesn’t start. But you need to be an engineer to fix it. As a user experience designer you need to know how things work. When it comes to use, all opinions are equal, but when it comes to engineering, they are not.

As rousing a defense of expertise as I’ve heard. If you haven’t already, you can expose yourself to some expertise in a presentation by Tyler Tate of Twigkit on Designing the Search Experience; Stefan Olafsson, also of Twigkit, will be at Lucene Revolution as well, delivering a lightning talk.

The Beyond Search blog today features an interview with Steve Cohen, COO of Basis Technology, a Platinum sponsor of Lucene Revolution. Basistech is at the vanguard of search-focused companies turning to open source to join a virtuous cycle of innovation while, at the same time, better serving customers with their proprietary intellectual property:

The primary benefits (of open source) are avoiding vendor lock-in and flexibility. There have been many changes in the commercial vendor landscape over the fifteen years we’ve been in this business, and customers feel like they’ve been hurt by changes in ownership and whole products and companies disappearing. … Our product, Rosette, is a text analysis toolkit that plugs into search tools like Solr (or the Lucene index engine) to help make search work well in many languages. Rosette prepares tokens for the search index by segmenting the text (which is not easy in some languages, like Chinese and Japanese), using linguistic rules to normalize the terms to enhance recall, and also providing enhanced search and navigation capabilities like entity extraction and fuzzy name matching.

More on this at Lucene Revolution.