The high bar for relevancy?
A big chunk of the billions that go to search-engine marketing and search-engine optimization, SEM and SEO (mostly to you-know-who), is spent on getting to Page 1 of the results.
I won’t be the first to point out that relevance for in-house search — i.e., without the benefit of PageRank — is a harder nut to crack. How much harder? A recent study from Aberdeen Group, publicized this week in InformationWeek, provides the following stat:
At top performing companies [defined as “Best in Class”, the top 20% of those surveyed], 67% of searches returned the most relevant results on the first search results page, while lower rated companies saw relevant results on the first page for only 42% of searches.
At best, 1 out of 3 searches doesn’t deliver the right result on the first results page. In other words, the best case for search=find is 67%.
Relevancy is as much art as science; the best solutions to the problem are the ones that provide a way to match the art to the science. If you need some background on relevancy, read the seminal article on relevancy and findability by Grant Ingersoll, and check out the fine presentation on the subject by Mark Bennett of New Idea Engineering, delivered at the most recent SFBay Lucene/Solr Meetup, which we co-sponsored with the Computer History Museum in early September.
One of the best implementations of findability in Lucene and Solr I’ve come across is at Netflix. There’s a really nice discussion captured in slides by Walter Underwood, who helped build the Solr search infrastructure at Netflix (a milestone in a very distinguished career in search). He gave a terrific presentation at that same Meetup.
A key metric Walter used at Netflix to gauge finding (search-relevancy effectiveness is such a mouthful) is called Mean Reciprocal Rank, or MRR. Simply put, a search scores one point if the user clicks through on the first-ranked item, half a point for the second-ranked item, a third of a point for the third-ranked, and so on; MRR is the average of those scores across all searches. While it may not help find relevancy bugs, it provides a very nice aggregate picture of users’ experience finding what they look for. A good benchmark, or stretch goal, according to Walter: 0.5 MRR, with 85% of clicks on #1.
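To make the arithmetic concrete, here’s a minimal sketch of the metric in Python; the function name and the click-log representation are my own, not Netflix’s:

```python
def mean_reciprocal_rank(clicked_ranks):
    """Average the reciprocal of the clicked result's rank across searches.

    clicked_ranks holds one entry per search: the 1-based rank of the
    result the user clicked, or None if the search was abandoned.
    A click at rank 1 scores 1.0, rank 2 scores 0.5, rank 3 scores 1/3,
    and an abandoned search scores 0.
    """
    scores = [1.0 / rank if rank else 0.0 for rank in clicked_ranks]
    return sum(scores) / len(scores) if scores else 0.0


# Three searches: clicks at rank 1 and rank 3, plus one abandoned search.
print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 ≈ 0.444
```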
Let me be quick to say that there is much that is unique about the Netflix search use case (and much that is really, really fun). But the contrast between 85% of clicks landing on the #1 result at Netflix, vs. two-thirds of relevant results merely appearing somewhere on the first page at best-in-class enterprise search implementations, leads me to wonder: what are others doing to measure relevancy and programmatically build feedback loops, automatic or otherwise? Lucene and Solr provide transparent, rich interfaces for doing this, and according to the Aberdeen study, there’s plenty of opportunity to do so.
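As one illustration of what such a feedback loop might look like — the click-log format and thresholds here are assumptions of mine, not anything Solr prescribes — you could aggregate reciprocal-rank scores per query and surface the worst performers as candidates for relevancy tuning:

```python
from collections import defaultdict

def worst_queries(click_log, min_searches=10, top_n=20):
    """Rank queries by their per-query MRR, worst first.

    click_log is an iterable of (query, clicked_rank) pairs, where
    clicked_rank is the 1-based rank of the clicked result, or None
    for an abandoned search.  (Hypothetical format, for illustration.)
    """
    scores = defaultdict(list)
    for query, rank in click_log:
        scores[query].append(1.0 / rank if rank else 0.0)
    per_query = [
        (sum(s) / len(s), query)
        for query, s in scores.items()
        if len(s) >= min_searches  # skip queries with too little traffic
    ]
    # Lowest MRR first: these are the queries most worth tuning.
    return sorted(per_query)[:top_n]
```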