Solr's New Clustering Capabilities

Introduction

One of the new things in Solr 1.4 that I am particularly excited about is the new document and search results clustering capability.  This is an optional module that lives in Solr’s contrib/clustering directory and was added via SOLR-769.  The module is designed to let people either use the existing clustering capabilities (currently only search result clustering is offered, via Carrot2) or plug in their own.  While some of the public APIs still need to be hashed out via multiple implementations, and as they relate to whole document collections, I thought I would share a quick getting-started guide to search result clustering using Carrot2, since it is included in Solr and easy to get up and running rather quickly.

Background

Clustering is an unsupervised learning task that attempts to group related content without any a priori knowledge about it.  It is very similar to faceting (some people call it dynamic faceting), but works in a less structured manner.  Clustering algorithms often look at features of the text (important words, etc.) and use them to determine similarity.  Most implementations define a notion of distance between any two documents and then use it to determine which documents are similar to one another.  Popular algorithms include hierarchical and k-Means clustering.  For more information, and to see several clustering implementations in action, I’d encourage you to check out Apache Mahout.

Getting Started

The first thing to do is to get the code.  There are a number of ways to do this, but I just like SVN on the command line, as in:
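Something like the following should do it (the trunk URL here is an assumption, reflecting the pre-Lucene-merge repository layout of the Solr 1.4 era):

```shell
# check out the Solr trunk into a local solr-trunk directory
svn co http://svn.apache.org/repos/asf/lucene/solr/trunk solr-trunk
```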

Next, you can switch into the trunk directory and build everything:
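For example (the Ant target names are assumptions based on that era’s build files; build-contrib is the one that matters for clustering):

```shell
cd solr-trunk
# build the example server and all contrib modules, including clustering
ant clean example build-contrib
```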

This is an important step because some of Carrot2’s libraries cannot be included in the Apache SVN repository by default because they are LGPL.  The build-contrib Ant target automatically downloads the necessary libraries.

Once built, I need to add the Clustering libs to my Solr Home lib directory (called solr-clustering), as in:
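A sketch of the copy, assuming the build-output paths of the 1.4-era tree (the exact jar locations, including the downloads directory for the LGPL Carrot2 dependencies, are assumptions):

```shell
# create the lib directory inside the custom Solr home
mkdir -p solr-clustering/lib
# the clustering contrib jar plus its bundled and downloaded dependencies
cp build/contrib/clustering/*.jar solr-clustering/lib/
cp contrib/clustering/lib/*.jar solr-clustering/lib/
cp contrib/clustering/lib/downloads/*.jar solr-clustering/lib/
```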

I also got Solr Cell (Apache Tika integration) so that I can easily load some content to cluster:
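Along these lines (again, the jar paths are assumptions about the 1.4 build layout):

```shell
# the Solr Cell handler jar plus the Tika extraction libraries it wraps
cp dist/apache-solr-cell-*.jar solr-clustering/lib/
cp contrib/extraction/lib/*.jar solr-clustering/lib/
```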

For this example, the pertinent parts of my schema are:
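(A sketch of what those parts likely look like: the field names are inferred from the fl=title,score,id parameter used in the queries later in the post, and the types are assumptions.  The fields must be stored, since the clustering component needs the raw text to build labels.)

```xml
<!-- minimal sketch; field names inferred from the queries in this post -->
<field name="id"    type="string" indexed="true" stored="true" required="true"/>
<field name="title" type="text"   indexed="true" stored="true"/>
<field name="text"  type="text"   indexed="true" stored="true" multiValued="true"/>
```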

And my Solr config has:
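(A hedged sketch, following the shape of the example solrconfig.xml that ships with the contrib module; the engine names match the clustering.engine values used later, and the carrot.* field mappings are assumptions tied to the schema fields above.)

```xml
<searchComponent name="clustering"
                 class="org.apache.solr.handler.clustering.ClusteringComponent">
  <!-- Lingo is the default Carrot2 algorithm -->
  <lst name="engine">
    <str name="name">default</str>
    <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
  </lst>
  <!-- Suffix Tree Clustering, selectable via clustering.engine=stc -->
  <lst name="engine">
    <str name="name">stc</str>
    <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
  </lst>
</searchComponent>

<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <!-- which fields Carrot2 should cluster on -->
    <str name="carrot.title">title</str>
    <str name="carrot.snippet">text</str>
  </lst>
  <arr name="last-components">
    <str>clustering</str>
  </arr>
</requestHandler>
```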

Finally, I need to fire up Solr:
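Using the Jetty-based example server from the build (the relative path to the solr-clustering home is an assumption):

```shell
cd example
# point the example Jetty server at the custom Solr home
java -Dsolr.solr.home=../solr-clustering -jar start.jar
```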

Now I need some documents.  In this case, I have a bunch of PDF files that I keep organized in Mekentosj’s excellent PDF library organizer Papers (Mac only) that I want to index.  The code for this (it’s just a quick little hack) is in Appendix A at the bottom of the post.  I point it at my directory and off it goes.  When I’m done, I have 91 documents in my index.  I then did some basic searches to make sure I can get some decent results for some queries.  From here, all I need to do is tell Solr to cluster the results:

http://localhost:8983/solr/select/?q=*:*&fl=title,score,id&version=2.2&start=0&rows=100&indent=on&clustering=true

Notice that I added the &clustering=true parameter at the end and set &rows to 100.  This turns on the clustering component, which then hands off the work to Carrot2 using the parameters defined in my request handler.  Carrot2 is an in-memory clustering engine, and the implementation is designed to cluster only the top results, not necessarily all of the results that matched.

In the case of the request above, some of my results look like:
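(To give the flavor of the response: the labels and ids below are invented for illustration, but the structure matches the clustering component’s output format — each cluster carries its labels plus the ids of the documents under it.)

```xml
<arr name="clusters">
  <lst>
    <arr name="labels">
      <str>Information Retrieval</str>
    </arr>
    <arr name="docs">
      <str>smucker-2008.pdf</str>
      <str>zhai-lafferty-2004.pdf</str>
    </arr>
  </lst>
  <!-- ...more clusters... -->
</arr>
```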

You can see Carrot2 provides a label and then a list of the ids that fit under that label.

From here, I can play around with other options, such as trying out the STC algorithm:

http://localhost:8983/solr/select/?q=*:*&fl=title,score,id&version=2.2&start=0&rows=100&indent=on&clustering=true&clustering.engine=stc

What’s Next?

While I don’t have a specific roadmap for clustering support, I can see a few things that would be interesting:

  1. Whole collection clustering – Using a background process, cluster all the documents in the entire index using something like Apache Mahout.
  2. Clusters -> Filters – Take the docs in each cluster, create filters out of them, and store them in the filter cache under a name.  Future queries could then be restricted to one or more clusters.
  3. Implement other algorithms.
  4. Take a deeper look at performance – Carrot2 is pretty fast, but maybe more profiling, etc. can be done to speed things up even more.

Appendix A

My indexing code:
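(The quick hack itself isn’t reproduced here; as a stand-in, here is a minimal sketch of pushing a directory of PDFs through Solr Cell with curl.  The literal. and fmap. parameter names follow the Solr 1.4 ExtractingRequestHandler; the Papers path, the file-name-as-id convention, and the content-to-text mapping are all assumptions, and the ids are assumed to be URL-safe.)

```shell
# send every PDF under the Papers library to the extract handler;
# the file name serves as the unique id, and Tika's extracted body
# is mapped onto the schema's "text" field
find ~/Papers -name '*.pdf' -print0 | while IFS= read -r -d '' f; do
  curl "http://localhost:8983/solr/update/extract?literal.id=$(basename "$f")&fmap.content=text" \
       -F "file=@$f"
done
# commit once at the end so the documents become searchable
curl "http://localhost:8983/solr/update?commit=true"
```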




2 Comments

Roxana

Hi,

Thanks for the tutorial!
For me it only works after adding the “class” parameter to the solrconfig file:

Selvam

Hi,
Thanks for the write-up. I would be interested to know the technical details of implementing custom Mahout-based incremental clustering. Any pointers?
I saw your github code,
public class KMeansClusteringEngine extends DocumentClusteringEngine

But not sure on how to implement incremental clustering.
