Solr’s New Clustering Capabilities

By Grant Ingersoll on September 28, 2009
CTO and co-founder of Lucidworks


One of the new things in Solr 1.4 that I am particularly excited about is the new document and search results clustering capability.  This is an optional module that lives in Solr’s contrib/clustering directory and was added via SOLR-769.  The module is designed to let people either use the existing clustering capabilities (currently only search result clustering is offered, via Carrot2) or plug in their own.  While some of the public APIs still need to be hashed out via multiple implementations, and as they relate to whole document collections, I thought I would share a quick getting-started guide to search result clustering using Carrot2, since it is included in Solr and easy to get up and running quickly.


Clustering is an unsupervised learning task that attempts to aggregate related content together without any a priori knowledge about the content.  It is very similar to faceting (some people call it dynamic faceting), but works in a less structured manner.  Clustering algorithms often look at the features of the text (important words, etc.) and use them to determine similarity.  Most implementations define a notion of distance between any two documents and then use that to determine which documents are similar to one another.  Popular algorithms include hierarchical and k-Means clustering.  For more information and to see several implementations of clustering in action, I’d encourage you to check out Apache Mahout.
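To make the distance idea concrete, here is a tiny self-contained sketch (this is not Solr or Carrot2 code; the class and method names are mine) that scores the similarity of two documents using cosine similarity over raw term frequencies — the kind of measure a clustering algorithm can use to decide which documents belong together:

```java
import java.util.HashMap;
import java.util.Map;

public class CosineSimilarity {

  // Build a term-frequency map from whitespace-tokenized text
  static Map<String, Integer> termFreqs(String text) {
    Map<String, Integer> tf = new HashMap<String, Integer>();
    for (String token : text.toLowerCase().split("\\s+")) {
      Integer count = tf.get(token);
      tf.put(token, count == null ? 1 : count + 1);
    }
    return tf;
  }

  // Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means same direction
  static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
    double dot = 0, normA = 0, normB = 0;
    for (Map.Entry<String, Integer> e : a.entrySet()) {
      Integer other = b.get(e.getKey());
      if (other != null) {
        dot += e.getValue() * other;
      }
      normA += e.getValue() * e.getValue();
    }
    for (Integer v : b.values()) {
      normB += v * v;
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
  }

  public static void main(String[] args) {
    Map<String, Integer> d1 = termFreqs("naive bayes text classification");
    Map<String, Integer> d2 = termFreqs("bayes classification of text");
    Map<String, Integer> d3 = termFreqs("svn command line checkout");
    System.out.println(cosine(d1, d2)); // overlapping vocabulary scores high
    System.out.println(cosine(d1, d3)); // disjoint vocabulary scores 0
  }
}
```

Real implementations weight terms (e.g. TF-IDF) and prune stopwords, but the basic notion of "documents are near if their term vectors point the same way" is the same.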

Getting Started

The first step is to get the code.  There are a number of ways to do this, but I like SVN on the command line, as in:

svn co

Next, you can switch into the trunk directory and build everything:

ant build-contrib

This is an important step because some of Carrot2’s libraries cannot be included by default in Apache’s SVN repository because they are LGPL.  The build-contrib Ant target downloads the necessary libraries automatically.

Once built, I need to copy the clustering libs into my Solr home’s lib directory (my Solr home is called solr-clustering), as in:

cp <SOLR_HOME>/contrib/clustering/lib/*.jar ./solr-clustering/lib/
cp <SOLR_HOME>/contrib/clustering/build/apache-solr-clustering-1.4-dev.jar ./solr-clustering/lib/
cp <SOLR_HOME>/contrib/clustering/lib/downloads/*.jar ./solr-clustering/lib/

I also grab Solr Cell (the Apache Tika integration) so that I can easily load some content to cluster:

cp <SOLR_HOME>/contrib/extraction/build/apache-solr-cell-1.4-dev.jar ./solr-clustering/lib/
cp <SOLR_HOME>/contrib/extraction/lib/* ./solr-clustering/lib/

For this example, the pertinent parts of my schema are:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="title" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="subject" type="text" indexed="true" stored="true"/>
<field name="description" type="text" indexed="true" stored="true"/>
<field name="comments" type="text" indexed="true" stored="true"/>
<field name="author" type="textgen" indexed="true" stored="true"/>
<field name="keywords" type="textgen" indexed="true" stored="true"/>
<field name="category" type="textgen" indexed="true" stored="true"/>
<field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="last_modified" type="date" indexed="true" stored="true"/>
<field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="text" type="text" indexed="true" stored="true" multiValued="true"/>

And my Solr config has:

<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="fl">*</str>
    <str name="version">2.1</str>
    <!--<bool name="clustering">true</bool>-->
    <str name="clustering.engine">default</str>
    <bool name="clustering.results">true</bool>
    <!-- The title field -->
    <str name="carrot.title">title</str>
    <str name="carrot.url">id</str>
    <!-- The field to cluster on -->
    <str name="carrot.snippet">text</str>
    <!-- produce summaries -->
    <bool name="carrot.produceSummary">true</bool>
    <!-- the maximum number of labels per cluster -->
    <!--<int name="carrot.numDescriptions">5</int>-->
    <!-- produce sub clusters -->
    <bool name="carrot.outputSubClusters">false</bool>
  </lst>
  <arr name="last-components">
    <str>clustering</str>
  </arr>
</requestHandler>

<searchComponent name="clustering" class="org.apache.solr.handler.clustering.ClusteringComponent">
  <!-- Declare an engine -->
  <lst name="engine">
    <!-- The name; only one engine can be named "default" -->
    <str name="name">default</str>
    <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
    <str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>
  </lst>
  <lst name="engine">
    <str name="name">stc</str>
    <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
  </lst>
</searchComponent>

Finally, I need to fire up Solr:

cd <SOLR_HOME>/example
java -Dsolr.solr.home=<PATH TO HOME>/solr-clustering -Dsolr.data.dir=<PATH TO HOME>/solr-clustering/data -jar start.jar

Now I need some documents.  In this case, I have a bunch of PDF files that I keep organized in Mekentosj’s excellent PDF library organizer Papers (Mac only) that I want to index.  The code for this (it’s just a quick little hack) is in Appendix A at the bottom of the post.  I point it at my directory and off it goes.  When I’m done, I have 91 documents in my index.  I then did some basic searches to make sure I can get some decent results for some queries.  From here, all I need to do is tell Solr to cluster the results with a request of the form:

http://localhost:8983/solr/select/?q=...&rows=100&clustering=true

Notice I added the &clustering=true parameter on the end and set &rows to 100.  This turns on the clustering component, which then hands the work off to Carrot2 using the parameters defined in my request handler.  Carrot2 is an in-memory clustering engine, and the implementation is designed to cluster only the top results, not necessarily all the results that matched.

In the case of the request above, some of my results look like:

<arr name="clusters">
  <lst>
    <arr name="labels">
      <str>Naive Bayesian</str>
    </arr>
    <arr name="docs">
      <str>/Users/grantingersoll/Documents/Papers/1996/Friedman/Proceedings of the Thirteenth National Conference on … 1996 Friedman.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/1998/McCallum/AAAI-98 Workshop on Learning for Text Categorization 1998 McCallum.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2000/Androutsopoulos/Arxiv preprint cs.CL 2000 Androutsopoulos.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2002/Sebastiani/ACM Computing Surveys (CSUR) 2002 Sebastiani.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2007/Unknown/2007.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2008/Graham-Cumming/2008 Graham-Cumming.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2008/McCullagh/Bayesian Analysis 2008 McCullagh.pdf</str>
    </arr>
  </lst>
  <lst>
    <arr name="labels">
      <str>Semantic Distance of the Component Nodes</str>
    </arr>
    <arr name="docs">
      <str>/Users/grantingersoll/Documents/Papers/1992/Kukich/ACM computing surveys 1992 Kukich.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2002/Sebastiani/ACM Computing Surveys (CSUR) 2002 Sebastiani.pdf</str>
      <str>/Users/grantingersoll/Documents/Papers/2008/E. Bernard/2008 E. Bernard.pdf</str>
    </arr>
  </lst>
</arr>

You can see that Carrot2 provides a label for each cluster, followed by the list of document ids that fall under that label.
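If you want to consume that XML programmatically, the labels and docs arrays can be pulled apart with stock DOM parsing.  This is a sketch of the idea (the class name is mine, and the sample XML in main is abbreviated from the response above):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ClusterResponseParser {

  // Parse the <arr name="clusters"> fragment into a map of label -> doc ids
  static Map<String, List<String>> parse(String xml) throws Exception {
    Map<String, List<String>> clusters = new LinkedHashMap<String, List<String>>();
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    NodeList lsts = doc.getElementsByTagName("lst"); // one <lst> per cluster
    for (int i = 0; i < lsts.getLength(); i++) {
      Element cluster = (Element) lsts.item(i);
      String label = null;
      List<String> docs = new ArrayList<String>();
      NodeList arrs = cluster.getElementsByTagName("arr");
      for (int j = 0; j < arrs.getLength(); j++) {
        Element arr = (Element) arrs.item(j);
        NodeList strs = arr.getElementsByTagName("str");
        if ("labels".equals(arr.getAttribute("name")) && strs.getLength() > 0) {
          label = strs.item(0).getTextContent(); // take the first label
        } else if ("docs".equals(arr.getAttribute("name"))) {
          for (int k = 0; k < strs.getLength(); k++) {
            docs.add(strs.item(k).getTextContent());
          }
        }
      }
      if (label != null) {
        clusters.put(label, docs);
      }
    }
    return clusters;
  }

  public static void main(String[] args) throws Exception {
    String xml = "<arr name=\"clusters\"><lst>"
        + "<arr name=\"labels\"><str>Naive Bayesian</str></arr>"
        + "<arr name=\"docs\"><str>doc1.pdf</str><str>doc2.pdf</str></arr>"
        + "</lst></arr>";
    System.out.println(parse(xml));
  }
}
```

SolrJ users can of course read the same structure out of the NamedList response instead of re-parsing XML.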

From here, I can play around with other options, such as trying out the STC algorithm by pointing the same request at the second engine declared in my config:

http://localhost:8983/solr/select/?q=...&rows=100&clustering=true&clustering.engine=stc
What’s Next?

While I don’t have a specific roadmap for clustering support, I can see a couple of things that would be interesting:

  1. Whole collection clustering – Using a background process, cluster all the documents in the entire index using something like Apache Mahout.
  2. Clusters -> Filters – Take the docs in each cluster, create filters out of them, and store them in the filter cache under a name.  Future queries could then be restricted to search only one or more clusters.
  3. Implement other algorithms.
  4. Take a deeper look at performance – Carrot2 is pretty fast, but maybe more profiling, etc. can be done to speed things up even more.
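The clusters-to-filters idea above amounts to keeping a named map from cluster label to document ids and intersecting it with later result sets.  A toy sketch of that bookkeeping (nothing here is Solr’s actual filter cache; the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ClusterFilters {
  // label -> set of doc ids, standing in for named entries in a filter cache
  private final Map<String, Set<String>> filters = new HashMap<String, Set<String>>();

  // Store one cluster's docs under its label
  public void store(String label, Collection<String> docIds) {
    filters.put(label, new HashSet<String>(docIds));
  }

  // Restrict a result list to the docs belonging to one stored cluster
  public List<String> restrict(String label, List<String> results) {
    Set<String> allowed = filters.get(label);
    if (allowed == null) {
      return results; // unknown cluster: no restriction applied
    }
    List<String> kept = new ArrayList<String>();
    for (String id : results) {
      if (allowed.contains(id)) {
        kept.add(id);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    ClusterFilters cf = new ClusterFilters();
    cf.store("Naive Bayesian", Arrays.asList("doc1.pdf", "doc2.pdf"));
    // Only docs in the stored cluster survive the intersection
    System.out.println(cf.restrict("Naive Bayesian", Arrays.asList("doc1.pdf", "doc3.pdf")));
  }
}
```

In a real implementation the values would be bitsets keyed into Solr’s filter cache rather than string sets, so the intersection happens at the Lucene doc-id level.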

Appendix A

My indexing code:

package com.grantingersoll.noodles.solr;

import java.io.File;
import java.io.FilenameFilter;
import java.io.IOException;
import java.net.MalformedURLException;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;


public class SimpleFileIndexer {
  protected SolrServer server;

  public SimpleFileIndexer() throws MalformedURLException {
    server = new CommonsHttpSolrServer("http://localhost:8983/solr");
  }

  public long crawl(File input) throws IOException, SolrServerException {
    long result = 0;
    if (input.isDirectory()) {
      // Recurse into subdirectories, picking up PDFs and Word docs
      File[] files = input.listFiles(new FilenameFilter() {
        public boolean accept(File file, String s) {
          return s.endsWith(".pdf") || s.endsWith(".doc") || file.isDirectory();
        }
      });
      for (int i = 0; i < files.length; i++) {
        result += crawl(files[i]);
      }
    } else {
      String name = input.getName();
      if (name.endsWith(".pdf") || name.endsWith(".doc")) {
        System.out.println("Adding: " + input);
        // Send the file to Solr Cell for extraction, using the
        // absolute path as the unique id
        ContentStreamUpdateRequest csur = new ContentStreamUpdateRequest("/update/extract");
        csur.setParam("literal.id", input.getAbsolutePath());
        csur.addFile(input);
        try {
          server.request(csur);
          result++;
        } catch (Exception e) {
          System.err.println("Couldn't add: " + input);
        }
      }
    }
    //autocommit is on
    return result;
  }

  public SolrServer getServer() {
    return server;
  }

  public static void main(String[] args) throws IOException, SolrServerException {
    File dir = new File(args[0]);
    if (dir.exists()) {
      SimpleFileIndexer idxr = new SimpleFileIndexer();
      long count = idxr.crawl(dir);
      System.out.println("Crawled: " + count + " documents.");
    } else {
      System.err.println("Input file or dir does not exist: " + args[0]);
    }
  }
}



Thanks for the tutorial!
For me it only works after adding the “class” parameter to the solrconfig file.


Thanks for the write-up. I would be interested to know the technical details of implementing custom Mahout-based incremental clustering. Any pointers?
I saw your GitHub code,
public class KMeansClusteringEngine extends DocumentClusteringEngine

but I am not sure how to implement incremental clustering.