“…pretty soon you’re talking about real money”, goes the famous phrase attributed to the late Senator Everett Dirksen (apocryphally, it turns out).

The phrase came to mind in reading a recent post from Tom Burton-West of HathiTrust, on running into an index-size limit with some 2.47 billion words on a base of 555,000 documents.

When we read that the Lucene index format used by Solr has a limit of 2.1 billion unique words per index segment, we didn’t think we had to worry. However, a couple of weeks ago, after we optimized our indexes on each shard to one segment, we started seeing Java “ArrayIndexOutOfBounds” exceptions in our logs. After a bit of investigation we determined that indeed, most of our index shards contained over 2.1 billion unique words and some queries were triggering these exceptions. Currently each shard indexes a little over half a million documents.
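A plausible reading of where that 2.1 billion wall comes from: it is essentially Integer.MAX_VALUE, 2,147,483,647, the largest count a Java int can hold. Here is a minimal sketch (ours, purely illustrative, not Lucene’s actual term-dictionary code) of how a term count past that point wraps negative and surfaces as exactly the exception Tom describes:

```java
// Toy illustration only: an over-int-range count wraps negative, and using the
// wrapped value as an array index throws ArrayIndexOutOfBoundsException.
public class IntOverflowSketch {
    public static void main(String[] args) {
        long uniqueTerms = 2_200_000_000L;      // more terms than an int can count
        int wrapped = (int) uniqueTerms;        // wraps to -2094967296
        System.out.println(wrapped);
        int[] termIndex = new int[16];          // stand-in for a lookup keyed by term number
        System.out.println(termIndex[wrapped]); // throws ArrayIndexOutOfBoundsException
    }
}
```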

Culprits? Dirty OCR, CommonGrams, and 200 languages:

After a bit of digging in the log files, we found a query containing a Korean term which consistently triggered the exception and used that for testing. We re-read the index documentation and realized that the index entries are sorted lexicographically by UTF-16 character code. (Korean uses Hangul, which sits near the end of the Unicode BMP and therefore occupies a very high UTF-16 character code range.)
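Why a Korean term made such a reliable trigger: in a term dictionary sorted by UTF-16 code unit, the Hangul syllables block (U+AC00 through U+D7A3) sits near the top of the BMP, so a Hangul term sorts after nearly every Latin term, and a lookup for it seeks toward the very end of the term dictionary, presumably right into the region where the overflowed positions live. A quick Java check of that ordering (class and variable names here are ours; Java’s String.compareTo uses the same UTF-16 code-unit order):

```java
// Hangul terms sort near the end of a UTF-16 code-unit ordered term dictionary.
public class TermOrderSketch {
    public static void main(String[] args) {
        String latin = "searching";
        String hangul = "\uD55C\uAE00";  // "한글", the word for the Hangul script
        // The first code unit is U+D55C, very high in the BMP.
        System.out.printf("first code unit: U+%04X%n", (int) hangul.charAt(0));
        // String.compareTo compares UTF-16 code units, so the Latin term sorts well
        // before the Hangul one.
        System.out.println(latin.compareTo(hangul) < 0); // true
    }
}
```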

The fix, recently committed by Mike McCandless for Lucene 2.9.2, 3.0.1, and 3.1, raises the per-segment limit to about 274 billion unique terms. Now, you may not have 274 billion unique terms in your collection yet, but that is some pretty substantial headroom for lexical and linguistic search trickery.
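As best we can tell, the arithmetic behind that 274 billion figure is the old int-sized limit multiplied by Lucene’s default termIndexInterval of 128 (only every 128th term gets an entry in the in-memory term index), though the exact mechanics live in the Lucene change itself. A back-of-the-envelope check:

```java
// Back-of-the-envelope only: Integer.MAX_VALUE times the default termIndexInterval.
public class TermLimitMath {
    public static void main(String[] args) {
        long oldLimit = Integer.MAX_VALUE;   // ~2.1 billion unique terms per segment
        long termIndexInterval = 128;        // Lucene's default term index interval
        System.out.println(oldLimit * termIndexInterval); // 274877906816, ~274 billion
    }
}
```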

If you haven’t put the HathiTrust blog on your list, it’s a must-read: see also Tom’s excellent posts on distributed Solr search and on how common words and phrases affect Lucene/Solr performance. The library space is at the forefront of pushing the limits of open source: monstrous indexes, tough metadata to whip into shape, handling OCR, deep field faceting, database integration, data types ranging a mile wide, phrase queries that will make your head spin, and unforgiving relevancy for the masses (our own Erik Hatcher hails from that space and is still quite active in it). Chances are that if you’re running into a tough search problem, the folks in the library space are already working through the thick of it.
