Guerilla Search with Solr - How to run a 3 million document search on a $15/month machine.
Gwittr is a Twitter search and stats site that provides an extended search of your tweets and their linked web pages, as well as profiling statistics. This article highlights the challenges and options of running a medium-to-large-scale search (over 3 million documents) on very cheap (< $15/month) machines. These are the lessons learned while building it:
- Throwing the problem at the cloud is neither cheap nor necessarily the solution.
- Avoid overpaying for unnecessary storage space.
- Understand your document field definitions and optimise for your search needs.
- Craft your queries with love and care.
- Master your commit strategy.
Throw the problem at the cloud?
In this age of cloud computing and EaaS (everything as a service), it's very tempting for companies whose products require search features to just use hosted search services. The cents per second sound trivial; however, as the system scales, they can easily tally up to monthly bills in the hundreds or thousands of dollars.
A way to avoid those costs is to run your own Solr installation on vanilla hardware or virtual boxes. Not only will it save you a great deal of money, but you will also gain valuable search engine skills and knowledge that you can leverage to limit your spending if you later move to another search platform.
At Gwittr, we run plain hand-rolled Solr instances sitting on very affordable boxes, and we can still output fairly advanced stats about our data without any significant sluggishness. Here are a few principles we follow.
Search is not storage.
A search engine like Solr is not a database store. It gives you indices on steroids. If you forget that and treat your search indices as your primary storage, you risk:
- Data loss. Although Solr does implement some data integrity techniques, persistence is not a strong property of these systems.
- Runaway storage costs. For a streaming-data search like Gwittr, the storage dedicated to search grows rapidly. If you're using SaaS, you will end up paying significant money for storing your data in Solr as well as in your primary storage.
- Loss of agility. Re-indexing to support new features is inevitable. If you don't plan for this, you will lose a lot of your release agility.
Optimization #1: Consider your search indices a disposable and easily rebuildable resource, as your application will definitely have to re-index everything from time to time when you introduce new features.
Make all the fields in your schema non-stored by default. It's perfectly fine to use features like faceting on non-stored fields. The main valid reason to store a document field in Solr is when you want to use the highlighting feature, as Solr needs the original text of your document to output highlighted snippets. You also want to store a couple more things, like your document identifier(s), as you will probably need them to link your search results back to your primary storage in your application code.
Solr also provides quite an extensive set of field indexing options that will help you reduce your index footprint even further.
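As a sketch of the idea, assuming a tweet-like document (the field names and types here are hypothetical, not Gwittr's actual schema), a schema.xml fragment along these lines keeps everything non-stored except the identifier:

```xml
<!-- schema.xml fragment (hypothetical field names and types) -->
<!-- The id is stored so results can be linked back to primary storage. -->
<field name="id"         type="string"  indexed="true" stored="true" required="true"/>
<!-- Searchable and facetable fields do not need to be stored. -->
<field name="tweet_text" type="text_en" indexed="true" stored="false"/>
<field name="author"     type="string"  indexed="true" stored="false"/>
<!-- Indexing options like omitNorms trim the footprint further when you
     don't need length normalization in scoring. -->
<field name="lang"       type="string"  indexed="true" stored="false" omitNorms="true"/>
```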
Browsing vs. searching.
Although Solr, Lucene, and the rest of the family are marketed as "search engines", it's probably more correct to say that they are very good browsing engines (with faceting being a strong selling point) with excellent full-text search capabilities compared to what you would get from an open source database system. If you look at how your user experience is designed (and at how web crawlers will see your site), and unless you are Google, you'll probably find that most of the time your users click around on your navigational features (facets, similar documents…) after they make their first keyword search. At least that's the case on Gwittr, where visitors can see all results and drill down through them without entering any search keyword.
Optimization #2: For your "browsing"-related queries, it's always better to use Solr's filters (the "fq" parameter) instead of stuffing everything into your "q" parameter. Solr's filtered document sets are cached and stay away from any relevance-scoring computation, so using them for your browsing queries will save you valuable I/O and CPU cycles.
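A minimal sketch of the split, using only the Python standard library (the field names and values are hypothetical): the scored keyword search stays in "q", while the navigational drill-downs go into repeated "fq" parameters, which Solr caches independently.

```python
from urllib.parse import urlencode

# Hypothetical browsing query: keep the relevance-scored part in "q"
# and push the navigational drill-downs into cached "fq" filters.
params = [
    ("q", "solr performance"),                # scored, not cached as a filter
    ("fq", "lang:en"),                        # cached filter, no scoring work
    ("fq", "created_at:[NOW-7DAYS TO NOW]"),  # each fq is cached separately
    ("wt", "json"),
]
query_string = urlencode(params)
print(query_string)
```

Each fq set can then be reused across many browsing clicks, which is where the I/O and CPU savings come from.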
Also, search engines are not meant to display pages of results very deep into your matching set: the further you go, the more temporary memory is required and the slower it gets. For instance, Google doesn't show you any results beyond the 1,000th, or even earlier.
Optimization #3: Implement pagination limits in your application.
Optimization #4: Request only the fields you need to display your results, thus minimizing I/O and bandwidth.
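Both optimizations fit in a few lines of application code. A sketch, assuming a hypothetical page cap of 100 and a result page that only needs document ids (which are then hydrated from primary storage):

```python
from urllib.parse import urlencode

MAX_PAGE = 100   # hypothetical cap; deep paging gets slower and heavier
PAGE_SIZE = 20

def search_params(q: str, page: int) -> str:
    """Build Solr query params with a hard pagination cap and a minimal
    field list containing only what the result page actually displays."""
    page = min(max(page, 1), MAX_PAGE)       # clamp: no deep paging allowed
    return urlencode([
        ("q", q),
        ("start", (page - 1) * PAGE_SIZE),
        ("rows", PAGE_SIZE),
        ("fl", "id,score"),                  # fetch ids only, not whole docs
    ])

# A request for page 5000 silently gets the last allowed page instead.
print(search_params("cats", 5000))
```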
Solr commit is not RDBMS commit.
With databases, we use transactions and commits concurrently all the time, because that's the right way to enforce data integrity when an update involves more than one row or table. A commit means: "our view of the data has reached a consistent state, please propagate this state change to the rest of the world". In Solr, "committing" has a very different semantics.
As you most probably know by now, there is no such thing as an "update", "data integrity", "foreign keys", or "multiple tables" in Solr. At heart, Solr/Lucene just manages an ever-growing collection of documents in their indexed form. Every time you add, update, or delete a collection of documents, Solr adds a new "segment" (a bunch of files) to its data directory. Eventually the number of segments grows big. There is a mechanism (segment merging) to counteract that, but that's not the point here.
In Solr, all search queries are handled by a searcher object. A searcher is built on top of the collection of segments the index is composed of. What a commit means in this context is simply: "please, Solr, build a new searcher that includes the fresh new segments, and atomically replace the current searcher with it".
Don't step on my toes.
Optimization #5: Avoid at all costs committing to Solr concurrently, as you would just keep building new searchers only to throw them away a second later. In fact, concurrently building searchers is so bad that there is an explicit setting in Solr's configuration to hard-cap this number; the default is 2. So if you commit concurrently, you will most likely get nice exception stack traces complaining about too many open searchers.
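For reference, that cap lives in solrconfig.xml; a minimal fragment (the value shown is the default, and exact placement may vary between Solr versions):

```xml
<!-- solrconfig.xml: cap on searchers that may be warming concurrently -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```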
Optimization #6: Monitor the time it takes to build a new searcher. Optimising Solr's responsiveness to new/updated documents (hipsters call that "real time") boils down to minimising the time it takes to build a new searcher object. Here is a tip: monitor your Solr logs, grepping for "event=newSearcher", and look at the QTime (query time) of those lines. Your goal is to make this time as short as reasonably possible (we will see later why "reasonable" is important here), as the faster it is to build a new searcher, the more often you can do it, and the more responsive your search becomes to inserts, updates, and deletes.
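A small sketch of that monitoring, assuming log lines roughly in the shape below (the sample lines are made up, and the real format varies with your Solr version and logging configuration):

```python
import re

# Hypothetical log lines; real formats vary by Solr version and log config.
log_lines = [
    "INFO: [core0] webapp=/solr path=/select "
    "params={q=*:*&event=newSearcher} hits=0 status=0 QTime=4520",
    "INFO: [core0] webapp=/solr path=/select "
    "params={q=cats} hits=12 status=0 QTime=3",
]

def new_searcher_qtimes(lines):
    """Extract QTime (ms) from warming-query lines mentioning event=newSearcher."""
    times = []
    for line in lines:
        if "event=newSearcher" in line:
            m = re.search(r"QTime=(\d+)", line)
            if m:
                times.append(int(m.group(1)))
    return times

print(new_searcher_qtimes(log_lines))
```

Feeding these numbers into whatever graphing or alerting you already have gives you a trend line for searcher build time.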
There are two main strategies for issuing commits in Solr. The first one, and probably the one you should look at first, is to let Solr do it at regular intervals. It's called autocommit, and it's great because it relieves your application from managing commits. In fact, if you use autocommit, it becomes a very bad idea to let your application issue commits itself. Remember the cap on overlapping searchers: it applies to autocommit searchers too, so make your autocommit interval longer than your searcher-building time. One thing against auto-committing at regular intervals is that when there are no updates to your index, regularly building new searchers is just a waste of CPU. That points us to the second strategy:
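A hedged sketch of such an autocommit block in solrconfig.xml (the five-minute interval is an arbitrary example, and available options differ between Solr versions):

```xml
<!-- solrconfig.xml: commit, and build a new searcher, at most every 5 minutes.
     Keep this interval comfortably longer than your searcher build time. -->
<autoCommit>
  <maxTime>300000</maxTime> <!-- milliseconds -->
</autoCommit>
```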
Optimization #7: Let your application issue commits as needed, when needed. Just keep in mind that concurrent commits are a bad idea, and implement a global locking mechanism. Then you should be just fine.
Blowing hot and cold.
Now you might think: "oh well, how slow can it be to build a new searcher with just one more segment? Surely Solr is written well enough for this to be very fast". You are right. It is very fast.
The only problem is that the first few queries on this new searcher will be very slow, and this is bad. In a high-volume search context, a few sluggish queries are all it takes to potentially bring your product to its knees, as resource starvation will kick in across your application layers. The reason behind this initial slowdown is that a fresh searcher's caches are not populated with anything useful. In Solr terms, this is called a "cold searcher". Solr allows you to use cold searchers, but fortunately only when no other searcher is registered, which means it happens only on freshly started Solr instances. For all other cases, Solr provides mechanisms to warm searchers up so they are nice and hot when they are promoted to a request-serving role.
Optimization #8: There are two sets of settings influencing the warming-up of a new searcher, and you should use a combination of both.
- One is to set Solr to issue queries against the warming searcher. One idea for these queries is to sample a few typical queries from your live application and make them a bit more general, by removing a filter for instance. An important thing to do is to include most of the facets you will be using in your application. You can also issue a few keyword queries, as this will load the full-text indices into memory if there is enough space.
- Another way to warm new searchers is to set up autowarming on caches. Cache autowarming is simply a way to reuse values from the old caches to pre-populate the warming searcher's caches.
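Both mechanisms are configured in solrconfig.xml. A sketch (the facet field, query, and cache sizes are hypothetical examples, not recommendations):

```xml
<!-- solrconfig.xml: warming queries fired against every new searcher -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="facet">true</str>
      <str name="facet.field">author</str> <!-- hypothetical facet field -->
    </lst>
  </arr>
</listener>

<!-- Inside the <query> section: carry over the most-used filter entries
     from the old searcher's cache into the warming one. -->
<filterCache class="solr.FastLRUCache"
             size="512" initialSize="512" autowarmCount="128"/>
```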
The key to warming searchers is to find the right balance between the time it takes to build a new searcher (remember, it can be almost instant, but dangerous) and the amount of slowdown you can afford when your application hits a freshly registered searcher. Finding this sweet spot requires experimenting, as it all depends on what your application layer requires and is capable of stomaching.
With a deep enough knowledge of a search product, and some fun experimenting, it's perfectly possible to squeeze a lot of performance out of cheap hardware, avoiding the costs involved in relying only on SaaS. Also, knowing the inner workings of a system is a great way to make the right decisions about the settings and usage strategies to apply when you do move to a SaaS platform. SaaS is a great way to avoid all the headaches associated with scaling and replication, but don't ignore these services' internals altogether, or you will be exposing yourself to underperformance and overspending.
About the author:
Jerome Eteve is a full-stack senior web application developer based in London and an integration architect specialising in search, big data, and complex business processes. Over his career he has reviewed seminal books about Solr and implemented custom search solutions in a variety of high-volume products.