
How to Clone Wikipedia and Index it with Solr


It took six weeks, but Fred Zimmerman, who blogs at Nimblebooks.com, has just completed a very cool use case for Solr indexing: he cloned all of Wikipedia and then indexed it with Solr. In his words:

1.  "Hardware. I found out the hard way that 32-bit Ubuntu machines with 613 MB RAM (Amazon’s ECS “micro” instances) were not big enough—they created time out errors that disappeared when I upgraded to 1.7GB / single cores. You will also need at least 200 GB disk space, 300 is a safe figure."

2.  "Software.  You will need MediaWiki 1.17 or greater, several extensions (listed in this good page by Metachronistic), either mwimport or http://www.mediawiki.org/wiki/Manual:MWDumper, mySQL, and Apache Solr 3.4. Install the necessary MediaWiki extensions now, it will make it easier later on verify that your database import was successful."

3. "Data.  Get the latest Wikipedia dump from http://en.wikipedia.org/wiki/Wikipedia:Database_download#English-language_Wikipedia.  You probably want the pages-articles file which is ~ 8 GB compressed and ~ 33 GB uncompressed."

4....    --Nimblebooks.com
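To make the steps above concrete, here are a few hedged sketches. First, a quick sanity check against the hardware figures in step 1 (standard Linux commands; the thresholds are Zimmerman's numbers, not hard limits):

    free -m     # total RAM in MB; 613 MB (EC2 micro) timed out for him, ~1.7 GB worked
    df -h .     # free disk space; budget 200-300 GB for the dump plus the database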
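For step 3, the English pages-articles dump can be fetched from Wikimedia's dump server. The "latest" URL below follows the usual naming convention; verify the current path on the download page linked above:

    # ~8 GB compressed; decompressing to ~33 GB is optional, since
    # MWDumper can read the .bz2 file directly
    wget http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2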
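For the import in step 2, MWDumper's documented usage converts the XML dump to SQL and pipes it straight into MySQL. The database name and user below are placeholders; the sql:1.5 output format matches the MediaWiki 1.5+ schema, which includes 1.17:

    # stream the dump into the wiki database; expect this to run for hours
    java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml.bz2 \
      | mysql -u wikiuser -p wikidb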

I think this is a great real-world tutorial that could help any developer get familiar with Solr, the open source search platform, or just sharpen existing skills. A good read.
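As a smoke test once everything is indexed, a plain HTTP query against Solr shows the payoff. This is a minimal sketch assuming a default single-core Solr 3.4 on localhost and a "text" field in the schema; both are assumptions on my part, not details from the article:

    # top 5 matches for "lucene" in the article body, pretty-printed JSON
    curl 'http://localhost:8983/solr/select?q=text:lucene&rows=5&wt=json&indent=true'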

Source: http://www.nimblebooks.com/wordpress/2011/10/how-to-clone-wikipedia-and-index-it-with-solr/


