
How to Clone Wikipedia and Index it with Solr


It took six weeks, but Fred Zimmerman, a blogger at Nimblebooks.com, just completed a very cool use case for Solr indexing. He cloned all of Wikipedia and then indexed it with Solr:

1.  "Hardware. I found out the hard way that 32-bit Ubuntu machines with 613 MB RAM (Amazon’s ECS “micro” instances) were not big enough—they created time out errors that disappeared when I upgraded to 1.7GB / single cores. You will also need at least 200 GB disk space, 300 is a safe figure."

2.  "Software.  You will need MediaWiki 1.17 or greater, several extensions (listed in this good page by Metachronistic), either mwimport or http://www.mediawiki.org/wiki/Manual:MWDumper, mySQL, and Apache Solr 3.4. Install the necessary MediaWiki extensions now, it will make it easier later on verify that your database import was successful."

3. "Data.  Get the latest Wikipedia dump from http://en.wikipedia.org/wiki/Wikipedia:Database_download#English-language_Wikipedia.  You probably want the pages-articles file which is ~ 8 GB compressed and ~ 33 GB uncompressed."

4....    --Nimblebooks.com
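The original post stops at the quoted steps, but a short sketch may help make the data step concrete. The snippet below is mine, not Zimmerman's: it stream-parses the decompressed pages-articles dump with StAX just to walk the pages and report progress. The <title> element name assumes the standard MediaWiki export format, and streaming is the only realistic option at ~33 GB (a DOM parser would try to hold the whole dump in memory).

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;

public class DumpTitles {
    public static void main(String[] args) throws Exception {
        // Stream-parse the (already decompressed) pages-articles XML dump;
        // at ~33 GB it is far too large to load with a DOM parser.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream(args[0]));

        long pages = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "title".equals(reader.getLocalName())) {
                String title = reader.getElementText();  // page title
                if (++pages % 100000 == 0) {
                    System.out.println(pages + " pages, last title: " + title);
                }
            }
        }
        reader.close();
    }
}
```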
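And to sketch the indexing side: SolrJ is the standard Java client for Solr, and CommonsHttpSolrServer is its HTTP client class in the 3.x line that the post targets. The field names ("id", "title", "text") and the core URL below are assumptions; they would have to match whatever schema.xml and Solr 3.4 setup you configure, since the original post does not prescribe a schema.

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class WikiSolrIndexer {
    public static void main(String[] args) throws Exception {
        // CommonsHttpSolrServer is the SolrJ 3.x HTTP client;
        // the URL assumes a default single-core Solr 3.4 install.
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Field names ("id", "title", "text") are placeholders -- they must
        // match whatever schema.xml you define for the Wikipedia index.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "12");                    // MediaWiki page id
        doc.addField("title", "Anarchism");          // page title
        doc.addField("text", "Anarchism is a ...");  // article wikitext
        solr.add(doc);

        solr.commit();  // make the new document searchable
    }
}
```

For a multi-million-document import like this, batching the add() calls and committing periodically, rather than once per document, is the usual way to keep the load from crawling.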


I think this is a great real-world tutorial that could help any developer get familiar with Solr, the open source search server, or simply hone their skills. A good read.

Source: http://www.nimblebooks.com/wordpress/2011/10/how-to-clone-wikipedia-and-index-it-with-solr/


