Crawling the Web with Cassandra and Nutch
So, you want to harvest a massive amount of data from the internet?
What better storage mechanism than Cassandra? This is easy to do with Nutch.
Often people use HBase behind Nutch. This works, but it may not be an ideal solution if you are (or want to be) a Cassandra shop. Fortunately, Nutch 2+ uses the Gora abstraction layer to access its data storage mechanism, and Gora supports Cassandra. Thus, with a few tweaks to the configuration, you can use Nutch to harvest content directly into Cassandra.
We'll start with Nutch 2.1 ... I like to go directly from source:
$ git clone https://github.com/apache/nutch.git -b 2.1
...
$ ant
After the build, you will have a nutch/runtime/local directory, which
contains the binaries for execution. Now let's configure Nutch to use Cassandra.
First, we need to add an agent to Nutch by adding the following XML element to nutch/conf/nutch-site.xml:
<property>
  <name>http.agent.name</name>
  <value>My Nutch Spider</value>
</property>
Next we need to tell Nutch to use Gora Cassandra as its persistence mechanism. For that, we add the following element to nutch/conf/nutch-site.xml:
<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.cassandra.store.CassandraStore</value>
  <description>Default class for storing data</description>
</property>
Next, we need to tell Gora about Cassandra. Edit the nutch/conf/gora.properties file. Comment out the SQL entries, and uncomment the following line:
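In the stock Nutch 2.1 gora.properties, the Cassandra section looks roughly like the sketch below. The exact property name and default server value are assumptions based on the gora-cassandra 0.2 release and may differ in other versions; check the comments in your own gora.properties:

```properties
# Cassandra properties (uncomment for gora-cassandra)
gora.cassandrastore.servers=localhost:9160
```

Port 9160 is the Thrift port that Cassandra 1.x listens on by default.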
Additionally, we need to add a dependency for gora-cassandra. Edit the ivy/ivy.xml file and uncomment the following line:
<dependency org="org.apache.gora" name="gora-cassandra" rev="0.2" conf="*->default" />
Finally, we want to re-generate the runtime with the new configuration and the additional dependency. Do this with the following ant command:
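The target name below is an assumption based on the Nutch 2.x build file, which provides a runtime target that rebuilds runtime/local with the current configuration and dependencies:

```shell
$ ant runtime
```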
Now we are ready to run!
Create a directory called "urls" with a file named seed.txt that contains the following line:
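For example, a seed.txt with a single hypothetical starting URL (substitute the site you actually want to crawl):

```text
http://www.example.com/
```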
Next, update the regular expression URL in conf/regex-urlfilter.txt to:
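As an illustration, to restrict the crawl to the hypothetical example.com domain, the accept rule near the bottom of regex-urlfilter.txt might look like the following (the domain is an assumption; replace it with your own):

```text
+^http://([a-z0-9]*\.)*example.com/
```

Any URL that does not match an accept (+) rule is skipped by the fetcher.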
Then run the crawl:

bin/nutch crawl urls -dir crawl -depth 3 -topN 5
That will harvest webpages to Cassandra!
Let's look at the data model for a second ... you will notice that a new keyspace was created: webpage. That keyspace contains three tables: f, p, and sc.
[cqlsh 2.3.0 | Cassandra 1.2.1 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh> describe keyspaces;
system  webpage  druid  system_auth  system_traces
cqlsh> use webpage;
cqlsh:webpage> describe tables;
f  p  sc
Each of these tables is a pure key-value store. To understand what is in each of them, take a look at the nutch/conf/gora-cassandra-mapping.xml file. I've included a snippet below:
<field name="baseUrl" family="f" qualifier="bas"/>
<field name="status" family="f" qualifier="st"/>
<field name="prevFetchTime" family="f" qualifier="pts"/>
<field name="fetchTime" family="f" qualifier="ts"/>
<field name="fetchInterval" family="f" qualifier="fi"/>
<field name="retriesSinceFetch" family="f" qualifier="rsf"/>
From this mapping file you can see what Nutch puts in each table, but unfortunately the schema isn't really conducive to exploration from the CQL prompt (I think there is room for improvement here). It would be nice if there were a CQL-friendly schema in place, but that may be difficult to achieve through Gora. Alas, that is probably the price of abstraction.
So, the easiest thing is to use the nutch tooling to retrieve the data. You can extract data with the following command:
runtime/local/bin/nutch readdb -dump data -content
When that completes, go into the data directory and you will see the output of the Hadoop job that was used to extract the data. We can then use this for analysis.
I really wish Nutch used a better schema for C*. It would be fantastic if that data was immediately usable from within C*. If someone makes that enhancement, please let me know!
Published at DZone with permission of Brian O'Neill, DZone MVB. See the original article here.