
This Week in Hadoop: NiFi, Sparkling Water, Ambari, and Spark


This week's round-up of interesting big data technologies, from Spark to NiFi, with some microservices thrown in for modern data application development.


H2O has released Sparkling Water 2.0, and I found a few very cool articles on their blog: Spam Detection with ML Pipelines and H2O TensorFlow on AWS GPUs!

Cool Spark Article on Clickbait Clustering with Spark (GitHub)

Incremental Fetch in Apache NiFi with QueryDatabaseTable

Awesome Article on Real Architectural Patterns for Microservices by Camille Fournier. Camille is one of the most brilliant people I have had the pleasure of speaking with. This is a must-read.

Combining Agile and Spark: there's the interesting BDD-Spark library (GitHub).

Hortonworks has a number of interesting demos, labs, and training materials from their Introduction to Hadoop workshop.

Cool Charting

Check out this article on Data Visualization with D3, DC, Leaflet, and Python.
(For more information, see the DC.js site.)

Spring Boot Applications in Ambari

How to Bundle a Spring Boot Application as an Ambari Service (Github)

Scala / SBT Tip

My SBT build wasn't completing until I increased the memory. Now this is in a shell script for all my builds:

# JVM options for the sbt launcher: bigger heap, CMS GC, larger thread stacks
export SBT_OPTS="-Xmx2G -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=2G -Xss2M -Duser.timezone=GMT"
# Override the heap again at invocation time for the fat-jar build
sbt -J-Xmx4G -J-Xms4G assembly
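
If you would rather not export SBT_OPTS in every shell, the sbt launcher script also reads an .sbtopts file from the project root. A sketch, with flags that simply mirror the export above (-J passes the option through to the JVM):

-J-Xmx4G
-J-Xms4G
-J-Xss2M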

Here is an example SBT build file (build.sbt) for Spark SQL with Stanford CoreNLP:

name := "Sentiment"

version := "1.0"

scalaVersion := "2.10.6"

assemblyJarName in assembly := "sentiment.jar"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.6.0" % "provided"
libraryDependencies += "edu.stanford.nlp" % "stanford-corenlp" % "3.5.1"
libraryDependencies += "edu.stanford.nlp" % "stanford-corenlp" % "3.5.1" classifier "models"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

You will also need project/assembly.sbt:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")
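
For reference, the standard sbt layout these two files assume looks like this (source paths are illustrative):

Sentiment/
├── build.sbt
├── project/
│   └── assembly.sbt
└── src/
    └── main/
        └── scala/
            └── ... (your Spark job sources)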

General Hadoop CLI Tips

  1. Keep an eye on logs! Check them, make sure they rotate, and make sure old ones are archived off to cold storage or deleted. Find the biggest files on your box: du -hsx * | sort -rh | head -10.
  2. If you want to see things you have run before, check out:
     /<user>/.beeline/history, /<user>/.hivehistory, /<user>/.sqlline/history, /<user>/.pig_history, /<user>/.spark_history.

     You can also run history to check on general shell commands you have run (remember this will return the commands of whichever user you are currently logged in as, which may be root).
  3. What Java am I using, and are there others available? alternatives --display java
  4. Sometimes your PATH may not be fully set, so you can miss out on great Java CLI tools like jps.
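
The disk-usage tip above can be tried safely in a throwaway directory. The pipeline below is the same du | sort | head pattern, using -n instead of -h so the sort is numeric; the file names are made up for the demo:

```shell
# Create a scratch directory with one big and one small file
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big.log" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$tmp/small.log" bs=1024 count=1 2>/dev/null

# Largest entries first -- on a real box, run this from / or /var/log
(cd "$tmp" && du -sx -- * | sort -rn | head -10)
```

On a production node you would run the du line as-is from the suspect directory; the human-readable -h variant from the tip is nicer to read but needs sort -rh rather than sort -rn.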


Topics:
hortonworks, hadoop, nifi, spark, h2o, machine learning

Opinions expressed by DZone contributors are their own.
