This Week in Hadoop: NiFi, Sparkling Water, Ambari, and Spark
This week's round-up of interesting big data technologies from Spark to NiFi with some microservices thrown in for modern data application development.
H2O has released Sparkling Water 2.0, and I found a few very cool articles on their blog: Spam Detection with ML Pipelines and H2O, and TensorFlow on AWS GPU.
A cool article on Clickbait Clustering with Spark (GitHub, GitHub)
Incremental Fetch in Apache NiFi with QueryDatabaseTable
An awesome article on Real Architectural Patterns for Microservices by Camille Fournier. Camille is one of the most brilliant people I have had the pleasure of speaking with; this is a must-read.
On combining Agile and Spark, there's the interesting BDD-Spark library (GitHub).
Hortonworks has a number of interesting demos, labs, and training materials from their introduction-to-Hadoop workshop:
- Risk Analysis with Spark
- Streaming Data into HDFS
- Risk Analysis with Pig
- Data Manipulation with Hive
- Loading Data into HDFS
Cool Charting
Check out this article on data visualization with D3, DC, Leaflet, and Python. (For more on DC.js, see its documentation.)
Spring Boot Applications in Ambari
How to Bundle a Spring Boot Application as an Ambari Service (Github)
Scala / SBT Tip
My SBT build wasn't completing until I upped the memory. These settings now live in a shell script I use for all my builds:
export SBT_OPTS="-Xmx2G -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=2G -Xss2M -Duser.timezone=GMT"
sbt -J-Xmx4G -J-Xms4G assembly
Here is an example SBT build file (build.sbt) for Spark SQL with Stanford CoreNLP:
name := "Sentiment"
version := "1.0"
scalaVersion := "2.10.6"
assemblyJarName in assembly := "sentiment.jar"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.6.0" % "provided"
libraryDependencies += "edu.stanford.nlp" % "stanford-corenlp" % "3.5.1"
libraryDependencies += "edu.stanford.nlp" % "stanford-corenlp" % "3.5.1" classifier "models"
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
You will also need project/assembly.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")
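With both files in place, building and submitting look roughly like this. The --class value and master URL below are placeholders for illustration, not something specified by the build file; use your own application's entry point and cluster settings.

```shell
# Build the fat jar; with the build.sbt above it lands in
# target/scala-2.10/sentiment.jar.
sbt assembly

# Submit to a Spark 1.6 cluster. The main class and master here are
# assumptions; adjust for your application and environment.
spark-submit --class Sentiment --master yarn-client \
  target/scala-2.10/sentiment.jar
```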
General Hadoop CLI Tips
- Keep an eye on logs! Check them, make sure they rotate and old ones are archived off to cold storage or deleted. Find the biggest files on your box (du -hsx * | sort -rh | head -10).
- If you want to see things you have run before, check out the per-tool history files under your home directory:
~/.beeline/history, ~/.hivehistory, ~/.sqlline/history, ~/.pig_history, ~/.spark_history.
You can also run history to check on general shell commands you have run (remember this returns the commands for whatever user you are currently logged in as, which may be root).
- What Java am I using, and are there others available? alternatives --display java
- Sometimes your PATH may not be fully set, so you can miss out on great Java CLI tools like jps.
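A quick way to sweep those history files in one pass — a minimal sketch, assuming they live under the current user's home directory as listed above:

```shell
#!/bin/sh
# show_cli_history: print the last few entries of each Hadoop-ecosystem CLI
# history file under $HOME, silently skipping any that do not exist.
show_cli_history() {
  for f in "$HOME/.beeline/history" "$HOME/.hivehistory" \
           "$HOME/.sqlline/history" "$HOME/.pig_history" \
           "$HOME/.spark_history"; do
    if [ -f "$f" ]; then
      echo "== $f =="
      tail -n 5 "$f"
    fi
  done
}

show_cli_history
```

Run it as the same user you used for Beeline, Hive, Pig, or Spark, since each tool writes history under that user's home directory.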