
Resource List: Machine Learning, ODPi, Deduping With Scala, OCR and More...


A brief article with great resources for Hadoop, Spark, and Java gurus.

· Big Data Zone ·


ODPi for Hadoop Standards: The ODPi, together with the ASF, aims to consolidate Hadoop and its many versions. There are too many custom distributions, each with its own versions of the 20 or so tools that make up the Apache big data stack. Being able to move between HDP, CDH, IBM, Pivotal, and MapR seamlessly would be awesome. For now, HDP, Pivotal, and IBM are part of the ODPi.

Structured Data: Connecting a modern relational database to Hadoop is always an architectural challenge that requires decisions. EnterpriseDB (PostgreSQL) has an interesting article on that: it lets you read HDFS/Hive tables from EDB with SQL. (GitHub)

Semistructured Data: Using Apache NiFi with Tesseract for OCR: HP and Google have been fine-tuning Tesseract for a while to handle OCR. Using dataflow technology originally developed at the NSA, you can automate OCR tasks on a Mac. Pretty cool. On my machine, I needed to install a few things first, starting with Tesseract itself (on macOS, Homebrew's tesseract package works).

Atlas + Ranger for Tag-Based Policies in Hadoop: Use these new but polished Apache projects to manage everything around security policies in the Hadoop ecosystem. Add to that a cool example with Apache Solr.

Anyone who hasn’t tried Pig yet might want to check out this cool tutorial: Using PIG for NY Exchange Data. Pig runs on Tez and Spark, so it’s a tool data analysts should embrace.

It’s hard to think of Modern Big Data Applications without thinking of Scala.   A number of interesting resources have come out after Scala Days NYC:

Java 8 is still in the race for developing modern data applications, with a number of projects around Spring and Cloud Foundry, including Spring Cloud Stream, which lets you connect microservices with Kafka or RabbitMQ and run them on Apache YARN. Also, see this article.

For those of you lucky enough to have a Community Account on Databricks Cloud, you can check out the new features of Spark 2.0 on display in that platform before release.

An interesting topic for me is fuzzy matching. I’ve seen a few good videos and GitHub pages on it:
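To make the idea concrete before the links: most fuzzy matchers boil down to an edit-distance score. Here is a minimal sketch in plain Scala (no Spark; the object and method names are my own, not from any of the linked projects):

```scala
object Fuzzy {
  // Classic dynamic-programming Levenshtein edit distance.
  def levenshtein(a: String, b: String): Int = {
    // First row/column seeded with i and j; the loop fills the rest.
    val dist = Array.tabulate(a.length + 1, b.length + 1) { (i, j) =>
      if (i == 0) j else if (j == 0) i else 0
    }
    for (i <- 1 to a.length; j <- 1 to b.length) {
      val cost = if (a(i - 1) == b(j - 1)) 0 else 1
      dist(i)(j) = math.min(
        math.min(dist(i - 1)(j) + 1, dist(i)(j - 1) + 1), // delete / insert
        dist(i - 1)(j - 1) + cost)                        // substitute
    }
    dist(a.length)(b.length)
  }

  // Normalized similarity in [0, 1]; 1.0 means the strings are identical.
  def similarity(a: String, b: String): Double = {
    val maxLen = math.max(a.length, b.length)
    if (maxLen == 0) 1.0 else 1.0 - levenshtein(a, b).toDouble / maxLen
  }
}
```

With a threshold like similarity(a, b) >= 0.8 you get a crude but useful "these two records probably mean the same thing" test.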

Am I the only person trying to remove duplicates from data? CSV Data? People?    
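For the CSV/people case, a toy dedupe pass in plain Scala might look like the following. The normalization rules (lowercase, strip punctuation, squeeze whitespace) and all names here are illustrative assumptions of mine, not taken from any of the linked projects:

```scala
object Dedupe {
  // Canonicalize a record so trivial variants collapse to the same key.
  def normalize(s: String): String =
    s.toLowerCase
      .replaceAll("[^a-z0-9 ]", "") // drop punctuation
      .replaceAll("\\s+", " ")      // squeeze runs of whitespace
      .trim

  // Keep the first occurrence of each normalized key, preserving order.
  def dedupe(rows: Seq[String]): Seq[String] =
    rows.foldLeft((Vector.empty[String], Set.empty[String])) {
      case ((kept, seen), row) =>
        val key = normalize(row)
        if (seen(key)) (kept, seen) else (kept :+ row, seen + key)
    }._1
}
```

On real people data you would swap the exact-key check for a fuzzy comparison, but the shape of the pass stays the same.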

I have also been looking for some good resources on NLP (natural language processing). There are some interesting text problems I am looking at.

  • Machine Learning with NLP on Craigslist Data using Sparkling Water (H2O + Spark)
  • Word2Vec with DeepLearning4J
  • Word2Vec with Spark
  • Classifying Documents using Naive Bayes on Spark MLlib
  • Simple NLP Search in Scala
  • ScalaNLP
  • Simple NLP Search DataSet Creator
  • NLP on Amazon Reviews
  • Date / Time NLP Parser – intelligent parsing of dates, times and temporal concepts
  • GloVe: Global Vectors for Word Representation – Draft for GloVe on Spark (Github)
  • Spark Word2Vec Example 
  • OpenNLP
  • Stanford CoreNLP (Java Library)
  • Analyzing Text using Stanford CoreNLP 
  • Sentiment Analysis with Stanford CoreNLP
  • TweetNLP
  • ArkTweetNLP
  • Spark Wrapper of CoreNLP
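To give a flavor of what the Naive Bayes link above covers, here is a toy multinomial Naive Bayes with add-one smoothing in plain Scala. This is my own sketch of the algorithm, not MLlib's API or any linked code:

```scala
object NaiveBayesText {
  def tokenize(doc: String): Seq[String] =
    doc.toLowerCase.split("[^a-z0-9]+").filter(_.nonEmpty).toSeq

  // labeled: (label, document text) pairs. Returns a classifier function.
  def train(labeled: Seq[(String, String)]): String => String = {
    val byLabel = labeled.groupBy(_._1)
    val vocab   = labeled.flatMap { case (_, d) => tokenize(d) }.toSet
    // Log prior: fraction of training docs carrying each label.
    val priors  = byLabel.map { case (l, ds) =>
      l -> math.log(ds.size.toDouble / labeled.size)
    }
    // Per-label word frequencies and total token count.
    val counts  = byLabel.map { case (l, ds) =>
      val toks = ds.flatMap { case (_, d) => tokenize(d) }
      l -> (toks.groupBy(identity).map { case (w, ws) => w -> ws.size }, toks.size)
    }
    doc => {
      val words = tokenize(doc)
      counts.keys.maxBy { l =>
        val (freq, total) = counts(l)
        // Add-one (Laplace) smoothing so unseen words don't zero out a class.
        priors(l) + words.map { w =>
          math.log((freq.getOrElse(w, 0) + 1).toDouble / (total + vocab.size))
        }.sum
      }
    }
  }
}
```

MLlib does the same thing distributed over term-frequency vectors; this version just makes the smoothing and log-probability math visible.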
