Using Python for Big Data Workloads (Part 1)
Get several resources on using Python for Big Data workloads, and learn about various programming SDKs, APIs, and libraries.
In Part 2, we will look at Python for Spark (PySpark), Machine Learning, and deep learning in depth. In this first part, we'll go over the basics, some examples, and some tutorials to get you started.
Get the latest Python for your environment: Linux, OS X, and even Windows are supported. There's an ongoing debate about whether to finally move to Python 3.x; try it and see if it works with all of your tools. Since my Hadoop installation has Python 2.7, I am going to use that for my work.
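If you are unsure whether a script will run on both interpreters, a version check at startup is a cheap guard. A minimal sketch; the StringIO import is just one example of a module that moved between versions:

import sys

# Report which interpreter picked up the job
print(sys.version_info)

# Guard an import that moved between Python 2.7 and 3.x
if sys.version_info[0] >= 3:
    from io import StringIO       # Python 3: text streams live in io
else:
    from StringIO import StringIO  # Python 2.7 location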
Python is great. I can use it for Machine Learning, websites, and deep learning, call it from NiFi, and stitch together a lot of jobs with it. Using Apache Zeppelin, I can run Python and PySpark without installing the interpreter and tons of modules on my developer workstation.
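For example, a Zeppelin paragraph runs Python on the server where Zeppelin lives, so modules only need to exist there. A minimal sketch, assuming the %python interpreter is enabled in your Zeppelin instance:

%python
# This executes on the Zeppelin server's Python, not on your workstation,
# so any modules installed there are available without local setup.
import platform
print(platform.python_version())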
Python Resources
Here is a simple example that queries Apache Hive from Python using the PyHive library:
# Connect to HiveServer2 with PyHive and run a simple query
from pyhive import hive

cursor = hive.connect('myhiveserverisawesome.tim.com').cursor()
cursor.execute('SELECT * FROM amwatertweetshive LIMIT 10')
print(cursor.fetchone())
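To walk the full result set instead of a single row, the same DB-API cursor exposes fetchall(); a brief follow-up sketch:

# Iterate over every row returned by the LIMIT 10 query
for row in cursor.fetchall():
    print(row)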
These tutorials are a good next step toward the Spark and Machine Learning material coming in Part 2:

Introduction to Machine Learning With Apache Spark and Zeppelin
Enabling Apache Zeppelin and Spark for Data Science in the Enterprise