Scala vs. Python for Apache Spark
When using Apache Spark for cluster computing, you'll need to choose your language. Scala has its advantages, but see why Python is catching up fast.
Apache Spark is a great choice for cluster computing, with language APIs for Scala, Java, Python, and R, plus libraries for SQL, streaming, machine learning, and graph processing. This broad set of functionality leads many developers to build their distributed applications against Apache Spark.
The first big decision is where to run it. For most, that's a no-brainer: run it on Apache Hadoop YARN on your existing cluster. The harder decision for developers and enterprises is which language to develop in. Letting users pick their own language and supporting several at once results in code and tool sprawl, and the R interface is not quite as rich as the others. Given how verbose the Java API is and how uncommonly it is used, most enterprises end up choosing between Python and Scala. I am here to tear apart both options, rebuild them, and see which is left standing.
Scala has a major advantage in that it is the language the Apache Spark platform itself is written in. Scala on the JVM is a very powerful language, cleaner than Java and just as capable, and JVM applications can scale to a massive size. For most applications that would be a big deal, but since Apache Spark already ships with Akka and runs distributed on YARN, it isn't necessary: you merely set a few parameters and your Apache Spark application is distributed for you regardless of your language. So this is no longer an advantage.
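As a sketch of how little is involved, a typical `spark-submit` invocation on YARN might look like the following (the script name `app.py` and the resource numbers are hypothetical; the flags are standard `spark-submit` options):

```shell
# Submit a PySpark application to YARN in cluster mode.
# The --num-executors / --executor-memory / --executor-cores flags
# control how the job is distributed across the cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-memory 4g \
  --executor-cores 2 \
  app.py
```

The same flags apply whether the application is written in Scala, Java, or Python; for a Scala job you would pass a jar and a `--class` entry point instead of a script.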
Python has become a first-class citizen in the Spark world. It is also a very easy language to start with, and many schools are teaching it to children. There is a wealth of example code, books, articles, libraries, documentation, and help available for this language.
PySpark is the default place to be in Spark. With Apache Zeppelin's strong PySpark support, as well as Jupyter and IBM DSX treating Python as a first-class language, you have many notebooks in which to develop code, test it, run queries, build visualizations, and collaborate with others. Python is becoming the lingua franca for data scientists, data engineers, and streaming developers. Python is also well integrated with Apache NiFi.
Python has the advantage of a very rich set of machine learning, processing, NLP, and deep learning libraries. You also don't need to compile your code first or worry about complex JVM packaging. Using Anaconda or pip is pretty standard, and Apache Hadoop and Apache Spark clusters already have Python and its libraries installed for other tools such as Apache Ambari.
Some of the amazing libraries available for Python include NLTK, TensorFlow, Apache MXNet, TextBlob, spaCy, and NumPy.
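To give a flavor of that ecosystem, here is a minimal NumPy sketch (the array values are arbitrary, chosen only for illustration):

```python
import numpy as np

# Vectorized arithmetic over a whole array -- no explicit loops.
values = np.array([1.0, 2.0, 3.0, 4.0])
scaled = values * 10          # element-wise multiply: [10, 20, 30, 40]
total = scaled.sum()          # 100.0
mean = values.mean()          # 2.5
```

This style of whole-array computation is what the deep learning and NLP libraries above build on, and it is available with a single `pip install numpy`.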
Python pros:

- PySpark is listed in all the examples and is no longer an afterthought.
- Most libraries come out with Python APIs first.
- Python is a mature language.
- Python usage continues to grow.
- Deep learning libraries include Python support.
- Included in all the major notebooks.
- Ease of use.

Python cons:

- Sometimes Python 2 and sometimes Python 3.
- Not as fast as Scala (though Cython can close the gap).
- Some libraries are tricky to build.
Scala pros:

- Strong IDEs and unit testing.
- Great serialization formats.
- Reuse of Java libraries.
- Advanced streaming capabilities.

Scala cons:

- Not as widespread a user or knowledge base.
- A little odd for Java people to move to.
- Has to be compiled for Apache Spark jobs.
A great comment from Lightbend's Gerard Maas:
This article misses an important point: With the introduction of high-level abstractions in Spark SQL such as DataFrames/Datasets, the support for Scala and Python is at the same level, both in terms of API (DSL) and performance. These APIs are the currently recommended entry points to program in Spark. In the streaming department, advanced state management is currently supported only in Scala.