The big data ecosystem and data science
This article is excerpted from Introducing Data Science. Save 39% on Introducing Data Science with code 15dzamia at manning.com.
Currently there are many big data tools and frameworks, and it is easy to get lost because new technologies appear rapidly. It becomes much easier once you realize that the big data ecosystem can be grouped into technologies that have similar goals and functionality. Data scientists use many different technologies, but not all of them. The mind map in figure 1 shows the components of the big data ecosystem and where the different technologies belong.
Figure 1: Big data technologies can be classified into a few main components.
Let’s take a look at the different groups of tools in this diagram and see what each does. We’ll start with distributed file systems, upon which everything else is built.
Distributed File Systems
A distributed file system is similar to a normal file system, except that it runs on multiple servers at once. Because it is a file system, you can do almost all the same things you would do on a normal file system. Actions such as storing, reading, and deleting files and adding security to files are at the core of every file system, and a distributed file system is no exception. Distributed file systems have some significant advantages:
They can contain files larger than any one computer disk.
Files get automatically replicated across multiple servers for redundancy or parallel operations, while the complexity of doing so is hidden from the user.
The system scales easily: you are no longer bound by the memory or storage restrictions of a single server.
In the past, scale was increased by moving everything to a server with more memory and storage and a better CPU. Nowadays you can simply add another small server; this principle makes the scaling potential virtually unlimited.
The best-known distributed file system at this moment is the Hadoop File System (HDFS). It is an open-source implementation of the Google File System. However, many other distributed file systems exist: Red Hat Cluster File System, Ceph File System, and Tachyon File System, to name but three.
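The replication behavior described above can be sketched in a few lines of Python. This is only a toy model, not HDFS itself: each "server" is an in-memory dict and replica placement is a simple hash, but it shows how a file survives the loss of a server.

```python
# Toy model of a distributed file system's replication behavior.
# Real systems such as HDFS do this transparently; this sketch only
# illustrates the idea of storing each file on several servers at once.

REPLICATION_FACTOR = 3  # HDFS's default replication factor is also 3


class ToyDFS:
    def __init__(self, n_servers):
        # each "server" is just an in-memory dict of filename -> bytes
        self.servers = [dict() for _ in range(n_servers)]

    def put(self, name, data):
        # place copies on REPLICATION_FACTOR distinct servers,
        # chosen here by a simple hash for determinism
        start = hash(name) % len(self.servers)
        for i in range(REPLICATION_FACTOR):
            self.servers[(start + i) % len(self.servers)][name] = data

    def get(self, name):
        # any surviving replica can answer the read
        for server in self.servers:
            if name in server:
                return server[name]
        raise FileNotFoundError(name)


dfs = ToyDFS(n_servers=5)
dfs.put("logs/2015-01-01.txt", b"some log data")
dfs.servers[hash("logs/2015-01-01.txt") % 5].clear()  # simulate a server failure
print(dfs.get("logs/2015-01-01.txt"))  # replicas keep the file readable
```

The point of the hidden complexity mentioned above is exactly this: a user of HDFS never writes replica-placement or failover logic; the file system does both behind the scenes.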
Distributed Programming Framework
Once you have the data stored on a distributed file system, you want to exploit it. An important aspect of working on a distributed hard disk is that you don't move your data to your program; rather, you move your program to the data. When you start from scratch with a normal general-purpose programming language such as C, Python, or Java, you need to deal with the complexities that come with distributed programming, such as restarting jobs that have failed and tracking the results from the different subprocesses. Luckily, the open-source community has developed many frameworks to handle this for you, giving you a much better experience working with distributed data and dealing with many of the challenges it carries.
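The programming model these frameworks are built around can be simulated on a single machine. The two functions below are illustrative stand-ins for the map and reduce phases that a framework such as Hadoop MapReduce would distribute, restart, and shuffle for you:

```python
# Minimal word count in the map-reduce style, simulated on one machine.
# A real framework runs the same two phases across many servers and
# handles failures and data shuffling for you.
from collections import defaultdict


def map_phase(document):
    # emit (word, 1) pairs — this function is what gets shipped
    # to where the data lives
    for word in document.split():
        yield word, 1


def reduce_phase(pairs):
    # sum the counts per word, regardless of which mapper emitted them
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)


docs = ["big data big tools", "data tools"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, 'tools': 2}
```

Because each map call touches only one document, the mappers can run on whichever servers hold the data, which is the "move your program to the data" principle in practice.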
Data Integration Framework
Once you have a distributed file system in place, you need to add some data. This means you need to move data from one source to another, and this is where data integration frameworks such as Apache Sqoop and Apache Flume excel. The process is similar to an extract, transform, and load (ETL) process in a traditional data warehouse.
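A data integration step boils down to extract, transform, load. The sketch below imitates that flow with plain Python lists standing in for the source database and the target store; the field names are made up for illustration:

```python
# A miniature extract-transform-load step of the kind tools like
# Sqoop and Flume automate at scale. Source and target here are plain
# Python lists standing in for a relational table and a distributed
# file system; the fields are hypothetical.

source_rows = [  # "extract": rows pulled from the source system
    {"customer": "alice", "amount": "120.50"},
    {"customer": "bob", "amount": "80.00"},
]


def transform(row):
    # "transform": cast strings from the source into proper numeric types
    return {"customer": row["customer"], "amount": float(row["amount"])}


# "load": write the cleaned rows into the target store
target = [transform(row) for row in source_rows]
print(sum(row["amount"] for row in target))  # 200.5
```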
Machine Learning Frameworks
When you have the data in place, it's time to extract the coveted insights. This is where you rely on the fields of machine learning, statistics, and applied mathematics. Before World War II everything needed to be calculated by hand, which severely limited the possibilities of data analysis. After World War II, computers and scientific computing were developed. A single computer could do all the counting and calculations, and a world of opportunities opened. Ever since this breakthrough, people only need to derive the mathematical formulas, write them in an algorithm, and load their data.
With the enormous amount of data available nowadays, one computer can no longer handle the workload by itself. In fact, some of the algorithms developed in the previous millennium would never terminate before the end of the universe, even if you could use every computer available on Earth.
One of the biggest issues with the old algorithms is that they do not scale well. With the amount of data we need to analyze today, this becomes problematic, and specialized frameworks and libraries are required to deal with it. The most popular machine learning library for Python is scikit-learn; it's a great machine learning toolbox, and we will use it later in the book. There are, of course, other Python libraries:
PyBrain for neural networks. Neural networks are learning algorithms that mimic the human brain in learning mechanics and complexity. Neural networks are often regarded as advanced, black-box models.
NLTK, or Natural Language Toolkit. As the name suggests, its focus is working with natural language. It's an extensive library that comes bundled with a number of text corpora to help you model your own data.
Pylearn2. Another machine learning toolbox, but a bit less mature than scikit-learn.
The landscape doesn't end with Python libraries, of course. Spark is a new Apache-licensed machine learning engine, specialized in real-time machine learning. It's worth taking a look at, and you can read more about it at http://spark.apache.org/.
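As a small taste of scikit-learn, here is a minimal fit-and-score example. The dataset (iris) and model (k-nearest neighbors) are arbitrary choices for illustration; any classifier in the library follows the same fit/score interface.

```python
# A minimal scikit-learn example: fit a classifier on a training split
# and score it on held-out data. Dataset and model are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)             # learn from the training split
accuracy = model.score(X_test, y_test)  # evaluate on unseen data
print(round(accuracy, 2))
```

The uniform estimator interface (fit, predict, score) is a large part of why scikit-learn became the go-to toolbox mentioned above.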
NoSQL Databases

If you need to store huge amounts of data, you require software that is specialized in managing and querying this data. Traditionally this has been the playing field of relational databases such as Oracle SQL, MySQL, Sybase IQ, and others. While they are still the go-to technology for many use cases, new types of databases have emerged under the grouping of NoSQL databases.
The name of this group can be misleading, as the “No” in this context stands for “not only.” A lack of functionality in SQL is not the biggest reason for the paradigm shift, and many NoSQL databases have implemented a version of SQL themselves. But traditional databases had shortcomings that did not allow them to scale well. By solving some of the problems of traditional databases, NoSQL databases allow for virtually endless growth of data.
Many different types of databases have arisen, but they can be categorized into the following types:
Column databases—Data is stored in columns, which allows some algorithms to perform much faster queries. Newer technologies use cell-wise storage. Table-like structures are still very important.
Document stores—Document stores no longer use tables but store every observation in a document. This allows for a much more flexible data scheme.
Streaming data—Data is collected, transformed, and aggregated not in batches but in real time. Although we have categorized it here as a database to help you in tool selection, it is more a particular type of problem that drove the creation of technologies such as Storm.
Key-value stores—Data is not stored in a table; rather, you assign a key for every value, such as org.marketing.sales.2015: 20000. This scales very well but places almost all the implementation burden on the developer.
SQL on Hadoop—Batch queries on Hadoop are written in a SQL-like language that uses the map-reduce framework in the background.
New SQL—This class combines the scalability of NoSQL databases with the advantages of relational databases. These databases all have a SQL interface and a relational data model.
Graph databases—Not every problem is best stored in a table. Some problems are more naturally translated into graph theory and stored in graph databases. A classic example of this is a social network.
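To make the key-value idea from the list above concrete: a plain Python dict already models the interface. Note how any "query" has to be assembled by the developer from key naming conventions, which is the implementation burden the list mentions.

```python
# Key-value storage in miniature: a plain dict models the interface.
# The schema lives entirely in the key naming convention, so most of
# the implementation burden falls on the developer.
store = {}
store["org.marketing.sales.2015"] = 20000
store["org.marketing.sales.2014"] = 17500

# the store itself cannot answer "all sales figures" — the developer
# must encode that query as a scan over the key convention
total = sum(v for k, v in store.items()
            if k.startswith("org.marketing.sales."))
print(total)  # 37500
```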
Scheduling Tools

Scheduling tools help you automate repetitive tasks and trigger jobs based on events, such as adding a new file to a folder. They are similar to tools like cron on Linux but are specifically developed for big data. You can use them, for instance, to start a map-reduce task whenever a new dataset is available in a directory.
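The "trigger a job when a new file appears" pattern can be sketched as a simple polling loop. This is a toy stand-in for a real scheduler, which would add dependency chains, retries, calendars, and cluster integration:

```python
# A toy event trigger in the spirit of big data scheduling tools:
# poll a directory and fire a job once for every new file.
import os
import tempfile


def check_for_new_files(directory, seen, job):
    # run `job` for each file we have not processed before
    for name in sorted(os.listdir(directory)):
        if name not in seen:
            seen.add(name)
            job(name)


processed = []
seen = set()
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "dataset1.csv"), "w").close()
    check_for_new_files(d, seen, processed.append)  # triggers the job
    check_for_new_files(d, seen, processed.append)  # re-poll: no duplicate run
print(processed)  # ['dataset1.csv']
```

In a real deployment, `job` would kick off a map-reduce task rather than append to a list.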
Benchmarking Tools

This class of tools was developed to optimize your big data installation by providing standardized profiling suites. A profiling suite is taken from a representative set of big data jobs. Benchmarking and optimizing the big data infrastructure and configuration are often jobs not for data scientists themselves but for professionals who specialize in setting up IT infrastructure. Using an optimized infrastructure can make a big cost difference. For example, if you can gain 10% on a cluster of 100 servers, you save the cost of 10 servers.
System Deployment

Setting up a big data infrastructure is not an easy task, and assisting engineers in deploying new applications into the big data cluster is where system deployment tools shine. They largely automate the installation and configuration of big data components. This is not a core task of a data scientist.
Service Programming

Suppose you have made a world-class soccer prediction application on Hadoop, and you want to allow others to use the predictions made by your application. However, you have no idea of the architecture or technology of everyone keen on using your predictions. Service tools excel here by exposing big data applications to other applications as a service. Data scientists sometimes need to expose their models through services. The best-known example is the REST service, where REST stands for Representational State Transfer. It is often used to feed websites with data.
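Here is a minimal sketch of exposing a (hypothetical) prediction model as a REST-style service, using only the standard library's wsgiref. The `predict` function and its query parameters are made up for illustration; any real deployment would sit behind a proper web server.

```python
# Expose a stand-in prediction model as a tiny JSON-over-HTTP service.
# The WSGI app is exercised directly here, without opening a network port.
import json
from urllib.parse import parse_qs
from wsgiref.util import setup_testing_defaults


def predict(home_team, away_team):
    # hypothetical stand-in for the real model's output
    return {"home": home_team, "away": away_team, "home_win_prob": 0.5}


def app(environ, start_response):
    # read query parameters such as ?home=ajax&away=psv
    params = parse_qs(environ.get("QUERY_STRING", ""))
    body = json.dumps(predict(params["home"][0], params["away"][0]))
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode("utf-8")]


# call the app in-process with a synthetic request
environ = {}
setup_testing_defaults(environ)
environ["QUERY_STRING"] = "home=ajax&away=psv"
statuses = []
result = app(environ, lambda status, headers: statuses.append(status))
print(statuses[0], result[0].decode())
```

Because the consumer only sees HTTP and JSON, it needs to know nothing about Hadoop or the model behind the endpoint, which is exactly the decoupling service tools provide.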
Security

Do you want everybody to have access to all of your data? If not, you probably need fine-grained control over access to data, but you don't want to manage this on an application-by-application basis. Big data security tools allow you to have central and fine-grained control over access to the data. Big data security has become a topic in its own right, and data scientists are usually only confronted with it as data consumers; seldom will they implement the security themselves.