Hadoop Hive Meets Excel: Your Flight Is Boarding Now
Learn how to blend data from a modern Hadoop platform with a traditional Excel file to see how flight departures at U.S. airports are impacted by changing weather patterns.
In this blog series, we’ll be experimenting with the most interesting blends of data and tools. Whether it’s mixing traditional sources with modern data lakes, open-source DevOps on the cloud with protected internal legacy tools, SQL with NoSQL, web-wisdom-of-the-crowd with in-house handwritten notes, or IoT sensor data with idle chatting, we’re curious to find out: Will they blend? Want to find out what happens when IBM Watson meets Google News, Hadoop Hive meets Excel, R meets Python, or MS Word meets MongoDB?
Today’s challenge is weather-based — and something we’ve all experienced ourselves while traveling. How are flight departures at U.S. airports impacted by changing weather patterns? What role do weather and temperature fluctuations play in delaying flight patterns?
We’re sure the big data geeks at the big airlines have their own stash of secret, predictive algorithms, but we can also try to figure this out ourselves. To do that, we first need to combine weather information with flight departure data.
On the one hand, we have a whole archive of U.S. flights over the years, something in the order of millions of records, which we have saved on a big data platform, such as Hadoop Hive. On the other, we have daily U.S. weather information in the form of Excel files. So, a Hadoop parallel platform on one side and traditional Excel spreadsheets on the other. Will they blend?
Topic: Exploring correlations between flights delays and weather variables.
Challenge: Blend data from Hadoop Hive and Excel files.
Access mode: Connection to Hive with in-database processing and Excel file reading.
We have data for the years 2007 and 2008 of the “airline data set” already stored on an Apache Hive platform in a cluster on the AWS (Amazon Web Services) cloud. The “airline data set” has been made available and maintained over the years by the U.S. Department of Transportation’s Bureau of Transportation Statistics, and it tracks the on-time performance of U.S. domestic flights operated by large air carriers.
To access the data in the Apache Hive platform, we use the KNIME nodes for in-database processing. That is, we start with a Hive Connector node to connect to the platform; then we use a Database Table Selector node to select the airline data set table and a Database Row Filter node to extract only Chicago O’Hare (ORD) as the origin airport. The Hive Connector node, like all the other big data connector nodes for Impala, MapR, Hortonworks, etc., is part of the KNIME Performance Extensions and requires a license.
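Outside of KNIME, the table selection and row filter performed by these nodes correspond to a simple HiveQL query. A minimal sketch in Python, where the table name `flights` and the column names `origin` and `year` are hypothetical assumptions, not taken from the original workflow:

```python
# Sketch: the HiveQL equivalent of the Database Table Selector and
# Database Row Filter nodes. Table and column names are assumptions.
def ord_departures_query(table="flights", origin="ORD"):
    """Build a HiveQL query selecting only flights departing from ORD."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE origin = '{origin}' AND year IN (2007, 2008)"
    )

print(ord_departures_query())
```

The query string would then be submitted through a Hive connection; in the KNIME workflow, the same logic is assembled visually by the connector and in-database processing nodes.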
The goal here is to assess the dependency of flight departure delays on local weather. Daily weather data for all major U.S. locations are available in the form of Excel files. Only the weather data for ORD airport has to be downloaded to be joined with the flight records extracted from the Apache Hive data set in the previous step.
At this point, we have two options: we can upload the climate data to Apache Hive and perform an in-database join on the big data platform or we can extract the flight records from Apache Hive into KNIME Analytics platform and perform the join operation in KNIME. Both options are viable — the only difference being in the execution time of the joining operation.
Option 1: In-Database Join
First, a Hive Loader node imports the weather data from Excel into Hive. For this operation, you can use any of the protocols supported by the file handling nodes, for example, SSH/SCP or FTP. We chose SSH and used an SSH Connection node.
Next, a Database Joiner node joins the weather data for Chicago O’Hare International Airport with the flight records of the airline data set by date.
Now that we have all of the data in the same table, the Hive to Spark node transfers the data from Hive to Spark. There we run some pre-processing to normalize the data, to remove rows with missing values, and to transform categories into numbers.
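The same preprocessing steps can be sketched in pandas terms (the actual workflow runs them as Spark nodes on the cluster; the column names and values below are made-up assumptions for illustration):

```python
import pandas as pd

# Sketch of the Spark preprocessing: drop rows with missing values,
# min-max normalize the numeric columns, and turn categories into
# integer codes. Column names are illustrative assumptions.
df = pd.DataFrame({
    "dep_delay": [15.0, None, 42.0, 3.0],
    "temp_f": [20.0, 25.0, 28.0, 31.0],
    "weather_code": ["SN", "RA", "SN", "CLR"],
})

df = df.dropna()                        # remove rows with missing values
for col in ["dep_delay", "temp_f"]:     # min-max normalization to [0, 1]
    lo, hi = df[col].min(), df[col].max()
    df[col] = (df[col] - lo) / (hi - lo)
# transform categories into numbers
df["weather_code"] = df["weather_code"].astype("category").cat.codes
print(df)
```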
Finally, the Spark Statistics node calculates a number of statistical measures, and the Spark Correlation Matrix node computes the correlation matrix for the selected input columns on the Spark cluster. The resulting table is transferred back for further analysis into KNIME Analytics Platform.
Option 2: In-KNIME Join
First, the selected flight data are imported from Hive into KNIME Analytics Platform using a Database Connection Table Reader node, which executes the SQL query at its input port in-database and imports the results into KNIME Analytics Platform.
Next, a Joiner node joins the flight data for Chicago O’Hare airport with the weather data from the Excel file by date.
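This join-by-date step can be sketched with pandas as an inner join on the date column; the data frames and column names below are toy stand-ins for the Hive extract and the Excel weather sheet:

```python
import pandas as pd

# Toy stand-in for the flight records extracted from Hive.
flights = pd.DataFrame({
    "date": ["2008-01-01", "2008-01-01", "2008-01-02"],
    "dep_delay": [15, 0, 42],
})

# Toy stand-in for the daily weather data read from the Excel file.
weather = pd.DataFrame({
    "date": ["2008-01-01", "2008-01-02"],
    "snow": [1, 0],
    "temp_f": [20, 28],
})

# Equivalent of the Joiner node: inner join on the date column, so
# each flight row is enriched with that day's weather.
joined = flights.merge(weather, on="date", how="inner")
print(joined)
```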
At this point, the same preprocessing procedure is applied as in the third step in Option 1.
Finally, the Statistics node and the Linear Correlation node calculate the same statistical measures and correlations between departure delays and weather-related variables.
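The correlation computation itself amounts to a Pearson correlation between departure delay and each weather variable. A minimal pandas sketch, with made-up numbers chosen only so that delays rise on snowy days:

```python
import pandas as pd

# Toy joined table; values are invented for illustration.
df = pd.DataFrame({
    "dep_delay": [5, 40, 12, 55, 8, 30],
    "snow": [0, 1, 0, 1, 0, 1],
    "temp_f": [35, 20, 33, 18, 36, 22],
})

# Pearson correlation matrix, analogous to the Linear Correlation node.
corr = df.corr(method="pearson")
print(corr.loc["dep_delay", "snow"])
```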
The KNIME workflow including a branch for each one of the two options is shown in the figure below.
While the approach described in Option 1 is faster — it takes advantage of parallel computation — the approach described in Option 2 is simpler to implement.
To conclude, you can always choose the best mix of Hadoop, Spark, and KNIME nodes to suit the problem at hand!
Yes, they blend!
The correlation matrix shows some correlation (0.12) between snow alert codes and departure delays in Chicago. The correlation level is not so high since snow is not the only cause of flight departure delays. Rain, for instance, is also correlated. This is hardly surprising.
Figure 2: Correlation matrix
However, what was surprising in this experiment was how easy it is to blend data from a modern Hadoop platform (any really) with a traditional Excel file!
Indeed, by just changing the connector node at the beginning of the workflow, the experiment could have run on any other big data platform; only the connector node would change, while the in-database processing part would remain unaltered.
Another surprising takeaway from this experiment is the flexibility of KNIME Analytics Platform and its Performance Extensions. You can mix and match Hadoop, Spark, and KNIME nodes to reach the best compromise between implementation complexity and execution performance.
So, even for this experiment, involving Hadoop Hive on one side and Excel files on the other side, we can conclude that… yes, they blend!
Published at DZone with permission of Vincenzo Tursi, DZone MVB. See the original article here.