Kunal Agarwal, CEO of Unravel Data, shared his predictions on three big data trends in 2017:
“In 2017 Big Data will begin to cross a chasm into the mainstream, in large part resulting from the popularity of Hadoop and Spark,” said Agarwal. “Companies will use Big Data for mission-critical needs when running their data stacks. These are the same companies that once held back because of the security concerns surrounding Hadoop and Spark; that’s now in the past. We have only touched the tip of the iceberg for what Hadoop and Spark are capable of offering when running mission-critical jobs on a high-performance Big Data platform.”
“In 2017 we will see more Big Data workloads moving to the cloud, while many customers who traditionally have run their operations on-premises will move to a hybrid cloud/on-premises model. We can also expect to see companies using the cloud not just for data storage, but for data processing. And mainstream adoption will give companies the confidence to run their Big Data clusters in the cloud, and not just on-premises.”
“As Hadoop and Spark enter the mainstream, we can expect consumers to demand comprehensive Big Data solutions – not just piece parts – in 2017. Even in 2016, many companies saw platforms running just Hadoop and Spark as unstable. But those platforms will be tasked with running a multitude of apps, and they will be expected to become the cornerstone of companies’ Big Data initiatives. On the supplier side, we can expect to see more companies selling prebuilt Big Data solutions that meet a variety of needs, delivering stable, high performance as well as the ability to ‘foresee’ and head off performance issues before they arise.”
Tom Phelan, Co-founder and Chief Architect of BlueData, offered an additional consideration regarding Big Data as a Service:
Big Data continued to see rising adoption throughout 2016, and we’ve observed an increasing number of organizations transitioning from experimental projects to large-scale deployments in production. However, the complexity and cost associated with traditional Big Data infrastructure have also prevented a number of enterprises from moving forward. Until recently, most enterprise Hadoop deployments were implemented the traditional way: on bare-metal physical servers with direct-attached storage. Big-Data-as-a-Service (BDaaS) has emerged as a simpler and more cost-effective option for deploying Hadoop as well as Spark, Kafka, Cassandra, and other Big Data frameworks.
As the public cloud becomes a more common deployment model for Big Data, we anticipate many of these deployments shifting to BDaaS offerings in 2017. In addition to solutions offered by newer BDaaS vendors like BlueData and Qubole, we’ll see more initiatives from established public cloud players like AWS, Google, IBM, and Microsoft. We can also expect a range of other announcements that will further validate the trend toward BDaaS, including both major partnerships (such as VMware’s recent embrace of AWS) and acquisitions (such as SAP’s purchase of Altiscale). As the ecosystem expands, customers will have the flexibility to choose from a range of BDaaS solutions, including public cloud as well as on-premises and even hybrid options (e.g., compute in the cloud with data stored on-premises).
What are some big data trends you see for 2017?