Everyone loves a trend. Trends break us out of the ordinary and expected. They give us a chance to be on the front edge of something that everyone’s talking about. It’s like sitting at the cool kids’ table back in school…it’s simply the place to be.
The Big Data trend is no different. It has every element of a great bandwagon:
- Mystery about all of those hidden insights that are just waiting to be discovered
- Fabulous business success once enough data is amassed to be ‘monetized’
- Untold riches from a career as a data scientist
- Inevitable breakthroughs in science, health and human happiness
So there’s plenty of solid rationale for chasing yellow elephants across the landscape.
But what if being at the cool kids’ table wasn’t ultimately as beneficial as being in the Algebra Club? What if the things that make Big Data work aren’t as sexy as yellow elephants, bees and other logos? What if Big Data required the same infrastructure and disciplines that make small data work, just with distributed computing and better performance added to the mix?
Hadoop itself covers a fairly narrow use case in the grand scheme of Big Data. It’s a critical use case, to be sure…the ability to farm out, track and then MapReduce enormous amounts of data. For LinkedIn, Facebook, Google and others, massive data is their stock-in-trade, but for others that run businesses that do things like manufacture, sell, transport, and service, Big Data is just a part of what they do, if they can do it at all.
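For readers who haven’t met it, the map-and-reduce pattern Hadoop distributes across a cluster can be sketched in a few lines. This is a toy, single-process illustration of the idea (here, the classic word count), not Hadoop’s actual API:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document.
    On a real cluster, this work is farmed out across many machines."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: group the emitted pairs by key and sum the counts."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["big data big elephants", "small data"]
result = reduce_phase(map_phase(docs))
# result == {'big': 2, 'data': 2, 'elephants': 1, 'small': 1}
```

The value of Hadoop isn’t in this logic, which is trivial, but in running the map step across thousands of machines and shuffling the intermediate pairs to the reducers reliably.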
Data of any size and shape
Those everyday businesses have rapidly growing volumes of data available, but too often haven’t built the infrastructure that would allow them to participate in the Big Data story. They haven’t integrated their applications, optimized their processes or brought their awareness up to real time. For them, rather than chasing elephants, there’s an enormous opportunity to build an architecture that can manage data of any size and shape.