Developers today have a lot of storage options for building strategic new apps, including JSON databases like Elasticsearch. These new technologies allow development teams to more easily and efficiently iterate on features.
Using agile methodologies, teams work in sprints lasting just a few weeks, getting new features to market fast. Compared to relational databases, Elasticsearch is far less demanding about how data is modeled and structured up front, and that is a big advantage for development speed.
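A minimal sketch of what "less demanding" means in practice (plain Python, with hypothetical field names): two documents with different shapes can be indexed side by side in Elasticsearch, whereas a relational table would typically need a schema migration before the second shape could be stored.

```python
import json

# Two event-style documents with different shapes. A relational table would
# need an upfront schema change to hold both; a JSON document store like
# Elasticsearch can accept them in the same index as-is.
doc_v1 = {"user": "alice", "action": "login",
          "ts": "2017-01-01T00:00:00Z"}
doc_v2 = {"user": "bob", "action": "purchase",
          "ts": "2017-01-02T00:00:00Z",
          # A new nested field added mid-sprint -- no migration required.
          "cart": {"items": 3, "total": 42.50}}

# Both serialize to valid JSON payloads for the same index.
payloads = [json.dumps(d) for d in (doc_v1, doc_v2)]
print(len(payloads))  # 2
```

The flip side, as the rest of this piece argues, is that nothing forces the two shapes to stay consistent, which is exactly what makes downstream analytics hard.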
If the data from these strategic apps has value, it will eventually be analyzed. The schema flexibility that is so compelling in Elasticsearch for development turns into a massive challenge when it comes to data analytics.
The basic data structure and APIs of Elasticsearch are fundamentally incompatible with most companies’ existing approach to data pipelines. And this lurking issue isn’t limited to Elasticsearch — it applies to third-party apps like Salesforce.com and Workday, MongoDB, Cassandra, Hadoop, Amazon S3, Azure Blob Store, and more. Companies are putting massive amounts of essential data in systems that are incompatible with how they make sense of their data. It’s a Big Data Problem.
Tomer Shiran, CEO and co-founder of Dremio, took on the challenge of using traditional business intelligence tools like Tableau, Qlik, and Power BI with Elasticsearch. He and his team built a data fabric for analytics that unlocks data in Elasticsearch and other systems while preserving governance and security, without compromising the data in Elasticsearch or limiting the queries that can be made using any SQL-based tool. With sophisticated query execution, a self-service model, and data acceleration, people are able to use the full breadth of SQL.
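To make "the full breadth of SQL" concrete: once a JSON index is exposed through a SQL layer such as Dremio's, a BI tool can issue an ordinary aggregate query against it. The source, index, and field names below are hypothetical, a sketch of the kind of query involved rather than exact Dremio syntax:

```sql
-- Hypothetical: treat a JSON-document index as a relational table
SELECT action, COUNT(*) AS events
FROM elasticsearch.logs."user_activity"
GROUP BY action
ORDER BY events DESC;
```

The point is that the analyst writes standard SQL; the underlying translation to Elasticsearch's own APIs is handled by the query layer.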
This unlocks the islands of data that reside in Elasticsearch and other third-party apps, so that BI users, data analysts, and data scientists can access the data and build the reports they need, while developers are free to keep building the apps that drive the business forward.