Big Data Exploration With Microqueries and Data Sharpening
We take a look at how the architecture of the Zoomdata platform allows for effective data exploration efforts by big data teams.
Our last post covered features of the Zoomdata Query Engine and how push-down processing helps deliver the speed to insight users want and need. Microqueries (not to be confused with microservices) and Data Sharpening™ are important, patented features of Zoomdata's architecture. Let's take a look.
Microqueries and Data Sharpening™
Microqueries and Data Sharpening are patented technologies that work together to let users interact with big data. The Zoomdata Query Engine invokes them based on criteria such as the type of aggregate values requested and the anticipated query run time. Microqueries and Data Sharpening are ideal for big data that is partitioned by date and that runs on a cluster with many processing cores. This functionality is optional and can be disabled at the data source definition level.
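Zoomdata's actual decision logic is proprietary, but the criteria above can be sketched as a simple predicate. Everything here (function name, thresholds, the set of sampleable aggregates) is an illustrative assumption, not Zoomdata's implementation:

```python
# Hypothetical sketch of when an engine might choose the microquery path.
SAMPLEABLE_AGGREGATES = {"sum", "count", "avg", "min", "max"}

def should_use_microqueries(aggregate: str,
                            estimated_runtime_s: float,
                            partitioned_by_date: bool,
                            enabled_for_source: bool = True) -> bool:
    """Return True when a query is a good candidate for sampling."""
    if not enabled_for_source:          # disabled at the data source level
        return False
    if aggregate not in SAMPLEABLE_AGGREGATES:
        return False                    # some aggregates are hard to estimate
    if not partitioned_by_date:
        return False                    # sampling here relies on date partitions
    return estimated_runtime_s > 5.0    # short queries just run directly
```

A fast query over unpartitioned data would simply run to completion; the sampling machinery only pays off when the full query is expected to be slow.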
Microqueries run in batches to sample data across database partitions. The Query Engine submits the full long-running query alongside the first set of microqueries, and a progress indicator estimates the progress of the full query. The full query and the microqueries run until the full query runs to completion or the user changes direction (the user changing direction is the important part; stay with us to learn why). If the user changes direction, the long-running query and the microqueries are canceled to conserve processing and network resources.
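As a rough illustration of that control flow, here is a sketch of the loop. The interfaces (`full_query_runner` returning a handle with `done()`, `cancel()`, and `result()`, the batch callables, and the callbacks) are hypothetical stand-ins, not Zoomdata's API:

```python
def explore(full_query_runner, microquery_batches, on_estimate, cancelled):
    """Illustrative microquery loop (assumed interfaces, not Zoomdata's code).

    full_query_runner  -- starts the full query; returns a handle with
                          .done(), .cancel(), and .result()
    microquery_batches -- iterable of callables, each sampling a slice
                          of the date partitions
    on_estimate        -- callback fed the cumulative sample so far
    cancelled          -- returns True once the user changes direction
    """
    full = full_query_runner()          # kicked off with the first batch
    samples = []
    for batch in microquery_batches:
        if cancelled():                 # user zoomed, filtered, or regrouped
            full.cancel()               # free up the data source
            return None
        if full.done():                 # the exact answer arrived first
            return full.result()
        samples.extend(batch())         # sample more partitions
        on_estimate(samples)            # feed Data Sharpening an update
    return full.result()                # fall back to the exact result
```

The key design point is the early exit: every pass checks whether the user has moved on before spending more resources on a question nobody is asking anymore.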
Data Sharpening analyzes the cumulative sample data and streams estimated results to your browser (or other client) over a websocket connection. Data Sharpening's estimates may fluctuate a bit up or down until the final query result is reported. Nevertheless, the relative values of each group usually remain consistent as the data sharpens. For example, the tallest bar in a chart at 10 percent completion will almost always remain the tallest bar at 100 percent completion. You can be confident exploring data even as it streams live to the dashboard.
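The real Data Sharpening estimator is patented and surely more sophisticated, but a toy version shows why the relative order of groups tends to hold: if every group's partial sum is scaled by the same factor, the ranking cannot change. This is purely an illustrative assumption about the technique:

```python
def sharpen(partial_sums, fraction_sampled):
    """Toy estimator: scale group sums observed in the sampled partitions
    up to the full data set. Not Zoomdata's actual algorithm."""
    if fraction_sampled <= 0:
        raise ValueError("need at least one sampled partition")
    return {group: s / fraction_sampled
            for group, s in partial_sums.items()}
```

With 25 percent of partitions sampled, observed sums of 10 and 4 become estimates of 40 and 16; the estimates will drift as more partitions arrive, but the first group stays the tallest bar throughout.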
Ad-Hoc Exploration Versus Reporting
You can zoom in, filter, re-group, rearrange, change, and even create new metrics and attributes — or take any other action — while you watch the data load. Why would you do that? Because a lot of data exploration and discovery is about identifying outliers or data that doesn't conform to expectations. With a visual analytics application like Zoomdata, you can see it. Immediately. The shape of the data takes form surprisingly quickly using our patented technology, so you don't need to wait for an excruciatingly long query to resolve before you can get on with it, as they say.
Contrast dynamic, stream-of-thought exploration with reporting. Reporting is retrospective, and reports have a finality to them: they are snapshots of a day, a quarter, a year, a population, a geography, or a product line, framed by the expectations and assumptions laid out in the report (hint: "pixel-perfection" is about reporting, not data exploration). Exploration, by contrast, can go as broad and deep as the data allows.
Push-Down Processing Redux
Remember how Zoomdata performs push-down processing? Importantly, when you make a change that requires another trip to the data source, Zoomdata cancels the full long-running query and the microqueries to free the data source up for the next sequence of queries. But canceling active queries is not trivial, and many JDBC and ODBC drivers do not support it. In these cases, even if a Zoomdata Smart Data Connector primarily uses JDBC with SQL, it can issue native API calls to complete tasks not supported by the driver, such as query cancellation. It's pretty cool.
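That driver-or-native fallback pattern can be sketched in a few lines. The `driver` and `native_api` interfaces below are hypothetical stand-ins for whatever the real connector talks to (e.g. a JDBC `Statement.cancel()` on one side, an engine-specific kill call on the other):

```python
class SmartConnector:
    """Sketch of a connector that falls back to a native API when the
    driver cannot cancel a running query. Assumed interfaces throughout."""

    def __init__(self, driver, native_api=None):
        self.driver = driver
        self.native_api = native_api

    def cancel(self, query_id):
        try:
            self.driver.cancel(query_id)          # driver-level cancel, if supported
        except NotImplementedError:
            if self.native_api is None:
                raise                             # nothing else we can do
            self.native_api.kill_query(query_id)  # engine-specific native call
```

The point is simply that cancellation is a first-class operation: when the standard driver path can't do it, the connector reaches past the driver rather than letting an abandoned query keep burning cluster resources.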
Published at DZone with permission of Ruhollah Farchtchi, DZone MVB. See the original article here.