Four Use Cases Driving Adoption of Time Series Platforms Across Industries


Let's take a look at four use cases that are driving the adoption of time series platforms across industries.


This article is featured in the new DZone Guide to Databases: Relational and Beyond. Get your free copy for more insightful articles, industry statistics, and more!

Time series databases are growing fast; according to DB-Engines, they are the fastest-growing database category, ahead of even Hadoop and NoSQL data stores. This rapid growth of time series data platforms is driven by several recent technology trends that are gaining traction.

One of the main reasons is that modern application architectures emit a far greater volume of metrics than legacy architectures. Today's applications are powered by microservices and instrumented by DevOps toolchains. This underlying architecture generates a large amount of metrics and event data, which requires special handling. On top of that, these newer apps are hosted on cloud platforms in VMs and containers such as Docker, managed by orchestration tools like Kubernetes.

These factors have led to a deluge of metrics and events which now require storage, queries, and analytics. After trying different data stores, such as traditional RDBMSs or Hadoop/NoSQL, users realized that these use cases require purpose-built storage and processing, and adopted time series data platforms.

Let's take a deeper look into four use cases behind this tremendous growth in time series platforms.


DevOps Monitoring

DevOps is not a new concept, but it continues to gain traction and has become the biggest enabler of continuous delivery. Companies of all sizes are adopting DevOps for faster delivery of their IT applications. Combining app-dev, QA, and ops teams makes natural sense, and every industry is adopting DevOps wholeheartedly.

Every organization uses a different set of tools for DevOps, based on what kind of technology they use, but it is fairly common for these toolchains to comprise several dozen technologies for deployment automation. Tools such as Chef, Puppet, Jira, and Git are used to instrument deployments on virtual machines and orchestration technologies like Docker, Kubernetes, and Mesosphere. The resulting orchestration ends up having dozens of moving parts and requires advanced monitoring to make sure that all the components are performant and meeting SLAs.

The good news is that all the tools and technologies used in a DevOps toolchain emit time series data, which can then be used to monitor, track, and analyze deployment speed and build performance. However, this requires a time series data platform which can ingest these metrics at a very fast rate, enable queries across the data set, and perform analytics in time to detect and fix any deployment or build issues. By using a purpose-built platform for metrics and events, DevOps toolchains can be monitored, tweaked, and optimized for multiple deployments per day.
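As a rough illustration, a CI job could report build metrics as time series points. The sketch below renders one point in InfluxDB-style line protocol, a common ingest format for time series platforms; the measurement, tag names, and values are hypothetical, and escaping and type-annotation rules are omitted for brevity.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one metric point in simplified InfluxDB line protocol:
    measurement,tag=... field=... timestamp (nanoseconds)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A CI job reporting build duration and result as a time series point
# (repo name, branch, and values are made up for illustration)
point = to_line_protocol(
    "ci_build",
    {"repo": "checkout-service", "branch": "main"},
    {"duration_s": 142.7, "success": "true"},
    1700000000000000000,
)
print(point)
# ci_build,branch=main,repo=checkout-service duration_s=142.7,success=true 1700000000000000000
```

A collector would batch points like this and POST them to the platform's write endpoint.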

IoT Monitoring and Analytics

Sensors and devices are enabling the instrumentation of the physical world. Every industry from manufacturing and automotive to healthcare and IT is racing towards implementing IoT systems. This move is resulting in sensors and devices being embedded in everything from light bulbs to solar panels, which means petabytes of data are being generated every second. This data is all time series data, which needs much more than simple storage. In fact, IoT imposes three unique requirements on data platforms.

1. Monitoring and Tracking

Data generated by sensors and devices provides information that can be translated into meaningful insights, such as battery performance, turbine production, and shipment delivery status. This information is usually mission-critical to the business and improves productivity. Being able to quickly ingest and process this large volume of time series IoT data is the first step in any successful IoT project.

2. Analytics

The next step is generating analytics from IoT data. Historical data from sensors is used to gain insights that can be applied to the current situation to create a major competitive advantage. Predictive maintenance, optimized traffic routing, improved churn management, and enhanced water conservation are all possible with IoT analytics.
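As a minimal sketch of this kind of analytics, the snippet below downsamples raw (timestamp, value) sensor readings into hourly averages. The readings are synthetic, and a real time series platform would do this with built-in query functions rather than application code.

```python
from collections import defaultdict
from statistics import mean

def hourly_averages(readings):
    """Downsample (epoch_seconds, value) readings into per-hour means,
    a common first analytics step over raw sensor data."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % 3600].append(value)  # truncate to hour start
    return {hour: mean(vals) for hour, vals in sorted(buckets.items())}

# Simulated turbine temperature readings (hypothetical data)
readings = [(0, 20.0), (600, 22.0), (3600, 30.0), (4200, 34.0)]
print(hourly_averages(readings))  # {0: 21.0, 3600: 32.0}
```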

3. Action and Control

Last but not least, IoT data can be woven into business processes to trigger intelligent actions. Given the speed and volume of events generated by sensors, businesses want to act on this data in real time with no human intervention. For example, actions such as automatically shutting down a pump in case of a leak or adjusting a wind turbine's direction based on wind speed create an immediate business advantage. Time series platforms with built-in action frameworks enable such operations.
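A leak-shutdown rule like the one described can be sketched as a simple threshold check over two flow metrics. The function name, units, and threshold below are hypothetical; a real action framework would evaluate a rule like this continuously against incoming points.

```python
def check_leak(inflow_lpm, outflow_lpm, threshold_lpm=5.0):
    """If measured outflow lags inflow by more than the threshold
    (liters per minute), assume a leak and return the control action."""
    if inflow_lpm - outflow_lpm > threshold_lpm:
        return "SHUT_DOWN_PUMP"
    return "OK"

print(check_leak(100.0, 92.0))  # leak of 8 L/min -> SHUT_DOWN_PUMP
print(check_leak(100.0, 98.0))  # within tolerance -> OK
```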

Microservices Monitoring

Microservices are incredibly helpful when building scalable and reliable architectures. Everything related to how IT services are delivered and consumed is undergoing tremendous change. Monolithic architectures are being replaced by microservices-driven apps, with cloud-based infrastructures tied together to deliver high-performing, scalable apps 24/7.

To implement a microservices-style architecture, there are a few fundamental requirements.

Scalable Web Architecture

One of the first requirements of such an architecture is a high level of scalability, so these microservices are often deployed in the cloud, where they can scale on demand. The cloud and container architectures powering these microservices require sub-second monitoring to trigger more capacity as needed, and that is usually achieved by analyzing the time series metrics generated by the stack.
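A capacity trigger of this kind can be sketched as a rolling window over CPU samples. The class, window size, and 80% utilization target below are illustrative assumptions, not any platform's actual API.

```python
from collections import deque

class ScaleTrigger:
    """Fire a scale-up signal when the average CPU over a short window
    of sub-second samples stays above a target utilization."""
    def __init__(self, window=10, target=0.80):
        self.samples = deque(maxlen=window)
        self.target = target

    def observe(self, cpu_fraction):
        self.samples.append(cpu_fraction)
        full = len(self.samples) == self.samples.maxlen
        return full and sum(self.samples) / len(self.samples) > self.target

# Feed in a stream of CPU samples; True means "add capacity now"
trigger = ScaleTrigger(window=4, target=0.80)
signals = [trigger.observe(c) for c in [0.5, 0.5, 0.95, 0.95, 0.95, 0.95]]
print(signals)  # [False, False, False, False, True, True]
```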

Cross-Domain Visibility

A microservices-powered application can be built by different teams spanning multiple IT domains. One team could be responsible for "customer account creation," while another team owns the "order creation" microservice. When new customers complain that it takes a long time to place an order, the problem could be on either side. By using time series data generated by the microservices stack, performance metrics such as response times can be measured across multiple microservice domains.
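Measuring response times across domains might look like the sketch below, which groups (service, latency) samples and computes a nearest-rank 95th-percentile latency per service. The service names and numbers are made up; in practice, this is a single percentile query against the time series platform.

```python
import math
from collections import defaultdict

def p95_by_service(points):
    """Group (service, response_ms) samples and compute a
    95th-percentile latency per service (nearest-rank method)."""
    by_service = defaultdict(list)
    for service, ms in points:
        by_service[service].append(ms)
    result = {}
    for service, values in by_service.items():
        values.sort()
        rank = math.ceil(0.95 * len(values))  # nearest-rank percentile
        result[service] = values[rank - 1]
    return result

# Hypothetical latency samples from two microservice domains
points = [("account", 120), ("order", 340), ("account", 95),
          ("order", 280), ("account", 110), ("order", 2200)]
print(p95_by_service(points))  # {'account': 120, 'order': 2200}
```

Here the "order" tail latency immediately stands out as the side to investigate.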

Distributed Architecture

A distributed architecture is required either for technical reasons, such as scalability and reduced latency, or for business reasons such as location, jurisdiction, or disaster recovery. The distributed approach makes a lot of sense but creates a monitoring challenge. Different versions of the same microservice could be deployed in different areas, so how do you track performance against benchmarks and SLAs at a sub-second level? Again, time series data platforms come to the rescue, using metrics and events to track performance.

Real-Time Analytics

For reasons including those outlined above, the volume of metrics and events generated by modern apps is beyond what any human can realistically interpret and act on. How do we learn what's going on and take the best action possible? Machine learning with real-time analytics is crucial to finding the "signal from the noise." Whether the organization needs real-time analytics to buy and sell equities, perform predictive maintenance on a machine before it fails, or adjust prices based on customer behavior, processing, analyzing, and acting on time series data in real time is the problem to solve.
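One classic way to separate signal from noise is a rolling z-score check: flag any point that deviates from the recent mean by more than a few standard deviations. A minimal sketch on synthetic data, with the window size and threshold chosen for illustration:

```python
from statistics import mean, pstdev

def anomalies(series, window=5, z_threshold=3.0):
    """Flag indices whose value deviates from the rolling mean of the
    previous `window` values by more than z_threshold standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        prev = series[i - window:i]
        mu, sigma = mean(prev), pstdev(prev)
        if sigma > 0 and abs(series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Steady metric with one spike at index 7 (synthetic data)
series = [10, 11, 10, 12, 11, 10, 11, 90, 11, 10]
print(anomalies(series))  # [7]
```

Production systems use far more robust detectors, but the principle — learn a baseline from history, then flag deviations in real time — is the same.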

AI/ML-Driven Smart Apps

Observe

Observing systems in real time is critical, as complex systems, by their nature, behave and fail in unpredictable ways. Observing different metrics, such as boiler temperature, order volume, unique visitors, and even infrastructure metrics like system stats during the holiday season, is a real part of understanding application and user behavior.

Learn

The learning process is all about teaching the system what the data means and how to make decisions automatically. AI/ML is powerful in terms of what it can do, but it needs vast amounts of data to learn. Time series data platforms bring in metrics and events from mission-critical apps and infrastructure so that the algorithms can learn what's going on.

Automate

After the analytics are generated, the next step is to automate the decision. The decision could be as simple as predicting when to request more resources for the app, or a decision to give a customer a special discount. Such decisions could be made by the app itself or by another AI/ML app. In some cases, it is simply alerting another app to take action.

In Summary

IT apps are undergoing a sea change. Across industries, there is a realization that organizations should turn to AI/ML-driven smart apps, which can drive automation with the help of IoT and advanced instrumentation. Such app architectures are powered by microservices and built using DevOps toolchains, with the ultimate goal of rapid upgrades to provide an excellent customer experience. As the mechanisms to deliver these apps have evolved, so has the need to monitor and instrument them. Time series data platforms that can handle a large number of metrics and events will continue to grow with the requirements of AI, predictive analytics, and newer app architectures.



