
How to Incrementally Feed the Right Kind of Data Into Your Intelligence Applications


Having the courage to go against the flow and adopt an iterative approach — instead of diving into everything at once — can help you quickly achieve AI nirvana.


Wouldn’t it be nice to reach artificial intelligence (AI) nirvana? To have a system that provides real-time, context-aware decisions? One that monitors and auto-corrects manufacturing processes, helps CXOs make strategic decisions, or helps retailers determine which deals they should offer?

Sounds great — but not so fast! Though AI-driven decision-making may be achievable (and certainly desirable), your goal shouldn’t be to get there right away. An all-in approach is not only costly but risky. Instead, consider an iterative approach, one that generates value throughout the process, sometimes even in a matter of months.

Where Do You Start?

AI starts with data, but not just any data. You hear a lot about big data, yet data doesn’t have to be big to yield results. You’ll need a critical mass, of course, but data diversity matters just as much to AI as volume, if not more. Billions of transactional data sets may not lead to real insights: you’ll certainly learn something new, but your AI engine won’t really understand the context, and context generally comes only from data diversity.

What Is Data Diversity?

Data diversity means a mix of transactional, emotional, static, real-time, structured, and unstructured data. A broad spectrum of data helps any algorithm understand the context in which you are making decisions.
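As a hypothetical illustration of that mix, a single customer event might carry several of the data types named above in one record (all field names and values here are invented, not from any particular system):

```python
# One customer event combining several data types; every value is
# invented purely to illustrate data diversity.
customer_event = {
    "order_id": 1047,          # structured, transactional
    "amount": 59.90,           # structured, transactional
    "loyalty_tier": "gold",    # static profile data (rarely changes)
    "channel": "mobile",       # real-time context
    "timestamp": "2018-03-14T18:22:00Z",                     # real-time context
    "review_text": "Love the fit, but shipping was slow.",   # unstructured, emotional
}
```

An engine fed only the first two fields sees a transaction; fed all of them, it sees a context.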

How to Apply an Iterative Approach to AI Nirvana

Iteration is about taking baby steps and incrementally feeding the right kind of data into your cognitive services or intelligence applications.

Here’s how you do it:

  1. Start with accessible yet valuable data sets: Compile the data that is easiest to assemble and has the greatest potential to provide value. This will likely be transaction data, which is not very diverse. For example, if you want to predict which deals will appeal to customers, pull data on the success rate of past offers and who redeemed them. You may find basic correlations on which to base your decisions. By leveraging historical data, you’ll also get valuable results in a very short timeframe.
  2. Feed your AI engine: Next, feed enriched data into your AI engine. For instance, add other types of data, such as emotional, real-time, or external data. In our example, this could be where the buyer made their purchase (in-store, mobile device, or desktop) and even at what time. You can also increase your data size at this stage.
  3. Savor the results: Once you start feeding your engine voluminous, highly enriched data, it will begin to tell a story about your customers — who they are, when they buy, and why — and make recommendations you can act on.
  4. Add the big stuff: At this point, you can feed your system big data: big in terms of quality and diversity (both internal and external data) as well as volume, such as multiple years’ worth of offers your customers have responded to. This is when you’ll start to receive insights that enable context-aware decisions. Ultimately, your AI engine will give you recommendations on which to base your decisions.
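The first two steps above can be sketched in a few lines. This is a toy example, not a real AI engine: the offer records are invented, and the "engine" is just a grouped redemption rate — but it shows how the same question gains context once an enriched dimension (channel) is fed in alongside the plain transaction data:

```python
# Step 1: easily accessible transaction data — past offers and redemptions.
# Step 2: the same records enriched with a real-time attribute (channel).
# All records are hypothetical.
from collections import defaultdict

offers = [
    {"offer": "10% off",       "redeemed": True,  "channel": "in-store"},
    {"offer": "10% off",       "redeemed": False, "channel": "mobile"},
    {"offer": "free shipping", "redeemed": True,  "channel": "mobile"},
    {"offer": "free shipping", "redeemed": True,  "channel": "desktop"},
]

def redemption_rate(records, key):
    """Redemption rate grouped by a chosen attribute."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += r["redeemed"]   # True counts as 1, False as 0
    return {k: hits[k] / totals[k] for k in totals}

# Basic correlation from transaction data alone (step 1)...
print(redemption_rate(offers, "offer"))
# ...then the same logic, fed the enriched channel dimension (step 2).
print(redemption_rate(offers, "channel"))
```

Each iteration adds a dimension without discarding what the previous one learned, which is exactly the point of the incremental approach.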

Here’s an example of how it all comes together. An online store has an inventory bloat of red sweaters. To shift the garments, the store runs a campaign and offers those sweaters at a steep discount — a rather traditional approach.

But with the help of analytics and historical data, the store could run a more diversified campaign: offering a small discount to those who are more likely to buy that sweater anyway (frequent buyers, or those who have purchased similar brands), and reserving the higher discount as an incentive for those who are less likely to purchase it (infrequent buyers, or those who haven’t purchased similar goods in the past).
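The tiered-discount logic reduces to a small decision rule. The propensity score and thresholds below are hypothetical; in practice, a model trained on the store's historical offer data would supply the purchase probability:

```python
# A sketch of tiered discounting: likely buyers get a small nudge,
# unlikely buyers get a real incentive. Thresholds and rates are invented.
def pick_discount(purchase_probability, small=0.05, steep=0.30):
    """Return the discount rate to offer, given an estimated purchase probability."""
    return small if purchase_probability >= 0.5 else steep

# A frequent buyer of similar brands vs. an infrequent buyer.
print(pick_discount(0.8))  # small discount: 0.05
print(pick_discount(0.1))  # steep discount: 0.30
```

The margin saved on likely buyers funds the deeper incentive for everyone else.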

But wait! As the store feeds the system richer and more diverse data, it gradually turns into an AI-based recommendation engine. Let’s say it detects an unexpected demand for green sweaters. It’s time to shift gears and follow the system’s recommendation: the ROI of investing in a green sweater campaign will outweigh the loss of not promoting the hard-to-sell red sweaters — even though the traditional knee-jerk reaction would be to keep pushing the excess red inventory filling the warehouse.
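That trade-off is, at bottom, a comparison of expected returns. The figures below are invented purely to make the arithmetic concrete:

```python
# A back-of-the-envelope ROI comparison between the two campaigns.
# All spend and revenue figures are hypothetical.
def expected_roi(spend, expected_revenue):
    """Simple return on campaign spend."""
    return (expected_revenue - spend) / spend

campaigns = {
    "red sweater clearance": expected_roi(10_000, 11_000),  # sluggish demand
    "green sweater push":    expected_roi(10_000, 16_000),  # detected demand
}

best = max(campaigns, key=campaigns.get)
print(best, campaigns[best])
```

The engine's job is to surface the demand signal; the decision rule that follows from it can stay this simple.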

Having the courage to go against the flow and adopt an iterative approach, rather than diving into your big data all at once, can pay dividends and put AI-driven decision-making quickly within your reach. We call it minimal viable prediction (MVP). And it works.



Published at DZone with permission of Wolf Ruzicka. See the original article here.

