How Has AI Changed in the Past Year?


Here's what 22 executives who are familiar with AI and all its variants said when we asked them, "How has AI changed in the past year or so?"


To gather insights on the state of artificial intelligence (AI) and all its variants — machine learning (ML), deep learning (DL), natural language processing (NLP), predictive analytics, and neural networks — we spoke with 22 executives who are familiar with AI.

We asked them, "How has AI changed in the past year or so?"

Here's what they told us.

It's Being Used to Solve Business Problems

  • The last two years have seen more change than the previous five. There’s been a lot of churn as companies begin to understand how germane AI is to their business and identify bite-sized chunks of work. My 16-year-old daughter has four apps on her smartphone: Snapchat, WhatsApp, Facebook Messenger, and Instagram. She talks to the Macy’s, Nordstrom, and other retailers’ bots via the messaging apps. She’ll probably add Slack when she starts working.
  • AI has been around a long time, but the buzzword has become more popular of late. Can it be incorporated into the real world? There are a lot of new players and investors. I see it evolving like the cloud did eight or nine years ago. Today, the cloud is a commodity, and I see the same thing occurring with AI, except faster, as consumers adopt autonomous cars and other things AI provides to improve the quality of life — or at least make it simpler and easier.
  • The focus has shifted to predictive and prescriptive analytics. In the last five to ten years, AI was focused on learning to recognize faces and trees, with no need to reason about what it was recognizing. When you apply AI to security, though, it’s important to explain why things are the way they are. An explainable ML algorithm is very important in the field of security: AI must be able to explain itself.
  • People are beginning to apply AI versus just talking about concepts. TensorFlow and neural networks can’t run efficiently on regular CPUs, so we are working with NVIDIA, Intel, and others.
  • The most important shift is toward deep learning: neural networks with additional computing power. The shift of the last few years allowed us to approach neural networks with more than three layers, which enables better image processing and analysis of text streams, and lets the network identify the useful features in the data. That opened up new areas without feature engineering, since machines do this now. Recurrent neural networks are being used for speech analysis. Voice assistants are the most disruptive application: Alexa was great but not very good at discerning context, while Google Home is better at contextual understanding.
  • I think the market is maturing and is becoming more attuned to the reality of what is possible today versus the future hype. The technology is a tool for humans rather than a replacement for humans. Clients have dabbled and are now more ready to commit on a broader basis as the technology has become proven. While there are plenty of new vendors entering the fray on a daily basis, the first-generation vendors are becoming established within their chosen target markets.
  • Investment is growing and there’s a race for patents and intellectual property, especially in ML. Amazon spent $1 billion on Kiva robotics, enabling them to reduce shipment cycle times from one hour to 15 minutes — a 75% reduction, which is huge with regard to customer satisfaction and cost reduction. Netflix is using AI to personalize viewing recommendations, increasing customer satisfaction and saving more than $1 billion in lost subscription revenue.
  • Overall, we’re noticing more sophisticated AI technology being developed beyond machine learning, specifically as it relates to cybersecurity protection. We ourselves are putting more resources into augmenting many of our existing endpoint solutions and replacing those that are failing the test of new threats. With the onset of new malware, it’s imperative that AI keeps pace.
  • AI/ML have become more mainstream: the new big data. It’s catching on thanks to more computing power, more data, and more investment, given the money that can be made. ML is a reality now.
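The call for AI that "must be able to explain itself" in security can be illustrated with a minimal sketch: a linear anomaly score that reports each feature's contribution alongside the verdict, so an analyst can see *why* an event was flagged. The feature names, weights, and threshold below are all hypothetical, chosen only for illustration.

```python
# Hypothetical explainable security scorer: a linear model whose
# per-feature contributions are surfaced with the verdict.

WEIGHTS = {
    "failed_logins": 0.6,        # weight per failed login attempt
    "bytes_exfiltrated_mb": 0.3, # weight per MB of outbound data
    "off_hours_access": 0.5,     # weight if access happened off-hours
}
THRESHOLD = 1.0  # scores at or above this are flagged

def score_event(event):
    """Return (flagged, contributions) so the verdict can be explained."""
    contributions = {f: WEIGHTS[f] * event.get(f, 0) for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

flagged, why = score_event({"failed_logins": 3, "off_hours_access": 1})
# "why" now records that failed_logins contributed 1.8 and
# off_hours_access 0.5 — the explanation behind the flag.
```

A real security product would use far richer models, but the design point stands: keeping per-feature contributions inspectable is what separates an explainable model from a black box.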

Tools and Libraries Have Improved

  • We’re seeing more AI/ML libraries that are more mature and more scalable. The Fortune 50 are able to handle data at scale because they have platforms that enable them to use larger libraries. Google and Facebook are now using AI/ML for more diverse problem solving, fraud detection, and risk exposure. Enterprises are innovating around AI/ML themselves. Larger neural networks are deeper, and more data results in more knowledge and greater accuracy.
  • The time it takes to get algorithms up and running has declined with better tools. Black boxes must still be trained, though, and data science remains a bottleneck: smart domain experts have to get data into shape for analysis.
  • It used to be about replacing what humans can do, but it has changed to practical computational statistics — improving accuracy in facial recognition, for example. If it’s 99% correct, it’s more accurate than humans, and it will get better with more data. It used to be about algorithms, but that’s not where the differentiation comes from. You can use Python and OpenCV without knowing the algorithms: use APIs to access the algorithms and leverage domain expertise, and use automation to leverage open algorithms while applying domain knowledge to refine and train.
  • I think the big word in the past year is democratization. We are starting to see companies of all sizes making AI visibly available to the masses. Additionally, in the past, a lot of the market relied on having really smart data scientists to help clients make heads or tails of their technology. But more recently — and this is the trend I expect to emerge over the next few years — I think we are seeing companies creating friendlier end-user tools to harness the power of AI.
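The "use APIs without knowing the algorithms" point above is the interface pattern that libraries such as scikit-learn and OpenCV popularized. As a self-contained illustration (a toy nearest-neighbor classifier, not any real library's code), the caller only ever touches `fit` and `predict` — the algorithm behind them is interchangeable:

```python
# Toy 1-nearest-neighbor classifier exposing the fit/predict interface
# popularized by libraries such as scikit-learn. The point is the API:
# the caller trains and queries without knowing the internals.

class NearestNeighborClassifier:
    def fit(self, X, y):
        self._X, self._y = list(X), list(y)
        return self

    def predict(self, X):
        def closest_label(point):
            # squared Euclidean distance to every training point
            dists = [sum((a - b) ** 2 for a, b in zip(point, xi))
                     for xi in self._X]
            return self._y[dists.index(min(dists))]
        return [closest_label(p) for p in X]

clf = NearestNeighborClassifier()
clf.fit([(0, 0), (0, 1), (5, 5), (6, 5)], ["cat", "cat", "dog", "dog"])
print(clf.predict([(1, 0), (5, 6)]))  # → ['cat', 'dog']
```

Swapping in a deep network behind the same two methods would leave the calling code untouched — which is exactly the democratization the respondents describe.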

Cloud Enables Scale 

  • It has become easier to apply AI with better tools and infrastructure. Cloud infrastructure is better suited to run learning algorithms at massive scale, and a lot of tools built into AWS and Google Cloud make it easy to get started.
  • The field of AI has dramatically improved over the past few years, especially with major advancements like machines becoming increasingly capable of processing large amounts of data, better algorithms, and easily available cloud infrastructure. For example, Google’s Cloud TPU is accessible to everyone to improve training and prediction. Apple is reported to be working on an AI chip that developers can leverage, as well.
  • While the business problems are the same, the dynamics have changed with more data and more devices thanks to IoT. The cloud is enabling computing of huge amounts of data. There are advancements in data science and it is becoming easier and faster to write algorithms.


  • In the past year or two, we’ve moved from talking about machine learning and deep learning to talking about AI. This is disconcerting because of the implicit promise made by Star Trek and Star Wars; the science fiction industry has made a promise that’s a recipe for disaster. AI is the poster child for the hype cycle and hasn’t made it past the trough of disillusionment. Kalman filtering is a way for a system to estimate the response and state of a physical system — it is considered signal processing, not AI. Regression is a form of AI but is now considered significantly distinct. ML, traffic planning, and logistics are all forms of AI, but they avoid the calamitous collapse of the term AI. Being willing to label something as AI is a potentially dangerous trend, since it gives the listener permission to make up any expectations.
  • Systems can now interact with the world and act in an intelligent way. When you make a prediction or take an action, you are changing the state, and that affects the next action. You have the ability to tackle problems as you are acting, with the world in constant flux. Everything is fundamentally different, and AI enables you to respond accordingly. You cannot rely on a static data set; we live in a stateful environment that’s always changing.
  • People want to take insights to action and embed them into real-time infrastructure. The goal is taking action, not just producing academic reports: what, why, how — going from observation to diagnostic, predictive, and prescriptive analytics, and on to automated action.
  • In the last four years, we have transitioned from ML algorithms to DL algorithms. The implication is that DL can adapt to cognitive scenarios much more quickly: DL algorithms learn by themselves from more generic data. Pre-trained domain knowledge, linguistics, and axioms, plus more data from customers, enable faster and better performance of the algorithms.
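For readers unfamiliar with the Kalman filtering one respondent distinguishes from AI, here is a minimal one-dimensional sketch — a classic signal-processing state estimator that blends noisy measurements with a running estimate. The constant-state model and noise variances are illustrative, not from any production system.

```python
# Minimal 1-D Kalman filter: estimates a slowly varying scalar
# (e.g. a sensor reading near 5.0) from noisy measurements.

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Return the filtered estimate after each measurement."""
    x, p = measurements[0], 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var         # predict: uncertainty grows over time
        k = p / (p + meas_var)   # Kalman gain: trust in the new measurement
        x += k * (z - x)         # update: blend estimate toward measurement
        p *= (1 - k)             # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = [5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]
smoothed = kalman_1d(noisy)  # estimates settle near the true value, 5.0
```

The filter carries state (`x`, `p`) from step to step — precisely the "stateful, always changing environment" another respondent describes — yet, as the quote notes, it long predates the current AI label.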

What are the biggest changes you've seen in AI?

Here’s who we talked to:

  • Gaurav Banga, CEO, and Dr. Vinay Sridhara, CTO, Balbix
  • Abhinav Sharma, Digital Servicing Group Lead, Barclaycard US
  • Pedro Arellano, VP Product Strategy, Birst
  • Matt Jackson, VP and National General Manager, BlueMetal
  • Mark Hammond, CEO, Bonsai
  • Ashok Reddy, General Manager, Mainframe, CA Technologies
  • Sundeep Sanghavi, Co-founder and CEO, DataRPM, a Progress Company
  • Eli David, Co-Founder and Chief Technology Officer, Deep Instinct
  • Ali Din, GM and CMO, and Mark Millar, Director of Research and Development, dinCloud
  • Sastry Malladi, CTO, FogHorn Systems
  • Flavio Villanustre, VP Technology LexisNexis Risk Solutions, HPCC Systems
  • Rob High, CTO Watson, IBM
  • Jan Van Hoecke, CTO, iManage
  • Eldar Sadikov, CEO and Co-founder, Jetlore
  • Amit Vij, CEO and Co-Founder, Kinetica
  • Ted Dunning, PhD., Chief Application Architect, MapR
  • Bob Friday, CTO and Co-founder, and Jeff Aaron, VP of Marketing, Mist
  • Sri Ramanathan, Group VP AI Bots and Mobile, Oracle
  • Scott Parker, Senior Product Marketing Manager, Sinequa
  • Michael O’Connell, Chief Analytics Officer, TIBCO

Opinions expressed by DZone contributors are their own.
