2017 AI/ML Surprises
Google's AlphaGo seems to have shown just how quickly AI/ML can learn and win. Check out what surprised even IT executives in the world of AI and ML this year.
Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2017 and their predictions for 2018.
Here's what they told us about the biggest surprises about artificial intelligence and machine learning. We'll cover predictions for 2018 in several other articles.
The biggest surprise I saw in 2017 was Oracle's announcement of the Oracle Autonomous Database Cloud. It’s great to see AI being put to use to automate repetitive system administration tasks the way Oracle has applied it to its database platform.
In 2017, AI platforms such as Teneo developed a hybrid approach, combining linguistic and machine learning in one environment. This enables the platform to start working from day one, whether training data is available or not. If not, it can learn from sample data generated by the users. When data is available, the platform can use it for even more refined training. This hybrid approach ensures the enterprise always has complete control over the AI platform from day one. This was an important event for 2017 because using this "best of both" approach meant no enterprise was forced into unsuitable implementation processes or a white elephant scenario, where they end up with a solution that simply does not work the way their business does.
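The "best of both" idea described above, hand-written linguistic rules that work from day one, with a machine-learned model layered on once real data arrives, can be sketched roughly as follows. This is a toy illustration, not Teneo's actual architecture; every name, rule, and intent here is invented for the example:

```python
from collections import Counter

# Hand-written linguistic rules: usable from day one, no training data needed.
RULES = {
    "refund": "refund_request",
    "password": "account_help",
    "invoice": "billing_question",
}

def rule_intent(text):
    """Return an intent if any rule keyword matches, else None."""
    lowered = text.lower()
    for keyword, intent in RULES.items():
        if keyword in lowered:
            return intent
    return None

class HybridClassifier:
    """Rules get first say; a trivial word-frequency model takes over as data arrives."""

    def __init__(self):
        self.word_counts = {}  # intent -> Counter of words seen with that intent

    def train(self, examples):
        """examples: list of (text, intent) pairs collected from real usage."""
        for text, intent in examples:
            counts = self.word_counts.setdefault(intent, Counter())
            counts.update(text.lower().split())

    def predict(self, text):
        # 1. Linguistic rules always take priority.
        intent = rule_intent(text)
        if intent:
            return intent
        # 2. Fall back to whatever the learned model knows so far.
        best, best_score = "unknown", 0
        for intent, counts in self.word_counts.items():
            score = sum(counts[w] for w in text.lower().split())
            if score > best_score:
                best, best_score = intent, score
        return best
```

The point of the sketch is the control flow, not the (deliberately naive) model: the rules answer immediately with zero data, and training only ever adds coverage on top of them.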
No surprises here, but the award for the biggest event (not so much a surprise) goes to Google’s AlphaGo, which stepped up to the plate and taught itself how to master the game of Go, having been given nothing more than the basic rules. Why such a major milestone? Because until now, machines have needed people to teach them, to feed them data, and to help them learn in a supervised way until they’re ready to take things to the next level through consumption of massive datasets. In contrast, AlphaGo Zero was able to gain mastery by playing against itself, then updating itself based on what it had learned from each game. Repeat this millions and millions of times, and the result was a machine that could beat the strongest previous version of AlphaGo roughly 90% of the time, and that defeated the version that beat the 18-time world champion by 100 games to nil. So where to from here? Well, I’m not the paranoid type, so I don’t think Skynet just lit up, but at the same time, I believe we are fast approaching the singularity and a massive change in the pace of technological advancement and its societal impacts. To that end, I believe that all of us working in technology need to take a long, hard look at where we want this to go.
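The self-play loop described above, play yourself, update from the outcome, repeat, can be made concrete with a toy example. This is nothing like AlphaGo Zero's real algorithm (which combines Monte Carlo tree search with a deep neural network); it is a minimal tabular learner for the far simpler game of Nim, shown only to illustrate the idea of learning with nothing but the rules:

```python
import random

def self_play_train(episodes=20000, seed=0):
    """Tabular self-play learning for Nim: 21 stones, take 1-3 per turn,
    whoever takes the last stone wins. The agent is given only the rules."""
    rng = random.Random(seed)
    Q = {}  # (stones_left, action) -> estimated value for the player to move

    def choose(stones, eps):
        actions = [a for a in (1, 2, 3) if a <= stones]
        if rng.random() < eps:
            return rng.choice(actions)          # explore
        return max(actions, key=lambda a: Q.get((stones, a), 0.0))  # exploit

    for _ in range(episodes):
        stones = 21
        history = []  # (state, action) per move; players alternate
        while stones > 0:
            action = choose(stones, eps=0.2)
            history.append((stones, action))
            stones -= action
        # The player who took the last stone won: +1 for their moves,
        # -1 for the opponent's, walking backward through the game.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

def greedy_move(Q, stones):
    """The learned policy: pick the highest-valued legal move."""
    actions = [a for a in (1, 2, 3) if a <= stones]
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))
```

Both "players" are the same table, so every game improves the agent and its opponent simultaneously, which is the essence of the self-play approach.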
In 2017, AI/ML is being touted as a panacea for nearly everything, including cybersecurity. It feels like it's the buzzword du jour, much like big data was a few years back.
I have been surprised over the last year by how magical the thinking is around AI and machine learning. Even as everybody and their dog (or chicken) seems to have adopted machine learning solutions to various problems, the fairly obvious fact that domain knowledge still matters enormously seems to have been missed by many.
The biggest surprise in AI and machine learning in 2017 was the broad consumer adoption of conversational agents and digital assistants. Over the past decade, we’ve seen consumer trends drive enterprise adoption (think mobile, BYOD, etc.). Like most people, I initially viewed digital assistants as a gimmick, but Alexa changed everything. Now, virtually every enterprise is looking at a bot strategy for customer-facing apps/devices as well as workforce applications. We have clients that are answering customer inquiries, helpdesk requests, and even trading stocks via chatbots. These start out as simple scripted question/response algorithms but learn and adapt over time. We are now integrating more complex activities and simulating true human interactions. The speed at which global enterprises are deploying these AI bots is faster than any recent technology adoption that I can remember.
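A "simple scripted question/response algorithm that learns and adapts over time," as described above, can be sketched in a few lines. This is a hypothetical toy, not any particular vendor's bot; the script grows whenever a human agent resolves a question the bot could not:

```python
class LearningFAQBot:
    """Scripted question/response bot whose script grows over time."""

    def __init__(self, scripted):
        self.responses = dict(scripted)   # normalized question -> answer
        self.unanswered = []              # questions escalated to a human

    @staticmethod
    def _normalize(question):
        # Lowercase, collapse whitespace, drop a trailing question mark.
        return " ".join(question.lower().split()).rstrip("?")

    def reply(self, question):
        key = self._normalize(question)
        if key in self.responses:
            return self.responses[key]
        self.unanswered.append(question)   # hand off to a person
        return "Let me connect you with a human agent."

    def learn(self, question, answer):
        """Called when a human agent resolves an escalated question."""
        self.responses[self._normalize(question)] = answer
```

Real deployments replace the exact-match lookup with intent classification and entity extraction, but the escalate-then-learn loop is the same shape.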
I am consistently surprised at how much AI is actually out there already and how quickly it is progressing. While some of these use cases may seem simple, there are also a lot of clever ideas being tested or even implemented. I think this pragmatic approach can unlock real value from using AI to leverage insights and actually make something happen — autonomously, and possibly in real time.
The low barrier to entry for companies and teams adding artificial intelligence and machine learning (AI/ML) to their products was the big highlight of the year (and rightly so). There is no longer any excuse not to leverage AI/ML now that AWS and other platforms have made it so easy to build and deploy sophisticated, high-performing algorithms. The fact that AI/ML moved further into the mainstream this year is strong validation that we truly are on the cusp of the golden age of machine learning.
This year, the rapid growth and uptake of machine learning within enterprise software was impressive, to say the least. From research to implementation, adoption of machine learning accelerated. Although many predicted machine learning would be impactful in 2017, the speed of innovation was the real surprise. For example, through multiple cloud consumption releases, companies were able to benefit from the technology and see tangible value.
The biggest surprise in 2017 was that quantum computing started to become a real thing. There are actual startups being funded to deliver cloud quantum computing as a service. What I thought was a Discovery Channel theory is closer to reality than I suspected. Looking at where AI is headed with Google and TensorFlow, we now have the ability to do things we couldn’t do in the past, making neural networks and deep learning a reality. The next quantum leap, however, will be having enough processing power to do the unimaginable.
For me, watching the various developments in deep learning was very exciting. Early in the year, my attention was turned to Libratus and DeepStack, two engines that proved AI can best humans at Texas Hold’em poker. Then, Google DeepMind unveiled its latest Go-playing engine, AlphaGo Zero. What makes AlphaGo Zero significant is that it learned its strategy entirely by playing itself. And yet, within 40 days, it proved able to beat all previous iterations of AlphaGo, including the version that beat a world champion. This new version was a significant demonstration of the power of machine learning to accurately extract, identify, and act on data.
In 2017, AI and machine learning adoption failed to meet the hype. This isn’t necessarily a surprise, but it is noteworthy given the fierce debate over the last year. Prominent CTOs, CIOs, and others, such as Elon Musk and Stephen Hawking, wondered if AI would “take over,” and discussion raged over how robots could uproot jobs en masse and how people and companies could protect themselves against the “dangers of AI.” Despite this, progress and development in AI did not materialize as expected. In the enterprise, adoption of AI and machine learning began, but not at the scale and extent predicted.
I think the biggest surprise in 2017 was the way organizations began to publicly support and voice their opinions on social causes and issues, even when controversial. This was evident in their content, from blogs to advertising, and they used an array of technologies to reach their audiences across channels. As a tech company that offers AI solutions to understand buyer beliefs (narratives), it’s interesting to see how this has changed and evolved over time and become a natural part of brand identity for some organizations.
Kris Boyd, Field CTO, Tintri
How quickly mobile machine learning became a reality was the biggest takeaway from 2017. Google announced TensorFlow Lite, and it changed the game in terms of how mobile apps get developed. This is important because the number of mobile users is in the hundreds of millions, if not billions. This could make big data even bigger. The insights from this type of learning will have a real impact on how devices and apps are developed in the very near future.
Opinions expressed by DZone contributors are their own.