2018 AI/ML Predictions (Part 3)
While artificial intelligence is being adopted, data accuracy, data quality, and preconceived notions may affect the quality of machine learning.
Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2017 and their predictions for 2018.
Here's the third of three articles sharing what they told us about their predictions for artificial intelligence/machine learning.
In 2018, advanced machine learning will be more accessible to organizations. Packages such as TensorFlow, Caffe, and PyTorch are rapidly maturing, and developments like the GPU DataFrame collaboration unveiled in September promise to make it far easier to exploit the power of GPU acceleration. The more developments like these enable organizations to effortlessly access machine learning, the more people will think of AI as technology rather than magic. This demystification will drive more adoption.
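As a toy illustration of the "technology rather than magic" point: at its core, model training in frameworks like TensorFlow and PyTorch is an ordinary optimization loop. This sketch (plain Python, hypothetical data, no framework) fits a one-parameter linear model by gradient descent, the same loop those packages scale up with automatic differentiation and GPU acceleration:

```python
# Fit y = w * x to data generated from y = 2x, by gradient descent.
# This is the same training loop that ML frameworks automate and
# accelerate; the data here is made up for illustration.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward the true weight, 2.0
```

Nothing magical happens here: the loop repeatedly nudges the weight in the direction that reduces the error, which is exactly what happens, at vastly larger scale, inside a deep learning framework.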
I’d be remiss if I left out the popularity of Python and how it is helping power AI and machine learning adoption across the enterprise. I believe we will see more organizations willing to bypass the Java stack and move data science assets built with Python and R directly into production. This will break down traditional data infrastructure silos, accelerating the model iteration cycle and reducing time to deployment.
In 2018, AI will help companies scale and will take on a higher percentage of work. Business leaders will push to make their businesses run more efficiently and will turn to AI and machine learning to help. Companies will also turn to AI to scale and to do jobs instead of adding headcount. We will see AI developments and research move from the scientific, abstract-concept phase to more practical applications. As a result, enterprises will use AI and machine learning to push the limits of efficiency — more work will be completed by AI or machine learning.
2018 will prove to be the year that sees AI for what it is: an opportunity to enhance human capabilities and further insights into data-rich fields, not a job-stealer. The more organizations adopt AI technology and break down the barriers to entry, the more we can advance across all industries and verticals.
In 2018, machine learning will be the ultimate weapon in the cloud wars. We are one major announcement away from the technology industry being pushed into a Machine Learning Renaissance. 2018 will be a year of search and exploration of machine learning to determine how best to use it, and what things can be automated that you never thought possible before. Cloud will make machine learning pervasive and soon enough, it will be built into every application — used by everyone either directly or indirectly.
Pervasiveness of flawed data will lead to machine learning instability. 2018 will usher in organizations refining their AI, machine learning, and deep learning algorithms to leverage company and third-party data for the improvement of the broader customer experience. However, only 3% of companies are working with acceptably accurate data. Unless companies get a handle on their data and ensure 100% accuracy, machine learning could be learning from flawed data, resulting in inaccurate analytics and predicted outcomes, leading to poor business decisions.
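To make the flawed-data risk concrete, here is a minimal sketch (hypothetical data, plain Python) in which a single mislabeled training example shifts a learned decision threshold enough to misclassify a held-out point:

```python
import statistics

def train_threshold(samples):
    """Learn a 1-D threshold classifier: the midpoint of the two class means."""
    m0 = statistics.mean(x for x, label in samples if label == 0)
    m1 = statistics.mean(x for x, label in samples if label == 1)
    return (m0 + m1) / 2

def accuracy(threshold, samples):
    return sum((x > threshold) == label for x, label in samples) / len(samples)

# Clean data: class 0 clusters near 1, class 1 clusters near 5.
clean = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.2, 1), (4.8, 1)]
# "Flawed" copy: one class-1 point is mislabeled as class 0.
flawed = [(x, 0) if x == 5.2 else (x, label) for x, label in clean]

holdout = [(0.9, 0), (1.1, 0), (3.3, 1), (4.9, 1)]
print(accuracy(train_threshold(clean), holdout))   # 1.0
print(accuracy(train_threshold(flawed), holdout))  # 0.75
```

One bad label out of six pulls the learned threshold toward the corrupted class and costs a quarter of the held-out accuracy; real models trained on large, subtly flawed datasets degrade in the same way, just less visibly.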
In 2018, the transparency of machine learning will prove critical. We need to make the data and machine learning’s “thinking” transparent. Companies will emerge that are dedicated to the monitoring of machine learning processes. In the future, machine learning will be used to make decisions directly affecting consumers and it will be imperative that the decision-making process is highly predictive and reviewable.
The rapid growth in machine learning will continue as companies strive to get the most business value and competitive advantage from their existing data. As machine learning goes more mainstream, we will see an increasing number of data marts as different teams interpret data from the data lake differently and generate new data from that perspective.
Matthew Farrallee, Emerging Technology and Strategy, CTO Office, Red Hat
In the artificial intelligence and machine learning space in 2018, we will see the rise of the intelligent application. A surprise from 2017 is the realization that while these applications are beginning to become more mainstream, they are actually already among us and are only going to become more prominent in our workflows. Also in 2018, there will be widespread enterprise adoption of a standard workflow for building artificial intelligence applications. This will continue to push AI-enabled applications into the mainstream.
One of the biggest surprises of 2017 is that the consolidation of machine learning frameworks has already started. This will continue into 2018 but the groundwork for consolidation has already been built.
“One of the biggest turns of events was the long-anticipated emergence of Artificial Intelligence and Machine Learning in real applications that solve real-world business problems. The algorithms have been around for decades but now we have the massive compute power in the cloud to run the algorithms and vast volumes of data on which to apply them. 2017 saw these three things come together to make AI a real and viable business tool.” — Dan Juengst, Principal Technology Evangelist, OutSystems
As we saw AI and ML take off in 2017, we will continue to see the use of these techniques grow as they are applied to more business problems resulting in solutions that haven’t even been conceived of before. For example, applying machine learning to the massive amounts of data being gathered about end-user interactions on mobile applications will allow organizations to better understand their customers and be proactive and prescriptive with how they market to those users. We will also see AI begin to figure prominently in the world of IT operations where machine learning algorithms will be able to better process the large amounts of system alerts, performance metrics, and log files that emanate from the infrastructure and applications within the IT department.
While we have extensive knowledge about architecting and hosting scalable software services, very few people know how to design and host a scalable insight-as-a-service offering. In addition to traditional DevOps, a new discipline must emerge: AIOps. In the next couple of years, I expect this knowledge to begin to crystallize in the form of architecture patterns and best practices for AI model design, operationalization, and management.
AI doesn’t go mainstream, but businesses lay AI groundwork. Today, AI is more of a trendy buzzword than a practical reality, and it’s difficult to execute because AI is only as good as its data. While data integrity still varies within the enterprise, true implementation of AI remains a concept that will not come to fruition for a few years. However, we’ve seen early stages of machine learning applications in verticals such as advertising and retail. In the years ahead, we’ll see more industries, including industrial IoT, digital health, and digital finance, begin taking advantage of machine learning within applications to provide more meaningful user experiences. Throughout this transformation, the database will play an instrumental role by accommodating rapidly changing data at scale while keeping big data sets reliable and secure.
Bias in training datasets dominates the AI conversation. Everywhere you turn, companies are adding AI to their products to make them smarter, more efficient, and even autonomous. In 2017, we heard competing arguments for whether AI would create jobs or eliminate them, with some even predicting the end of the human race. What has started to emerge as a key part of the conversation is how training datasets shape the behavior of these models. It turns out a model is only as good as its training data, and developing a representative, effective training dataset is very challenging. As a trivial example, consider the soap dispenser, tweeted about by a Facebook engineer, that works for white people but not for those with darker skin. Humans are hopelessly biased, and the question for AI will become whether we can do better in terms of bias or whether we will do worse. This debate will center on data ownership: what data we own about ourselves, and what happens with companies like Google, Facebook, Amazon, and Uber, which have amassed enormous datasets that will feed our models.
Developers will confront the question of open sourcing their AI/ML datasets. It is no secret that companies like Facebook, Google and Amazon currently have a monopoly on our data. In 2018, developers will need to make a decision: band together and open-source their AI/ML datasets in hopes of standing up to these monopolies, or give in and resign themselves to a future where Mark Zuckerberg and Sundar Pichai remain the keeper of the keys to AI innovation. One technology that will make these developer-led, open-source initiatives possible is homomorphic encryption. Through homomorphic encryption, AI/ML models can be developed and verified on a blockchain before being shared, in turn liberating them from today's limited and highly-centralized data sets. This approach paves the way for a more democratic and collaborative AI future while at the same time skirting any concerns with privacy and proprietary data.
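As a rough illustration of the homomorphic property referred to above: textbook (unpadded) RSA is multiplicatively homomorphic, meaning the product of two ciphertexts decrypts to the product of the plaintexts, so a computation can be performed on encrypted values alone. This is a toy sketch with a deliberately tiny key, not the scheme or key size a real privacy-preserving ML system would use:

```python
# Toy demonstration of a homomorphic property using textbook RSA:
# E(a) * E(b) mod n decrypts to a * b. The tiny primes here are for
# illustration only; production systems use vetted libraries and
# purpose-built schemes.
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
combined = (encrypt(a) * encrypt(b)) % n  # computed on ciphertexts only
print(decrypt(combined))                  # 42, i.e. a * b
```

The point is that whoever multiplied the ciphertexts never saw 7 or 6, which is the property that lets encrypted datasets contribute to a shared model without being disclosed.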
Kris Boyd, Field CTO, Tintri
In 2018, there will be further development of apps and devices that use machine learning, and companies will try to find new and exciting ways to incorporate AI. There will always be a philosophical debate when it comes to AI, but there are ways to expand its uses without giving it too much control. Automation will make self-driving data centers a reality, with guaranteed, real-time, predictable performance without IT intervention. IT folks will be able to concentrate on more important tasks that add value to the company rather than keeping the engine running. This will be achieved when companies ensure a clear swim lane for every VM.
In 2018, cutting-edge research in AI will finally take the first step from detection and recognition (understanding what we see and hear) to reasoning (understanding what it means and solving problems). At long last, the field will actually justify its name: intelligence!
Over the next year, interest in AI will grow across every industry. By 2020, the AI market will grow to $47 billion. But how will these investments pay off for the enterprise? Equipped with AI and cognitive systems, big data analytics, and machine learning, the insights-driven Intelligent Enterprise will outpace its competition. Better data will mean better algorithms, and better algorithms will mean better data, and so on. We will become much more productive as we offload collecting and processing data to AI systems. The Intelligent Enterprise will leverage agile development to build apps in the Cloud, automate processes and menial tasks to optimize efficiency, and explore data lakes for sophisticated insights and better decision making.
The future of business requires artificial intelligence. In 2018, I expect AI techniques to be applied to more of the complex engineering problems organizations face in the design, testing, and certification of engineering products. By utilizing knowledge management platforms to amplify and augment human decision-making, AI can use historical data to make sense of problems that might otherwise not have been solved with traditional engineering.
Moreover, while neural networks have existed for decades, only now is massive computing power available at a reasonable cost, which in turn has helped increase the number of layers in these networks. Each layer adds more intelligence but also consumes enormous computing power, which used to be prohibitively expensive. More layers mean better outcomes. Over time, AI and machine learning will become smarter about analyzing data and making discoveries quickly that can positively affect businesses’ bottom lines.
In 2018, we’ll see the rise of machine learning and augmented reality. Machine learning is already present with Apple to an extent, as Siri provides suggestions based on your search history, but the possibilities of machine learning could be virtually endless in the future by leveraging the entire Apple ecosystem. Augmented reality (AR) is going to be monumental in education and healthcare. In education, AR will bring students to another place, culture, or time and allow them a learning experience unlike any other: real “field trips” done in real time, right from the classroom. And, as most of us know, a hospital stay is not very pleasant. While the iPad at the patient bedside is drastically improving the healthcare experience, imagine a future where a patient can virtually check out of their room and take a break from the stress of treatment.
Opinions expressed by DZone contributors are their own.