
AI/ML 2019 Predictions (Part 5)


2019 will be the year of the data engineer.


Given the speed with which technology evolves, we thought it would be interesting to ask IT executives to share their predictions for the coming year. Here are more of their thoughts on Artificial Intelligence (AI), Machine Learning (ML), and other aspects of data science in 2019:

Nima Negahban, CTO and Co-founder, Kinetica

The rise of the data engineer brings AI to the forefront of the enterprise. Last year was the year of the data scientist: enterprises focused heavily on hiring and empowering data scientists to create advanced analytics and ML models. 2019 is the year of the data engineer. Data engineers will find themselves in high demand because they specialize in translating the work of data scientists into hardened, data-driven software solutions for the business. This involves creating in-depth AI development, testing, DevOps, and auditing processes that enable a company to incorporate AI and data pipelines at scale across the enterprise.

Humans and ML form a symbiotic relationship to drive real-time business decisions. In 2019, the world of AI and analytics will need to converge in order to drive more meaningful business decisions. This will require a common approach for combining historical batch analytics, streaming analytics, location intelligence, graph analytics, and artificial intelligence in a single platform for complex analysis. The end result is a new model for combining ad-hoc analysis and machine learning to provide better insight faster than ever before.

Rick Spencer, V.P. of Engineering, Bitnami

Developer tools will emerge that analyze source code to find optimizations and bugs, add tests, and even add features.

Monte Zweben, CEO, Splice Machine

In 2019, ML will enter more of an operational phase. Instead of being implemented in backroom experiments, machine learning will be used in real-time, mission-critical, enterprise applications, in industries ranging from manufacturing and healthcare to finance.

Erez Yalon, Head of Security Research, Checkmarx

Artificial Intelligence (AI) and Machine Learning (ML) have changed from mere buzzwords into actual tools to work with. From an InfoSec point of view, we see AI/ML already being used in defense tools to detect anomalies and potential threats, and there is much discussion of malicious actors trying to disrupt these algorithms. The use of AI/ML in hacking tools is still nascent, but we can predict that AI/ML-based or -assisted attacks will become more and more frequent.

Andrew Howard, Chief Technology Officer, Kudelski Security 

In 2019, the hype around automation, ML, and AI will move to reality, with capabilities introduced to augment security teams and equip them to detect and respond faster to the proliferation and sophistication of threats. However, AI will be a double-edged sword. It will be used by cybersecurity professionals to predict and identify cybersecurity threats, but hackers will also take advantage of it to launch even more sophisticated cyber-attacks.

Ambuj Kumar, Co-founder and CEO, Fortanix

Many say AI is the new electricity, something so fundamental that it’ll impact almost everything we do. In the same way, data can be called the engine that powers this new type of electricity. New types of privacy-aware frameworks will allow healthcare businesses to use AI and ML with the most sensitive patient data, providing new insights for diseases such as cancer and helping with highly personalized drug discovery.

Beth Long, Senior Engineer and Technical Product Manager, New Relic

As systems become ever more complex, both automation and ML are critical tools. Organizations need people who understand how to fit these tools to the humans who keep the systems alive.

Brian Brinkmann, Vice President of Product, Logi Analytics

As demand for data scientists skyrockets, BI vendors fighting to differentiate themselves will use this opportunity to pivot their offerings and create tools that enable application developers and product managers to build AI-aware applications. This will further the creation and adoption of AI inside applications and change the technical expertise required to provide intelligent data analysis. In the next year, the transition to AI-aware applications will also enhance the end-user experience and will become the benchmark for end-user quality expectations going forward.

Ben Schrauwen, CTO and Co-founder, Oqton

The biggest surprise of the past year was the advances made in addressing the need for large training datasets. AlphaZero beat all previous versions and reached superhuman level without examples of human play. And generative adversarial networks (GANs) are being successfully applied to yield more robust models. We're also seeing that AIs can become so good at very specific tasks that humans can no longer tell the difference; Google Duplex, for example, effectively crossed the uncanny valley in speech synthesis, producing natural-sounding conversations for specific, narrow domains.

I anticipate we’ll rapidly see AlphaZero’s approach applied to hard problems with large search spaces, to equal or even surpass human expertise. And advances in vision and 3D deep learning will lead to more and more solutions to help increase the productivity of humans at very specific tasks or even fully automate them.

Andreas Pettersson, CEO, Arcules

This past year, everyone was buzzing about AI, and enterprises began to build the framework for this technology. Now that companies have the parts – IoT devices, video cameras, infrared sensors – in place, they can find true value in the masses of unstructured data that have been flowing in for years. Data from smart thermostats, security feeds, and building traffic will actually be used to generate previously untapped business insights. For example, retailers will use this newly structured data to monitor foot traffic.

Alan Conboy, Office of the CTO, Scale Computing

In 2019, AI and ML will come close to reaching their full potential by connecting and processing data faster over a global distribution of edge computing platforms. AI and ML insights have always been available, but they have been delivered more slowly than needed from cloud platforms or traditional data centers. Now we can move compute and storage capabilities closer to where data is retrieved and processed, enabling companies, organizations, and government agencies to make wiser and faster decisions. We're already seeing this in the way airlines build and service airplanes, government defense agencies respond to hackers, and personal assistants make recommendations for future online purchases. This year, thanks to AI and ML, someone will finally know whether that special someone really wants a fruitcake or a power washer.

Bob Davis, CMO, Plutora

A top tech trend of 2019 will be the impact ML/AI have on the quality of software. In the past, we've designed delivery processes to be lean and to reduce or eliminate waste, but to me, that's an outdated, glass-half-empty way of viewing the process. In 2019, if we want to fully leverage ML/AI, we need to understand that the opposite of waste is value and take a glass-half-full view: becoming more efficient means increasing value, rather than merely reducing waste.

Nikita Shamgunov, CEO, MemSQL

Prediction #1: Modern workload demands will command a shift from NoSQL to NewSQL databases. Due to the constant surge of data driven by ML, AI, and edge compute workloads, traditional NoSQL databases are no longer enough to satisfy market demands for better performance and scalability without adding new complexities to existing databases. Relational databases have evolved into more scalable, fast-operating NewSQL databases that meet the requirements of these modern workloads, which demand higher data processing power, by integrating transactional and analytical processing capabilities into a single database.
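To ground the idea of a single database serving both sides, here is a minimal sketch in Python of one store handling a transactional insert and an analytical aggregate in the same process. It uses the standard-library sqlite3 module purely as a stand-in (SQLite is not a NewSQL engine), and the orders table and columns are hypothetical.

```python
# Minimal sketch: one store serving both a transactional write and an
# analytical aggregate. sqlite3 is a stand-in, not a NewSQL engine;
# the schema is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# Transactional path: record individual orders as they arrive.
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("west", 120.0), ("east", 75.5), ("west", 210.25)],
)
conn.commit()

# Analytical path: aggregate over the same live data, with no ETL hop
# into a separate warehouse.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
):
    print(region, total)
```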

Prediction #2: Successful AI and machine learning initiatives will require CEOs to better understand their data infrastructures. The race to AI and ML is becoming more mainstream than ever and will require a more all-hands-on-deck approach. For businesses to successfully deploy AI and ML to maximize business opportunities and mitigate risks, CEOs and other C-level leaders will need to understand the maturity of their data infrastructures, including how their data is being stored and processed, to determine which technologies and talents are needed to drive transformation.

Prediction #3: AI will enable employees to minimize labor-intensive tasks and evolve their roles. AI adoption is expected to fuel the introduction of new roles and job opportunities in line with company strategies to become more data-driven. Instead of replacing humans, AI will help perform tasks that are normally time-consuming and labor-intensive for employees, enabling them to focus on more meaningful responsibilities such as analyzing insights, applying quick data-driven decision-making to solve problems, developing strategies and people, and more.

Ryan Ferrier, VP of Sales, Figure Eight

We’ll see AI adoption across all industries. All companies have multiple types of data that can be leveraged to build in efficiencies, lower cost, and increase revenue. It’s just about recognizing its potential. As we see more examples of how internal data is powering AI in the real world, companies will begin to recognize similarities within their own organizations and begin to experiment with or productize AI.

Dale Brown, VP of Business Development, Figure Eight

Alongside the increase in demand for AI within companies, we’ve also seen a continued shortage of trained data scientists. To increase the adoption of AI, AI platforms will need to empower traditional developers with tools to enable them to create machine learning models faster, as well as ensure they have an integrated platform that will allow developers to annotate and label the data needed to improve the accuracy of their models.

Nikita Ivanov, Founder and CTO, GridGain Systems

Gartner defined the ‘in-memory computing platform’ category in December 2017. They recognized the emergence of a new product category which includes in-memory data grids, in-memory databases, streaming analytics platforms, and other in-memory technologies. Since that time, the category has rapidly expanded and evolved and I see it continuing to evolve in 2019 as in-memory computing platform vendors respond to the massive interest in machine-driven decision making by adding ML and deep learning capabilities to in-memory computing platforms. These new ML capabilities will allow companies to increasingly deploy in-process HTAP (hybrid transactional/analytical processing) solutions which update their ML model in real-time based on new operational data. In-process HTAP enables optimal decision making based on real-time data for mission-critical applications such as fraud detection, credit approvals, and vehicle and package routing decisions. In addition, new integrations between in-memory computing platforms and deep learning systems will allow companies to more easily access and analyze their operational data using artificial intelligence solutions.
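To make the "update the ML model in real time on new operational data" idea concrete, here is a minimal sketch of incremental model updates using scikit-learn's partial_fit API. The two-feature transactions, fraud-style labels, and thresholds are synthetic stand-ins invented for illustration; this is not GridGain's implementation.

```python
# Sketch: a model refreshed in place as new operational records arrive.
# Synthetic data and a simple linear classifier, for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # linear model trained with stochastic gradient descent

# Initial batch of historical transactions: 2 features, binary "fraud" label.
X_hist = rng.normal(size=(500, 2))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 1.5).astype(int)
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))

# Each new mini-batch of operational data updates the same model in place,
# so scoring always reflects the latest data.
for _ in range(10):
    X_new = rng.normal(size=(50, 2))
    y_new = (X_new[:, 0] + X_new[:, 1] > 1.5).astype(int)
    model.partial_fit(X_new, y_new)

print("flag this transaction?", model.predict([[2.0, 1.0]]))
```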

Uri Sarid, CTO, MuleSoft

In 2019, we’ll see more cases of unwanted bias in machine learning and AI. Rather than feeding human decisions to machines and programming them to replicate those decisions, businesses should provide machines with desired outcomes and allow them to identify the patterns more objectively to achieve those outcomes. To do this, machines and AI will need to access a wider range of data from different sources, exposed through an application network to achieve a more statistically correct and unbiased outcome.

Josh Feast, CEO, Cogito

The year of demystifying AI. In 2019, society will push for the demystification of AI and demand a better understanding of what technology is being built, along with greater transparency into how it is being used. In recent years, there has been an apparent shift in mindset across society when it comes to AI, especially regarding privacy concerns. As a result, technology creators will have to embrace full transparency and responsibility to ensure privacy rights are respected and that the technology is being used in a valuable and ethical way. In the end, this will lead to a clearer delineation of AI’s purpose, whether it is AI leveraged to automate simple tasks or used to augment humans’ natural abilities. As transparency increases, people will better understand that AI is not an all-encompassing term for machines that can replicate and act like a complete human, but rather a more explicit set of capabilities that automate simple tasks and augment people executing more complex actions. This will result in less fear of a machine takeover and greater acceptance of innovation.

AI will help build more human organizations. In the next 12 months, organizations will increasingly turn to AI to augment humans in areas previously not considered possible. Until now, the majority of organizations have leveraged AI to eliminate simple tasks rather than to actually help humans be better humans. By taking “humanness” — emotion, fatigue, stress, etc. — into account when adopting AI technologies, organizations will be able to become more empathetic and human-centric. This will help increase productivity, employee happiness, and engagement. AI will help people become better versions of themselves and better realize their potential.

Laurent Bride, CTO, Talend

Questions around data morality will slow innovation in AI/ML: The past year has seen the hype around AI/ML explode, and data ethics, trust, bias, and fairness have all surfaced as ways to combat inequality in the push to make everything intelligent. There are many layers to data morality, and while ML advancements won’t cease, they’ll slow down in 2019 as researchers try to hash out a fair, balanced approach to machine-made decisions.

The black box of algorithms becomes less opaque: Part of the data morality issue with AI and ML is that numbers and scenarios are crunched without insight into how the answers came to be. Even researchers can have a hard time sorting it out after the fact. But in the coming years, while we won’t get complete transparency into proprietary algorithms, the black box will become less opaque as end users become increasingly educated about data and how it’s used.
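For illustration only, here is one generic way a black-box model can be made a little less opaque: permutation importance, sketched with scikit-learn on a synthetic dataset. The dataset and model choice are assumptions for the example, not any specific vendor's transparency tooling.

```python
# Sketch: inspecting which inputs drive an otherwise opaque model's
# predictions, using permutation importance on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops;
# larger drops mean the model leans on that feature more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```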


