
Cybersecurity and AI

Get ready: research suggests that the widespread adoption of machine learning will bring a slew of new attacks with it.



Machine learning algorithms are revolutionizing software development. Machine learning and big data analysis are no longer fringe areas populated by specialists. Rather, they're rapidly becoming mainstream software engineering topics that we're all expected to understand. Of course, there will still be specialists, just as we have user experience and data management specialists today. But everybody will need some understanding of what machine learning is, how to use it, and when it's appropriate.

Machine learning is quickly becoming a core feature of many different applications.

Just as widespread database use led to SQL injection attacks and widespread HTTP use led to things like cross-site scripting, the widespread adoption of machine learning will bring a slew of new attacks with it, and we've already started to see these attacks emerge in academia. In the past month, teams from NYU and the University of Washington have published papers demonstrating practical attacks against machine learning models and explaining why they work.

The University of Washington paper is called Robust Physical-World Attacks on Machine Learning Models. We've known for a while that deep networks are susceptible to small input changes that cause misclassification, and that certain images that look like static to us can look like, say, flowers to a deep neural classifier. Until now, these kinds of attacks weren't much of a threat, as they seemed to work only under tightly constrained conditions. The University of Washington team developed a novel attack algorithm that caused physical road signs to be misclassified reliably, across viewing angles and distances. They tested it on stop signs and right-turn signs and were able to get them misclassified as speed limit or added-lane signs.
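The paper's physical-world algorithm is more involved, but the underlying mechanism (nudging pixels in the direction that increases the classifier's loss) can be illustrated with the older fast gradient sign method, which this work builds on. Here's a minimal sketch in PyTorch, assuming a pretrained classifier `model` that outputs logits and images with pixel values in [0, 1]; the function name and epsilon value are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast gradient sign method: shift every pixel by +/- epsilon
    in the direction that increases the classifier's loss. The
    change is often imperceptible to a human, yet enough to flip
    the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()  # populates image.grad with d(loss)/d(pixel)
    adversarial = image + epsilon * image.grad.sign()
    # Clamp so the result remains a valid image.
    return adversarial.clamp(0.0, 1.0).detach()
```

A physical-world attack additionally has to survive printing, distance, angle, and lighting changes, so the paper optimizes a perturbation over many transformed views of the sign rather than a single image, but the gradient-driven core idea is the same.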

The second group, from NYU (BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain), showed how training data can be manipulated to achieve similar goals. Many engineers today train their systems using external services: Google, Amazon, and Microsoft all offer APIs that are commonly used to train classifiers. The NYU team showed how manipulated training data can create hidden activation nodes in the trained network, essentially inserting mathematical back doors into the classifier.
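The mechanics are simple to sketch: stamp a small trigger pattern onto a fraction of the training images and relabel them as a class of the attacker's choosing. A network trained on the poisoned set behaves normally on clean inputs but misclassifies anything carrying the trigger. Below is a minimal illustration, assuming grayscale training images in a NumPy array of shape (N, H, W) with values in [0, 1]; the function name and trigger pattern are hypothetical, not taken from the paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.05, seed=0):
    """BadNets-style data poisoning (illustrative): add a bright
    3x3 trigger patch to a random subset of images and relabel
    them as target_class. Training on the result plants a back
    door keyed to the trigger."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0     # trigger: bottom-right corner patch
    labels[idx] = target_class      # mislabel the triggered examples
    return images, labels
```

Because only a small fraction of the data is touched, accuracy on clean test data barely moves, which is what makes a compromised outsourced training pipeline so hard to detect.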

These examples were demonstrated on mundane tasks like handwriting and street-sign recognition, but keep in mind that the same kinds of classifiers are widely used for things like fraud detection as well. These attacks are just the beginning of a new approach to system compromise, and they have the potential for wide-ranging impact. Get ready.


Topics: ai, cybersecurity, machine learning

