How to Protect the IP Of AI

Let's take a look at one way to protect the IP of Artificial Intelligence: watermarking, which is done in two stages.

By Adi Gaskell · Aug. 31, 18 · Opinion

As AI has taken on ever greater importance for organizations around the world, it is understandable that efforts are underway to protect the intellectual property of algorithms that have strategic importance.

A recent paper from IBM Research highlights one such strategy. The approach takes inspiration from the digital watermarking used to protect video, audio, and photos.

Watermarking is typically done in two stages. The first is an embedding stage, where a word, usually "COPYRIGHT," is placed on top of the photo so that people can detect whether it has been used illegally. The second is the detection stage, where owners can extract this watermark and use it as legal evidence of ownership.

Embedding Watermarks

The IBM team believes a similar approach can be used with Deep Neural Network (DNN) systems. They suggest that embedding watermarks into such systems makes it possible to verify their ownership and therefore deter theft. However, doing so requires fundamentally different methods from those used when embedding watermarks into other digital assets.

The paper describes an approach for doing just that, while also outlining a remote verification mechanism that uses API calls to determine the ownership of an AI system. The team developed three distinct methods for generating different kinds of watermarks:

  1. Embedding meaningful content together with the original training data as watermarks into the protected DNNs
  2. Embedding irrelevant data samples as watermarks into the protected DNNs
  3. Embedding noise as watermarks into the protected DNNs

These methods were tested on two large public datasets, and each was able to provoke an unexpected but controlled response from a model that had been watermarked.
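The trigger-set idea behind such watermarks can be sketched in miniature. The toy model, data, and trigger set below are illustrative assumptions, not IBM's implementation; a 1-nearest-neighbour classifier stands in for a DNN because, like an over-parameterised network, it can memorise arbitrary trigger inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN: a 1-nearest-neighbour classifier, which can
# memorise arbitrary trigger inputs much as an over-parameterised network can.
class ToyModel:
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, Q):
        # Label each query with the label of its nearest stored point.
        d = ((Q[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        return self.y[d.argmin(axis=1)]

# Ordinary training data: two labelled clusters.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Watermark trigger set in the spirit of method 3: pure-noise inputs paired
# with an owner-chosen label they would not receive by coincidence.
triggers = rng.uniform(-10, 10, (5, 2))
trigger_label = 1

# Train on ordinary data plus the secret trigger set.
model = ToyModel().fit(
    np.vstack([X, triggers]),
    np.concatenate([y, np.full(5, trigger_label)]),
)
```

After training, the model behaves normally on ordinary inputs but returns the owner-chosen label on the secret triggers, which is the controlled response a verifier later checks for.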

The authors acknowledge that watermarking has been tried before, but previous efforts were hampered by needing access to the model's parameters. This doesn't work in the real world, as stolen models are usually deployed remotely, and IP thieves do not tend to publicize the parameters of the models they have stolen.

The system isn't perfect, as the authors themselves admit. For instance, if the stolen system is deployed internally, it is largely impossible to detect, so policing requires the suspect system to be accessible online. Similarly, the watermarking method cannot prevent models from being stolen via prediction APIs, but the researchers argue that such attacks have enough limitations that this is not a sizeable loophole.
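Because verification only needs query access, an ownership check can be run against a suspect prediction API without ever seeing its parameters. The function and stand-in models below are a hypothetical sketch of that idea, not the paper's actual protocol; the threshold is an assumed tolerance for noisy answers:

```python
def verify_ownership(predict_fn, triggers, trigger_label, threshold=0.9):
    """Query a suspect model on the secret trigger set; a model that
    reproduces the owner-chosen label far above chance is flagged."""
    hits = sum(1 for t in triggers if predict_fn(t) == trigger_label)
    return hits / len(triggers) >= threshold

# Stand-in models for illustration only.
secret_triggers = [(0.1 * i, -0.3 * i) for i in range(10)]
watermarked_api = lambda x: 1   # a stolen copy that memorised the triggers
unrelated_api = lambda x: 0     # an independently trained model
```

Only the owner knows the trigger set, so a high match rate is strong evidence the remote model is a copy, while an independently trained model should agree with the secret labels only at chance level.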

They're currently looking to deploy the system internally at IBM before scaling it up and deploying it with clients.


Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
