
The Risk of Virtual Personal Assistants as Gatekeepers


Virtual personal assistants (VPAs) are software applications that understand written and spoken language, answer questions, provide useful information, and perform tasks on our behalf. VPAs rely on capabilities ranging from speech recognition to predictive analytics and machine learning. Siri and Google Now are the most widely used VPAs today, and as these technologies advance, our VPAs are expected to become increasingly capable.

In a recent article, Tom Pullar-Strecker speculates that VPAs will have deep knowledge of us and our preferences. We will depend on these smart assistants to help us plan and organize our lives and even carry out basic tasks. Our VPA will determine if we’re available to meet with friends and then arrange the entire evening for us, from inviting our guests to making the restaurant reservations. The VPA will also act as gatekeeper to block unwanted corporate advertisements from reaching us. It will filter ads and only show us products that it believes we’ll be interested in, based on its knowledge of us.

We’re entering a new world. A VPA with all these abilities could be a huge asset to us and to those around us. But are there dangers lurking behind this seemingly positive future scenario? Most of the concerns voiced so far center on privacy. For the VPA to be truly effective, it will require deep insight into my personality, habits, and health. It will need to know who my family members, friends, and co-workers are. Many people are worried about the implications of handing so much data to a VPA.

But there are other risks that aren’t discussed as often as privacy. One that has received little attention is what I’ll call unfair VPA filtering. If my VPA shields me from unwanted ads and solicitations, and if it has my permission to make purchases on my behalf, it will wield a lot of power. Companies will want the VPA to approve their products rather than filter them out. How will the VPA decide which pair of shoes to buy for me when it judges that I’d be happy with any of five different options? The VPA, or whoever controls it, could simply buy the shoes from the company that pays the most to have me as a customer, collecting a kickback on every transaction it executes on my behalf.
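To make the risk concrete, here is a minimal sketch of how such a selection could be biased without the user ever noticing. All names, scores, and numbers are hypothetical illustrations, not any real VPA's logic: among offers the user would find equally acceptable, a user-aligned selector and a kickback-driven selector can return different products, and both choices look reasonable from the outside.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    brand: str
    price: float
    preference_score: float  # hypothetical: how well the VPA thinks the user likes it (0-1)
    kickback: float          # hypothetical: commission paid to the VPA's operator

offers = [
    Offer("BrandA", 79.0, 0.92, 2.0),
    Offer("BrandB", 75.0, 0.90, 12.0),
    Offer("BrandC", 82.0, 0.91, 5.0),
]

def acceptable(offers, threshold=0.9):
    # The set of offers the user would be happy with any of.
    return [o for o in offers if o.preference_score >= threshold]

def pick_for_user(offers):
    # User-aligned VPA: highest preference, ties broken by lower price.
    return max(acceptable(offers), key=lambda o: (o.preference_score, -o.price))

def pick_for_operator(offers):
    # Kickback-driven VPA: among acceptable offers, maximize commission.
    return max(acceptable(offers), key=lambda o: o.kickback)

print(pick_for_user(offers).brand)      # BrandA
print(pick_for_operator(offers).brand)  # BrandB
```

Every offer here clears the user's acceptability bar, so the kickback-driven choice is invisible to the user: the selection criterion changed, but the experience did not.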

I explore this risk in a recent guest post on Opus Research entitled "Virtual Personal Assistants: Future Gatekeeper to Your Attention?" You can also join the conversation in Opus Research’s LinkedIn group for Intelligent Assistants Developers and Implementers.



Published at DZone with permission of
