
Building Ethically Responsible AI-Powered Environments

Imagine how biased your Alexa would be if she relied on paid campaigns to make personalized suggestions. We must consider the lines between personalized content and commercial gain.

Perhaps you’ve encountered the entreaty for the “democratization of AI” before. Generally speaking, democracy has a good reputation, but what does it mean in the context of technology and artificial intelligence, and what are the dangers of the alternative? Consider the words of Andrew Ng, Chief Scientist at Baidu Research, who has called AI “the new electricity,” and you’ll begin to get a sense of what we mean.

At the moment, AI (and machine learning, and deep learning) tools are mostly in the hands of researchers, industry, large enterprises, colleges, and labs. AI presents distinct advantages to businesses that can incorporate it into their systems or products, improving everything from logistical efficiency to user experience; by definition, those who can’t access or incorporate AI/ML are at a disadvantage. The democratization of AI is a term that, in a nutshell, means leveling the playing field by making the advantages of AI available to all, and it’s an important aspect of the larger project of the ethical development of technology.

Google.ai has recognized the importance of this kind of democratization, which is why they’ve made it a part of their mission “to organize the world’s information and make it universally accessible and useful.” Increasingly, companies that work in the field are recognizing their own responsibility for supporting the ethical development of artificial intelligence, which is why our team is looking forward to participating in the upcoming Forum on the Socially Responsible Development of AI, hosted by Université de Montréal and Les Fonds de recherche du Québec at the Palais des congrès de Montréal November 2-3, 2017.

Designing Ethical AI Spaces

AI brings a host of new considerations and possibilities for designing with user experience in mind. Aided by artificial intelligence, user interfaces are beginning to extend far beyond the traditional aspects of visual and spatial design. People can now interact with their devices through zero user interface environments and conversational spaces. These devices include a whole host of connected components, like wearables, appliances, cars, and home devices, in addition to the usual tech gadgets, online accounts, and services. Furthermore, AI can use the data collected on patterns of user engagement to fine-tune user experience to a degree never before possible.

Design intersects with ethics because UI and zero user interface environments can be used to influence (or at worst, manipulate) behavior. As our interaction with technology takes a larger and larger place in our lives, we want this relationship to be healthy. That means meaningful and proportionally weighted interactions, not algorithms designed to make interactions addictive for the sake of short-term, one-sided goals. Is there any technology in your life on which you have an unhealthy dependence? If so, does this ever affect how you feel about that brand? And what happens when zero UI environments become so discreet that we forget they’re even around?

There are more potential dangers than game apps you just can’t quit, however. Masha Krol, Lead Designer at Element AI and author of AI-First Design, asserts that “AI will open up problem spaces that are unimaginable within our current solutions-based model. […] We don’t want to only solve incremental problems, but rather address paradigm shifting, disruptive ones.”

Collaborating with users, keeping communication channels open, and incorporating their feedback will be fundamental to maintaining principled, human-centered designs that foster healthy, responsible, and ethical environments.

Responsible Data Collection

Because AI is data-driven, it creates even more incentive and opportunity for businesses to mine, store, and exploit data, which leads to its own special set of ethical concerns. We at Arcbees feel strongly that stewardship over user information must be respected as a sacred responsibility, held in trust for the user rather than exploited for the sake of upselling product.

This has implications for marketing and sales. People are very sensitive about when their online activities or conversations are being monitored, which is why it’s crucial to remain transparent and open about data collection. Setting the user’s expectations and being clear about how the data will be used is just as important. Failing to do so feels intrusive and breeds distrust of, and reluctance toward, the intelligent agent.
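To make that concrete, here’s a minimal sketch in Python of what opt-in, purpose-scoped data collection could look like. Everything here (the ConsentLedger class, the purpose strings) is a hypothetical illustration of the principle, not a real API:

```python
# Hypothetical sketch: nothing is retained unless the user has explicitly
# consented to that specific purpose. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    grants: dict = field(default_factory=dict)    # purpose -> opted in?
    retained: list = field(default_factory=list)  # what we actually keep

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = True

    def record(self, purpose: str, payload: str) -> None:
        # Default deny: data tied to an unconsented purpose is dropped,
        # not quietly stored for later.
        if self.grants.get(purpose, False):
            self.retained.append((purpose, payload))

ledger = ConsentLedger()
ledger.grant("voice_personalization")
ledger.record("voice_personalization", "play my morning playlist")  # kept
ledger.record("ad_retargeting", "play my morning playlist")         # dropped
print(ledger.retained)
```

The important design choice is the default-deny posture: data tied to a purpose the user never agreed to gets dropped, not silently kept.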

Imagine just how biased your Amazon Alexa would be if she relied on paid AdWords campaigns to make “personalized” suggestions. How would you feel if marketing and sales teams had access to what was being said in your home? What if you discovered that they were using this data to retarget you via other online channels, like Google, Facebook, or even Spotify? These issues must be addressed if we’re to create enjoyable experiences that respect privacy even as they become integral to our intimate spaces.

Keeping AI Environments Safe

Of course, privacy concerns are closely related to cybersecurity, one of the biggest ethical concerns surrounding AI and data collection. This one’s pretty self-explanatory, especially in the wake of the Equifax and Deloitte disasters, merely the two most recent high-profile data breaches. Equifax’s shoddy and irresponsible practices, recently exposed in a congressional hearing, practically read like a “How Not To” manual for playing fast and loose with security. It bears repeating that stewardship over user information must be respected as a sacred responsibility. Yes, this requires a lot of extra work and vigilance, but it is a fundamental pillar of the AI and internet of the future that we want to build. Is it enough to rely on individual businesses or developers to do the right thing, however? Or is this an area we need to police with government regulation?

Bias in datasets is another major area of ethical concern for data collection, and one that is rightly beginning to get more and more attention. It may seem as though nothing could be more objective than data points in a database, but this is deceptive. As Sergey Brin and Larry Page wrote in their article “The Anatomy of a Large-Scale Hypertextual Web Search Engine” (April 1998), “[a]nyone who has used a search engine recently, can readily testify that the completeness of the index is not the only factor in the quality of search results.” In other words, more data does not simply equal smarter AI. In fact, too much information can make for an unstable model.
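A small Python experiment (our own illustration, not from Brin and Page) makes the point: bury a handful of informative features under hundreds of columns of pure noise, and the cross-validated accuracy of the same model will typically fall:

```python
# Illustration: more columns of "data" can destabilize a model rather
# than make it smarter.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 200 samples whose labels depend on just 5 informative features.
X, y = make_classification(n_samples=200, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)
model = LogisticRegression(max_iter=1000)
print("5 informative features: ", cross_val_score(model, X, y, cv=5).mean())

# The same signal buried under 500 columns of pure noise.
X_noisy = np.hstack([X, rng.normal(size=(200, 500))])
print("plus 500 noise features:", cross_val_score(model, X_noisy, y, cv=5).mean())
```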

Conversely, incomplete information can result in unfortunate episodes of “artificial stupidity.” What is needed is a subjective human who can use their experience and perspective to select which variables are necessary and relevant, and jettison the rest. Furthermore, that same human insight is required to establish the directives of algorithms in the first place. As Maxime Dion puts it, “Data science is the art of finding and choosing hidden information [such as the] relationships and correlations that can be found between seemingly independent variables.” However, any time you have humans making subjective choices, bias can creep in.
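As a deliberately tiny example of the kind of subjective check this implies, the sketch below (with invented data and a hypothetical “region” column) simply surfaces how each group is represented in a dataset. Whether 10% rural reflects reality or a skewed collection process is exactly the judgment the human, not the algorithm, must make:

```python
# Hypothetical bias check: how is each group represented in the data?
import pandas as pd

def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of rows per group, sorted ascending so under-represented
    groups surface first."""
    return df[column].value_counts(normalize=True).sort_values()

# Invented example data, for illustration only.
df = pd.DataFrame({"region": ["urban"] * 90 + ["rural"] * 10})
print(representation_report(df, "region"))
# rural    0.1  <- real-world rate, or an artifact of how we collected?
# urban    0.9
```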

(Still skeptical? Check out Angela Bassa’s fantastic article on Medium on the potential pitfalls of bias when “sciencing on data.”)

But no human is without bias, so what to do? Here’s where the democratization of AI comes in. The premise of democracy is that decision-making is strengthened and balanced when more people are involved in the process; a multiplicity of inputs, perspectives, and feedback loops will overcome any one person’s skew. In the context of AI and data, democracy can be fostered by measures that allow and encourage as much transparency and collaboration as possible: open-source APIs and AI tools, open datasets, forums for discussion, and free educational materials accessible to all skill levels.

What other strategies could we put in place to support the democratization of AI? Start the feedback loop now! Share your comments and ideas with us below, or send me a tweet! Also, if you enjoyed this article, feel free to share it with your colleagues. Thanks again for reading. I look forward to reading your comments!

Topics:
ethics, ai, machine learning, personalization, ethical ai, democratization

Opinions expressed by DZone contributors are their own.
