Injecting Moral Code Into AI
Realize there are true economic benefits to having a more diverse, well-educated, and involved society.
Beena Ammanath, Global VP, AI/Data/Innovation at HPE and Founder and CEO of Humans for AI, delivered a thought-provoking keynote at Lucidworks' Activate search and AI conference. She spoke on AI and the need for developers, and the IT industry as a whole, to take responsibility for doing good rather than enabling evil.
The term "artificial intelligence" (AI) was introduced in 1956 by John McCarthy. The accepted definition is, “The capability of a machine to imitate intelligent human behavior.”
According to Ms. Ammanath, there are three types of AI:
- Artificial narrow intelligence — most of what we see today
- Artificial general intelligence — the intelligence of a machine that could successfully perform any intellectual task that a human being can
- Artificial superintelligence — where most of the buzz is
Today's narrow AI has no broader awareness: if the building catches on fire, the AI will continue to play chess.
What the steam engine did for human muscle power, AI will do for human brain power!
Change is constant:
- In 1917, the literacy rate was 23% versus 86% in 2017
- In 1917, the average price of a new car was $400 versus $35,000 in 2017
- In 1917, 8% of homes had phones versus 80% in 2017
- In 1917, the global population was 2 billion versus 8 billion in 2017
- In 1917, the maximum speed limit was 10 mph versus 70 mph in 2017
- In 1917, the major technical invention was the toggle light switch versus gene editing in 2017
Humans need to build AI to help everyone improve their quality of life.
AI can improve the quality of work. The quality of work is a function of 1) what you love; 2) what comes easily to you; and 3) what pays you well. However, it's not easy to find all three at the same time. What comes easily and what you love, without good pay, is not sustainable. What you love and what pays you well has the potential to make you miserable. What pays you well and comes easily to you will likely not be your passion. The sweet spot is finding what pays you well, what you love, and what comes easily to you. AI has the potential to help you get there.
Will Hayes, CEO of Lucidworks, joined Beena to discuss how to reach that sweet spot by building AI that complements humans. Use AI to remove the things humans are not good at or don't enjoy. For example, a doctor taking notes: imagine this information being captured automatically and loaded into electronic medical records (EMRs) so doctors can spend more time talking to, and listening to, patients, becoming more empathetic.
How does this translate into bias once it reaches implementation? We are all biased. Our biases are shaped by everything in our lives. When we code bias into a program, those biases get carried over and scaled. Developers need to make sure biases are kept in check when they creep in. Diversity of thought, meaning humans from different economic, educational, and business backgrounds, helps prevent bias. Diverse groups of people can help to identify bias.
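As a minimal sketch of what "keeping bias in check" can look like in code, the snippet below audits a model's decisions for group-level skew. The data, the decision format, and the four-fifths threshold are illustrative assumptions for this article, not anything prescribed in the keynote:

```python
# Hypothetical audit sketch: compare a model's selection rates across demographic groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.8):
    """Flag groups selected at less than `threshold` times the best-off group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: a model's recommendations tagged with an (illustrative) demographic attribute.
sample = [("A", True), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True)]
print(flag_disparity(sample))  # {'B': 0.333...} -> group B is selected far less often
```

A check like this is only a starting point; the point of the talk is that diverse reviewers are needed to decide which groups, attributes, and thresholds matter in the first place.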
How do we create broader opportunities for a broader group of people? There is a lack of women in technology. We need to proactively train more women and minorities. Get non-technical people involved in AI as product managers, project managers, and QA; not everyone can be a developer or engineer, yet everyone can understand what a good UX/CX is.
Do you see a responsibility in the industry to drive this? Where should we be putting our effort and focus? More diversity = better business. We get too narrowly focused on the idea that you have to be a software engineer to be a part of this. To build a good AI product, you need more than technologists. It requires an up-front investment to train more non-technical people to understand this world. Leaders need to be more aware of bias.
In open source, there's a bias toward treating GitHub accounts and open source contributions as proof of skill. Just because someone is not an open source contributor doesn't mean they cannot make huge contributions to AI projects.
How should we approach capturing user feedback? Transparency about how data is being captured and used is key. As a developer, you know that information can be used to influence buyer behavior, so you need to be aware of how it can be misused and put thought into how the technology could be exploited by bad actors. AI aggravates the situation. Put guardrails in place to prevent bad actors from using the data in nefarious ways.
Are we making progress as a collective? Are we improving the quality of life for all of society? Work is happening across several domains. Financial services, healthcare, retail, and e-commerce are moving the most aggressively. Manufacturing is solving specific problems but not making as much progress on behalf of society.
The culture of the industry is crucial, and change is hard. You can build the best predictive model, but a person who has been going with their gut for 35 years finds it very difficult to use the model, and it becomes a failed project. The AI model you build is useless if you are not able to drive adoption by the end user.
Transparency is important as we go deeper into tracking and extracting information. Amazon used AI to review resumes, and it produced monochromatic results. Amazon did the right thing by pulling the algorithm offline, but it took a year before they saw there was a problem. Companies have to take responsibility for what they build. Put in the guardrails to prevent the algorithm from going rogue.
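One way to read "put in the guardrails" is to wire a fairness check into the release path so a skewed model is blocked automatically instead of being discovered a year later. The sketch below is a hypothetical illustration; the metric format, the 0.8 threshold, and the function names are assumptions, not Amazon's or the speaker's actual process:

```python
# Hypothetical release-gate sketch: refuse to promote a model whose staging-run
# selection rates show group-level disparity.

class GuardrailViolation(Exception):
    """Raised when a candidate model fails its pre-release fairness check."""

def release_gate(model_version, selection_rates, threshold=0.8):
    """selection_rates: {group: rate} measured on a staging run (e.g. by an audit like the one above)."""
    best = max(selection_rates.values())
    flagged = sorted(g for g, r in selection_rates.items() if r < threshold * best)
    if flagged:
        # Block the rollout and hand the evidence to a human reviewer.
        raise GuardrailViolation(f"{model_version}: selection-rate disparity for groups {flagged}")
    return f"{model_version} approved for release"

# Example: a resume-screening model only ships if no group is disproportionately screened out.
print(release_gate("resume-screen-v2", {"men": 0.41, "women": 0.38}))
```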
Humans for AI is interacting with the world to bring others along with us. Are we doing enough to bring others up with us and educate them? Each one of us needs to educate one non-technical person.
How do we explain what is happening in non-technical ways to non-technical people? We can train non-technical people about AI. Do it for the greater good. Realize there are true economic benefits to having a more diverse, well-educated, and involved society.
Are we creating monopolies, given the importance of data sources in driving AI product development? We need to remove personally identifiable information (PII) from IoT and healthcare data for open data projects.
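As a rough sketch of what stripping PII before publishing IoT or healthcare records might involve, here is a minimal de-identification pass. The field names, salt, and regex are illustrative assumptions, and real de-identification (for example, HIPAA Safe Harbor) requires far more than this:

```python
# Hypothetical PII-scrubbing sketch for records headed to an open data project.
import hashlib
import re

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address", "ssn"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(value, salt="rotate-this-salt"):
    """Replace an identifier with a stable, non-reversible token so records still link up."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def scrub_record(record):
    """Drop direct identifiers, pseudonymize the record ID, and mask emails in free text."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "patient_id" in cleaned:
        cleaned["patient_id"] = pseudonymize(cleaned["patient_id"])
    if "notes" in cleaned:
        cleaned["notes"] = EMAIL_RE.sub("[redacted]", cleaned["notes"])
    return cleaned

record = {"patient_id": "P-1001", "name": "Jane Doe", "email": "jane@example.com",
          "heart_rate": 72, "notes": "Follow up with jane@example.com next week."}
print(scrub_record(record))
```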
How can industry drive more collaboration and access? Think about the type of data your organization is collecting and how it can be used by others.
What are others doing? Companies want ideas. They want to hear about how others are using AI/ML to solve business problems and drive more value.
How do we promote diversity? Should we offer diversity as a service? Should the government mandate diversity? Get more diverse people excited from an early age. Promote role models. Approach the problem from both the top and the bottom. Companies need to be more thoughtful in their approach. Challenge your thinking. Meritocracy doesn't account for personal experiences; we need to put more value on those experiences. Look at other dimensions of what creates talent and experience. What are we not valuing in people?