Making AI That Is Fair and Verifiable

When AI affects people's fates, the algorithms and data sets behind it must be open to scrutiny. Copyright law, patent law, and business secrets can't be allowed to get in the way.

Many of us have watched the movie Minority Report, or we at least know the story. The premise of the movie is predicting crime and taking action before it happens. While the story may have sounded futuristic when it was released in 2002, recent advances in computing technology have allowed businesses and governments to deploy systems that collect vast amounts of data from everyday citizen interactions, run analytics on that data, and predict future behavior.

Analytics Is Simple AI

Advances in computing technology, like big data and the application of algorithms in cognitive computing and AI, have made analytics part of our day-to-day activities. For example, using analytics, a U.S. grocery chain identified that when a hurricane warning was issued, not only did standard grocery items like milk and bread fly off the shelves, but beer also disappeared quickly. With this insight, the chain was able to sufficiently stock high-demand items ahead of a storm. What a nice application of analytics!

One more example of analytics stands out. The retailer Target mapped customers' pregnancy-test purchases and sent promotional and educational material to their homes. In a few cases, this led to unforeseen situations, as the families of the women who made the purchases were caught unaware. Such suggestions are very good from the perspective of the enterprise, but they may feel like an invasion of privacy from the perspective of the customer.

While we are not surprised when Amazon suggests similar products as we shop, we would certainly be surprised if someone confronted us about a crime we haven't committed... whether it's littering on the road or robbing a bank. Most will attest that robbing a bank definitely warrants some "pre-crime action" (to borrow a term from the movie), while most will agree that littering can be corrected by a simple social message rather than stern police action.

While the Target example above is usually classified as "analytics," it is a simple form of AI in which association algorithms, user preferences, and product similarity are applied to generate suggestions for cross-selling and upselling.
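
To make the mechanics concrete, here is a minimal sketch of item-to-item suggestion based on co-occurrence counts. Everything in it (the baskets, item names, and confidence threshold) is invented for illustration; real recommenders use far richer association-rule mining, but the principle is the same.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets; the items and data are invented for illustration.
baskets = [
    {"pregnancy test", "unscented lotion", "vitamins"},
    {"pregnancy test", "vitamins"},
    {"bread", "milk", "beer"},
    {"bread", "milk"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    # Count each unordered pair of items bought together.
    pair_counts.update(combinations(sorted(basket), 2))

def suggest(item, min_confidence=0.5):
    """Return items frequently bought together with `item`, best first."""
    suggestions = []
    for (a, b), count in pair_counts.items():
        other = b if a == item else (a if b == item else None)
        if other is not None:
            confidence = count / item_counts[item]  # estimate of P(other | item)
            if confidence >= min_confidence:
                suggestions.append((other, confidence))
    return sorted(suggestions, key=lambda s: -s[1])

print(suggest("pregnancy test"))  # [('vitamins', 1.0), ('unscented lotion', 0.5)]
```

Swap purchase baskets for behavioral records, and the same co-occurrence logic starts to look like the "matching" discussed below.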

Let's consider a few more examples that can have a bigger impact on our lives.

Making Decisions

Some U.S. courts are using software to predict a person's propensity to commit a crime in the future. Based on this probability, judges are handing down sentences in ongoing trials. Sound like Minority Report? It certainly does.

Such predictive software tools obviously use many AI and cognitive computing algorithms to identify patterns in existing crimes, to examine the behavior patterns of the person(s) involved, and to compare them against other "similar" crimes to find a "match." While we may not completely discount the use of such software, the worrying fact is that most of the work behind the "matching" is kept secret and not made public. Enterprises cite "business secrets" as the reason for this secrecy and enforce it using copyright and patent laws.

When such software is being used to decide the fate of people or to make decisions that affect a large population, however, it is important that the algorithms used, the data used to train the algorithms, and the attributes used for comparison along with the weight associated with each attribute be made available for scrutiny.
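
To see why this matters, consider a deliberately simplified, hypothetical risk score. The attribute names and weights below are invented; the point is that the "risk" assigned to a person hinges entirely on choices that, under current secrecy, nobody outside the vendor can inspect.

```python
# Hypothetical risk score. The attributes and weights are assumptions made up
# for illustration, not those of any real sentencing tool.
WEIGHTS = {
    "prior_arrests": 0.6,
    "age_under_25": 0.3,
    "unemployed": 0.1,
}

def risk_score(person):
    """Weighted sum of attributes, capped at 1.0 to give a 0-1 'risk' value."""
    raw = sum(weight * person.get(attr, 0) for attr, weight in WEIGHTS.items())
    return round(min(raw, 1.0), 2)

# Two hypothetical defendants who differ only on one heavily weighted attribute.
print(risk_score({"prior_arrests": 1, "age_under_25": 1}))  # 0.9
print(risk_score({"prior_arrests": 1, "age_under_25": 0}))  # 0.6
```

Whether 0.9 versus 0.6 is the difference between probation and prison depends entirely on WEIGHTS, which is exactly the kind of detail that should be open to expert review.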

Even if this scrutiny is not performed by the general public, it needs to be available for examination by a team of impartial experts. The most important reason for examination and verification is that most machine learning, AI, and cognitive computing algorithms depend on high-quality data, and most people working in these fields know that data is notorious for containing abnormalities that can throw any algorithm, however robust, off track.
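
One hypothetical illustration of such an abnormality is exposure bias, where one group simply gets observed more than another. The districts and numbers below are invented, but the failure mode is real for any model trained on raw counts.

```python
# Both districts are assumed to have the same underlying incident rate, but
# district A was patrolled ten times more, so ten times more incidents were
# recorded there. (All figures are invented for illustration.)
recorded_incidents = {"district_A": 80, "district_B": 8}
patrol_hours = {"district_A": 1000, "district_B": 100}

# A naive model ranks districts by raw recorded incidents and calls A riskier.
naive_ranking = sorted(recorded_incidents, key=recorded_incidents.get, reverse=True)

# Normalizing by how intensely each district was observed removes the skew.
rate_per_hour = {d: recorded_incidents[d] / patrol_hours[d] for d in recorded_incidents}

print(naive_ranking)  # ['district_A', 'district_B']
print(rate_per_hour)  # {'district_A': 0.08, 'district_B': 0.08}
```

A model trained on the naive counts would send more patrols to district A, record even more incidents there, and appear to confirm its own prediction. Only an audit of the training data exposes the loop.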

Since criminality is something that most of us will (hopefully) stay far from, let's look at another example. Suppose you regularly drive an SUV to the office. While most of your commutes are without incident, one day you are involved in an accident with an autonomous car that crashed into your vehicle.

An important question here is: Are you being penalized by the autonomous vehicle because it made a deliberate decision to hit the "safer" vehicle?

SUVs are known to protect their occupants better in a crash than sedans. Thus, when the autonomous vehicle found itself in a situation where an accident with another vehicle could not be avoided, it decided to crash into your SUV rather than into another, less-safe vehicle. In this case, through no fault of your own, you are involved in a crash. While you may discount one such incident, what will be your reaction as more autonomous vehicles take to the road? Will you still feel safe driving your SUV, even if you are never involved in another accident? Will the fear of such incidents make you sell your SUV and buy a sedan? Let's suppose that you do go and purchase a sedan, and, understandably, one with a superior safety record. Even then, an autonomous vehicle may decide to ram into your sedan simply because you have the "safer" car.
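
A speculative sketch makes the dilemma concrete. The safety ratings and the harm-minimizing rule below are assumptions invented for illustration, not any manufacturer's actual logic.

```python
# Assumed occupant-survival odds per vehicle type (invented for illustration).
CRASH_SAFETY = {"suv": 0.9, "sedan": 0.6, "motorcycle": 0.1}

def pick_collision_target(nearby_vehicles):
    """When a collision is unavoidable, steer toward the vehicle whose
    occupants are most likely to survive. Minimizing expected harm this
    way systematically penalizes the owners of 'safer' vehicles."""
    return max(nearby_vehicles, key=lambda v: CRASH_SAFETY[v])

print(pick_collision_target(["sedan", "suv"]))  # -> suv
```

The rule is defensible in aggregate yet indefensible to the SUV owner who keeps getting hit; that tension is precisely why such decision logic must be open to examination.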

Conclusion

As it stands today, and given the fact that AI is being applied to varied problem domains, it is difficult to predict how the field will evolve. As we go along, we will come across many situations that make us reconsider our assumptions. We may have to make new choices and/or change our assumptions and the norms of our society.

When difficult or unprecedented situations arise, it should be possible for a chosen panel of experts to examine the assumptions made, the algorithms used, and the data sets used to train the models, without letting considerations of copyright law, patent law, or "business secrets" get in the way. We need to make an effort to ensure that such software tools do not discriminate without reason or go against the norms set by society. If need be, such software should be changed, even if it means that the business making it sees a small dip in revenue. When software starts to monitor, control, or dictate so much of what we do, it can no longer hide behind a cloak of "secrecy" and evade scrutiny.
