Is It Time to Reset the AI Application?
As standards and compliance are still evolving in the AI world, we should start designing systems that let users decide how to use the application and when to reset it.
Do you think AI is changing your ability to think? Applications recommend what movies to watch, what songs to listen to, what to buy, what to eat, what ads you see, and the list goes on… all of it driven by software that learns from you or delivers information through collective intelligence (i.e., people like you, people near you, etc.).
But are you sure you are being given the right recommendation, or are you consuming the information as-is and adapting to it? Have you ever asked yourself whether you would have reached the same conclusion through your own research and knowledge?
What's more, with information so readily available, less time and mental effort go into problem-solving and more into searching for solutions online.
As we build ever smarter applications that keep learning everything about you, do you think our thinking patterns will change even further?
Beyond what AI systems learn about us, there are other ethical issues around trust and bias: how do you design and validate systems whose recommendations humans can rely on to make unbiased decisions? I covered this in an earlier article.
As the creators and validators of AI systems, the onus lies on us (humans) to ensure the technology is used for good.
As standards and compliance are still evolving in the AI world, we should start designing systems that let users decide how to use the application and when to reset it.
I am suggesting a few approaches below to drive discussion in this area, which needs contributions from everyone if we are to deliver smart and transparent AI applications in the future.
The Uber Persona Model
All applications incrementally build some kind of semantic user profile to understand more about the user and provide recommendations. Making this transparent to the user should be the first step.
Your application can maintain several semantic user profiles: one about you, one about your community (people similar to you, location-based, etc.), together with a record of how each was derived over time. Finally, your application should offer a Reset Profile that lets you wipe your profile, or a 'Private AI' profile that lets you use the application without it knowing anything about you, so you can discover the information you need on your own. Leaving the choice of profile to the end user should lead to better control and transparency, and help users build trust in the system. A minimal sketch of this idea follows.
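To make the idea concrete, here is a minimal sketch in Python of how an application could expose its profiles, a reset action, and a 'Private AI' mode to the user. All names here (`PersonaManager`, `ProfileKind`, and so on) are hypothetical and only illustrate the shape of the design, not any existing API:

```python
from dataclasses import dataclass, field
from enum import Enum


class ProfileKind(Enum):
    PERSONAL = "personal"    # learned from your own behavior
    COMMUNITY = "community"  # derived from people like you (location, cohort, etc.)
    PRIVATE = "private"      # "Private AI": nothing about you is read or written


@dataclass
class UserProfile:
    kind: ProfileKind
    signals: dict = field(default_factory=dict)  # e.g. {"genre:thriller": 0.8}


class PersonaManager:
    """Holds every profile the application has built and lets the user pick or reset one."""

    def __init__(self):
        self.profiles = {kind: UserProfile(kind) for kind in ProfileKind}
        self.active = ProfileKind.PERSONAL

    def switch(self, kind: ProfileKind) -> None:
        """The user, not the application, decides which profile drives recommendations."""
        self.active = kind

    def reset(self, kind: ProfileKind) -> None:
        """Reset Profile: wipe everything the application has learned under this persona."""
        self.profiles[kind] = UserProfile(kind)

    def record_signal(self, name: str, weight: float) -> None:
        """Only learn about the user when they are not in Private AI mode."""
        if self.active is ProfileKind.PRIVATE:
            return
        self.profiles[self.active].signals[name] = weight
```

In this sketch the application can still learn, but switching to the private persona or calling `reset` is always one user action away, which is the point of the model.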
Explainability and Auditability
Designing applications with explainability in mind should be a key design principle. If the user receives an output from an AI algorithm, the algorithm should also report why that output was presented and how relevant it is. This would empower users to understand why particular information is being shown and to turn on or off any preference associated with the algorithm for future recommendations and suggestions, as sketched below.
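As an illustration only (the names `Explanation` and `PreferenceSettings` are hypothetical, not an existing library), each output could carry its own explanation, and the user could switch off the learned preference that produced it:

```python
from dataclasses import dataclass


@dataclass
class Explanation:
    reason: str       # human-readable "why you are seeing this"
    preference: str   # the learned preference that triggered it
    relevance: float  # how relevant the algorithm believes it is (0..1)


class PreferenceSettings:
    """Lets the user turn individual learned preferences on or off."""

    def __init__(self):
        self.disabled = set()

    def toggle(self, preference: str, enabled: bool) -> None:
        if enabled:
            self.disabled.discard(preference)
        else:
            self.disabled.add(preference)

    def allows(self, preference: str) -> bool:
        return preference not in self.disabled


def recommend(candidates, settings: PreferenceSettings):
    """Return (item, explanation) pairs, skipping anything driven by a disabled preference."""
    return [(item, why) for item, why in candidates if settings.allows(why.preference)]
```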
For instance, take the example of server auditing, where you have tools that log every request and response, track changes in the environment, assess access controls and risk, and provide end-to-end transparency.
The same level of auditing is required when AI delivers an output: what the input was, what version of the model was used, what features were evaluated, what data was used for evaluation, what the confidence score and threshold were, what output was delivered, and what feedback was received.
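The record below is a minimal sketch of what such an audit entry could look like. All class and field names are illustrative (there is no standard `AIOutputAuditRecord`), and a real system would write to an append-only audit store rather than printing:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class AIOutputAuditRecord:
    """One audit entry per AI output, mirroring the fields listed above."""
    timestamp: str
    input_payload: dict
    model_version: str
    features_evaluated: list
    evaluation_dataset: str
    confidence_score: float
    threshold: float
    output: dict
    user_feedback: Optional[str] = None


def log_ai_output(record: AIOutputAuditRecord) -> None:
    # Printing keeps the sketch simple; in practice this goes to durable audit storage.
    print(json.dumps(asdict(record)))


log_ai_output(AIOutputAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_payload={"user_id": "u-123", "context": "home_feed"},
    model_version="recommender-v2.4.1",
    features_evaluated=["watch_history", "location_cluster"],
    evaluation_dataset="offline-eval-2023-q3",
    confidence_score=0.87,
    threshold=0.75,
    output={"recommended": "movie-456"},
    user_feedback=None,
))
```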
Gamifying the Knowledge Discovery
With information so readily available, how do you make it consumable in a way that nudges users to apply their own mental ability to find solutions, rather than handing them everything in one go? This would be particularly relevant to how education in general (especially in schools and universities) is delivered in the future.
How about a Google-like smart search engine that delivers information in a way that lets you test your skills, as in the sketch below? As mentioned in the Uber Persona Model section, the choice to switch this behavior on or off stays with the user.
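The following is a toy sketch of that idea (the `Answer` structure and `staged_response` function are hypothetical): instead of returning the full solution immediately, the system releases progressively stronger hints, and the user can opt out at any time.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    question: str
    hints: list    # ordered nudges, from vague to specific
    solution: str  # the full answer, revealed last


def staged_response(answer: Answer, attempts_made: int, gamified: bool = True) -> str:
    """Return a hint matched to how much the user has already tried.

    With gamification switched off (the user's choice, per the persona model),
    the full solution is returned immediately.
    """
    if not gamified or attempts_made >= len(answer.hints):
        return answer.solution
    return answer.hints[attempts_made]


quadratic = Answer(
    question="How do you find the roots of x^2 - 5x + 6?",
    hints=[
        "Can you factor the expression?",
        "Look for two numbers that multiply to 6 and add up to 5.",
    ],
    solution="x = 2 and x = 3",
)

print(staged_response(quadratic, attempts_made=0))                   # first nudge
print(staged_response(quadratic, attempts_made=2))                   # full solution after two tries
print(staged_response(quadratic, attempts_made=0, gamified=False))   # user opted out
```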
I hope this article gave you enough insight into this important area.
To conclude, I would say the only thing separating AI from us in the future will be our ability to think wisely and build the future we want.