The Right Way To Govern AI
As AI has become both more powerful and more pervasive, thinkers have devoted considerable attention to how it can evolve not only ethically but in a way that benefits humanity as a whole. The latest contribution comes by way of a new report from the University of Southern California's Mark Latonero.
The paper takes inspiration from the world of human rights, which Latonero argues can serve as a north star to guide the future development and governance of AI.
"A human rights-based frame could provide those developing AI with the aspirational, normative, and legal guidance to uphold human dignity and the inherent worth of every individual regardless of country or jurisdiction," Latonero says.
How to Govern AI
Latonero goes on to make a number of recommendations for the various stakeholders in AI governance. For instance, he urges technology companies to find effective channels of communication with local civil society groups, which is especially important in areas where human rights protections are weak. This might ultimately culminate in a Human Rights Impact Assessment being conducted at every stage of an AI system's development.
From a government perspective, a good first step is for states to acknowledge the human rights obligations they have towards their citizens. This can then lead to incorporating a duty to protect these fundamental rights into all national AI policies, guidelines, and regulations. Governments can also work together in bodies such as the UN to ensure that AI is developed in a way that respects human rights.
Achieving this realignment will require considerable work from all stakeholders, as human rights principles weren't written as technical specifications. Human rights lawyers, policymakers, computer scientists, engineers, and social scientists should therefore work together to ensure human rights are incorporated into business models and product design.
Academia, meanwhile, should explore the value, limitations, and interactions of human rights law, humanitarian law, and ethics in relation to AI technology. Legal scholars should examine the trade-offs between different human rights in specific AI circumstances, whilst social scientists should explore the impact AI is having on the ground.
With the field of AI ethics growing, this attempt to anchor it in human rights is an interesting stance, and one that certainly adds to the debate.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.