If you’re taking a long-term approach to artificial intelligence (AI), you’re likely thinking about how to make your AI systems ethical. Building ethical AI is the right thing to do. Not only do your corporate values demand it; it’s also one of the best ways to minimise risks that range from compliance failures to brand damage. But building ethical AI is hard.
The difficulty starts with a question: what is ethical AI? The answer depends on defining ethical AI principles — and there are many related initiatives, all around the world. Our team has identified over 90 organisations that have attempted to define ethical AI principles, collectively coming up with more than 200 principles. These organisations include governments,1 multilateral organisations,2 non-governmental organisations3 and companies.4 Even the Vatican has a plan.5
How can you make sense of it all and come up with tangible rules to follow? After reviewing these initiatives, we’ve identified ten core principles. Together, they help define ethical AI. Based on our own work, both internally and with clients, we also have a few ideas for how to put these principles into practice.
Knowledge and behaviour: the 10 principles of ethical AI
The ten core principles of ethical AI enjoy broad consensus for a reason: they align with globally recognised definitions of fundamental human rights, as well as with multiple international declarations, conventions and treaties. The first two principles can help you acquire the knowledge that can allow you to make ethical decisions for your AI. The next eight can help guide those decisions.
- Interpretability. AI models should be able to explain their overall decision-making process and, in high-risk cases, explain how they made specific predictions or chose certain actions. Organisations should be transparent about which algorithms make which decisions about individuals, using those individuals’ own data.
- Reliability and robustness. AI systems should operate within their design parameters and make consistent, repeatable predictions and decisions.
- Security. AI systems and the data they contain should be protected from cyber threats, including AI tools that operate through third parties or in the cloud.
- Accountability. Someone (or some group) should be clearly assigned responsibility for the ethical implications of AI models’ use, or misuse.
- Beneficiality. Consider the common good as you develop AI, with particular attention to sustainability, cooperation and openness.
- Privacy. When you use people’s data to design and operate AI solutions, inform individuals about what data is being collected and how it is being used, take precautions to protect data privacy, provide opportunities for redress and give them the choice to manage how their data is used.
- Human agency. For higher levels of ethical risk, enable more human oversight of, and intervention in, your AI models’ operations.
- Lawfulness. All stakeholders, at every stage of an AI system’s life cycle, must obey the law and comply with all relevant regulations.
- Fairness. Design and operate your AI so that it does not show bias against groups or individuals (one simple check is sketched after this list).
- Safety. Build AI that does not threaten people’s physical safety or mental integrity.
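To see how a principle such as fairness can start to become concrete, here is a minimal sketch, assuming a pandas DataFrame of model decisions with illustrative column names and an illustrative tolerance, of one common check: comparing positive-outcome rates across groups (demographic parity).

```python
# A minimal sketch of one way to test the Fairness principle: compare a model's
# approval rates across groups (demographic parity). The column names, data and
# the 0.05 tolerance are illustrative assumptions, not a standard.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the largest difference in positive-outcome rates between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

predictions = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f"],
    "approved": [1, 0, 1, 1, 1, 0],
})

gap = demographic_parity_gap(predictions, "gender", "approved")
if gap > 0.05:  # illustrative tolerance; set per use case and risk level
    print(f"Warning: approval rates differ by {gap:.0%} across groups")
```

In practice, the grouping attributes, the fairness measure and the tolerance would all be choices made for the specific use case and its level of ethical risk, which is exactly where the principles stop being self-explanatory.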
These principles are general enough to be widely accepted — and hard to put into practice without more specificity. Every company will have to navigate its own path, but we’ve identified two other guidelines that may help.
To turn ethical AI principles into action: context and traceability
A top challenge in navigating these ten principles is that they often mean different things in different places, and to different people. The laws a company has to follow in the US, for example, are likely to differ from those in China; within the US, they may also differ from one state to another. How your employees, customers and local communities define the common good (or privacy, safety, reliability or most of the other ethical AI principles) may differ too.
To put these ten principles into practice, then, you may want to start by contextualising them: Identify your AI systems’ various stakeholders, then find out their values and discover any tensions and conflicts that your AI may provoke.6 You may then need discussions to reconcile conflicting ideas and needs.
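As one illustration of what contextualising could look like in practice, here is a minimal sketch, with invented stakeholder groups and views, of recording how different stakeholders interpret a single principle and surfacing the tensions that need a reconciliation discussion.

```python
# A minimal sketch, with invented stakeholders and readings, of one way to record how
# different groups interpret a principle and flag tensions to discuss. This is an
# illustration only, not a standard method.
from collections import defaultdict

# Each stakeholder group's reading of the Privacy principle for a hypothetical chatbot
interpretations = {
    "Customers": "Conversation data is never reused for marketing",
    "Marketing team": "Aggregated conversation data can inform campaigns",
    "Regulator (EU)": "Any processing needs an explicit lawful basis",
}

# Group identical readings together; more than one distinct reading signals a tension
readings = defaultdict(list)
for stakeholder, view in interpretations.items():
    readings[view].append(stakeholder)

if len(readings) > 1:
    print("Privacy means different things to different stakeholders:")
    for view, stakeholders in readings.items():
        print(f"  {', '.join(stakeholders)}: {view}")
```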
To help resolve these possible conflicts, consider explicitly linking the ten principles to fundamental human rights and to your own organisational values. The idea is to create traceability in the AI design process: every decision with ethical implications can be traced back to specific, widely accepted human rights and to your declared corporate principles. That may sound tricky, but there are toolkits (such as this practical guide to Responsible AI) that can help.
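What such traceability could look like in practice is sketched below; the record structure, field names and example entry are illustrative assumptions, not part of any particular toolkit.

```python
# A minimal sketch of design-time traceability: each ethically significant decision
# is logged against the principle, the human right and the corporate value it rests on.
# All fields and the example entry are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EthicsDecision:
    decision: str          # what was decided during design or operation
    principle: str         # which of the ten principles it relates to
    human_right: str       # the widely accepted right it traces back to
    corporate_value: str   # the declared company value it supports
    owner: str             # who is accountable for the decision
    decided_on: date = field(default_factory=date.today)

decision_log = [
    EthicsDecision(
        decision="Exclude postcode from the credit-scoring feature set",
        principle="Fairness",
        human_right="Non-discrimination (UDHR, Article 2)",
        corporate_value="Treat customers equitably",
        owner="Model risk committee",
    ),
]
```

A reviewer, auditor or regulator can then walk back from any model behaviour to the decisions, rights and values that shaped it.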
When all your decisions are underpinned by human rights and your values, regulators, employees, consumers, investors and communities may be more likely to support you, and to give you the benefit of the doubt if something goes wrong.
None of this is easy, because AI isn’t easy. But given the speed at which AI is spreading, making your AI responsible and ethical could be a big step toward giving your company — and the world — a sustainable future.