Mathematical principle helps expose AI’s unethical choices

A new mathematical principle has been developed to combat AI bias towards making unethical and costly commercial choices. Researchers at the University of Warwick, Imperial College London, EPFL (Lausanne), and Sciteb Ltd devised it to help regulators and businesses manage the tendency of artificial intelligence (AI) systems to pick unethical, and potentially very expensive and damaging, commercial strategies.

AI is deployed in a growing range of commercial situations, for example pricing insurance products for sale to particular customers. The AI chooses from many possible strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later bring severe penalties on the company: regulators can impose significant fines, and customers can boycott the firm. Ideally, unethical strategies like these would be removed from the pool of candidates in advance, but because an AI has no concept of ethics, it cannot tell ethical strategies from unethical ones on its own.

In the case of insurance, for example, AI can be used to set different prices for different individuals. There are legitimate reasons to do so, but an “optimized” system may also learn to exploit customers, for instance by charging more to those it identifies as unlikely to shop around. Choosing from a vast number of possible strategies, the AI may settle on an unethical one without “knowing” it, exposing the company to serious penalties from regulators, stakeholders, and ultimately customers. To navigate this ethical minefield, an international team of researchers has introduced a mathematical principle that can help businesses identify the questionable strategies their AI systems may be drawn to.
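To make the problem concrete, here is a minimal sketch, not the authors’ model, of why a pure profit objective is blind to ethics. The strategy names, returns, and “unethical” flags below are hypothetical illustrations; the point is only that the flag plays no role in the optimization.

```python
# A minimal sketch (hypothetical strategies and numbers): a naive
# return-maximizer over a pricing strategy space. The "unethical" flag
# exists only for the reader; the objective never sees it.

strategies = [
    # (name, expected_return, unethical)
    ("flat_price",           1.00, False),
    ("risk_based_price",     1.08, False),
    ("loyalty_discount",     1.05, False),
    ("charge_non_shoppers",  1.15, True),   # exploits customers unlikely to shop around
    ("proxy_discrimination", 1.12, True),   # prices on a proxy for a protected attribute
]

# Pure profit maximization: the flag is invisible to the objective.
best = max(strategies, key=lambda s: s[1])
print(f"Chosen: {best[0]} (return {best[1]:.2f}, unethical={best[2]})")
# -> Chosen: charge_non_shoppers (return 1.15, unethical=True)
```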

Robert MacKay, a professor at the Mathematics Institute of the University of Warwick in the United Kingdom, said the proposed “Unethical Optimization Principle” can help regulators, compliance staff, and others find problematic strategies that might be hidden in a large strategy space. Optimization can be expected to choose disproportionately many unethical strategies; inspecting these should show where problems are likely to arise and suggest how the AI’s search algorithm should be modified to avoid them in the future.

The principle, published in Royal Society Open Science, states that if an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless its objective function allows sufficiently for this risk. The principle can also be used, via a formula the team puts forward, to estimate how large the risk of ending up with an unethical strategy is and to understand its impact; in the future it may help to eliminate that risk entirely.
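As a rough illustration of “disproportionately likely”, the following Monte Carlo sketch, a toy under stated assumptions rather than the paper’s formula, gives unethical strategies the same mean return as ethical ones but a somewhat wider spread of outcomes. Even though they make up only 2% of the pool, the return-maximizer picks one far more than 2% of the time, because the maximum over many draws is dominated by the riskiest tail.

```python
import random

# Toy assumptions (all illustrative): N candidate strategies, a fraction
# P_UNETHICAL of which are unethical; returns share a mean of 0, but
# unethical strategies are riskier (wider standard deviation).
random.seed(0)
N, P_UNETHICAL, TRIALS = 1000, 0.02, 2000

def best_is_unethical():
    best_return, best_flag = float("-inf"), False
    for _ in range(N):
        unethical = random.random() < P_UNETHICAL
        r = random.gauss(0.0, 1.5 if unethical else 1.0)
        if r > best_return:
            best_return, best_flag = r, unethical
    return best_flag

rate = sum(best_is_unethical() for _ in range(TRIALS)) / TRIALS
print(f"Unethical strategies are {P_UNETHICAL:.0%} of the pool, "
      f"yet the maximizer picks one in {rate:.0%} of runs.")
```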

The principle also suggests that the way AI operates in very large strategy spaces needs to be rethought, so that unethical outcomes are explicitly rejected in the optimization/learning process, MacKay added. At a time when human oversight of decision-making is increasingly being removed, we must make sure we “keep an ethical eye on the AI”.
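One way to read that suggestion in code, as a sketch rather than the paper’s prescription: either price the penalty risk into the objective (a crude risk-adjusted return), or reject flagged strategies outright before optimizing. The detection probabilities and fines below are hypothetical.

```python
def risk_adjusted_return(raw_return, p_detect, expected_fine):
    """Expected return once the chance of a regulatory penalty is priced in."""
    return raw_return - p_detect * expected_fine

strategies = [
    # (name, raw_return, p_detect, expected_fine, flagged_unethical)
    ("risk_based_price",    1.08, 0.00, 0.0, False),
    ("charge_non_shoppers", 1.15, 0.30, 2.0, True),  # likely fine if audited
]

# Option 1: price the risk into the objective.
best = max(strategies, key=lambda s: risk_adjusted_return(s[1], s[2], s[3]))
print(f"Risk-adjusted choice: {best[0]}")  # -> risk_based_price

# Option 2 (stricter, for very large strategy spaces): reject flagged
# strategies before the optimizer ever sees them.
pool = [s for s in strategies if not s[4]]
best = max(pool, key=lambda s: s[1])
print(f"Filtered choice: {best[0]}")  # -> risk_based_price
```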