Machine Learning has the potential to be Fair and Accurate

Machine learning can be both fair and accurate, but achieving that requires deliberate attention to the biases that can be introduced during model development and deployment. Machine learning models are trained on large amounts of data, and if that data is biased, the model will learn and perpetuate those biases. The result can be unfair and inaccurate predictions, particularly for marginalized groups who are already underrepresented in the data.

Researchers at Carnegie Mellon University are challenging a long-held belief that when using machine learning to make public policy decisions, there is a trade-off between accuracy and fairness.

As machine learning has spread into areas such as criminal justice, hiring, health care delivery, and social service interventions, concern has grown about whether these applications introduce new inequities or amplify existing ones, particularly for racial minorities and people with low income. To mitigate this bias, practitioners adjust the data, labels, model training, scoring systems, and other aspects of the machine learning pipeline. The prevailing assumption is that these adjustments reduce the accuracy of the system.
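As one concrete illustration of what such an adjustment can look like, the sketch below reweights training examples so that a smaller demographic group is not drowned out during model fitting. It is a minimal, hypothetical example on synthetic data with placeholder group labels, not the approach taken in the CMU study.

```python
# Illustrative sketch only (not the CMU study's method): one common
# "adjustment to the data" is reweighting training examples so that each
# demographic group contributes equally to the loss.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                              # synthetic features
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])     # imbalanced groups
y = (X[:, 0] + (group == "B") * 0.5 + rng.normal(size=1000) > 0).astype(int)

# Weight each example inversely to its group's frequency so the minority
# group carries equal total weight during training.
counts = {g: (group == g).sum() for g in np.unique(group)}
weights = np.array([len(group) / (len(counts) * counts[g]) for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Reweighting is only one point of intervention; analogous adjustments can be made to labels, to the training objective, or to the model's output scores.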

We want the artificial intelligence, computer science, and machine learning communities to stop accepting the assumption that there is a trade-off between accuracy and fairness and to start designing systems that intentionally maximize both. We hope that policymakers will use machine learning as a decision-making tool to help them achieve equitable outcomes.

Kit Rodolfa

A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in MLD; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.

“You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”

Machine learning can be fair and accurate

Ghani and Rodolfa focused on situations where in-demand resources are limited and machine learning systems are used to help allocate them. The researchers examined four systems: prioritizing limited mental health care outreach based on a person’s risk of returning to jail, in order to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school on time to identify those most in need of additional support; and assisting teachers in reaching crowdfunding goals for classroom needs.

In each case, the researchers discovered that models optimized for accuracy – standard practice in machine learning – could effectively predict the outcomes of interest but produced significant disparities in who was recommended for intervention. However, when the researchers adjusted the models’ outputs to improve their fairness, they discovered that disparities based on race, age, or income – depending on the situation – could be removed without sacrificing accuracy.
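As a rough, hypothetical sketch of that general idea – not a reproduction of the paper’s procedure – the snippet below post-processes model scores by choosing a per-group threshold that roughly equalizes recall across groups, then reports overall precision. All names and data are synthetic placeholders.

```python
# Minimal sketch (not the study's exact method): adjust model outputs by
# picking per-group score thresholds so recall is ~equal across groups,
# then check how overall precision looks after the adjustment.
import numpy as np

def equalize_recall(scores, labels, groups, target_recall=0.5):
    """Per-group score threshold yielding roughly target_recall in each group."""
    thresholds = {}
    for g in np.unique(groups):
        positives = (groups == g) & (labels == 1)
        # Scores above this quantile cover ~target_recall of the group's positives.
        thresholds[g] = np.quantile(scores[positives], 1 - target_recall)
    return thresholds

# Synthetic example data (placeholders, not from the paper).
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=2000)
labels = rng.binomial(1, 0.3, size=2000)
scores = labels * 0.4 + (groups == "A") * 0.1 + rng.uniform(size=2000) * 0.6

thresholds = equalize_recall(scores, labels, groups)
selected = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

for g in ["A", "B"]:
    in_group = groups == g
    recall = (selected & (labels == 1) & in_group).sum() / ((labels == 1) & in_group).sum()
    print(g, "recall:", round(recall, 2))
print("overall precision:", round(labels[selected].mean(), 2))
```

Recall parity is used here only as one example of a fairness criterion; the same post-processing pattern applies to other metrics and to settings where a fixed number of people can be selected for an intervention.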

Ghani and Rodolfa hope that their findings will influence the thinking of other researchers and policymakers who are considering the use of machine learning in decision making.

“We want the artificial intelligence, computer science, and machine learning communities to stop accepting the assumption that there is a trade-off between accuracy and fairness and to start designing systems that intentionally maximize both,” Rodolfa said. “We hope that policymakers will use machine learning as a decision-making tool to help them achieve equitable outcomes.”