
Arthur Research: Equalizing Credit Opportunity in Algorithms

For decades, financial institutions and lenders have used statistical models to make credit-related decisions. While these models improve efficiency and reduce variability, they can also perpetuate, and even accelerate, historical patterns of discrimination.

In the U.S., legislation like the Equal Credit Opportunity Act explicitly bans discrimination in lending, forbidding credit scoring systems from using characteristics such as sex, race, marital status, national origin, and religion, and federal agencies are charged with enforcing it. However, credit invisibility and historical injustice mean that labeled credit data on protected groups is limited, which can hurt the accuracy of models trained on that data. Additionally, a large body of research has demonstrated that even with sufficient training data, machine learning algorithms can encode many different versions of “unfairness.” This means that financial institutions could, potentially unwittingly, engage in illegal discrimination through the use of this technology.
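
To make “many different versions of unfairness” concrete, here is a minimal Python sketch (ours, not from the paper) on synthetic data: a classifier whose error rates are identical across two groups still approves the groups at different rates whenever their base rates differ, so it satisfies equal opportunity while violating demographic parity.

# Minimal sketch (synthetic data, not from the paper): two common
# fairness criteria that a single model generally cannot satisfy at once.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                              # synthetic protected-group label
y_true = rng.binomial(1, np.where(group == 1, 0.3, 0.5))   # unequal base rates across groups
y_pred = rng.binomial(1, np.where(y_true == 1, 0.8, 0.2))  # classifier with TPR 0.8, FPR 0.2 for both groups

def selection_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    pos = mask & (true == 1)
    return pred[pos].mean()

# Demographic parity: equal selection (approval) rates across groups.
dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# Equal opportunity: equal true-positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0) - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.3f}")  # large: base rates differ
print(f"equal opportunity gap:  {eo_gap:.3f}")  # near zero by construction

The same predictions look fair under one definition and unfair under another, which is why the choice of fairness criterion matters so much in the credit setting.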

Two conversations exist in parallel here: one about U.S. discrimination law and policy, and the other about machine learning fairness research. Yet policymakers and researchers in this space seem to talk past each other when it comes to data access, the use of input features, and the very definition of “discrimination” (intent-based vs. outcome-based).
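
The intent-vs.-outcome gap can also be made concrete in code. The sketch below (again ours, with synthetic data and a hypothetical zip_code proxy) trains a model that never sees the protected attribute, which satisfies an intent-style notion of non-discrimination, yet it reproduces the group disparity through a correlated proxy, which is exactly what an outcome-based definition flags.

# Minimal sketch (synthetic data): "fairness through unawareness" fails.
# The model never sees the protected attribute, but a correlated proxy
# (a hypothetical "zip_code" feature) reproduces the disparity anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                     # protected attribute (never given to the model)
zip_code = (group + rng.binomial(1, 0.1, n)) % 2  # proxy: 90% correlated with group
income = rng.normal(50 + 20 * group, 10, n)       # historically unequal incomes
approve = (income + 5 * zip_code + rng.normal(0, 5, n)) > 60  # past lending decisions

X = np.column_stack([zip_code, income])           # note: no `group` column
model = LogisticRegression().fit(X, approve)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g} approval rate: {pred[group == g].mean():.2f}")

An intent-based reading might say this model cannot discriminate, since the protected attribute was excluded; an outcome-based reading looks at the approval-rate gap and concludes otherwise.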

Next Tuesday, at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), we will be presenting a paper titled “Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation.”

The paper provides an overview of the following:

  • The current landscape of credit-specific U.S. anti-discrimination law as it applies to algorithms, written for fair lending researchers
  • Fair ML research results, contextualized to the realities of credit data, to identify “discrimination risks” in the credit setting
  • Regulatory opportunities to address those risks

Both lending regulation and ML research are constantly evolving. We hope this paper serves as a useful tool for ML practitioners to understand the current landscape and potential future directions.

Interested in attending AIES? Check out the conference website. Also, learn more about Arthur’s R&D team here.