Robert C. Weaver Federal Building, headquarters of HUD, the U.S. Department of Housing and Urban Development, Washington, D.C. (Photo: Carol M. Highsmith/Library of Congress)

Leading scientists and legal scholars are weighing in on a national debate about fair housing laws. On October 18, a group of ten computer scientists, social scientists, and legal scholars from the Santa Fe Institute and The University of New Mexico submitted a formal response to the U.S. Department of Housing and Urban Development’s (HUD) proposal to dramatically revise its rules implementing the Fair Housing Act.

Key provisions of HUD's proposed rule would absolve landlords and lenders of any legal responsibility for discrimination that results from a third-party computer algorithm. Such algorithms are already widespread in our society and are used to automate decisions about who gets a credit card, a lease, or a mortgage. As the proposal is written, landlords and lenders would be protected from charges of “disparate impact” (unintentional discrimination that nonetheless leads to wide disparities) so long as their algorithms don’t overtly factor in protected characteristics like race, gender, religion, or disability status, or rely on proxy variables for those characteristics.

According to the experts, the HUD amendments related to algorithms are based on a fundamental "failure to recognize how modern algorithms can result in disparate impact… and how subtle the process for auditing algorithms for bias can be.” Modern machine-learning algorithms are poorly understood, and often draw highly complex correlations that even their designers may not be aware of. Any combination of factors, from location data to purchase history to musical preference, could function as a proxy for race or another protected characteristic, with devastating consequences for protected groups.
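The proxy problem described above can be illustrated with a toy simulation. In the sketch below (entirely synthetic data and hypothetical names; this is an illustration, not any real lending model), a scoring rule never sees group membership, only zip code, yet because zip code correlates with group, the "neutral" rule still approves the two groups at very different rates.

```python
import random

random.seed(0)

# Synthetic population: group membership is never given to the model,
# but it correlates with zip code (a plausible proxy in real data).
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # In this toy data, group B residents are concentrated in zip "2".
    weights = [0.8, 0.2] if group == "A" else [0.3, 0.7]
    zip_code = random.choices(["1", "2"], weights=weights)[0]
    population.append({"group": group, "zip": zip_code})

# A facially "neutral" scoring rule that only looks at zip code...
def approve(applicant):
    return applicant["zip"] == "1"

# ...still produces very different approval rates by group.
def rate(group):
    members = [p for p in population if p["group"] == group]
    return sum(approve(p) for p in members) / len(members)

rate_a, rate_b = rate("A"), rate("B")
print(f"approval rate, group A: {rate_a:.2f}")  # roughly 0.80
print(f"approval rate, group B: {rate_b:.2f}")  # roughly 0.30
```

Note that no protected characteristic appears anywhere in the decision logic, which is exactly why a rule like HUD's (checking only whether inputs are overt proxies) can miss disparate impact of this kind.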

These algorithms are often opaque, but without transparency, there is no way to confirm that they are fair. In their letter, the SFI and UNM experts demand transparency, recommending that designers of decision-making algorithms grant independent auditors at least enough access to test the algorithms for bias by feeding them varied inputs and observing how they respond. The authors also demand transparency for individual applicants, allowing them to view their own data and “contest, update, or refute that data if it is inaccurate.”
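The kind of black-box audit the letter recommends can be sketched in a few lines. In this hypothetical example, the auditor cannot inspect `opaque_model` (a stand-in for a proprietary third-party algorithm), only query it with constructed applicants and compare outcomes across groups; the ratio of lowest to highest approval rate is one common screening statistic (the EEOC's "four-fifths rule" flags ratios below 0.8).

```python
def opaque_model(applicant):
    # Stand-in for a proprietary algorithm the auditor cannot inspect
    # (hypothetical decision logic, for illustration only).
    return applicant["income"] > 40_000 and applicant["zip"] == "1"

def audit(model, applicants, group_key="group"):
    """Approval rate per group, plus the ratio of the lowest rate to
    the highest (the 'adverse impact ratio')."""
    rates = {}
    for g in {a[group_key] for a in applicants}:
        members = [a for a in applicants if a[group_key] == g]
        rates[g] = sum(model(a) for a in members) / len(members)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Test inputs the auditor constructs; group membership is used only to
# evaluate outcomes and is never passed into the decision logic.
applicants = (
    [{"group": "A", "income": 50_000, "zip": "1"}] * 80
    + [{"group": "A", "income": 30_000, "zip": "2"}] * 20
    + [{"group": "B", "income": 50_000, "zip": "2"}] * 60
    + [{"group": "B", "income": 30_000, "zip": "1"}] * 40
)

rates, ratio = audit(opaque_model, applicants)
print(rates["A"], rates["B"], ratio)  # 0.8 0.0 0.0
```

Even this minimal level of query access, without any view of the model's internals, is enough to surface a stark disparity, which is the authors' point: auditing is possible, but only if some access is guaranteed.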

In their letter, the experts lay out four arguments against the proposed rule:

  1. To ensure that an algorithm does not have disparate impact, it is not enough to show that individual input factors are not “substitutes or close proxies” for protected characteristics. 
  2. It is impossible to audit an algorithm for bias without an adequate level of transparency or access to the algorithm. 
  3. Allowing defendants to deflect responsibility to proprietary third-party algorithms effectively destroys disparate-impact liability. 
  4. The proposed regulation fails to account for the cumulative disparate impact on protected classes when multiple parties use such algorithms but no individual user is liable under the regulation. 

Their full response is posted on the Federal Register, along with 3,686 other public comments as of October 23, 2019. 

The co-signatories are members of The Interdisciplinary Working Group for Algorithmic Justice and are available to provide thoughts and expertise to policymakers around the use of algorithms in society. They are:

Cristopher Moore, Professor, Santa Fe Institute

Alfred Mathewson, Professor Emeritus and former Dean, The University of New Mexico School of Law

Elizabeth Bradley, Professor, Computer Science Department, University of Colorado, Boulder, and the Santa Fe Institute

G. Matthew Fricke, Research Assistant Professor, The University of New Mexico Department of Computer Science and Center for Advanced Research Computing

Mirta Galesic, Professor, Santa Fe Institute

Joshua Garland, Postdoctoral Fellow, Santa Fe Institute

Melanie Moses, Professor, Department of Computer Science, The University of New Mexico, and Santa Fe Institute

Kathy Powers, Associate Professor, Department of Political Science, Senior Fellow, Center for Social Policy, The University of New Mexico

Sonia M. Gipson Rankin, Professor, The University of New Mexico School of Law

Gabriel R. Sanchez, Professor, Department of Political Science, Director, Center for Social Policy, The University of New Mexico

Read the article, “Will Machine Learning Algorithms Erase the Progress of the Fair Housing Act?” in Forbes (November 17, 2019)