Algorithms have come to dominate modern life. From advertisements and consumer lending to college admissions and hiring, our day-to-day lives are increasingly mediated by technology. At the same time, complex algorithms routinely violate the basic rights of individual citizens. How we choose to address misbehaving algorithms will have widespread implications, not just for business and technology, but for society as a whole.
In The Ethical Algorithm: The Science of Socially Aware Algorithm Design, leading experts Michael Kearns, an SFI External Professor based at the University of Pennsylvania, and his Penn colleague Aaron Roth offer a set of principled solutions based on the emerging science of socially aware algorithm design. While most of the discussion to date has focused on traditional fixes like laws, regulations, and watchdog groups, these approaches have proven woefully inadequate on their own. Kearns and Roth instead propose fixing the technology from the inside, by building better algorithms that have precise definitions of fairness, accuracy, transparency, and ethics embedded within their design.
As theoretical computer scientists, Kearns and Roth argue “it’s essential that the scientific and research communities who work on machine learning be engaged and centrally involved in the ethical debates around algorithmic decision-making.”
Addressing critics who might blame computer scientists as the source of our algorithmic problems, the authors recall an example from Kearns' 2017 SFI Community Lecture. After World War II, many Manhattan Project scientists worked tirelessly to curb the use of the atomic weapons they had invented. With algorithms, "the harms are more diffuse and harder to detect" than with nuclear weapons, but both are irreversible technologies that can be controlled, not undone. Those who design machine learning algorithms can play a critical role in identifying the inherent limits of algorithms and designing them to balance predictive power with social values like fairness and privacy.
Kearns and Roth present technological solutions to real-life problems: leaked sensitive personal information, algorithmic models that reflect racial and gender bias, and users who "game" search engines, spam filters, and navigation apps. They show how we can better protect people from the unintended consequences of technology. Weaving in fascinating real-life examples from the business, legal, and medical fields, Kearns and Roth demonstrate how we can instill human principles into machine code without halting the advance of data-driven scientific exploration.
[Source: Oxford University Press]