The algorithmic trade-off between accuracy and ethics

strategy+business, March 12, 2020

by Theodore Kinni



Photograph by Yuichiro Chino

Strava, a San Francisco–based fitness website whose users upload data from their Fitbits and other devices to track their exercise routines and routes, didn’t set out to endanger U.S. military personnel. But in November 2017, when the company released a data visualization of the aggregate activity of its users, that’s what it did.

Strava’s idea was to provide its users with a map of the most popular running routes, wherever they happened to be located. As it turns out, the resulting visualization, composed from three trillion GPS coordinates, also showed routes in areas such as Afghanistan’s Helmand Province, where the few Strava users were located almost exclusively on military bases. Their running routes inadvertently revealed the regular movements of soldiers in a hot zone of insurgency.
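The aggregation pitfall at work here can be sketched in a few lines. This is a hypothetical toy example, not Strava's actual pipeline: the user names, grid cells, and threshold are all invented for illustration. The point is that a heatmap built from "anonymous" aggregate counts can still trace individuals wherever the user population is sparse.

```python
from collections import Counter

# Hypothetical GPS pings, rounded to grid cells: (user, cell).
# The city cells have many distinct users; the remote cells have one.
pings = [
    ("runner_a", (10, 10)), ("runner_a", (10, 11)),   # busy city area
    ("runner_b", (10, 10)), ("runner_b", (10, 11)),
    ("runner_c", (10, 10)),
    ("soldier_x", (90, 90)), ("soldier_x", (90, 91)),  # remote area
    ("soldier_x", (90, 90)), ("soldier_x", (90, 91)),
]

# The published heatmap: ping counts per cell, no identities attached.
heatmap = Counter(cell for _, cell in pings)

# What the heatmap hides: how many distinct users each cell's count
# actually comes from.
users_per_cell = {}
for user, cell in pings:
    users_per_cell.setdefault(cell, set()).add(user)

# Cells that look "popular" in aggregate but trace a single person --
# exactly the kind of cell that exposed routines on remote bases.
risky = [cell for cell, users in users_per_cell.items()
         if heatmap[cell] >= 2 and len(users) == 1]
print(risky)  # the two remote cells, not the city ones
```

One common mitigation, which Kearns and Roth's field studies formally under the name differential privacy, is to suppress or add noise to cells backed by too few distinct users before publishing the aggregate.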

The problem, explain University of Pennsylvania computer and information scientists Michael Kearns and Aaron Roth, authors of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, is “that blind, data-driven algorithmic optimization of a seemingly sensible objective can lead to unexpected and undesirable side effects.” The solution, which they explore for nontechnical leaders and other lay readers in this slim book, is embodied in the emerging science of ethical algorithm design.

“Instead of people regulating and monitoring algorithms from the outside,” the authors say, “the idea is to fix them from the inside.” To achieve this, companies need to consider the fairness, accuracy, transparency, and ethics — the so-called FATE — of algorithm design.

Kearns and Roth don’t deal with the FATE traits in a sequential manner. Instead, they describe the pitfalls associated with algorithms and discuss the ever-evolving set of solutions for avoiding them. Read the rest here.
