Bias detectives: the researchers striving to make algorithms fair

Author: Rachel Courtland
Date: June 2018
From: Nature (vol. 558, issue 7710)
Publisher: Nature Publishing Group

In 2015, a worried father asked Rhema Vaithianathan a question that still weighs on her mind. A small crowd had gathered in a basement room in Pittsburgh, Pennsylvania, to hear her explain how software might tackle child abuse. Each day, the area's hotline receives dozens of calls from people who suspect that a child is in danger; some of these are then flagged by call-centre staff for investigation. But the system does not catch all cases of abuse. Vaithianathan and her colleagues had just won a half-million-dollar contract to build an algorithm to help.

Vaithianathan, a health economist who co-directs the Centre for Social Data Analytics at the Auckland University of Technology in New Zealand, told the crowd how the algorithm might work. For example, a tool trained on reams of data -- including family backgrounds and criminal records -- could generate risk scores when calls come in. That could help call screeners to flag which families to investigate.
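
The article leaves the tool's internals unspecified, but the idea it describes -- a model trained on historical case records that returns a risk score for each incoming call -- can be sketched. The snippet below is a minimal illustration only, assuming a hypothetical CSV of past screened calls and invented feature names; it is not the Pittsburgh team's actual method or data.

    # Illustrative sketch only -- the article does not say what model or
    # features the Pittsburgh tool uses. Assumes a hypothetical table of
    # past screened calls with an outcome label; column names are invented.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    records = pd.read_csv("historical_calls.csv")  # hypothetical training data
    X = records[["prior_referrals", "parent_criminal_record", "household_size"]]
    y = records["case_substantiated"]  # 1 if the case was later substantiated

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # When a new call comes in, the tool would surface a risk score (here, a
    # probability) for the screener to weigh alongside their own judgement.
    risk_scores = model.predict_proba(X_test)[:, 1]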

After Vaithianathan invited questions from her audience, the father stood up to speak. He had struggled with drug addiction, he said, and social workers had removed a child from his home in the past. But he had been clean for some time. With a computer assessing his records, would the effort he'd made to turn his life around count for nothing? In other words: would algorithms judge him unfairly?

Related: https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731

Vaithianathan assured him that a human would always be in the loop, so his efforts would not be overlooked. But now that the automated tool has been deployed, she still thinks about his question. Computer calculations are increasingly being used to steer potentially life-changing decisions, including which people to detain after they have been charged with a crime; which families to investigate for potential child abuse; and -- in a trend called 'predictive policing' -- which neighbourhoods police should focus on. These tools promise to make decisions more consistent, accurate and rigorous. But oversight is limited: no one knows how many are in use. And their potential for unfairness is raising alarm. In 2016, for instance, US journalists argued that a system used to assess the risk of future criminal activity discriminates against black defendants.
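
The 2016 analysis alluded to here turned on comparing a tool's error rates across demographic groups. A hedged sketch of that kind of audit -- the column names, group labels and score threshold below are all assumptions, not the journalists' actual code -- can run to just a few lines:

    # Sketch of a simple group-level audit in the spirit of that 2016
    # analysis, which compared error rates across groups. All column names
    # and the score threshold are hypothetical.
    import pandas as pd

    df = pd.read_csv("risk_tool_outcomes.csv")  # hypothetical audit data
    df["flagged_high_risk"] = df["risk_score"] >= 7  # assumed threshold

    # False-positive rate per group: among people who did NOT go on to
    # reoffend, what fraction were nonetheless flagged as high risk?
    did_not_reoffend = df[df["reoffended"] == 0]
    fpr_by_group = did_not_reoffend.groupby("group")["flagged_high_risk"].mean()
    print(fpr_by_group)  # a large gap between groups signals disparate error rates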

"What concerns me most is the idea that we're coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them," says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France's president, Emmanuel Macron, has said that the country will make all algorithms used by its government open (https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/). And in guidance issued this month, the UK government called for those working...

Source Citation

Courtland, Rachel. "Bias detectives: the researchers striving to make algorithms fair." Nature, vol. 558, no. 7710, 2018, p. 357.