Biased bots: why A.I. might lead to injustice, and how to avoid it

Photo by William Bout on Unsplash

Rather than worrying about some far-fetched doomsday scenario akin to Terminator or I, Robot, artificial intelligence researchers are concerned with the real-world dangers presented by modern A.I.

In recent years, eminent scientists and intellectuals, including Stephen Hawking and Elon Musk, have warned of the existential risk entailed by developing advanced artificial intelligence.

Artificial intelligence researcher Jennifer Golbeck is highly skeptical of these apocalyptic prophecies. “You don’t have anything to worry about,” she said. “A.I. is nowhere near as good as the people who are freaked out about it think it is.” However, she maintains that there are problems associated with A.I. as it currently exists.

Golbeck outlined some of the actual issues that have arisen from implementing A.I. systems in a variety of contexts, as well as potential strategies for solving them, during her session on Feb. 16 at the 2018 American Association for the Advancement of Science conference in Austin, Texas. She is the Director of the Human-Computer Interaction Lab at the University of Maryland, which is dedicated to understanding how various technologies affect human behavior and the unintended consequences of their widespread use. A.I. is one of the most powerful—and therefore critical—innovations studied there.

One of Golbeck’s biggest concerns is the implementation of biased algorithms in job evaluations and loan approvals. Although an A.I. ranking program seems very mathematical and objective, the process is only as impartial as the data fed into it.

“We humans are flawed; our systems are flawed and biased and unfair,” Golbeck said. “Artificial intelligence can take those systems, reflect them back in a way that seems objective.”
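
To see how that reflection happens in practice, consider a minimal sketch in Python (all of the data and numbers below are synthetic and purely illustrative): two groups of applicants are equally skilled, but the historical hiring records penalize one of them, and a model trained on those records faithfully learns the penalty.

```python
# Minimal sketch of "bias in, bias out": a model trained on biased
# historical decisions reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # two demographic groups
skill = rng.normal(0.0, 1.0, n)      # identical skill distribution for both

# Historical labels: past reviewers docked group 1 a full point at equal skill.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# The model is trained on the biased outcomes and learns the penalty.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
# Although both groups are equally skilled, group 1's predicted hire rate
# comes out far lower -- the model has simply encoded the old bias.
```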

Cathy O’Neil’s recent book, Weapons of Math Destruction, details many of the ways that A.I. can be implemented unfairly. One poignant example was the firing of a D.C.-area schoolteacher based on an algorithmic evaluation of her performance. Although the teacher was well liked by her students and administrators, the system determined she was underperforming, and she was dismissed.

Photo by Markus Spiske on Unsplash

Other programs have shown a propensity to discriminate against women and minorities because of the bias-laden data fed into them. Even if variables like gender and race are not included as inputs, an algorithm can still produce discriminatory results, because those attributes are often correlated with other inputs, according to O’Neil.
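
O’Neil’s proxy-variable point can be illustrated with another small, synthetic sketch. Here the protected attribute is deliberately withheld from the model, but a correlated feature (a hypothetical neighborhood flag, standing in for residential segregation) leaks it right back in.

```python
# Minimal sketch of proxy discrimination: the protected attribute is
# withheld, but a correlated input leaks it back in. Synthetic data only;
# `neighborhood` is a hypothetical stand-in for any correlated feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)                  # protected attribute
income = rng.normal(50.0, 10.0, n)             # identical across groups
# Residential segregation: neighborhood matches group 90% of the time.
neighborhood = (group + (rng.random(n) < 0.1)) % 2

# Historical loan approvals were biased against group 1.
approved = (income - 8.0 * group + rng.normal(0.0, 5.0, n)) > 45

# Train WITHOUT the protected attribute: only income and the proxy.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The model never saw `group`, yet group 1 is approved far less often,
# because `neighborhood` serves as a near-perfect proxy for it.
```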

Photo by Chris Ried on Unsplash

Golbeck is particularly concerned with ‘black-box’ artificial intelligence—algorithms whose function is completely opaque even to the people who design them. These systems present obstacles both to determining whether their processes are, in fact, discriminatory, and to pursuing legal recourse for those harmed by them.

To begin to reduce the possibility of such discrimination in the future, we must first understand exactly how these systems operate. “We have to have it be transparent, open, auditable, and accountable, so it is not increasing unfairness, but it’s actually able to make us better as a humanity,” Golbeck said.
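
What such an audit might check, in its simplest form, is whether a system’s decisions land much more heavily on one group than another. The sketch below computes one widely used screening statistic, the ratio of selection rates across groups, often compared against the “four-fifths” guideline from U.S. employment law; the data here are made up, and a real audit would go much further.

```python
# Minimal sketch of a selection-rate audit. The decisions and group
# labels are made-up examples; a real audit would use production data.
def disparate_impact(decisions, groups):
    """Ratio of the lowest group selection rate to the highest."""
    rates = {}
    for g in sorted(set(groups)):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return min(rates.values()) / max(rates.values()), rates

# 1 = approved, 0 = denied, with each applicant's group label.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio, rates = disparate_impact(decisions, groups)
print(rates)                          # {'a': 0.8, 'b': 0.2}
print(f"impact ratio: {ratio:.2f}")   # 0.25, well below the 0.8 guideline
```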

To ensure this technology is used to promote a more just society, Golbeck is adamant about passing regulations on artificial intelligence. She advocates for government agencies, such as the FTC’s Bureau of Consumer Protection, to design and enforce guidelines on how A.I. is applied in the economy.

Golbeck closed her talk by urging the crowd to “ask for this kind of regulation and transparency, to allow us to challenge those algorithms—to see what they are doing when making these decisions.”
