Serve us right

Algorithms confront the chasm between bias and fairness

By Andrea Poet

People tend to trust numbers. Consider this: would you be likely to believe a result that is grounded in historical data? Many people would be inclined to say yes.

The computer algorithms that affect so many aspects of our daily lives, from whether we qualify for a credit card to which product advertisements we see online, rely on existing information. These algorithms crunch vast pools of data about us—and about human behavior on a grand scale—before spitting out their recommendations.

Now, think about this: what assumptions about people are bound up in the data that inform algorithms or that train machine-learning software? The machines themselves may have no concept of race, sex, nationality, or other factors, but they learn from existing data that can be rife with related biases.

A common example is facial-recognition software. Researchers have shown that some versions of this software were trained on photographs of mostly white men; that is what the machines were shown as they “learned” to identify human beings. When the software was put into practice by law-enforcement agencies, it was far less adept at processing and matching the faces of women and people of other races, producing wrongful identifications that sometimes led to people being arrested or charged for crimes they did not commit. The skewed pool of training data produced major flaws in the software’s outcomes, with serious consequences for people’s lives.

UIC researchers in machine learning and artificial intelligence are working to find and solve issues of unfairness in computer algorithms and to teach students to understand the implications of these tools.

Abolfazl Asudeh, an assistant professor of computer science who focuses on responsible data science and algorithmic fairness, explained that even if the data used to create an algorithm aren’t biased, there is a chance the algorithm will create bias. It comes down to how you define or formulate the problem and how you correct for approximations that can create unfairness.

He used the example of an optimization problem: a bike-sharing program that allocates bike stations across the city of Chicago based on population density. The result is fewer bike stations in regions that are marginalized, while in more affluent areas, a station can be found within a couple of minutes. People who need more services will have less access to them, an outcome that Asudeh called discriminatory.

“We usually care about the statistics,” he said. “But the statistic is determined by the majority. And you can totally ignore the minority and maximize for the majority.”
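
In code, the tradeoff Asudeh describes might look like the toy sketch below. The region names, populations, and greedy placement rule are all invented for illustration; the point is that an objective driven purely by population density places nothing in the smaller regions unless a fairness constraint is stated explicitly.

    # Toy sketch of the tradeoff Asudeh describes; region names and populations
    # are invented, and the allocation rule is a simple greedy heuristic, not
    # any real bike-share planner.
    regions = {
        "Dense region A": 40_000,
        "Dense region B": 35_000,
        "Dense region C": 30_000,
        "Sparse region D": 12_000,
        "Sparse region E": 10_000,
    }
    STATIONS = 8
    RIDERS_PER_STATION = 8_000  # assumed residents served per station

    def allocate(regions, stations, min_per_region=0):
        """Place stations one at a time wherever they cover the most
        still-uncovered residents, optionally after guaranteeing each
        region a minimum number of stations."""
        alloc = {r: 0 for r in regions}
        for r in regions:                       # optional fairness floor
            take = min(min_per_region, stations)
            alloc[r] += take
            stations -= take
        for _ in range(stations):
            uncovered = {r: max(pop - alloc[r] * RIDERS_PER_STATION, 0)
                         for r, pop in regions.items()}
            best = max(uncovered, key=uncovered.get)
            alloc[best] += 1
        return alloc

    print(allocate(regions, STATIONS))                    # sparse regions get 0
    print(allocate(regions, STATIONS, min_per_region=1))  # every region gets >= 1

Nothing about the specific rule matters here: unless a minimum level of service is written into the problem, the optimizer is free to ignore the smaller regions entirely.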

Associate Professor Ian Kash is examining another Chicago dataset from a fairness perspective: a re-analysis of an algorithm used for restaurant inspections. The algorithm was designed to prioritize the detection of critical violations of the food code by sending sanitarians back to offending restaurants more frequently. Kash and his team found wildly differing rates of violations detected by individual inspectors, from 2 percent in one region of the city to 40 percent in another.

“The algorithms optimize whatever they are asked to optimize—they don’t have the inherent smarts to know one of the sanitarians has very different rates for finding critical violations,” Kash said. The re-inspections, therefore, were “distributed unfairly as a result.”
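
A small simulation makes that mechanism concrete. The setup and numbers below are illustrative only, not the Chicago data or Kash’s analysis: every restaurant carries the same true risk, but one sanitarian records critical violations far more often than the other, so a re-inspection rule keyed to recorded violations ends up keyed to the inspector rather than the restaurant.

    import random
    random.seed(0)

    TRUE_VIOLATION_RATE = 0.20                 # assume equal real risk everywhere
    DETECTION_RATE = {"Inspector A": 0.02,     # catch rates chosen to echo the
                      "Inspector B": 0.40}     # 2 percent vs 40 percent spread
    RESTAURANTS_PER_REGION = 10_000

    def recorded_violation(inspector):
        """A critical violation exists with the same probability in every
        region, but it only counts if this inspector records it."""
        exists = random.random() < TRUE_VIOLATION_RATE
        return exists and random.random() < DETECTION_RATE[inspector]

    flagged = {i: sum(recorded_violation(i) for _ in range(RESTAURANTS_PER_REGION))
               for i in DETECTION_RATE}
    total = sum(flagged.values())
    for inspector, count in flagged.items():
        print(f"{inspector}: {count} flagged ({count / total:.0%} of re-inspections)")

With equal underlying risk, nearly all re-inspections land in the second inspector’s region simply because more violations get written down there.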

Emanuelle Burton, a lecturer who teaches a required undergraduate computer science ethics course, takes students through several well-known algorithmic fairness cases, including a 2016 ProPublica report on a criminal-justice software tool called COMPAS. It was designed by a private company, Northpointe, to help overworked judges make parole decisions by predicting which convicted individuals would be likely to commit another crime. It is now widely used, beyond its intended purpose, in sentencing decisions: it gives judges a numerical score for each defendant but no information on how that score is reached. It’s a “black box.” ProPublica’s analysis found that the software incorrectly flagged Black defendants as high risk at a much higher rate than white defendants, while white defendants were more often mislabeled as low risk.
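
The disparity ProPublica measured comes down to two ordinary error rates computed separately for each group. The sketch below uses stylized counts, not the published data, to show how the calculation works.

    def error_rates(records):
        """False positive rate: scored high risk but did not reoffend.
           False negative rate: scored low risk but did reoffend."""
        fp = sum(1 for r in records if r["high_risk"] and not r["reoffended"])
        fn = sum(1 for r in records if not r["high_risk"] and r["reoffended"])
        did_not_reoffend = sum(1 for r in records if not r["reoffended"])
        reoffended = sum(1 for r in records if r["reoffended"])
        return fp / did_not_reoffend, fn / reoffended

    # Stylized counts per 200 defendants in each group (illustrative only).
    group_a = ([{"high_risk": True,  "reoffended": False}] * 45 +
               [{"high_risk": False, "reoffended": False}] * 55 +
               [{"high_risk": True,  "reoffended": True}]  * 70 +
               [{"high_risk": False, "reoffended": True}]  * 30)
    group_b = ([{"high_risk": True,  "reoffended": False}] * 23 +
               [{"high_risk": False, "reoffended": False}] * 77 +
               [{"high_risk": True,  "reoffended": True}]  * 52 +
               [{"high_risk": False, "reoffended": True}]  * 48)

    for name, group in (("Group A", group_a), ("Group B", group_b)):
        fpr, fnr = error_rates(group)
        print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

Overall accuracy comes out similar for both groups, yet one group absorbs far more of the false “high risk” labels and the other far more of the false “low risk” ones, which is the pattern described above.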

In class, Burton presents a hypothetical situation in which the American Civil Liberties Union sues Northpointe for information about the algorithm. Half of the students argue why the algorithm should be made public; half must defend why it’s in the public interest that the data remain private to the company.

“It’s always an interesting day,” she said. “Occasionally there will be a classroom where everyone comes down on one side, and the next hour there is a class that is passionately divided.”

“Professional ethics education in computer science needs to go beyond issues literacy, but issues literacy is still an essential part of it,” Burton added. “The courtroom exercise puts students in a position of having to take responsibility for making an argument about how it does or doesn’t matter.”

Hadis Anahideh, a research assistant professor in mechanical and industrial engineering, recently looked at whether limited vaccines and supplies were distributed equitably during the COVID-19 pandemic. Good models must account for various tradeoffs, she said.

“If you base distribution on population size, that may mean you send support to more people, but not necessarily the people who need it most,” she explained. “Some regions may be less populated overall, but they may have greater concentrations of people who are vulnerable to the effects of COVID-19 based on their health, financial stability, and access to healthcare services. You need the algorithm to learn the tradeoffs between geographical diversity and social-group fairness.”
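
One way to make that tradeoff explicit is sketched below. This is a minimal illustration, not Anahideh’s model; the regions, populations, and vulnerability index are hypothetical, and a single weight blends population share with vulnerable-population share.

    DOSES = 100_000

    # Hypothetical regions: (population, fraction of residents considered vulnerable)
    regions = {
        "Region 1": (500_000, 0.10),
        "Region 2": (200_000, 0.35),
        "Region 3": (100_000, 0.50),
    }

    def allocate(regions, doses, alpha):
        """alpha = 0 allocates purely by population; alpha = 1 purely by the
        number of vulnerable residents; values in between blend the two."""
        pop_total = sum(p for p, _ in regions.values())
        vul_total = sum(p * v for p, v in regions.values())
        shares = {}
        for name, (pop, vul) in regions.items():
            share = (1 - alpha) * pop / pop_total + alpha * (pop * vul) / vul_total
            shares[name] = round(doses * share)   # rounded dose counts
        return shares

    for alpha in (0.0, 0.5, 1.0):
        print(f"alpha={alpha}: {allocate(regions, DOSES, alpha)}")

Sweeping the weight from 0 to 1 shifts doses from the most populous region toward the regions with more vulnerable residents, the kind of tradeoff the algorithm has to learn.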

On this note, Burton pointed out that equity and equality aren’t always the same: what’s fair isn’t always what’s equal.

Asudeh said UIC is uniquely positioned to be a pioneer on the thorny but pressing issues of algorithmic fairness.

“We are the largest minority-serving institution in the city of Chicago and very active in research,” he said. “We have very diverse students and faculty on campus, and the problems are right in front of us.”

“We are motivated to work on this problem and take it seriously.”