The Human Fight Against A.I. Bias

The 2021 Project DeBIAS team (from left to right): Benjamin Bral, Kyle Truong, Soham Nagaokar, Rachel Anthony, Philip Mathew, Johnny Rajala, Joanna Ihm, Seth Gleason, Aarushi Malhotra, Daniel Zhu; @umddebias Instagram

When people imagine the future of technological advancement, race may not be the first thing that comes to mind.

However, an award-winning Gemstone Honors research team here at the University of Maryland is studying how hiring programs may perpetuate bias in a person’s likelihood of getting hired.

Project DeBIAS is a group of 10 undergraduates studying bias in artificial intelligence (AI) hiring systems, the algorithms that determine whether a job applicant is suitable for a position. Such systems are already in use on major job-seeking sites such as Indeed and LinkedIn.

For these systems to work, the algorithms train on data to predict which applicants will be the best candidates for a position. But they do not always produce fair outcomes, as Philip Mathew, a junior computer science and mathematics double major, points out.

“AI in general, people assume it’s objective, but one of the main points of our project is that it’s completely not objective. It’s entirely based on historic data and historic data can often be biased,” Mathew says. “All these biases that we know exist if humans are doing the hiring will also exist if you apply AI to it.”

As a result, these systems can have blind spots when reviewing applications from people of color, women, and gender non-conforming individuals.
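Mathew’s point can be made concrete with a few lines of code. The sketch below is not the team’s software: it fabricates “historic” hiring decisions that held one group to a higher skill bar, trains an off-the-shelf model on them (scikit-learn’s logistic regression, a common choice but an assumption here), and shows the model reproducing the double standard on candidates whose skills are identically distributed.

```python
# A minimal sketch of bias propagation, with entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, size=n)  # two demographic groups, 0 and 1
skill = rng.normal(0, 1, size=n)    # skill distributed identically in both

# Biased "historic" labels: group 1 had to clear a higher bar to be hired.
hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train on the biased history, with group visible as a feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score a fresh cohort with the same skills, varying only the group label.
test_skill = rng.normal(0, 1, size=n)
for g in (0, 1):
    rate = model.predict(np.column_stack([test_skill, np.full(n, g)])).mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
# The model "learns" the historic double standard: group 1's predicted
# hire rate is lower even though skill is identical across groups.
```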

The team is specifically interested in how bias in these systems relates to redlining, a practice of active housing discrimination against Black people that lasted until the late 1960s. Homes in Black neighborhoods were labeled risky investments, and banks would be reluctant, or would refuse outright, to lend to Black borrowers based on race.

“We wanted to explore: if a person lives in a certain neighborhood, are they less likely to get a job opportunity?” says junior public health science major Aarushi Malhotra.
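Malhotra’s question can be illustrated with another synthetic sketch, again an assumption-laden toy rather than the team’s study: even a model that never sees race can rediscover it through neighborhood when housing is segregated.

```python
# A hedged illustration of a proxy feature, with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Segregated housing: neighborhood matches race 90% of the time.
race = rng.integers(0, 2, size=n)
neighborhood = np.where(rng.random(n) < 0.9, race, 1 - race)

skill = rng.normal(0, 1, size=n)
# Historic decisions discriminated by race directly.
hired = (skill > np.where(race == 1, 0.5, -0.5)).astype(int)

# A "race-blind" model: race is excluded, but neighborhood stays in.
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for r in (0, 1):
    print(f"race {r}: predicted hire rate = {pred[race == r].mean():.2f}")
# The disparity survives: the model rediscovers race through the
# neighborhood proxy, which is the algorithmic echo of redlining.
```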

Where someone lives can have an immense impact on the opportunities they receive. A professional career can suffer if a person never had access to the educational or professional resources needed to succeed, so the combination of biased algorithms and unfair housing practices could be detrimental to job seekers. Yet an applicant has little way of telling which factors affected their application.

“Some companies may have an initial AI component that looks through your resume and then you get called in for an interview. Others may be fully autonomous or fully in person,” says Malhotra.

“As an employee, you won’t be able to really notice it,” Mathew says. “You can't know for certain why a corporation didn’t take you unless you get the decision-making process from the corporation.”

In the meantime, developers and development teams can take steps to reduce bias in AI by educating themselves on the issue.

“At a minimum, developers should be mindful that these things are problems. When you’re creating your datasets, make sure to do a thorough evaluation to see if you need to find a way to diversify your data,” Mathew says.
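One concrete version of that evaluation is the EEOC’s “four-fifths” rule of thumb, which flags a group whose selection rate falls below 80 percent of the highest group’s rate. The helper below is a sketch with hypothetical column and file names, not the team’s tooling.

```python
# A simple adverse-impact audit over a labeled dataset of past decisions.
import pandas as pd

def adverse_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> None:
    """Print each group's selection rate and its ratio to the highest rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    top = rates.max()
    for group, rate in rates.items():
        flag = "  <-- below 4/5 of top rate, investigate" if rate / top < 0.8 else ""
        print(f"{group}: selection rate {rate:.2f} ({rate / top:.0%} of top){flag}")

# Hypothetical usage, assuming a CSV of past decisions with these columns:
# decisions = pd.read_csv("historic_decisions.csv")
# adverse_impact(decisions, group_col="gender", outcome_col="hired")
```

An audit like this only surfaces a disparity; deciding whether to rebalance the data, drop proxy features, or change the model is the harder judgment call Mathew describes.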

Solving this problem will not be easy, but if research teams like Project DeBIAS continue to call attention to these issues in technology, the future will be a lot more inclusive for everyone.