Technological injustice
What happens when we try to create unbiased tech in a biased society?

“Technology is often spoken about as if it were a force separate from human influence […] Yet human beings are behind the screen: our values, our ideologies, our biases and assumptions,” says Ruha Benjamin, a sociologist and professor of African American studies at Princeton University.

Amidst the rise of movements such as #BlackLivesMatter, the problem of racism in artificial intelligence (AI) and technology has been brought to light. Most followers of these movements understand that technological “advancements” have disproportionately negative effects on the Black community, especially in North America. However, to solidify this argument and push toward technological reform, it is important that we understand how and why this discrimination occurs.

For starters, technology is not a saving grace that will enter a community and remove all its flaws. This idea is rooted in technological solutionism, through which large technology companies try to convince society that by spending more money in the technology sector, we can achieve world peace. Some may truly believe that their creations are fair and equitable, but unconscious bias and deeply embedded oppressive values get reproduced. The complex AI systems we use are mere extensions of our current society; they follow the rules set by the people who create them. Technology is not unbiased.

Let us imagine that the Toronto Police department wants to create a system that will identify people who may be threats, or deemed a risk, as they walk into a bank. To build this system, programmers will have to design an algorithm that lets the machine understand what a threat generally means and looks like. They will have to feed the program a dataset of photos of past threats and criminals to train it. Where will that data come from? Existing police records.

This assumes that all police records are accurate, free from racial bias, and that the police department has convicted every single threat equally. Given that the data is most likely tainted, a flawed vision is now imprinted onto a machine that is seen as “unbiased.” If the data includes a disproportionate number of Black men labeled as “threats,” then the machine will naturally learn to detect the features of Black men as threatening.
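To make this mechanism concrete, here is a minimal sketch of how biased labels in historical records become biased predictions. Everything in it, the groups, the rates, the simplistic “model,” is invented for illustration; no real police system works exactly this way.

```python
# A minimal sketch of biased training data producing a biased model.
# The groups, rates, and "model" below are entirely hypothetical.
import random

random.seed(0)

# Assume both groups have the SAME true threat rate (5%), but the
# historical records over-label Group B as a threat 20% of the time.
def make_record(group):
    truly_threat = random.random() < 0.05
    biased_label = group == "B" and random.random() < 0.20
    return {"group": group, "label": truly_threat or biased_label}

records = [make_record(g) for g in ("A", "B") for _ in range(10_000)]

# A naive "model" that simply learns each group's labeled threat rate.
def learned_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in rows) / len(rows)

for g in ("A", "B"):
    print(f"Group {g}: learned threat rate = {learned_rate(g):.1%}")
```

Although the true rates are identical by construction, the model scores Group B roughly five times higher, because the only thing it ever saw was the tainted labels.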

Unfortunately, this isn’t just our imagination. The use of biometrics and technological point systems is being adopted around the world. North American police departments are making special use of these systems to counter accusations of racism. In many instances, instead of spending money and time on officer training and consequences, they redirect that funding toward technology and treat it as the saving grace.

To forecast where and when crime will take place, police departments across the U.S. have implemented data-driven forecasting, often referred to as predictive policing. By determining which areas will see the most crime, the police know where to increase their visibility so that people are discouraged from offending. The Los Angeles Police Department (LAPD) introduced one such point-based program, Operation LASER, in 2011. Someone on probation or identified as a gang member would have five points on their record, but even a simple stop by the police could add a point.

This means that if a racist police officer casually stopped a Black man who happened to have a preexisting criminal record, even for a petty crime committed years ago, that man would gain a point, which would increase the police presence around him at all times, raising the risk of another racist officer stopping him, and so the cycle continues. While the LAPD ended Operation LASER in 2019, other predictive policing tools and technological solutions to problems within the police are still being developed.
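The feedback loop is easy to see in a toy simulation. The point values, stop rates, and scaling factor below are assumptions chosen for illustration; they are not Operation LASER’s actual parameters.

```python
# A toy model of the feedback loop described above. All numbers here
# are hypothetical and are not the LAPD's actual formula.

def simulate(years, points, base_stop_rate=0.05, presence_factor=0.5):
    """More points -> more police presence -> more stops -> more points."""
    for year in range(1, years + 1):
        # The chance of being stopped grows with the points already held.
        stop_chance = min(1.0, base_stop_rate * (1 + presence_factor * points))
        points += stop_chance * 12  # expected stops over 12 months, 1 point each
        print(f"Year {year}: {points:.1f} points")
    return points

print("Starting with an old record (5 points):")
simulate(years=5, points=5)
print("Starting with a clean record (0 points):")
simulate(years=5, points=0)
```

Two people behaving identically diverge sharply: the one who entered the system with points keeps accumulating them, because the points themselves attract the police attention that generates more points.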

Technology is often the facilitator of abuse against marginalized communities, disguised as the remedy for all of society’s issues. While I admit that technology can foster a better environment, that work cannot begin unless we tackle the foundation: a prejudiced society will create a prejudiced system. It is our responsibility to be aware of the bias around us, to refuse to be fooled by baseless claims of improvement, and to push toward technological reform and foundational accountability.

If you are interested in learning more about this topic, CCT320: Communication, Technology, and Social Change taught by Dr. Negin Dahya dives further into technological impacts on marginalized communities.

Staff Writer (Volume 48), Contributor (Volume 49) — Hamna is in her fourth year at UTM specializing in Digital Enterprise Management. Her love of reading and writing is only paralleled by her interest in random Space News and impromptu discussions about society, ethics, and technology. She writes for The Medium because she believes that one of the most beautiful elements of humanity is discourse, which she is given the opportunity to encourage through her work. In her free time she likes drinking chai, reorganizing her bookshelf, and reading complex technical books to her nieces and nephews.
