REPORTING

Programmed to Exclude

The Myth of Objective Technology

Annabel McDonald

June 7, 2025

Illustration by Sudeepti Tammana

Automated technology has moved us toward a world that values efficiency over humanity. Efficiency is valuable: in many settings, automation undeniably adds far more to everyone’s quality of life than it detracts. But removing human judgment from complex mathematical calculations is not nearly as consequential as removing it from criminal justice or border control. Sure, one can argue that replacing people with technology in situations this sensitive might solve problems created by human nature, like bias. That argument, however, ignores the fact that technology is created within the context of our society, one that uses race as a tool of exclusion and oppression and codes inequity into practically everything it creates.

Technological determinism is the theory that technology is the driving force of social change because it “alters the way we think and act, the way we conduct our interpersonal relationships, our values, and the way we learn,” as Lexie Beiro writes on Medium. The theory has real strengths. It’s undeniable that the internet has completely altered the way we live because it mediates just about everything we do. But technological determinism also implies that new technologies are objective tools with the ability to fix what humans can’t fix on their own. This is a common assumption about new tech, especially technologies that are difficult to understand, like artificial intelligence. It’s important to recognize, though, that new technologies are imagined, designed, and built by people, for people. Creators have their own motives for making their inventions behave in certain ways, and not all of those motives serve the greater good.

Racial biases are coded into technologies, sometimes implicitly yet effectively. In her book Race After Technology: Abolitionist Tools for the New Jim Code, Ruha Benjamin defines race as a tool “designed to stratify and sanctify social injustice as part of the architecture of everyday life.” She describes how modern technology has brought about “The New Jim Code”: “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.” In other words, technology claims to be objective and morally superior to humans, and most people believe that it is, but it isn’t. Because the racism embedded in tech is rarely explicit, it is less likely to be challenged or changed by forces such as legislation. This is when removing human judgment from the implementation of technology becomes especially problematic. When automated tech treats people of color differently and causes them harm, there needs to be someone in the loop capable of correcting and preventing that harm.

One place where this is particularly true is at the United States’ borders. Iván Chaar López, an assistant professor in the Department of American Studies at the University of Texas at Austin, examines “the history and politics of computing and information infrastructures, and their relation to racial formation.” In his book The Cybernetic Border: Drones, Technology, and Intrusion, he describes how borders simulate war through differentiation and the construction of an “other” to be excluded from the nation. This requires designating people as an “enemy” or “target” without acknowledging their humanity.

What is a “cybernetic border”? Chaar López defines it as “a regime centered on data capture, processing, and circulation in the production and control of the boundaries of the nation.” Essentially, it is a “system of systems” equipped with “electronic fences, ground sensors, computers, radio communications (...), unmanned aerial systems, and other information technologies.”


The “System of Systems”

These borders rely on data to identify “intruders” based on “essentialized racial characteristics,” tying a person’s sovereignty to how they are organized within and perceived by information technologies. Drones, for example, are not equipped with human judgment or intelligence, so how do they distinguish “intruders” from border patrol agents? Drones “see” through operational images which, when combined with artificial intelligence and deep learning, exclude humans and give machines full interpretive power. These interpretations are driven by data that has been classified by humans, which is where it becomes clear that technology like this is neither objective nor exempt from human bias. Someone decided which characteristics would trigger intruder detection, and most of them are racial. This is an example of race being explicitly encoded into technology and causing harm, though there are plenty of examples where the encoding isn’t as clear.

I chose to highlight this example of harmful technology because border control has been a subject of discourse, especially since the election of our current president. Donald Trump has clearly expressed his anti-immigrant sentiment:

“The Democrats say, ‘Please don’t call them animals. They’re humans.’ I said, ‘No, they’re not humans, they’re not humans, they’re animals’ … Nancy Pelosi told me that. She said, ‘Please don’t use the word animals when you’re talking about these people.’ I said, ‘I’ll use the word animal because that’s what they are.’”

Since his inauguration, Trump has focused much of his attention on immigration and border control. In January, he issued an executive order to secure the U.S. southern border, directing the construction of physical barriers to achieve “complete operational control” of the border and the taking of “all appropriate actions to detain” migrants crossing unlawfully “to the fullest extent permitted by law.” He has also declared a national emergency at the southern border, which allows the use of the military, and issued another executive order that makes sealing the borders a key military priority. These measures are all part of the immigration enforcement strategy of “Prevention Through Deterrence,” which makes crossing the border more dangerous in order to discourage people from trying. In practice, the strategy fails to deter border crossers and instead increases the number of casualties. The use of technology to enforce prevention through deterrence expands the danger and scope of border patrol, and with it the number of people who lose their lives in the process. To better understand the extent of migrant lives lost in the Arizona borderlands, see this interactive map by Humane Borders.

It’s important to recognize that technology exists within a society of oppression and exclusion and cannot be entirely objective or neutral. We must learn to question the tools we use daily and notice the ways they can harm others. We must also understand the power that automated technology holds, and that such power should not be left unchecked.

*This article was inspired by a class I took at UW Seattle, Digital Geographies (GEOG 258), taught by Dr. Erin McElroy. If you’re interested in the subjects discussed in this article, consider taking the class to learn more!

Spot Illustrations by Aaliyah Diaz

iJournal is the UW iSchool’s student-led publication. Find us on Instagram or email us. ©2025.
