Biometric Data in Predictive Policing: Exposing Inequalities and Seeking Solutions
In modern law enforcement, biometric data is fed into crime-detection algorithms with the intention of reducing inaccuracies in identifying suspects; paradoxically, biometric data used in predictive policing algorithms exacerbates inequalities, because its misinformed predictions lead to the disproportionate arrest of marginalized populations. The arrival of predictive policing was made possible by advances in biometric recognition technology beginning in the late 19th century. Predictive policing is characterized as taking data and analyzing it to anticipate, prevent, and respond more effectively to future crime, a seemingly practical and objective way to assess where crime will occur. Biometrics are presented as “accurate technology” that produces data to enhance crime prevention, a belief supported by findings from the National Institute of Standards and Technology (NIST) that error rates for technologies such as facial recognition fell from 5% to 0.2% between 2010 and 2018. Contrary to that belief, evidence suggests that these technologies perpetuate systemic biases built up throughout history, creating a feedback loop that harms communities of color, unhoused communities, and immigrants. This article examines the historical factors that have contributed to the existing prejudice against minority populations in the criminal industrial complex in the United States, and explores how the adoption of predictive policing technologies further amplifies racial disparities.
Disparities in criminalization refer to the unequal treatment of individuals from marginalized communities, primarily the racial and ethnic gaps in incarceration. The overrepresentation of incarcerated Black individuals is longstanding: for decades, Black people have made up over a third of the national jail population in the United States despite constituting only about 14% of the American population. These disparities in incarceration rates have identifiable root causes. Historically, segregation effectively isolated Black people in socioeconomically challenged areas, contributing to poverty and limited access to education, nutrition, mental and physical health care, and employment. In turn, these factors have been associated with heightened levels of crime, law enforcement surveillance, and arrests, as well as a greater likelihood of behavioral health disorders such as substance misuse and mental illness; people with behavioral health disorders are themselves disproportionately represented in jail admissions.
However, senior policy analyst Michael Guariglia argues that advances in biometrics and predictive policing fail to address these historical factors. Proponents contend that predictive policing algorithms built on biometric data could remedy racial disparities because their enhanced accuracy would override the instinctive biases of individual police officers. Yet although biometric data is highly accurate at identifying subjects, as NIST reports, Guariglia contends that its applied use by police results in further disproportionate criminalization by reinforcing a feedback loop of over-policing. Specifically, predictive policing algorithms that incorporate biometric data rely on historical crime records and deploy officers to the neighborhoods with the most recorded crime, further policing neighborhoods that are already overpoliced.
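The mechanics of that feedback loop can be sketched in a few lines of code. The toy simulation below is illustrative only; the crime rates and arrest counts are assumptions, not any department’s data or any vendor’s actual algorithm. Two neighborhoods have identical underlying crime rates, but one starts with a larger arrest record, and patrols are repeatedly assigned from that record:
```python
import numpy as np

# Toy model (assumed numbers, not any vendor's actual algorithm):
# two neighborhoods with identical true crime rates, where one starts
# with more recorded arrests due to historical over-policing.
true_crime_rate = np.array([0.05, 0.05])  # identical underlying rates
recorded = np.array([120.0, 60.0])        # biased historical record

rng = np.random.default_rng(0)
for year in range(10):
    # "Deploy officers according to highest-crime neighborhoods":
    # squaring the record makes the rule favor the top-ranked area,
    # the way hotspot prioritization does.
    share = recorded**2 / (recorded**2).sum()
    # Crime is only *recorded* where patrols are present, so the record
    # grows fastest in the neighborhood that is already most policed.
    recorded += rng.poisson(1000 * true_crime_rate * share)
    print(f"year {year}: patrol share = {share.round(2)}")
```
Even though both neighborhoods experience crime at the same rate, the patrol share drifts toward the neighborhood with the larger initial record, which is the runaway dynamic Guariglia describes.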
Overview of Predictive Policing and Its Relation to Technology
The Santa Cruz City Council defines predictive policing as “software that is used to predict information or trends about crime or criminality in the past or future, including but not limited to the characteristics or profile of any person(s) likely to commit a crime, the identity of any person(s) likely to commit crime, the locations or frequency of crime, or the person(s) impacted by predicted crime”.
As Guariglia says, these technologies use information from historical crimes, such as the time of day, weather, location, and types of victims, to decide how many officers to allocate at a given time and place. Predictive policing algorithms therefore concentrate patrols in specific areas, and not every area receives equal regulation. For example, research associate Jamie Guterman finds that the predictive policing tool HUNCHLAB draws on a database that includes only publicly recorded crimes, leaving out criminal activity that correlates with poverty and that involves higher officer discretion, and therefore potential bias, over what gets recorded where. Similarly, a separate program in Pasco County creates a list of people judged to have high recidivism risk, based on the county’s interpretation of their connections to particular people and their criminal records. Despite the promising objectivity of such tools, predictive technologies misallocate police patrols: some neighborhoods are over-policed while others are underpoliced. Such over-policing in particular neighborhoods facilitates harassment that affects entire families. Historically, police have arrested more people in Black and other minority neighborhoods, leading algorithms to direct more policing to those areas and resulting in continued arrests that ignore the root socioeconomic factors behind each crime.
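Guterman’s point about the data gap can be made concrete with hypothetical numbers. In the sketch below (the incident counts and recording rates are assumptions for illustration, not figures from HUNCHLAB), three areas experience the same amount of actual crime, but differing rates of public recording make one look like a hotspot:
```python
import numpy as np

# Assumed figures for illustration only, not data from HUNCHLAB:
# each area experiences the same true number of incidents, but officer
# discretion means incidents are publicly recorded at different rates.
true_incidents = np.array([50, 50, 50])
recording_rate = np.array([0.9, 0.5, 0.2])   # varies with who gets policed

rng = np.random.default_rng(1)
recorded = rng.binomial(true_incidents, recording_rate)

hotspot_ranking = np.argsort(-recorded)  # what a records-only tool "sees"
print(recorded, hotspot_ranking)         # area 0 looks worst despite equal crime
```
A tool trained on the recorded counts alone would rank the most heavily recorded area as the worst hotspot, even though the underlying crime is identical everywhere.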
Technological Reforms
Despite the current shortcomings of predictive policing algorithms, biometric recognition technology could identify perpetrators of crime through the comprehensive and accountable strategies recommended by sociologist Sarah Brayne. First, comprehensive analysis of data would examine not only the citizens under suspicion but also the police systems themselves. Brayne writes that “If the surveillant gaze is inverted, data can be used to fill holes in police activity, shed light on existing problems in police practices, monitor police performance and hit rates, and even estimate and potentially reduce bias in police stops”. Second, police agencies should approach these algorithms with more caution. Specifically, the use of predictive analytics tools should be preceded by evaluation from various parties, ranging from independent agencies to community forums to interdisciplinary groups of lawyers, community experts, police officers, and program architects. One example of such a strategy involves adding randomness to predictive programs, meaning that officers are also sent to locations with varying levels of historical crime, as sketched below. This strategy would spread the allocation of police officers more widely, reducing the algorithmic bias embedded in the data sets.
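A minimal sketch of the randomness idea, under assumed parameters (the epsilon value and arrest counts below are hypothetical), mixes the purely historical allocation with a uniform component so that every neighborhood retains some patrol presence:
```python
import numpy as np

def allocate(recorded, epsilon=0.3):
    """Blend data-driven patrol shares with a uniform share.

    epsilon is a hypothetical tuning knob: 0 reproduces the purely
    historical allocation, 1 spreads officers evenly across areas.
    """
    data_driven = recorded / recorded.sum()          # historical shares
    uniform = np.full_like(data_driven, 1.0 / len(recorded))
    return (1 - epsilon) * data_driven + epsilon * uniform

recorded = np.array([160.0, 70.0, 20.0])             # assumed arrest counts
print(allocate(recorded, epsilon=0.0).round(2))      # [0.64 0.28 0.08]
print(allocate(recorded, epsilon=0.3).round(2))      # [0.55 0.3  0.16]
```
Because under-observed neighborhoods keep receiving some coverage, the record accumulates corrective signal rather than amplifying its own gaps; the trade-off is that a value like epsilon must be chosen, and justified, through the kinds of independent and community review Brayne recommends.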
While predictive policing was introduced with the hope of becoming a comparatively accurate and objective crime detector for police officers, it exacerbates the disproportionate criminalization of marginalized communities by perpetuating systemic biases. Specifically, these algorithms take historical criminal activity into account without regard for the context behind each crime. As a result, predictive policing reproduces policing biases in Black communities while failing to invest resources in the socioeconomic factors that lead to crime in each neighborhood. This not only harms the vulnerable minorities living in heavily policed, high-crime-proximity neighborhoods, but also neglects restorative justice, since predictive policing punishes without remedying the socioeconomic conditions that lead to each crime. Although the technology currently perpetuates existing biases, a reformed future for predictive policing would address those underlying socioeconomic factors and contribute to a comprehensive and accountable criminal justice system that includes all communities affected by crime.