Artificial Intelligence in Security and Justice: Lessons for Mexico

June 2023

Introduction

In recent years, artificial intelligence (AI) has rapidly developed and spread into many aspects of our lives. As governments collect and organize growing amounts of data, the use of automated systems to inform decision-making is becoming more common. In criminal justice, these tools have been used for decades by governments around the world to predict crime, assess risk, and support judicial procedures. However, neither data nor algorithms are free from bias. In fact, they can reproduce and even reinforce existing inequalities, leading to unfair outcomes. That’s why it’s urgent to discuss the risks of automated systems in security and justice—especially in countries like Mexico, where there are not enough safeguards to prevent data misuse and protect fundamental rights such as privacy.

What is AI in Security and Justice?

What we now call AI is essentially the automatic processing and analysis of information. These systems are driven by algorithms and statistical models trained on large datasets and are used to automate human processes—identifying patterns, making predictions, or solving problems. While current AI systems are not truly “intelligent,” the field’s ultimate goal is Artificial General Intelligence (AGI): systems that could eventually perform any intellectual task a human can do.
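
To make that description concrete, the sketch below trains a simple statistical model on synthetic data and asks it to score a new case. Everything here is hypothetical: the data, the features, and the choice of scikit-learn are assumptions for illustration only. The point is that "training" means fitting parameters to past examples, and "prediction" means applying those parameters to a new case, with no understanding involved.

```python
# Minimal sketch: a statistical model "learns" patterns from labeled examples
# and then predicts labels for new cases. All data here is synthetic and
# purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a case described by two numeric
# features; each label records whether an outcome of interest occurred (1) or not (0).
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(scale=0.5, size=500) > 0).astype(int)

# "Training" fits the model's parameters to the historical examples.
model = LogisticRegression().fit(X_train, y_train)

# "Prediction" applies those learned parameters to an unseen case,
# returning a probability rather than any real judgment about the case.
new_case = np.array([[0.8, -0.2]])
print(model.predict_proba(new_case))
```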

Examples in Practice

Perhaps the best-known example is COMPAS, a risk assessment tool used by courts in the United States to estimate the likelihood that a defendant will reoffend. A 2016 ProPublica investigation found that the tool wrongly labeled Black defendants as future reoffenders at nearly twice the rate of white defendants. Predictive policing software offers another cautionary case: trained on historical arrest records, it tends to send patrols back to neighborhoods that were already heavily policed, reinforcing the very patterns it claims merely to predict.

Main Challenges for AI in Justice in Mexico

  1. Lack of Reliable Data: Mexico still lacks reliable data on judicial processes, largely because there are no standardized information systems that structure criminal case data across all stages. Records are rarely disaggregated by characteristics of victims and defendants, making it difficult to uncover inequalities such as discrimination by sex, gender, age, socioeconomic status, disability, or indigenous status. These gaps can produce outcomes that disproportionately harm vulnerable groups, as seen with predictive policing tools.
  2. Algorithmic Bias and Black Boxes: As the COMPAS case shows, algorithms and the parameters they rely on are not immune to bias. They can reinforce stereotypes and perpetuate inequalities, producing unfair results (the sketch after this list illustrates how a model trained on biased records reproduces that bias). Detecting discrimination in these systems is often very difficult, because most operate as "black boxes": opaque and hard for the public to scrutinize.
  3. Privacy and Surveillance: Without proper regulation and safeguards, governments may use personal data to violate human rights, such as privacy. In recent years, Mexico has seen increased unauthorized surveillance by authorities. Agencies like the former Attorney General’s Office (now FGR), the National Intelligence Center, and the Ministry of Defense have been accused of using Pegasus spyware to target citizens and civil society members—without accountability. Alarmingly, even the President has justified such surveillance for intelligence purposes.
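
To illustrate the mechanism behind points 1 and 2, the following sketch uses entirely synthetic data and assumed numbers: two groups reoffend at the same rate, but one group's reoffenses are recorded more often because it is policed more heavily. A model trained on those records (here a scikit-learn logistic regression with a made-up "neighborhood" proxy feature) then assigns that group higher risk scores, even though the group label itself is never given to the model.

```python
# Illustrative sketch (synthetic data, assumed rates): how a risk model trained
# on biased records reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B (hypothetical)
true_reoffense = rng.random(n) < 0.30     # identical 30% true rate for both groups

# Biased measurement: group A's reoffenses are detected 90% of the time,
# group B's only 40% of the time (assumed over-policing of group A).
detection_rate = np.where(group == 0, 0.9, 0.4)
recorded = true_reoffense & (rng.random(n) < detection_rate)

# A proxy feature correlated with group (e.g. neighborhood), so the bias leaks
# in even though "group" itself is never fed to the model.
neighborhood = group + rng.normal(scale=0.5, size=n)
X = neighborhood.reshape(-1, 1)

model = LogisticRegression().fit(X, recorded.astype(int))
scores = model.predict_proba(X)[:, 1]

print("Mean risk score, group A:", scores[group == 0].mean())
print("Mean risk score, group B:", scores[group == 1].mean())
# Group A receives systematically higher scores despite identical true behavior.
```

None of these numbers describe any real system; they simply show why audits of training data and model outputs matter before such tools are used in criminal justice.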

We Need Safeguards to Avoid Automating Injustice

AI is here, and we can’t ignore the risks. Technology is not neutral, and automating criminal justice can lead to deeply unfair outcomes. In fact, the risks are so high that some organizations in the European Union have proposed an outright ban on automated risk assessments in judicial procedures. Cities like San Francisco and Boston have already banned police use of facial recognition. In Mexico, where impunity and rights violations remain high—even by authorities themselves—it’s urgent to have a public discussion about the regulations needed to prevent misuse of automated systems.

International experience shows Mexico needs a legal framework for the ethical and responsible use of data, transparency, and privacy protection. The worst scenario would be for AI to outpace us—leaving us with technological solutionism, legal gaps, and none of the safeguards needed to prevent injustice and further rights violations.

This article was originally published in Spanish as “Inteligencia artificial en seguridad y justicia: lecciones para México.” Read the original at Nexos (June 2023).