Can courts safeguard fairness in an AI age?


Blind Justice

Credit: Pavel Danilyuk from Pexels

In the criminal justice system, decisions about people have historically been made by other people, such as judges and parole boards. But this is changing: decision makers have increasingly incorporated artificial intelligence systems into numerous tasks, from predicting crime to analyzing DNA evidence.

The use of AI in these domains raises questions about how these computational systems use data to offer predictions and recommendations, as well as bigger questions about how to safeguard fairness in the age of AI.

In particular, many AI systems are “black boxes,” meaning their behavior and decision-making processes are opaque to scrutiny. That poses a problem in the justice system, in which public trust and the accountability of key players like judges depend on understanding how and why life-changing decisions are made.

In addition, even if a black-box system is statistically fair and accurate, it may fail to meet the standards of procedural justice required by our constitutional system.

In April 2024, the National Institute of Justice (NIJ) issued a public request for input to help inform future guidelines on the safe and effective use of AI in the criminal justice system.

The Computing Research Association, a leading organization that focuses the computing research community on timely challenges, responded. SFI Professor Cristopher Moore and External Professor Stephanie Forrest (Arizona State University) were among the authors. The group’s argument was clear: where constitutional rights are at stake, critical decisions should not be made by AI systems whose inner workings are hidden.

“The idea that an opaque system, one that neither the defendant, nor their counsel, nor the judge can understand, could play a role in major decisions about a person’s liberty is unacceptable,” the authors noted. “An opaque system is an accuser the defendant cannot face, a witness they cannot cross-examine, and evidence they cannot contest.”

In August, the group followed up with an opinion piece published in Communications of the ACM. Although the original Executive Order 14110, which prompted the NIJ’s inquiry, has since been rescinded, Executive Order 13859 still calls for the safe testing of AI and for “fostering public trust and confidence in AI technologies” and the protection of civil liberties, privacy, and American values.

Moore says that in the criminal justice context, AI technologies will fit this bill only if they improve both the fairness and the transparency of the current system. That is part of what makes AI appealing: human decision-making processes are not always transparent, either.

https://www.youtube.com/watch?v=NZJITXAWgr8

Cristopher Moore discusses the responsible use of AI in the US criminal justice system. Credit: Communications of the ACM.

“If AI makes the judicial system more transparent and accountable, then we should use it,” Moore says. “If it doesn’t, we shouldn’t.”

He and his colleagues submitted their comments to the NIJ in May 2024, highlighting key arguments the Justice Department should consider as it develops and implements new guidelines on the fair and beneficial use of AI in sentencing and other matters.

Many of these arguments emphasized the need for transparency: everyone who uses an AI system, or is affected by its recommendations, should have a clear understanding of what data the system uses and how it arrives at its recommendations or risk scores. In addition, the experts suggested, the procedure by which a judge uses guidance from such a system should be made clear.

Some researchers have warned that insisting on transparency could limit the use of AI systems, but over the past few years researchers in the field of “explainable AI” have developed techniques that illuminate how these models turn their input data into predictions.

Explainable AI systems can help, but Moore notes that there are many ways to define transparency. Transparency need not mean that everyone understands the computer code and mathematics underlying a neural network; it can simply mean knowing which data were used, and how.
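To make that concrete, here is a minimal sketch of input-level transparency, using hypothetical feature names, synthetic data, and a plain logistic regression rather than any real risk-assessment tool: it reports which fields the model used and how much each one moved a single score, without exposing the model’s full internals.

```python
# Input-level transparency, sketched: report which data fields a risk
# model uses and how much each contributed to one individual's score.
# Feature names and data are hypothetical, not from any real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["prior_arrests", "age_at_arrest", "months_since_last_offense"]

# Synthetic records standing in for historical data.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -0.5, -0.8]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one individual, the score's log-odds decompose into an additive
# contribution from each input feature, which can be disclosed even if
# the model itself stays proprietary.
x = X[0]
for name, weight, value in zip(features, model.coef_[0], x):
    print(f"{name:>26}: {weight * value:+.3f} log-odds")
print(f"{'intercept':>26}: {model.intercept_[0]:+.3f} log-odds")
print(f"{'predicted risk':>26}: {model.predict_proba(x.reshape(1, -1))[0, 1]:.1%}")
```

For a deep neural network this additive decomposition does not apply directly, which is where explainable-AI methods such as feature-attribution techniques come in; the disclosure itself, though, can take the same form.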

He points to the Fair Credit Reporting Act (FCRA), which requires credit rating companies to disclose the consumer information used to make credit decisions and set ratings. Companies can keep their algorithms proprietary, Moore says, but a consumer can easily download the information the algorithm used.

The law also gives consumers the right to challenge those data if they are incorrect. On the other hand, he notes, the FCRA does not let consumers question whether the algorithm is doing the right thing with their data.

“It’s important to be able to see the inner workings of an AI, not just its inputs and outputs,” he says.

In addition to their recommendations on transparency, the researchers suggested that the output of an AI system should be specific and quantitative. For example, instead of describing a defendant as “high risk,” a system should report something like “a 7% probability of committing a violent crime.” Qualitative labels, Moore says, leave a lot of room for misinterpretation.
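As a minimal illustration (the 5% cutoff and the example probabilities below are hypothetical), a coarse label collapses very different probabilities into the same word, while a quantitative report preserves the distinction:

```python
# A toy contrast between a qualitative label and a quantitative report.
# The 5% cutoff and the example probabilities are hypothetical.
def qualitative_label(p: float) -> str:
    """The coarse kind of output the researchers caution against."""
    return "high risk" if p >= 0.05 else "low risk"

def quantitative_report(p: float) -> str:
    """The specific, quantitative kind of output they recommend."""
    return f"{p:.0%} estimated probability of committing a violent crime"

for p in (0.04, 0.07, 0.40):
    print(f"{quantitative_report(p):<55} label: {qualitative_label(p)}")
# 7% and 40% both print as "high risk", even though the underlying
# probabilities differ by almost a factor of six.
```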

“If judges understand what the system’s output means, including what kinds of mistakes it can make, then I think these can be useful tools,” Moore says. “Not as a replacement for judges, but to provide an average or baseline recommendation.”

Critically, the authors warned that AI systems should never fully replace human decision makers, especially in cases where a person’s liberty and constitutional rights are at stake.

In the best-case scenario, an AI system could become a kind of digital consultant whose input a judge or other decision maker weighs alongside the other factors relevant to the case. “But we should always be prepared to explain the AI’s recommendation and to ask how it was produced,” says Moore.

More information:
Cristopher Moore et al, On the Responsible Use of AI in the US Criminal Justice System, Communications of the ACM (2025). DOI: 10.1145/3722548

Provided by Santa Fe Institute

Reference: Can courts safeguard fairness in an AI age? (2025, September 5) retrieved 5 September 2025 from https://phys.org/news/2025-09-courts-safeguard-fairness-ai-age.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
