Exploring AI’s Role in Future Judicial Proceedings: A Mock Trial at UNC Chapel Hill
In a recent innovative exercise, law students at the University of North Carolina at Chapel Hill School of Law conducted a simulated court trial to evaluate how artificial intelligence (AI) systems might handle judicial decision-making. The case involved Henry Justus, a 17-year-old Black high school student accused of robbery in a school where Black students represent just 10% of the student body. This scenario was inspired by a real juvenile case from North Carolina, in which the defendant was convicted.
Background and Purpose of the Simulation
The mock trial was modeled on an actual case overseen by UNC law professor Joseph Kennedy through the Carolina Law Juvenile Justice Clinic. Importantly, the case was chosen because it was not part of any publicly available online records, ensuring that AI models had no prior exposure to the details during their training phases.
Set in the year 2036, the trial was framed under the hypothetical “2035 AI Criminal Justice Act,” a legislative concept designed to provoke reflection on the potential consequences of integrating AI into legal systems. Professor Kennedy served as the presiding judge for this futuristic courtroom experiment.
Challenges and Realities of AI in Legal Contexts
While AI’s application in courtrooms is gaining traction, it has not been without setbacks. Errors traced to AI-generated documents have surfaced in more than 500 legal cases, resulting in professional embarrassment and judicial sanctions. Despite these issues, AI adoption among legal professionals is growing: the American Bar Association’s 2024 Legal Technology Survey reports that approximately 30% of lawyers currently use AI tools in their practice.
Case Details and AI Deliberation
The fictional trial’s key testimony came from Victor Fehler, a 15-year-old white student who claimed that Justus obstructed his escape while another Black student demanded money. The defense argued that Justus’s intimidating demeanor alone did not satisfy the legal threshold of “beyond a reasonable doubt” for criminal intent, emphasizing that suspicion alone is insufficient for conviction.
Three AI models (ChatGPT, Grok, and Claude) were tasked with evaluating the arguments. Initially, Grok and Claude remained undecided, while ChatGPT shifted its verdict to “not guilty” after considering the input from its AI peers. ChatGPT reasoned that “mere presence combined with an ambiguous reaction under stress does not establish shared intent beyond a reasonable doubt.” Subsequently, all three AI jurors aligned in favor of acquittal.
Expert Opinions on AI’s Judicial Suitability
Professor Kennedy expressed skepticism about employing AI in criminal trials but acknowledged that as AI becomes more integrated into daily life, serving as advisors and companions, public acceptance of AI-driven legal decisions might increase.
Philosophy professor Matthew Kotzen from UNC Chapel Hill raised concerns about the fundamental appropriateness of large language models in legal contexts. He questioned whether these AI systems can accurately represent complex realities or reliably assess evidence to meet rigorous legal standards, highlighting ongoing debates about AI’s role in justice.
Looking Ahead: The Intersection of AI and Law
This experiment underscores the evolving dialogue around AI’s potential and limitations within the justice system. As AI technology advances, its integration into legal processes will require careful scrutiny, ethical considerations, and robust safeguards to ensure fairness and accuracy. With AI tools already assisting nearly one-third of attorneys, the legal profession stands at a crossroads, balancing innovation with the imperative to uphold justice.