On November 22, a daylong conference hosted by the Center on Civil Justice at NYU Law and The Future Society examined questions of ethics, accountability, and justice raised by the growing application of artificial intelligence (AI) in the legal system and elsewhere.
In the first panel, Cassandra Carley ’21, Marc Canellas ’21, and Dillon Reisman ’21 of the NYU Law student organization Rights Over Tech provided an introduction to algorithms and their role in the legal profession. The second panel, conducted by Chloé Bakalar, an assistant professor at Temple University, and Erin Miller, a Harry A. Bigelow Teaching Fellow and lecturer in law at the University of Chicago School of Law, examined a case study addressing key ethical and policy issues that AI poses for the legal community. The final panel, moderated by University Professor Arthur Miller, addressed ways to ensure that the judiciary understands the automated decision systems already being used in civil and criminal cases.
Selected remarks:
Cassandra Carley: “While AI might sometimes try to replicate what human intelligence does, it’s not always going to be perfect. It’s going to be doing things differently, and algorithms are not going to be doing exactly what we do as humans, and we don’t do exactly the same things as algorithms do. So, as we use AI increasingly, it’s important to understand how the AI is working so that we can understand whether or not we want to use it.”
35:20-35:44
Chloé Bakalar: “So if clients are unaware about what kind of systems lawyers are using,… this can distort trust in the discovery process, with some believing that TAR [technology-assisted review] systems are better than they actually are, and some believing that they’re worse than they actually are.… What would it take to make TAR more trustworthy in the legal setting? Should we be thinking about client trust as we’re evaluating these technologies, and not just the responsibilities of lawyers?”
1:21:54-1:22:35
Arthur Miller: “Once you put discretionary factors up and down the algorithm production line and then multiply discretion in the decision-making, the notion that AI may help reduce bias or improve predictability or generate uniformity of treatment may be wrong.… It may well be that utilization of AI tools with all of those well-intentioned discretionary factors may exacerbate the very things you’re trying to minimize.”
47:16-48:03
Watch the full discussion on video:
Panel 1: Primer on AI
Panel 2: Princeton Dialogues on AI and Ethics – A Case Study
Panel 3: Educating the Judiciary
Posted January 22, 2020