Lex Machina

NYU Law faculty and alumni assess the impact of artificial intelligence on law and the world.

BY MICHAEL OREY

ILLUSTRATIONS BY OPENAI’S DALL-E

During the 2022–23 academic year, just two events on the NYU Law calendar mentioned artificial intelligence (AI) in their titles. In 2023–24, 21 events did. These events tackled a wide range of topics, among them the regulation of AI in Brazil, how AI is shaping corporate governance, and how it might advance diversity, equity, and inclusion (by detecting discrimination and closing pay gaps, for example).

It’s not that engaging with issues related to AI is new at NYU Law. In 2013, panelists explored issues related to algorithmic decision-making at a conference organized by NYU’s Information Law Institute (ILI)—one of the earliest examples of a law school event examining AI-driven computing, according to Alfred B. Engelberg Professor of Law and ILI Director Katherine Strandburg. In 2018, in a post on an American Civil Liberties Union blog, Associate Professor of Clinical Law Vincent Southerland discussed concerns that AI in the criminal legal system would exacerbate existing biases. Since 2021, Segal Family Professor of Regulatory Law and Policy Catherine Sharkey has served on the Roundtable on Artificial Intelligence in Federal Agencies, formed by the Administrative Conference of the United States to help agencies develop and improve protocols and practices for using AI tools.

But the advent of ChatGPT and other rapidly evolving generative AI platforms—which can produce text, images, and other content—has raised a host of pressing new questions about AI’s potential impact on society and the law. When he started as a 1L in 2022, Youssef Aziz ’25 had no particular interest in the technology. “But then,” he says, “ChatGPT came around, and I was like ‘Wait, this is actually super, super important.’” In a clinic as a 2L, he jumped at the opportunity to focus on AI. “Now it’s everywhere,” he says.

In response, NYU faculty members in multiple fields—criminal justice, national security, intellectual property, and torts, to name a few—are pursuing work on AI-related matters, including scholarship, regulatory filings, and reports for government institutions. Across these areas are some common themes. One is that AI holds both promise and peril. The systems that may discover new antibiotics, notes Walter E. Meyer Professor of Law Stephen Holmes, can also be used to create novel biological warfare pathogens for which there are no defenses.

Then there is the speed at which AI technology is evolving and how hard it can be for academics or policymakers to keep up. Last fall, Christopher Jon Sprigman, Murray and Kathleen Bring Professor of Law, submitted comments on AI to the US Copyright Office. “You’re really nervous when you’re doing that,” he says, “because it’s based on an understanding of the technology on that Thursday, and you wonder if by the next Monday, something important is going to change.”

In January, at an NYU Law Forum on AI in law practice, Latham & Watkins partner Ghaith Mahmood ’07, a member of the firm’s AI task force, walked the audience through some of the basics. AI, he said, “basically means any system that allows a computer to mimic human intelligence.”

The system that first defeated a reigning human chess champion, in 1997, was a so-called expert system, he noted, meaning that human experts programmed it with rules to apply when facing off against an opponent. Nearly two decades would pass before AI began beating top human players of the game Go, which has vastly more possible board positions than chess. To determine its moves, this AI employed machine learning to develop its own guiding rules, represented as statistical models built from its analysis of vast troves of data (in this case, previously played games of Go).
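
In rough programming terms, the contrast Mahmood described might look something like the sketch below: a minimal, hypothetical example (not drawn from his talk) in which one function picks a move by applying rules a human wrote by hand, while the other scores moves using statistics fitted to past games. All function names and data here are invented for illustration.

```python
from collections import Counter

# 1) "Expert system" style: a human expert writes the decision rules directly.
def expert_system_move(board_state):
    """Pick a chess move by applying hand-coded rules in priority order."""
    if board_state.get("can_checkmate"):
        return "deliver checkmate"
    if board_state.get("opponent_piece_hanging"):
        return "capture the hanging piece"
    return "develop a piece toward the center"

# 2) Machine-learning style: no rules are written down; a statistical model
#    is fitted to a collection of past games and then used to score moves.
def fit_move_model(past_games):
    """Toy 'training': estimate each move's win rate from past games."""
    wins, plays = Counter(), Counter()
    for moves, won in past_games:
        for move in moves:
            plays[move] += 1
            if won:
                wins[move] += 1
    return {move: wins[move] / plays[move] for move in plays}

def learned_move(model, candidates):
    """Pick the candidate move the fitted model scores highest."""
    return max(candidates, key=lambda move: model.get(move, 0.0))

model = fit_move_model([(["e4", "Nf3"], True), (["d4", "c4"], False)])
print(expert_system_move({"opponent_piece_hanging": True}))  # a hand-written rule fires
print(learned_move(model, ["e4", "d4"]))                     # "e4" has the higher win rate
```

The second approach is what eventually let machines surpass top Go players: the guiding statistics come from the data rather than from a programmer's rulebook.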

Top of mind for many students is the degree to which AI—particularly large language models (LLMs) at the heart of generative AI—may take over work now done by lawyers. Florencia Marotta-Wurgler ’01, Boxer Family Professor of Law, is pursuing a project in an area in which she has done extensive scholarship—internet privacy policies—that may shed light on AI’s impact on legal practice. In a working paper, “Can LLMs Read Privacy Policies as Well as Lawyers,” Marotta-Wurgler and her former student David Stein ’22 describe preliminary results of work they did with NYU Law students to create a coded dataset to benchmark LLMs’ ability to analyze and interpret consumer privacy policies. (A research scholar at NYU Law in 2023–24, Stein is now an assistant professor of law and computer science at Northeastern University.)
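
The benchmarking logic the working paper describes, comparing a model’s answers against a lawyer-coded reference set, can be sketched in rough outline as follows. This is purely illustrative: the example rows, labels, and the ask_llm placeholder are hypothetical and are not taken from the authors’ actual dataset or pipeline.

```python
# Hypothetical benchmark rows: (policy excerpt, question, lawyer-coded answer)
benchmark = [
    ("We may share your data with our affiliates.",
     "Does the policy permit sharing data with third parties?", "yes"),
    ("We do not sell personal information.",
     "Does the policy permit the sale of personal data?", "no"),
]

def ask_llm(policy_text, question):
    """Placeholder for a call to whichever LLM is being evaluated."""
    raise NotImplementedError("swap in a real model call here")

def agreement_rate(rows, ask):
    """Share of questions on which the model matches the lawyers' coding."""
    matches = sum(1 for policy, question, answer in rows
                  if ask(policy, question) == answer)
    return matches / len(rows)

# agreement_rate(benchmark, ask_llm) would then report how closely the model's
# reading of the policies tracks the lawyer-coded answers.
```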

AI may be very good at certain tasks, Marotta-Wurgler says, such as identifying whether a contract has a class-action waiver. But that leaves plenty of room for lawyers, she notes. “A lot of legal text, a lot of the practice of law, exists in gray zones, where there is no clear answer. I think that the challenging and interesting part of law exists in these gray areas, which AI should identify as gray, but doesn’t necessarily solve,” she says. At an “Introduction to AI for Law Students” discussion in March, Stein told attendees, “No graduate of this institution needs to worry that AI is going to steal [their] job.”

AI-generated image of machinery in a law-themed room

The infiltration of AI into law practice and the legal system raises other issues. This past year, students in the Legal Empowerment and Judicial Independence Clinic, taught by Professor of Clinical Law Margaret Satterthwaite ’99, prepared case studies on how the use of AI tools by judges and lawyers around the world comports with human rights requirements that entitle people to competent, independent, and impartial courts. Does this include the right to have a case decided by a human judge? Aziz and Andrea McGauley ’25 studied this question—which is so novel, Aziz says, that one of the few areas they could look to for analogy was autonomous weapons. Drawing on ethical principles developed for robots and with an eye toward international humanitarian law, experts have said the weapons should always be subject to “meaningful human control.” Aziz and McGauley examined whether it makes sense to apply a similar standard in a judicial context. Satterthwaite, who is the United Nations special rapporteur on the independence of judges and lawyers, plans to incorporate work by her clinic students into a report on AI in judicial systems that she will deliver to the UN in 2025.

The Law School’s Center for Human Rights and Global Justice (CHRGJ) looked at human rights implications of AI more broadly through a series of essays on the topic in 2023. In the opening essay, Professor of Clinical Law and CHRGJ Chair César Rodríguez-Garavito comments on the ambivalence many feel about the technology. Generative AI, he writes, “can increase misinformation and lies online, but it can also be a formidable tool for legally exercising freedom of expression; it can protect or undermine the rights of migrants and refugees, depending on whether it is used to monitor them or detect patterns of abuse against them; and it can be useful for traditionally marginalized groups, but it also increases the risks of discrimination against communities like the LGBTQI+, whose fluid identities do not fit into the algorithmic boxes of artificial intelligences.”

Others at the Law School are looking at the increasing use of AI-based technology in policing. In a forthcoming Virginia Law Review article, Barry Friedman, Jacob D. Fuchsberg Professor of Law, and University of Virginia Law Professor Danielle Keats Citron write that law enforcement agencies are acquiring “vast reservoirs of personal data” on Americans and then using AI-derived tools “to develop vivid pictures of who we are, what we do, where we go, what we spend, who we communicate with, and much, much more.” Friedman and Citron call for this practice to cease, “at least until basic rule of law requisites are met.”

In a 2023 UCLA Law Review article, Southerland looked at a variety of police surveillance technologies that rely on AI, including facial recognition, gunshot detection tools, and license plate readers. Southerland discusses legal approaches that communities might use to resist and abolish the use of such technologies, which, he writes, “are disproportionately wielded against economically disadvantaged communities of color, infringe on privacy, and tend to operate under a veil of secrecy.” An initiative underway at NYU Law’s Policing Project, of which Friedman is founding director, is developing a regulatory structure for police use of AI.

Few areas of law stand to be more directly implicated by generative AI than copyright. Trained on massive datasets of text and images scraped from the internet, systems like ChatGPT, DALL-E, Synthesia, and Boomy generate content—including reports, poetry, images, videos, and music—in response to text prompts. Lawsuits alleging that both data harvesting and generated material violate copyrights are raising questions both novel and familiar. A 2019 Columbia Law Review article co-authored by Walter J. Derenberg Professor of Intellectual Property Law Jeanne Fromer and her former student Mala Chatterjee ’18 (now an associate professor at Columbia Law School) anticipated a question that AI has thrust into the spotlight: whether machines have the volitional mental state required to obtain rights or incur liability under copyright law. (Perhaps, the article suggests, a “conscious mental state” should be required for obtaining rights, but mere functionality suffices for liability.)

More recently, Fromer has joined the advisory board of Metaphysic, a company producing what she calls “ethical deepfakes” in conjunction with leading figures and entities in Hollywood. The role, Fromer notes, “involves lots of thinking on AI, copyright, and the right of publicity.”

Does AI call into question some of the very foundations of copyright? Emily Kempin Professor of Law Amy Adler has long questioned the need for copyright protection for the visual arts, where, she says, a “norm of authenticity” in the marketplace already places a far higher value on original works. The debate over copyrightability of AI-generated images, Adler writes in a working paper, “turns on the long-standing premise in copyright law that we can separate ‘man’ from ‘machine’ in authorship.” This distinction, Adler says, “enshrines an increasingly irrelevant and outmoded model of creativity that was dated since its inception and has now become untenable.” Writing in a special volume of the MIT Press journal Grey Room on “Art beyond Copyright” (coedited by Adler), Professor of Clinical Law Jason Schultz predicted that questions raised by generative AI—what is art? who creates it?—will lead to “a profound destabilization of copyright law.”

Sprigman is more worried about the reverse. “I think it’s much more likely that copyright law will break AI than AI will break copyright law,” he says. The development of AI will be stifled if courts conclude that use of copyrighted material to train AI systems constitutes infringement, Sprigman argues. Such training, in his view, is fair use, as he and two other law professors argued in comments submitted to the US Copyright Office last year. A primary reason: employing content for training is not the kind of “expressive” use that triggers legal protection. While many worry about AI’s rapid development outpacing legal rules, Sprigman actually favors a lag. “I’m not sure it’s absolutely essential that law keep up,” he says. “What’s essential is that law not destroy something before it understands it.”

There is no shortage of dystopian scenarios in which AI presents an existential threat to how humans live. Edward Rock, Martin Lipton Professor of Law, points out that the founders of two of the biggest players in the field, OpenAI and Anthropic, “like many in the field, are of the view that, while the potential benefits of AI are enormous, there’s a nontrivial chance that AI could lead to the extinction of the human species.” Both companies created unique governance structures designed to direct development of AI that is safe and beneficial to humanity. Last spring, NYU Law’s Institute for Corporate Governance & Finance (ICGF)—of which Rock is a co-director—joined Wilson Sonsini Goodrich & Rosati, the Silicon Valley–based law firm that created those structures, to co-host two conferences discussing them.

Rock sees AI as straining the capacities of such measures. “If the risk of extinction from generative AI is as high as some say, it is hard to see how any standard or bespoke corporate governance structure will be adequate to control that risk,” Rock says. “That is not what corporate governance is designed to do.”

Studying the war between Russia and Ukraine, Stephen Holmes points to some specific and disturbing risks. The idea of “meaningful human control” over autonomous weapons (which Aziz had looked to in researching the right to a human judge) is quickly slipping away, he says. The Ukrainians, Holmes notes, were making effective use of drones until the Russians figured out how to jam connections between drones and their operators. The Ukrainians have responded, he says, by giving drones full autonomy to identify and then kill or destroy a target. The prospect worries Holmes, whose scholarship has focused on geopolitical conflict. The US, he says, is the only country that has committed to always having a human in the loop for deployment of nuclear weapons. “So, what help is that to us?” he asks. “Not very much.”

AI’s implications for our political system are already being felt. When in 2020 people were warning that AI-produced deception and disinformation could threaten democracy, “it really seemed like a sci-fi future,” recalls Lawrence Norden, senior director of the Elections & Government Program at NYU Law’s Brennan Center for Justice. But the subsequent rollout of generative AI platforms, including those capable of creating deepfake audio and video, he says, has led him to think, “Hey, maybe the future is here.”

In advance of the 2024 US election, Norden and his colleagues have conducted tabletop exercises with election officials—including in key swing states Arizona, Michigan, and Pennsylvania—to prepare them for AI threats. On their checklist of defensive measures are adoption of cybersecurity best practices and preparation of rapid-response communications plans.

And leaving aside the worst-case scenarios of AI-induced global catastrophe, what happens when an AI-based system harms someone in a more everyday accident? In a Q&A in the spring 2024 newsletter of the American Law Institute (ALI), Mark Geistfeld, Sheila Lubetsky Birnbaum Professor of Civil Litigation, notes that many scholars think that the “‘black box’ nature of AI decision-making would seem to make it virtually impossible to prove issues like ordinary negligence [and] defective product design.”

But Geistfeld disagrees, pointing to a liability regime he has proposed for autonomous vehicles (AVs). If one is involved in a crash, he told the ALI, its safety performance can be evaluated “on a systemwide basis because there is in effect one driver (the operating system) for the entire fleet of vehicles.” Determining liability by looking at crash statistics for the fleet “doesn’t require an understanding of why a particular vehicle crashed on a particular occasion,” he said.

Sharkey similarly thinks that an established product liability framework—specifically one for medical devices—can address potential harms caused by AI. In a spring 2024 paper in the Columbia Science & Technology Law Review, Sharkey notes that in recent decades the Food and Drug Administration (FDA) has approved approximately 700 AI-enabled medical devices, some of which “learn” and adapt based on data input. The FDA, in turn, has adapted its regulatory process—for example, extending its device regulation to govern medical software.

Importantly, in Sharkey’s view, this premarket FDA review is augmented by the tort system, allowing harmful products to be identified—and the cost of harm allocated—after they are in use. Geistfeld likewise advocates for a regulatory scheme for AVs that combines agency oversight with tort system liability.

As many wonder whether it will be possible to manage the risks of AI through regulation, Strandburg takes a long view. She notes that following the introduction of the telephone and the internet, policymakers took measures to address concerns raised by the technologies—and that now, with social media, they are at least exploring doing so. It’s a positive development, she says, that discussion of regulation of AI has already begun.

“It’s hard to know how to do the regulation when something is very new,” Strandburg acknowledges. “But I’m a person who thinks that because we can’t do it perfectly doesn’t mean we shouldn’t do it. We have to try.”

Alumni Perspectives

NOAH WAISBERG ’06

Co-founder and CEO of Zuva, whose AI platform helps clients extract and use information from contracts; co-author of AI for Lawyers

“If you think about the legal market right now, it’s actually characterized by scarcity. There are a million lawyers in the United States, but most people are not represented, particularly on the poorer end, so you’ve got an access to justice crisis. Even at the mid- to high end, I think there are massive amounts of legal work that [are not being done] right now because it’s not a problem that’s economical to solve with the current system. So I think [AI] can make a lot of problems solvable that weren’t solvable before.”

ALEX ELIAS ’12

Co-founder and CEO of Qloo, which uses an AI-powered platform to provide consumer taste data and recommendations to businesses

“Thinking of AI in the same way as we think about computers is fundamentally flawed. The former is more probabilistic, whereas the latter are deterministic machines. Providing a computer with an input 100 times will always yield the same result. Doing the same with AI can result in countless different outputs. This is what makes AI so useful, but also why it is more challenging to regulate. AI models will always have a level of inexplicability—trying to fully control what happens inside the ‘black box’ is a fool’s errand. Instead, regulators should focus on what goes into the box and how the outputs are utilized.”
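
Elias’s contrast between deterministic computers and probabilistic AI can be illustrated with a toy sketch (assumed purely for illustration, with no connection to Qloo’s systems): a conventional function returns the same output on every call, while a sampling step of the kind generative models perform token by token can return something different each time.

```python
import random

def deterministic(x):
    """A conventional computation: the same input always gives the same output."""
    return x * 2

def sample_completion(prompt, vocabulary=("contract", "statute", "tort")):
    """A generative-style step: the continuation is drawn from a probability
    distribution, so repeated calls with the same prompt can differ."""
    return f"{prompt} {random.choice(vocabulary)}"

print([deterministic(3) for _ in range(3)])                            # [6, 6, 6]
print([sample_completion("The court applied the") for _ in range(3)])  # varies run to run
```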

ESOSA OHONBA ’23

Founder and legal AI advisor of Layman Ltd., which uses AI to help users represent themselves in civil court

“I think it’s wrong to say that AI will replace lawyers. Instead, AI demands that we think bigger about the future of law and how we want to shape it. However, with great power comes great responsibility: AI’s automated decisions impact real lives. If we’re not vigilant, existing biases and prejudices will warp what AI contributes to law—cementing divides instead of breaking them. Responsibility for accountability is now shared among us all, and it’s up to us to ensure that AI improves the legal system, not undermines it.”

JENNIFER BERRENT ’00

CEO of Covenant Law, whose AI tools perform document review for investors

“Technology is always a double-edged sword. On the one hand, word processors made legal work so much easier—drafting and editing documents took a fraction of the time they had when markups had to be handwritten and then typed up by pools of secretarial staff. On the other hand, removing those barriers made it so much easier to tinker with documents, commenting or editing where previously it would not have been worth the effort. I wonder whether this has actually, to some extent, proliferated complexity instead of reducing it—making negotiations longer and more stressful as lawyers take advantage of the technology not to simplify things but to argue over the minutiae. One fear I have for AI is that it could compound this issue if applied in the wrong way!”

Michael Orey is public affairs director at NYU Law. Alumni interviews conducted by assistant dean and chief communications officer Shonna Keogan and former public affairs officer Emily Rosenthal.

Posted on September 10, 2024