Mark Geistfeld named as Reporter for ALI’s new Principles of the Law on AI torts

The American Law Institute (ALI) has named Mark Geistfeld, Sheila Lubetsky Birnbaum Professor of Civil Litigation, to lead its newly launched project focused on the civil liability risks associated with artificial intelligence (AI). ALI produces scholarly work designed to clarify, modernize, and improve the law through its Restatements, Principles, and Model Codes.

Mark Geistfeld

ALI's initiative—Principles of the Law, Civil Liability for Artificial Intelligence—arrives as AI’s impact on society and a broad array of industries continues to widen. “Given the anticipated increase in AI adoption by many industries over the next decade, now is an opportune time for The American Law Institute to undertake a more sustained analysis of common-law AI liability topics through a Principles project,” ALI Director Diane Wood said in a statement. The project aims to provide guidance to courts, legislators, regulators, and businesses that are now grappling with the legal implications of AI.

“Courts are already facing the first set of cases alleging harms, largely related to copyright and privacy, stemming from chatbots and other generative AI models,” Geistfeld said in a statement, “but there is not yet a sufficient body of caselaw that could be usefully restated. Meanwhile, influential state legislatures are actively considering bills addressing AI, and Congress and federal regulators pursuant to President Biden’s Executive Order 14110 are also addressing these matters. These efforts could benefit from a set of principles, grounded in the common law, for assigning responsibility and resolving associated questions such as the reasonably safe performance of AI systems.”

In tapping Geistfeld, ALI draws on his extensive expertise in tort law, including his scholarship addressing common-law rules governing the prevention of and compensation for physical harms. He has authored or co-authored five books along with over 50 articles and book chapters, often showing how difficult doctrinal issues can be resolved by systematic reliance on the underlying legal principles. Geistfeld has previously explored AI-related tort issues, including the liability and insurance implications of autonomous vehicles, in publications such as “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation” in the California Law Review.

Geistfeld holds a PhD in economics from Columbia University, with highest distinction, and an MA in economics from the University of Pennsylvania. His primary teaching areas are torts, products liability, and insurance. He has also taught law and economics. Before joining the NYU Law faculty, Geistfeld worked as a litigation associate at Dewey Ballantine and Simpson Thacher and as a law clerk for Judge Wilfred Feinberg of the US Court of Appeals for the Second Circuit. He continues to stay involved in litigation practice, serving as an expert witness or legal consultant in tort and insurance cases.

Geistfeld is a senior editor of the Journal of Tort Law and has served as an Adviser to ALI’s Restatement of the Law Third, Torts: Concluding Provisions and its Restatement of the Law Third, Torts: Medical Malpractice. He is often a referee for peer-reviewed scholarly journals, university presses, and governmental funding agencies.

According to ALI, the Principles of the Law initiative led by Geistfeld will center on tort problems of physical harms—such as injury or property damage—linked to AI, while other ALI projects focus on copyright, privacy, and defamation issues stemming from AI. “There are certain characteristics of AI systems that will likely raise hard questions when existing liability doctrines are applied to AI-caused harms,” Geistfeld explained in a statement. “Examples include the general-purpose nature of many AI systems, the often opaque, ‘black box,’ decision-making processes of AI technologies, the allocation of responsibility along the multi-layered supply chain for AI systems, the widespread use of open-source code for foundation models, the increasing autonomy of AI systems, and their anticipated deployment across a wide range of industries for a wide range of uses.”

Posted October 22, 2024