Artificial intelligence is reshaping how law is practiced, interpreted, and taught, and Cornell Law School is making sure lawyers are leading that change. In this Q&A with Jed Stiglitz, Director of the Center for Law and AI, Associate Dean for Academic Affairs, and Richard and Lois Cole Professor of Law, we explore how the Center is bringing together experts in law, technology, and ethics to ensure AI serves justice, fairness, and accountability. Through innovative courses and cross-disciplinary research, the Center is preparing students to navigate and shape the evolving legal landscape in the age of AI.
What inspired the creation of the Center for Law and AI, and what do you see as its main purpose?
The Center grew out of a simple recognition: artificial intelligence is transforming how law is practiced, made, and studied, and lawyers must help guide that transformation rather than merely react to it. Cornell created the Center to bring legal reasoning and technological expertise into conversation with one another. Its purpose is to prepare students for a profession in transition, support scholarship that uses and examines new technologies, and help ensure that the law serves as a source of fairness, trust, and democratic oversight.
What role should lawyers and scholars play in shaping the use of AI?
Lawyers and legal scholars have a dual role to play. On one hand, we need to understand and help design the ways AI is deployed in legal institutions, from discovery and compliance tools to adjudication and lawmaking itself. On the other hand, we have to serve as the conscience of that transformation. We should ask what values are embedded in these systems, what biases they may reproduce or amplify, and how they can be regulated and made accountable.
For lawyers in practice, that means becoming conversant with how AI systems work, and with the boundaries of the technology's competence, so they can identify legal and ethical issues. For scholars, it means bringing doctrinal, historical, and empirical insight to questions that technologists can't fully answer alone. What does fairness mean in law? What counts as reasoning or explanation?
How is the Center helping Cornell Law students build the skills and awareness they’ll need to practice in an AI-driven world?
We're helping students become not just users of AI, but leaders in shaping how it's used responsibly. That means giving them both technical literacy and legal perspective. We hope students develop an understanding not only of what AI can do, but also of what it *should* do.
Throughout the first-year curriculum, we integrate the use of, and critical engagement with, AI tools. Our year-long legal research and writing course asks students to identify and examine the capabilities and limits of AI tools, both general purpose and specific to the legal domain. And students learn to critically question an algorithm's reasonableness and fairness, just as they would a precedent.
Our goal is for Cornell Law graduates to be professionals who can ensure that innovation strengthens justice and our shared values.
AI raises important questions about fairness, bias, and accountability. How is the Center encouraging students and researchers to think about those issues?
Those questions are at the heart of our concerns. Fairness, bias, and accountability are core to law’s contribution to the AI conversation. Every major technical question about AI has a corresponding legal and ethical one, too.
In our courses, students interrogate the role and biases of AI in a variety of legal domains: in armed conflict, in healthcare, and more generally in a course that challenges students to critically evaluate legal information.
Researchers in the Center also focus their work on bias and accountability in AI. To take an example, Frank Pasquale, a law professor at Cornell Tech, is a world-class pioneer in thinking about secrecy and transparency in algorithms—what the black box of algorithms means for society, and how in turn society should address and regulate it.
The Center brings together faculty from across Cornell. How do collaborations between law, computer science, and other fields help tackle the complex challenges AI presents?
AI’s challenges don’t fit neatly into existing disciplines. They implicate the intersection of law, computer science, ethics, social science, and policy. When lawyers, engineers, and social scientists work together, they see different parts of the same problem.
One interdisciplinary project I worked on, for example, used AI to understand historical patterns in legal philosophy. To do that project, we needed an understanding of how the models worked and which models would be effective, and also domain knowledge about our judicial system and legal philosophies. The practical limits of the models often derive from the data and from domain understanding, not from constraints in the models themselves. So, to run an effective project, we needed both legal and computational perspectives.
There are also many examples in the field where, to carry out a legal and ethical project, you need to call on expertise in different areas. To take two much-discussed areas, consider predictive policing and hiring algorithms. People use algorithms to predict who will be a good hire or who will commit a crime. Those predictions can carry real consequences for people's lives, and they raise difficult legal and ethical problems. To answer not just what the models can do, but what they should do, legal perspectives can be indispensable.
Why is it important for lawyers to understand how AI systems are built and for technologists to understand how the law works?
Law and AI are increasingly co-authors with us in our lives, so each needs to understand the other. Lawyers who see how AI systems are built can also see where legal and ethical principles must be integrated into design. Technologists who understand the law learn to treat transparency and accountability as design goals rather than irritating compliance hassles. The goal is to advance technology while preserving our shared moral and legal commitments, and for that we need to collaborate.
What kinds of courses will the Center now allow students to take that will prepare them for the ever-changing legal landscape?
Cornell Law students are already seeing the curriculum evolve to meet the realities of technological change. Courses like AI Law and Policy give students the tools to understand both how these systems work and how they should be governed. Our first-year lawyering program focuses on the nuts and bolts of how practitioners should (and should not) use AI for legal tasks. The Law School's Copyright and Intellectual Property courses, likewise, examine novel but pressing questions about the ownership of information in the AI era. Virtually every substantive course, from those touching on healthcare to the law of armed conflict, has reoriented to confront new AI-based questions in its domain. The goal isn't to turn law students into engineers, but instead to prepare them to be professionally responsible users of AI and to be leaders in a world where law and AI work in tandem.