In a recent article, Professor Stephen Choi and his co-author note that Donald Trump “has said that he considered one of his greatest achievements in office to have been appointing a record number of judges.” In fact, the article adds, “Trump may have understated his achievement.”
That’s because Choi and Mitu Gulati of the University of Virginia School of Law have evaluated the performance of a set of appellate court judges appointed by President Trump and a comparable set appointed by Presidents Barack Obama and Joe Biden. They compiled Top 10 lists in three categories: opinion production (total number of reported majority, concurring, and dissenting opinions); influence (number of citations from outside the judge’s circuit); and independence (explained further below). The result: Trump-appointed judges dominated—each list contained just a single Democratic appointee.
Some may find this deep dive into ranking judges by Choi surprising. He is the Bernard Petrie Professor of Law and Business, co-director of the Pollack Center for Law & Business, and teaches Corporations and Securities Regulation. But if you look at Choi’s publications over the years, you’ll find that he has tackled topics ranging far afield from the world of shareholders and boards, and one area that has drawn his repeated attention is judicial performance.
In 2004, Choi, then at the University of California, Berkeley School of Law, and Mitu Gulati, then at Georgetown University Law Center, published “A Tournament of Judges?” in the California Law Review. Noting that “partisan bickering has resulted in delays in judicial appointments as well as undermined the public's confidence in the objectivity of justices selected through such a process,” Choi and Gulati set forth performance-based criteria to guide judicial selection. The criteria, they suggested, could be used to conduct a tournament, “where the reward to the winner is elevation to the Supreme Court.”
As co-authors, Choi and Gulati have revisited the topic numerous times, and “How Different Are the Trump Judges?,” posted online this fall, is their latest effort. Although the study was completed before the November election, their findings have taken on particular salience in the wake of Trump’s victory.
We asked Choi about his work on judicial performance.
Your first scholarship on judicial selection appeared two decades ago. What prompted that?
Mitu and I had to drive from Washington, DC, to a conference at the University of North Carolina. One of us had overheard a conversation in DC where two lawyers were arguing about whether X or Y federal court judge was better suited for a Supreme Court appointment. And their conversation was largely about where these people had gone to law school, whether they were on Law Review, their views on abortion, etc. During the car ride, we started talking about how strange that was. Both of these judges had been judges for almost a decade. Why look at how they had done in law school when we had data on how they had done for ten years as judges?
More broadly, it seemed that judges were being rated on very subjective criteria, including how “smart” is a particular judge. We were also concerned about the ability of a president to select a person to nominate for the Supreme Court with one motivation—likely political—while hiding this motivation with bland statements about how qualified that person is for the Court. During the car ride, we hit upon an idea: why not create a simple set of objective metrics against which to measure potential nominees to the Supreme Court against one another—a tournament of judges?
Our goal was not necessarily to have these metrics be the dispositive measures on which to select a justice. Instead, our goal was to use these metrics as a foil against which to measure bland and subjective statements about the qualifications for a judge. If a person states that a particular judge is great but the judge did poorly on our objective metrics, our hope was that this would put the burden on that person to demonstrate why, given the poor objective metrics. In other words, we created a tournament of judges to provide an information-forcing mechanism.
When you apply these objective metrics that you came up with, what do they show?
We think that these relative measures of what judges have done—for example, whether one judge had 50 reported opinions in a year while another had only five—suggest that, other things equal, the first judge is putting more effort into their opinions, since reported opinions likely take greater effort to produce. So if we care about judicial effort, this is information.
Similarly, if Judge X’s opinions were cited 300 times in 2020 and Judge Y’s opinions were cited three times, other things equal, that says something about which judge is having more influence. Of course, one has to try to make “other things equal,” but one can try to do that since the job of judging is remarkably similar across settings.
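The production and influence rankings Choi describes can be sketched in a few lines of Python. The judge names and counts below are entirely hypothetical illustrations, not figures from the study, and the study's actual data collection and adjustments are not reproduced here:

```python
# Hypothetical data: (name, reported opinions in a year,
# citations from outside the judge's own circuit).
judges = [
    ("Judge A", 50, 300),
    ("Judge B", 5, 3),
    ("Judge C", 28, 120),
]

# Production: rank by total reported majority, concurring,
# and dissenting opinions.
by_production = sorted(judges, key=lambda j: j[1], reverse=True)

# Influence: rank by out-of-circuit citations.
by_influence = sorted(judges, key=lambda j: j[2], reverse=True)

top_production = [name for name, *_ in by_production]
top_influence = [name for name, *_ in by_influence]
print(top_production)  # ['Judge A', 'Judge C', 'Judge B']
```

With real data, each sorted list would simply be truncated to its first ten entries to produce the Top 10 lists described above.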
Why was one of the scoring criteria you used independence—the “‘maverick’ nature of a judge,” as you write?
There are different ways of thinking about judicial independence. One way is to simply ask whether a judge decides cases without regard to their own partisan leanings. As a way of getting at this, we created a baseline: what is the fraction of active judges on a circuit court who are Republican-appointed judges? And then we can measure whether a particular judge dissents more or less against this baseline fraction of Republican-appointed judges. If partisanship matters, one would expect that a Republican-appointed judge will tend to dissent more against Democratic-appointed judges.
However, partisanship alone may not be a full measure of judicial “independence.” What about a judge who only writes one or two dissents a year compared with a judge who is more willing to take a stand against other judges and who writes numerous dissents—and concurrences—a year? For our measure of independence, we take both partisanship as well as willingness to write dissents and concurrences into account. We recognize, though, that people can differ in what they think counts as judicial independence. So we refer to our measure as one of a judge’s “maverick” nature to make clear this is our specific measure.
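The two ingredients Choi names, partisanship relative to the circuit baseline and willingness to write separately, can be illustrated with a toy scoring function. The combination below is an assumption for illustration only; the paper's actual weighting of these components is not reproduced:

```python
def maverick_score(dissents_vs_dem, total_dissents,
                   separate_opinions, circuit_gop_share):
    """Toy 'maverick' measure: penalize dissenting against the
    other party more often than circuit composition predicts,
    and reward overall willingness to write separately.
    (Hypothetical formula, not the study's actual one.)"""
    if total_dissents == 0:
        partisanship_gap = 0.0
    else:
        # If dissent targets simply tracked circuit composition,
        # this share of dissents would fall on Democratic appointees.
        expected = 1.0 - circuit_gop_share
        partisanship_gap = dissents_vs_dem / total_dissents - expected
    # Only excess partisanship (a positive gap) reduces the score.
    return separate_opinions * (1.0 - max(partisanship_gap, 0.0))
```

For example, a Republican-appointed judge on an evenly split circuit who aimed 6 of 10 dissents at Democratic appointees and wrote 12 separate opinions would score 12 × (1 − 0.1) = 10.8 under this toy formula, while a judge who rarely writes separately scores low regardless of partisanship.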
You note that in public statements President Trump emphasized that “his” judges would support his ideological preferences in areas such as religion, guns, and abortion. What can you say about Trump’s success on this front?
I think Trump has been successful in this regard. Among Trump’s promises were that his judges would be pro-religion—and implicitly pro-Christianity—and pro-guns, through the Second Amendment. We’ve done papers looking at outcomes in these areas, and on both fronts, Trump judges have vindicated his claims regarding how they would behave: ruling systematically in favor of religion—in particular Christianity but not Islam—and for guns in Second Amendment cases.
With a second Trump administration about to begin, what do you think that augurs for the federal judiciary?
We know that Trump did not directly select the judges appointed to the federal lower courts in his first administration. Many of the high-performing Trump judges in our study, at least from the early part of the first Trump administration, were the product of [recommendations from] White House Counsel Don McGahn with input from Leonard Leo, executive vice president of the Federalist Society. It’s hard to predict the direction the second Trump administration will take in terms of judicial selection, or whether Trump’s own interaction with the judicial system while out of office will affect who is appointed as federal judges. There is a lot of speculation about whether the Federalist Society will play as much of a role this time. It will be fascinating to see what happens.
Posted January 17, 2025