
Why U.S. News Law School Rankings Are Lame: Part Deux


In my last post, I bemoaned the fact that USNWR’s law school rankings are performative rather than merely descriptive; that is, I claimed that the rankings create and reinforce a hierarchy of law schools rather than merely tracking an independently established one.

I’m going to do more of the same here, but now I want to focus on another of USNWR’s ranking criteria: the opinions of lawyers, hiring partners, and judges. These are especially important because they constitute one of the most heavily weighted criteria in the rankings.

Using these opinions as criteria appears to be quite sensible because it ensures that law school rankings don’t merely exist in a vacuum, but are instead sensitive to the combined wisdom of professionals who see representatives of these schools on a daily basis. Great, right? It’s good that they sent out all of those polls.

A couple of concerns, though…

Let’s forget about the fact that all lawyers and judges are necessarily biased because they attended a particular law school, for which they no doubt harbor sympathy (or antipathy; either way, feelings notably different from those they hold toward schools they didn’t attend). More concerning is the fact that most lawyers and judges know almost nothing about the inner workings of any law school, except perhaps the one they attended, potentially years ago.

In order to evaluate the relevance of these professionals’ opinions of law schools, we should try to understand how they arrived at them. Below I consider what strike me as the two most plausible sources: (1) inferences about the relative quality of law schools based on graduates they’ve met and (2) their own internal sense of which schools are best.

You might think that lawyers and judges come into contact with graduates of many law schools on a regular basis and, through evaluating them, have a basis to make evaluations of the law schools from whence they come. But you’d be wrong.

First, it’s not as if lawyers and judges meet hundreds (or even tens) of graduates from each of the top 100 law schools. Most lawyers and judges have little or no exposure to graduates of many of the country’s law schools. This isn’t through any fault of theirs; there are simply too many law schools for anyone to have met graduates of most, let alone all, of the top 50. Furthermore, one’s familiarity will tend to be with law schools ranked near one’s own. If you went to Yale, you probably know a bunch of people who went to Harvard, but you’re unlikely to know many people from the University of Idaho.

But even if there were lawyers or judges who’d met at least a few people from every one of the top 50 law schools in the country, such a sample would be woefully insufficient to license any inference about the school.

Consider the following: Through teaching LSAT courses to thousands of students, I might have met at least a few undergraduates from each of the top 50 universities in America (though even this might be a stretch). But I couldn’t make any inference about the relative quality of these institutions on the basis of these acquaintances. Nor, for that matter, could I if I knew 20 people from each school.

To make an inference about the quality of the school from having met its graduates, you’d really want to have met a large sample from each school. Furthermore, you’d want to interview your sample rigorously while trying to exclude other causal factors.

For example, your sample could be tainted because you just happened to meet the clever or torpid students from a particular school. My sample consists of the kind of students from each school who would consider LSAT prep, which by itself biases the group. Judges meet the kind of law school graduates who actually go to court (which is a small minority, btw…), and hiring partners at large firms generally meet a fairly small cross-section of graduates who meet the firm’s GPA and specialization requirements. In this way, any of these samples might be unrepresentative.

If you stop to think about it, it would be pretty hard to find a diverse enough sample of graduates to license inferences about the law school from which they hail. It is for similar reasons that we’re chastised for making inferences about a race, gender, or religion on the basis of a limited sample. And yet, USNWR actually treats lawyers’ and judges’ anecdotal impressions of law schools as a criterion for ranking.

Even if a judge or hiring partner did conclude that school X generally had better graduates than school Y, this might be to the credit of the better legal education the students received at X. But it’s equally possible that school X just chose its students from a better applicant pool than Y. If school X started off with far better students than Y, it might actually produce better graduates, even if the education provided was quite a bit poorer than Y’s.

So, even if there are some hiring partners around the country who steadfastly contend that Columbia Law School graduates prove to be better associates than Cardozo Law School graduates, this doesn’t imply that Columbia is to be credited for this difference. It seems equally (or more) likely that the better students went to Columbia in the first place because it was the higher ranked school.

By analogy, you’ll find better players in the NBA than in the corner pickup game, not because the NBA necessarily offers better training or development, but because to get into the NBA in the first place, players must already be better than the guys on the public court.

Thus, we get the following circularity: the hiring partner’s opinion that Columbia is the better school is used to justify Columbia’s higher ranking. But, it’s quite possible that the reason he believes Columbia is the better school is that it got better applicants in the first place. So, a school’s higher initial ranking attracts better applicants, who, in turn, make better impressions on hiring partners, whose opinions are polled to determine the school’s subsequent rankings.

But hitherto I’ve assumed that these professionals’ rankings of law schools are based on inferences about graduates they’ve met. The lawyers and hiring partners I’ve spoken with, however, inevitably have an internal sense of where various law schools rank that is not directly informed by inferences about those schools’ graduates.

Rather, these opinions seem to be based on what might be generously called “common knowledge.” Many lawyers and judges know the approximate ranking of law schools, not because they’ve conducted some systematic study of their own, but because they’ve read USNWR’s ranking or spoken with people who’ve been informed by it.

Either through direct study of the report, or through indirect acquaintance, almost every judge and hiring partner knows that Yale has long been considered the premier US law school. So it’s unsurprising that when they’re asked which law school they consider the best they say, “Yale.” Here again, USNWR uses the professionals’ opinions of law schools as a criterion in their ranking, but these very opinions are largely informed by their direct or indirect familiarity with prior versions of USNWR’s rankings.

Sure, there are other criteria that USNWR uses as well, and some of them aren’t as self-reinforcing. But the fact that this report influences the decisions of everyone from law school applicants to hiring partners underscores the extent to which it affects the phenomenon it seeks to study.

So, I’m going to smugly take myself to have established that there’s something self-reinforcing and unhealthy about USNWR’s law school rankings. In my next, and hopefully last, installment in this happy series, I’m going to discuss what this phenomenon should mean to you. It’s not as if you should disregard USNWR, right…?

Article by Trent Teti of Blueprint LSAT Preparation.