Am I the poster child for why the USNWR rankings are bad?

There I was this morning, happily posting a blog entry about shared governance on my own site, when I noticed the TaxProf Blog post about the National Jurist's cover story on the rankings. Then I read the first paragraph (the part about my resignation).

To the extent that I've become the poster child for how schools can use the USNWR rankings for other ends, so be it. (I could live without yet another reporter who wasn't at the meeting describing what it was like; for another take on the meeting, see this article, which is no longer on the Houston Chronicle's website.) Footnote 56 of Not Quite “Them,” Not Quite “Us”: Why It’s Difficult for Former Deans to Go Home Again gives a rather more complete description of what prompted my resignation.

The point is that the USNWR rankings can be used for all sorts of purposes, as the recent AALS Workshop on the Ratings Game indicated. They can be used to provide imperfect information for prospective students and prospective faculty candidates. They can be used as an imperfect way of benchmarking, as Andy Morris & Bill Henderson ably demonstrate in Measuring Outcomes: Post-Graduation Measures of Success in the U.S. News & World Report Law School Rankings. They can be used--in much the same way that first-year law students use first-semester grades--as an artificial way of measuring relative strengths, with those schools and students receiving low grades tending to feel worse about themselves than they should. (Those tiny and irrelevant differences among schools within any given cluster remind me of Ellen DeGeneres's bit about the difference between airline seats in their upright position and in their reclined position.) And they can be used as surrogates for other motives.

Poster child for improper use of the USNWR rankings? Sure, as long as people learn the appropriate lesson: using the USNWR rankings as the sole justification for any serious decision about law school policy leads to all sorts of collateral consequences.