Supreme Court clerkships and law teaching

Linda Greenhouse's recent New York Times article on the relative scarcity of female Supreme Court clerks has caused, to say the least, quite a stir among lawyers and law professors. The Greenhouse article extends an inquiry begun earlier this summer by Feminist Law Professors, Prettier Than Napoleon, and the Volokh Conspiracy. Since the publication of the Greenhouse article, the Georgetown Law Faculty Blog, SCOTUSblog, Sentencing Law and Policy, the Volokh Conspiracy (again), and the WSJ Law Blog have all entered the fray. Prettier Than Napoleon helpfully reminds us that Wikipedia's list of Supreme Court clerks should enable a curious empiricist to test the many hypotheses now swirling around the issue.

True to its mission, MoneyLaw will now make an observation or two -- strictly on the academic worth of the Supreme Court clerkship. The question of the balance between men and women in this elite corps will wait.

The Supreme Court clerkship remains the most elite credential available to an American lawyer. Law firms are willing to pay substantial bonuses to associates who bring that experience -- or perhaps just its cachet -- to their work. But to what extent does a Supreme Court clerkship predict success in legal academia?

I strongly suspect that the Supreme Court clerkship, in the mind of a MoneyLaw-minded academic talent scout, has become the law school equivalent of the 270-foot dash that Billy Beane won when he entered the baseball draft. The story is vividly recounted in Michael Lewis's Moneyball.

The 270-foot dash measures raw speed, specifically over the maximum distance that a baseball player is likely to run on an ordinary play. It's nice to be that fast, and speed over 270 feet translates into more triples and more reliable scoring from first base on doubles hit by a player's teammates. On even rarer occasions, speed over 270 feet means scoring from first off a single (in a play most famously associated with Enos Slaughter). The related skill of covering 360 feet with extreme celerity raises the probability, however slightly, of the inside-the-park home run.

But these baseball plays are spectacular precisely because they are rare. As a result, the 270-foot dash measures something that is probably more salient in the mind of the talent scout than it is relevant to the business of trading runs for outs. Billy Beane finished his major league career with more strikeouts than hits (80 to 66) and a woeful OPS of .546. OPS, by the way, stands for On-base percentage Plus Slugging percentage. Baseball traditionalists will more readily understand Billy Beane's lifetime .219 batting average, dangerously close to the Mendoza line and flatly unacceptable for an outfielder. It was no fluke; Billy took the better part of six seasons to compile this wretched record.
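For readers who would rather see the arithmetic than take it on faith, here is a minimal sketch of how these rate statistics are computed. The formulas are the standard ones; the only figure not taken from this post is the at-bat total, which is merely back-calculated from the stated 66 hits and .219 average.

```python
# A minimal sketch of the rate statistics mentioned above. The formulas are
# standard; the at-bat figure below is only inferred from the .219 average
# and 66 hits quoted in the post, not taken from an official source.

def batting_average(hits: int, at_bats: int) -> float:
    return hits / at_bats

def on_base_pct(hits: int, walks: int, hbp: int, at_bats: int, sf: int) -> float:
    # OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    return (hits + walks + hbp) / (at_bats + walks + hbp + sf)

def slugging_pct(total_bases: int, at_bats: int) -> float:
    # SLG = total bases / at-bats
    return total_bases / at_bats

def ops(obp: float, slg: float) -> float:
    # OPS is simply on-base percentage plus slugging percentage
    return obp + slg

# 66 hits at a .219 clip implies roughly 301 at-bats
print(round(batting_average(66, 301), 3))   # ~0.219
```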

If the foregoing is sabermetric gibberish to you, no amount of linking now will help you. Perhaps I shall explain in a future MoneyLaw post. Suffice it for the moment to observe that Billy Beane, first-round bonus baby, winner of the 270-foot dash at his combine, basically ... pardon my French while I search for le mot juste ... sucked.

This is not to suggest that the Supreme Court clerkship should be devalued altogether as an academic credential. Nor would I conclude that the clerkship hangs like an albatross around the neck of a law professor so unfortunate as to have spent a year of her or (more likely) his life working at 1 First Street N.E., Washington, DC 20543. Like any other factor that correlates only weakly, if at all, with ultimate success, the 270-foot dash, the Supreme Court clerkship, the newly fashionable brand name Pee-Aitch-Dee, and other rough guides to future performance are just that: rough guides. For every Billy Beane, there are other first-round draft picks whose careers have resembled that of B.J. Surhoff (overpaid mediocrity), Chipper Jones (marginal Hall of Fame candidate), or Alex Rodriguez (probable Hall-of-Famer, barring injury). So it is in law and law teaching. Predicting 40 years of productivity on the basis of an individual's appeal to a Supreme Court Justice at the age of 27 or 28 is at best a perilous pursuit.

Bulls and bears in law teaching


Dan Burk, an astute spotter of trends in legal scholarship, reminds me that Sara K. Stadler's frolic through legal pedagogy, The Bulls and Bears of Law Teaching, deserves a closer look. And so it does:
This Essay provides readers with a unique perspective on the world of law teaching: Employing a quirky methodology, Professor Stadler predicts which subjects are likely to be most (and least) in demand among faculties looking to hire new professors in future - rating those subjects, like so many stocks, from "strong buy" to "weak buy" to "weak sell" to "strong sell". To generate the data on which her methodology is based, Professor Stadler catalogued, by subject, almost every Article, Book Review, Booknote, Comment, Essay, Note, Recent Case, Recent Publication, and Recent Statute published in the Harvard Law Review between and including the years 1946 and 2003. In the end, she found an interesting (and, she thinks, predictive) relationship between the subjects on which faculty choose to write and the subjects on which students choose to write.
Invest well, MoneyLaw readers. Just remember that past performance is no guarantee of future performance.

Beyond ratings: Actually doing our jobs

The Conglomerate has rightly devoted careful attention to Larry Garvin's recent SSRN post, The Strange Death of Academic Commercial Law. Christine Hurt and Vic Fleischer have each posted thoughtful proposals for reconfiguring the law school curriculum to bring this venerable and valuable subject back to legal academia.

Vic's suggestion warrants further elaboration in this forum. There is something to be said for reconfiguring the law school curriculum -- especially a third year that is as widely wasted as it is dreaded -- according to the functional needs of new lawyers rather than the intellectual predilections of sinecured professors or, even worse, those professors' personal convenience.

Short of a comprehensive restructuring of the upper-level law school curriculum -- which after all is the sort of proposal that sinks tenure petitions, ends deanships, and generally withers otherwise promising academic careers -- perhaps we can consider a more modest intermediate step. Every law school student should complete a six-credit, two-semester "capstone" sequence as part of her or his third-year experience. Relying strictly on my personal arsenal of curricular weapons, I could conceivably offer full-year sequences in economic regulation (from antitrust to full-blown, command-and-control regulation of entry and rates), agricultural law and agribusiness law, the law of disasters, or natural resource and public lands management, among other possibilities. These are not offerings that lend themselves to a single 2-, 3-, or 4-credit course. In the tradition of, say, sports and entertainment law, they undertake to explain an entire way of doing business and to integrate such bodies of law as may be pertinent -- all from a prospective client's perspective rather than the professor's idiosyncratic view of the field. Team teaching, skills training, and clinical experience can all be incorporated into this capstone sequence.

Of course, an academy that is paralyzed by fear of The Ratings will be loath to try something different, no matter how sensible or how useful the alternative might be. The question therefore must be asked: To what extent does mastering the art of winning an unfair academic game keep us from doing our jobs?

Notes from the Front Lines (i.e., the Faculty Hiring Committee)

OK, so I spent much time recently (when I perhaps should have been working on my paper on monument law) going through the Faculty Appointments Register. And, as in years past, I'm very impressed. Lots and lots of very smart, accomplished people with terrific publications, circuit court clerkships, law review experience, and strong practice experience. Our friends over at PrawfsBlawg are talking about this as well.

I have the feeling that, were I competing with these folks as a 20-something Ph.D. candidate, as I was back in 1993, I wouldn't be hired at Alabama. (Oh, yeah, I wasn't hired at Alabama as a rookie -- so I guess I can be pretty sure: I wouldn't get an entry-level job here today!) Each year, it seems to me, the market gets tougher, though that's rank speculation on an issue where Bill Henderson and Paul Caron could probably give us some hard data.

Our faculty hiring committee is faced, as they all are, with trying to figure out whom to interview in DC. What strikes me about this process is how much it channels decisions. Those who get interviews will pretty much define the pool of candidates who get callbacks (we could, and sometimes do, go outside the people we interviewed in DC, but that's rare), which in turn will define the pool of people who get offers. So, MoneyLaw folks, this is the draft. This is where metrics matter. What should we be looking at? Of course, this question sent me back to the bible -- Paul Caron and Rafael Gely's instant classic applying Moneyball to the legal academy.

A yes/no on publications can't quite do it anymore. Virtually everyone has published an article; in fact, many have published several. I also think placements are an even worse indicator of quality for entry-level people than they are for laterals, because entry-level people often face enormous hurdles in placing articles (though in some cases they still know people at reviews and so can use connections to place articles above where their quality would put them). So my guess -- though I acknowledge this hypothesis may be wrong -- is that placement is a poorer signal for entry-level candidates than for laterals.

What I find interesting -- though not surprising -- is how much we're all making decisions based on factors that may not have great predictive ability. In going through hundreds of these for initial cuts, I am forced to fall back on what's on the one-page form (with occasional glances at the full résumé). The form pushes us toward factors such as the J.D.-granting institution, law review, clerkship, big-firm or prestigious government practice, and publications. In a surprising number of instances I'm familiar with the person's scholarship, but those are almost always candidates in legal history -- and the needs of our law school being what they are, legal history is among the last subjects we're looking for. (I will, however, later this semester offer a few thoughts on the growth of legal history as a field and its importance as a "method.")

I think citations to a candidate's work aren't great measures of quality; most of the people in the FAR are too young to have many citations. And though I'm a fan of citations as a measure of overall quality of a law review, there is notorious field bias. Want citations? Write in areas like professional responsibility, intellectual property, criminal law, and constitutional law, not legal history and wills. I'll have some limited data on this one later in the semester.

We're also looking for laterals, and here it's easier: it's largely a question of finding people who've actually produced good work. Here we have a track record. I'm not at all convinced that, on average, laterals get better after they are hired (though we can hope that faculty will continue to mature as scholars -- they may learn more, see more connections, and be able to bring together ideas from various fields). Law, like history (the other field I know something about), is a cumulative discipline.

At some point I hope we'll talk about how to evaluate lateral candidates--how, for instance, do we measure the quality of scholarship?

Back with you after APSA.

Alfred L. Brophy

Some advice for jobseekers

Brian Leiter has sparked an interesting discussion of factors that hurt job applicants. Among the intriguing responses is this observation by Thomas Main:
I think that the expression of a geographic restriction can hurt a candidate in unexpected ways. Announcing unwillingness to relocate to a region (e.g., Deep South, Northeast, the West) can rub me the wrong way even when my school is not in the targeted region. Among the other, perhaps more obvious questions that such an expression invites, I find it careless for a candidate to eliminate a region (as opposed to a state). Is Duke in the Deep South? Is Temple in the Northeast? Is UNLV in the West?
Well said. I second the emotion. Geographic weenies -- that being, after all, the technical term for candidates who express crass regional prejudices of this sort -- hurt their own professional interests. The same goes, and in this respect I disagree with Mr. Main, for candidates who rule out even a single state. You may well hate cold places, or places that barbecue pigs instead of cows. But keep it off your form. No one can make you accept an interview, much less a job.

Announcing a preference for a region or a state doesn't communicate a comparably negative message. On the contrary, all it says is that a candidate has family responsibilities -- intragenerational, intergenerational, or both.

All this is a bit late for this year's recruiting season. Still, it bears noting that Satan has a better attitude than a goodly number of candidates in every AALS faculty appointments register:
The mind is its own place, and in itself
Can make a Heaven of Hell, a Hell of Heaven.
Amen, brother.

The Hylton Rankings

Last spring my friend Gordon Hylton put together a simple but important table, which ranked law schools according to two factors: the midpoint between the 25th and 75th percentile LSAT scores and the US News peer assessment score. (By the way, I've been a huge fan of J. Gordon Hylton ever since, as a youngster (i.e., law clerk), I read his great work on African American lawyers in Reconstruction Virginia. I highly recommend it. Gordon, David Callies, and some other folks have an innovative property casebook, which I recommend to you for insights into teaching and property theory. Plus, Gordon, like many of my favorite scholars -- David Brion Davis, Robert Ferguson, Robert Post, Daniel Hulsebosch, and Mary Bilder -- has a Harvard American Civilization Ph.D.)

Gordon makes the point, with which I agree, that law school rankings ought to focus on the quality of students and the quality of faculty. The key question for me is: what's the intellectual experience at a law school? Is it a place on fire with ideas? (Don't you like the link to a discussion of Richardson's book on Emerson, The Mind on Fire? One of these days I'll be posting a little bit over at Ratio Juris about antebellum legal thought, with a title along the lines of "the legal mind on fire." So much for product placement.) If a school is on fire with ideas, it deserves a good ranking, in my opinion. Moreover, I like getting rid of the clutter. I tend to think that much of the rest of that stuff is (1) manipulable (particularly self-reported data on graduates' employment) and (2) irrelevant to the intellectual experience of students and faculty at the school.

I'm enamored of what has become known in the trade as the "Hylton Rankings." (And, hey, Wikipedia refers to them, so you know they've arrived!)

I thought Hylton's method -- adding the LSAT midpoint (after subtracting 130 from it) of each school to its peer assessment score (multiplied by 10) -- was worth a little more investigation. The standard deviation for the LSAT midpoint, as you will notice below, is larger than that for the peer assessment score; therefore, the combined score gives more weight to the former than to the latter. The means and standard deviations for peer assessment and LSAT midpoint are as follows:

           LSAT midpoint    Peer assessment
Maximum    173.0            4.9
Median     157.50           2.30
Minimum    146.5            1.3
Mean       158.11           2.51
SD         5.24             0.85
N          180              180

So I tweaked the Hylton rankings slightly. I calculated standard scores with a mean of 50 and a standard deviation of 10 for each of the two variables (peer assessment and LSAT midpoint), added the two scores for each school, and divided by 2. Thus, the composite score gives equal weight to the two variables. Want to see the schools whose "Hylton ranks" and "modified-Hylton ranks" differed by more than three places in either direction? Check out my post over at propertyprof last spring, with additional comments from me here.
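For anyone who wants to replicate the arithmetic, here is a minimal sketch of both the original Hylton score and my standardized tweak. The three schools and their numbers are invented placeholders; substitute the actual LSAT midpoints and peer assessment scores to reproduce the real rankings.

```python
# A minimal sketch of the Hylton score and the modified (standardized) version
# described above. The sample data are hypothetical placeholders, not the
# actual US News figures.
from statistics import mean, stdev

schools = {
    "School A": {"lsat_mid": 170.0, "peer": 4.6},
    "School B": {"lsat_mid": 160.5, "peer": 3.1},
    "School C": {"lsat_mid": 152.0, "peer": 2.0},
}

def hylton_score(lsat_mid: float, peer: float) -> float:
    # Original method: (LSAT midpoint - 130) + (peer assessment * 10)
    return (lsat_mid - 130) + peer * 10

def standard_scores(values: list[float]) -> list[float]:
    # Rescale to mean 50, standard deviation 10
    m, sd = mean(values), stdev(values)
    return [50 + 10 * (v - m) / sd for v in values]

names = list(schools)
lsat_z = standard_scores([schools[n]["lsat_mid"] for n in names])
peer_z = standard_scores([schools[n]["peer"] for n in names])

for i, n in enumerate(names):
    original = hylton_score(schools[n]["lsat_mid"], schools[n]["peer"])
    modified = (lsat_z[i] + peer_z[i]) / 2   # equal weight to the two variables
    print(f"{n}: Hylton = {original:.1f}, modified Hylton = {modified:.1f}")
```

The design point is simply that standardizing both variables before averaging keeps the one with the larger spread from dominating the composite.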

Alfred L. Brophy

Boasting, Law School Style

Anyone who finds MoneyLaw interesting will also find Brian Leiter's Law School Reports worth reading. Prof. Leiter helped to pioneer the critical study of law school rankings and has developed widely regarded alternatives to U.S. News & World Report's methodology. He recently re-posted an oldie but goody: 'Tis the Season for Promotional Brochures from Law Schools..., in which Prof. Leiter offers a list of "dos" and "don'ts" for law schools playing the mass-mailing game.

Feeder Law Schools for Supreme Court Clerkships

Brian Leiter has updated his data on the law schools attended by Supreme Court clerks during the 1996-2006 Terms. In all, thirty law schools had at least one graduate serve as a Supreme Court clerk during this period. Here are the Top 10 feeder schools:

1. Harvard (95 clerks)
2. Yale (70)
3. Chicago (45)
4. Columbia (27)
5. Stanford (26)
6. NYU (16)
6. Virginia (16)
8. Michigan (14)
9. Berkeley (10)
10. Texas (9)

Paul L. Caron

Baseball box office

Now for a quick glimpse at the flip side of MoneyLaw: what law and allied disciplines have to say about the business of baseball. Mark McDonald and Daniel A. Rascher recently posted Does Bat Day Make Cents?: The Effect of Promotions on the Demand for Baseball:
A primary objective of sport marketers in the professional sport setting is to develop strategies to increase game attendance. Historically, one of the strategies to accomplish this goal has been the utilization of special promotions. This paper studied the impact of promotions on attendance at professional sport games. Specifically, this research examines (1) the overall effect of promotions on attendance, and (2) the marginal impact on attendance of additional promotional days. Using a data set containing 1500 observations, we find that a promotion increases single game attendance by about 14%. Additionally, increasing the number of promotions has a negative effect on the marginal impact of each promotion. The loss from this watering down effect, however, is outweighed by the gain from having an extra promotion day.
Hat tip to Jurisdynamic Idol Andrea Matwyshyn for spotting this item.

Top 25 Tax Faculties

Over at TaxProf Blog, Theodore P. Seto (Loyola-L.A.) has updated his monthly rankings of the Top 25 U.S. Law School Tax Faculties, as measured by the number of SSRN downloads (through 8/15/06). Here are the Top 10:
    1. Harvard
    2. UCLA
    3. Penn
    4. USC
    5. Michigan
    6. Columbia
    7. Colorado
    8. Boston University
    9. Cincinnati
    10. Chicago
Paul L. Caron

New Ranking Methods: Suggestions for MoneyLaw?

OK, so Christine Hurt, building on Joe Liu's brilliant suggestion for a law faculty fantasy league, thinks law profs should get points for being quoted in the New York Times, WSJ, Washington Post, and LA Times. I agree; that's an indication of whose opinion counts. (Though I'm not so sure it's as important as the proposed point values suggest. Let me get this right: 10 quotations in the NYT are worth the same as a book from Harvard University Press, which publishes brilliant books like this one?! Let's leave that aside for a while.)

And lose a point for a quote in USA Today? That's just pure prejudice against Gannett. I think it springs from the same sentiments that lead some to call USA Today the McPaper. Incidentally, my colleague Norman Stein (who is quoted frequently on pension law in the NY Times and WSJ) thinks USA Today is the hardest newspaper in the country to get quoted in, because it seems to have a snob factor of its own.

Be that as it may, there may be some other venues that are even more meaningful in ranking faculty. No, not the United States Reports. I mean pop-culture magazines -- like Rolling Stone and Sports Illustrated. You don't see many law profs quoted in Sports Illustrated, though sometimes my colleague Gene Marsh is. (I can't find a Marsh quote in SI online -- my apologies to those who are doing cite-checking on this piece. But Gene is quoted in SI now and then.)

And then the work of my colleague Bob Kuehn, who is a great humanitarian, great lawyer, person of titanium integrity, and all-around terrific person, was featured in a Lifetime movie, "Taking Back Our Town". The New York Times Magazine listed my colleague Susan Pace Hamill's idea that we should use Biblically based arguments for state tax reform as one of the fifty best ideas of 2003.

A few years ago, I started reading Rolling Stone to try to "connect" with my students. It doesn't work, by the way; Rolling Stone, I learned, is for an older crowd than this generation of law students. When I referred to it, thinking that I'd look hip, they looked at me like I was an artifact. I'd be inclined to say they looked at me like I was Rip Van Winkle awakened from twenty years of sleep, but I'm not sure anyone reads Washington Irving any more. Maybe the book I need is Signs of Life in the USA: Readings on Popular Culture for Writers.

Or, perhaps, I ought to be reading Tommyland. Recently, I was watching a perhaps little-known cable station, VH1, and saw a weird show about some old guy (about my age) who went back to college. He wasn't all that motivated or dedicated, so he didn't do all that well. But then, get this: in English class, the professor was discussing the man's autobiography. A book worthy of discussion in a college English class seemed worth looking up, so I did. Turns out he's a really successful musician. He has this insight on narratives:
In court, in fights, and in arguments with people I love, there isn't one truth, there are many. This book is my truth.
I usually try to illustrate that principle, which happens to be rather important for lawyers, by reference to Rashomon, but I think Tommyland may do better for this generation.

So to return to my story: I saw Jessica Litman (then of Wayne State University Law School, now of the University of Michigan) quoted in Rolling Stone. An appearance in RS must be evidence that an intellectual property lawyer has ascended to the top of the field.

Don't forget Cosmo.

Harvard Law School Professor Randall Kennedy is a towering figure in the legal academy. He's frequently quoted in the New York Times; they review his books; even more than that, they run articles about him. He publishes major books (like Race, Crime, and the Law) with leading trade presses that are so important they become best-sellers. (Race, Crime, and the Law had sold an unbelievable 40,000 copies the last time I heard; probably a lot more by now.)

Here's another indicator of Professor Kennedy's greatness. True, it's not as significant as the accomplishments in the last paragraph, but my favorite librarian told me recently that she was reading Cosmo in a salon and was excited to see Randall Kennedy quoted (on interracial intimacy). And because she's a terrific librarian, she was able to locate a full-text copy of the article; it appeared in the July 2005 issue. [I feared she might have been joking about this; if she had been, I would have looked rather foolish.] Then again, being quoted in Cosmo is small potatoes compared to the fact that Boston Public ran a whole episode about his book on the N-word back in the spring of 2002. You might also be interested in Professor Kennedy's discussion of Interracial Intimacy in The Atlantic. And as long as I'm talking about the N-word, I think there's some good work that remains to be done on courts' toleration -- indeed, invocation -- of it during Jim Crow.

The appearances of law professors in the popular press remind us that law is connected in fundamental ways with the lives of ordinary Americans. And that law faculty often provide a framework for thinking about weighty matters--like marriage, copyright and real property rights, and the war on terror.

Alfred L. Brophy

The University of Chicago gains six places after a USN&WR "recount"

CNN has reported on Chicago's sudden six-place jump in the U.S. News & World Report college rankings. Apparently the U of C had misreported its student and faculty numbers.

Hat tip to my research assistant, Kevin Wells, for spotting this item.

SSRN Law School Rankings and US News Law School Rankings


Last week Jim talked a little about Dave Hoffman's post over at co-op, which ranked the top fifty law schools in terms of SSRN downloads. Very interesting stuff. So that led me to look a little more at his data. What, for instance, is the relationship between the top fifty law schools in terms of SSRN downloads and the US News rankings?

All but four schools in Hoffman’s list of the 50 US law schools with the most SSRN downloads are in the USNews top 100. The exceptions are Hofstra, Marquette, Michigan State, and Northern Kentucky.

Of the 50 U.S. law schools with the most SSRN downloads:
11 are in the USNews top 10 (or 8)
22 are in the USNews top 20 (or 19)
25 are in the USNews top 25 (or 22)
14 have USNews ranks below 50 (including the 4 schools in Tiers 3 and 4).

How, you might ask, can there be 11 schools in the US News top 10? Because three schools are tied at 8 (Berkeley, Michigan, and Virginia). Same general story for the top 20--three schools tied at 19 (George Washington; Minnesota; Washington University).

The Spearman rank-order correlation between SSRN rank and USNews rank for the 46 schools in the US News top 100 is .64. (I excluded Hofstra, Marquette, Michigan State, and Northern Kentucky from the analysis.)
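For the curious, here is a minimal sketch of how such a rank-order correlation gets computed. The two rank lists below are hypothetical stand-ins, not the actual SSRN and US News ranks of the 46 schools.

```python
# A minimal sketch of the Spearman rank-order calculation described above.
# The rank lists are hypothetical stand-ins, not the actual 46-school data.
from scipy.stats import spearmanr

ssrn_rank   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
usnews_rank = [2, 1, 5, 3, 4, 8, 6, 10, 7, 9]

rho, p_value = spearmanr(ssrn_rank, usnews_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```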

You may recall, however, that the correlation between the US News peer assessment scores for the top 100 law schools and recent citations to their main law reviews is higher: .89. So amidst all this talk about SSRN rankings, I think we should be focusing on other factors -- like citations to a law school's main journal -- to measure a law school's quality. (I know that a lot of the talk about SSRN concerns its utility for evaluating individual scholars rather than entire faculties, but this post obviously treats SSRN downloads as a measure of an entire faculty.)

And here's a table that combines Hoffman's table of SSRN downloads with each school's US News rank, so that you can see for yourself how the two measures line up. (Doing a little cross-posting from my regular blog, propertyprof. Hope you'll visit us.)

Soon I'll have some notes from the front line--the entry-level hiring committee.

Alfred L. Brophy

A bibliometric manifesto

Ilya Somin's post, Evaluating Billy Beane and Moneyball, at The Volokh Conspiracy has forced the issue. It is time for MoneyLaw to take a methodological stand. Although I've developed this theme far more extensively in my recent paper posted to SSRN, Modeling Law Review Impact Factors as an Exponential Distribution (which I plan to mine for many posts here at MoneyLaw), a briefer statement of the project is in order. In the spirit of David Grabiner's classic, A Sabermetric Manifesto, I'll call this declaration A Bibliometric Manifesto.

It is no surprise that this forum takes its literary inspiration from Michael Lewis's book, Moneyball: The Art of Winning an Unfair Game. It is even less surprising that quantitatively inclined law professors should be drawn toward baseball and sabermetrics as the basis for evaluating their own profession. Baseball combines mathematical rigor with a respect for tradition. Just as sabermetrics represents what Bill James has called “the search for objective knowledge about baseball,” bibliometrics represents the quest to quantify texts, information, and the academic pursuit of truth. As with baseball and sabermetrics, the preeminence of mathematics transforms bibliometrics into a hopeful, uplifting enterprise.

Bibliometrics does differ from its sabermetric counterpart in one crucial respect. Whereas law aspires to define itself as the grand “enterprise of subjecting human conduct to the governance of rules,” baseball affects nothing besides the happiness of devoted individuals who play or follow “a game with increasingly heightened anticipation of increasingly limited action.” As observed in the recent blockbuster, The Wages of Wins, sports “do not often change our world; rather they serve as a distraction from our world.” Statistical evaluation of baseball is fun precisely because it is frivolous.

Law, of course, is an altogether different game. Law takes itself quite seriously and legal education even more so. I say this even as a true adherent of the Church of Baseball. What bibliometrics ultimately learns from its sabermetric equivalent is therefore twofold. Let's take the math and the joy. We can, should, and must dedicate ourselves to quantitative rigor — without forgetting to have fun.

Play ball!

The Relationship Between Law Review Citations and Law School Rankings

Thanks, Jim, for the kind introduction. It's an honor and a pleasure to be part of your shop. I'm a huge fan of your work and of Paul's, Ronen's, and Tom's. And I'm grateful that you folks are bringing some social science rigor to the important task of ranking schools and scholars -- and thus making it easier for the academy to improve our hiring and promotion practices.

I thought I'd begin my time here with a simple question: what's the relationship between a law review's quality and the quality of its parent institution? This is an important question for MoneyLaw folks, because it raises some possibilities for measuring the quality of law schools. But I arrived at the question because I have been working as faculty advisor to the Alabama Law Review for a few years, and I'm interested in what the Alabama Law Review could do to improve itself. Basically, I wondered whether I could argue that an increase in the quality of scholarship in the ALR would benefit the school. There are other reasons why this question matters as well; in addition to the two already mentioned, it has implications for decisions about where to publish.

Thankfully, John Doyle, law librarian extraordinaire at Washington and Lee, has a terrific website that contains data on recent citations to law journals by other journals and by courts. The data on citations by other journals is available in two forms: overall citations and impact (total citations divided by the number of pieces, such as articles, notes, and book reviews).

So let me begin with some data that I think you'll find of some interest: the correlation between the peer assessment scores for the US News top 100 schools (according to the April 2005 ranking) and the overall citations to their main law reviews in the period 1997-2004 is .89. That seems a rather remarkable correlation -- especially given the criticism that's been leveled against the US News peer assessment scores.
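By way of illustration only, here is a minimal sketch of the two calculations at work here: the impact measure described above (citations per piece) and the correlation between peer assessment and overall citations. Every number in it is an invented placeholder, not data from Doyle's site or US News.

```python
# A minimal sketch tying together the two measures mentioned above: the
# "impact" figure (citations per piece) and the correlation between peer
# assessment and overall citations. All numbers are invented placeholders.
from scipy.stats import pearsonr

def impact(total_citations: int, pieces_published: int) -> float:
    # Impact: total citations divided by number of pieces
    return total_citations / pieces_published

journals = {
    "Review A": {"citations": 5200, "pieces": 260, "peer": 4.8},
    "Review B": {"citations": 3900, "pieces": 250, "peer": 4.1},
    "Review C": {"citations": 2300, "pieces": 240, "peer": 3.2},
    "Review D": {"citations": 1400, "pieces": 235, "peer": 2.5},
    "Review E": {"citations":  900, "pieces": 230, "peer": 2.1},
}

for name, d in journals.items():
    print(f"{name}: impact = {impact(d['citations'], d['pieces']):.2f}")

peer = [d["peer"] for d in journals.values()]
cites = [d["citations"] for d in journals.values()]
r, _ = pearsonr(peer, cites)
print(f"Pearson r between peer assessment and citations = {r:.2f}")
```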

I'm going to be talking more about the correlation between US News data and law review citation data over the next few weeks (including using the US News data released in 2006) -- and that, I hope, will give me a chance to finish up some work I'm doing on the importance of secondary journals as a measure of school quality. Shortly I'll discuss what use we can make of law journal citations for ranking law schools; I'll also make a few predictions about which schools will rise in US News.

Alfred L. Brophy

Alfred Brophy joins MoneyLaw

Alfred Brophy of the University of Alabama School of Law has joined MoneyLaw. Al's deep body of work includes Reconstructing the Dreamland: The Tulsa Riot of 1921 (Oxford Univ. Press, 2002) and Reparations Pro and Con (Oxford Univ. Press, 2006). MoneyLaw readers, one suspects, may find themselves drawn to an altogether different theme in Al's scholarship, particularly The Emerging Importance of Law Review Rankings for Law School Rankings and The Relationship Between Law Review Citations and Law School Rankings.

Please welcome Al Brophy to the MoneyLaw roster.

Editor's note: Careful readers of this forum may have noticed that none of the first members of the MoneyLaw team received equivalent fanfare in connection with their arrival. Sometime after Gil Grantmore, webmaster for the Jurisdynamics Network, returns from a European trip, MoneyLaw pledges to provide deeper biographical information on all of its contributors.

Poll results: The "bar course" obsession

The time has come to announce the results from the inaugural MoneyLaw poll.

As the graphic to the left makes clear, the poll asked about the extent to which students at readers' law schools took the bar exam into account when selecting courses. The results of this decidedly unscientific assay are displayed toward the bottom right corner of this post.

A total of 19 MoneyLaw readers responded to this poll. Nearly half -- nine -- responded that students at their law school give "substantial" weight to the subject-matter coverage of the bar exam when selecting courses. Another six answered "moderate" in response to this question. The apparent consensus is that law students give fairly substantial weight to the bar exam in selecting courses. To be sure, three respondents said that their students give "little to no" weight to the bar in course selection. My original conjecture was that students are likelier to give greater weight to the bar at less prestigious schools. If that conjecture has any basis in reality, then MoneyLaw appears to have a sliver of readers from elite law schools.

MoneyLaw thanks you for your participation and looks forward to conducting future polls. With any luck, those polls' entertainment value will be inversely proportional to their scientific merit.

The "bar course" obsession: a MoneyLaw poll

As I promised in my most recent MoneyLaw post, I have returned to the question of the correlation vel non between teaching and scholarship. Further thought on the question has pushed me to launch another technological innovation for the Jurisdynamics Network: an online poll. Before we get to the fun, though, I must provide some context. In exchange for your indulgence, I'll even propose two ideas for empirical research.
Update at 10 p.m., August 17: The poll now appears as an embedded part of this post. If you want to take the poll right away, please feel free to click this popup window.
The occasion for the latest renewal of the old debate over the relationship between teaching and scholarship is the posting of Benjamin Barton's new paper, Is There a Correlation Between Scholarly Productivity, Scholarly Influence and Teaching Effectiveness in American Law Schools?. As I noted here at MoneyLaw, much of the criticism of this paper -- both positive and negative -- has focused on Barton's reliance on a single measure of teaching effectiveness: student evaluations. Many questions in life defy quantitative analysis. Many others don't. The reliability of student evaluations of teaching belongs to the latter category.

Let me first take the long view. Whenever I think of student evaluations, I think of the punishment some of my very first students delivered to me, with ample justification but also with perhaps more delight than was healthy for their souls. The course? Legislation. In a recent post at Jurisdynamics, I lamented the profession's collective failure to teach (and learn) statutory interpretation. My little reading list represents the slightest effort toward ameliorating legal academia's greatest pedagogical oversight.

Now, if only I had a dollar for each student who has come to regret railing about that awful legislation course Minnesota used to require, I could buy -- well, an iPod. I'd load my new toy with testimonials about how that legislation course really came in handy the first time a statute came into the picture.

So here's my first serious suggestion for empirical research. My happy confession of personal bias on this score aside, I do think that student evaluations of teaching mean quite little in the short run. The true measure of teaching effectiveness comes in the long run, sometime after students actually put their law school training to work. Surely someone wishes to undertake a longitudinal study of law school graduates' retrospective regard for their erstwhile teachers. A future Jurisdynamic Idol, perhaps.

The second suggestion arises in connection with Paul Secunda's splendidly entertaining rant at PrawfsBlawg, Why Can't Labor and Employment Law Just Be on the Bar Exam?. Why indeed. If only Minnesota would ask a single bar exam question each year on the canons of construction or the use of legislative history. The possibilities are endless. I am told -- or perhaps I merely wish -- that Felix Frankfurter's class on public utilities law was the most popular of its time at Harvard. Very well. Minnesota's bar examiners, I beseech you. In exchange for putting public utility law on the bar exam, I will draft a question involving the regulation of entry and/or rates as long as I live in this jurisdiction.

I do have a serious suggestion in response to Paul Secunda's fantastic tirade. I strongly suspect that the marginal propensity of a law school's student population to treat the bar exam as a factor in course selection correlates very strongly with that school's perceived ranking. Identifying the precise causal vector can wait; it would just be nice to establish the correlation. Again, this is empirical research waiting to happen. So many ideas, so little time. Hélas.

While the world awaits the eager and thorough pursuit of these research ideas, I can offer a little entertainment. This MoneyLaw poll seeks to gauge the extent to which law students consider the bar exam in selecting courses. There is absolutely no scientific value in this poll, but I offer it in the name of amusement. Results will be posted shortly.

Teaching versus scholarship: a recap of the opening arguments in a new debate

There is no such thing as the right time to start a new blog. But MoneyLaw appears to have arrived a scant couple of weeks too late to take meaningful part in the latest round of one of the longest-running debates in legal academia. Better late than never: I will now endeavor to summarize the debate so far, reserving further commentary on the issue for later.

Tennessee's Benjamin Barton has posted his paper, Is There a Correlation between Scholarly Productivity, Scholarly Influence and Teaching Effectiveness in American Law Schools? An Empirical Study:
This empirical study attempts to answer an age-old debate in legal academia: whether scholarly productivity helps or hurts teaching. The study is of an unprecedented size and scope. It covers every tenured or tenure-track faculty member at 19 American law schools, a total of 623 professors. The study gathers four years of teaching evaluation data (calendar years 2000-03) and creates an index for teaching effectiveness.
Larry Solum offers by far the most extensive summary of Barton's paper. Questions of methodology having been thoroughly aired on the Legal Theory Blog, a brief statement of Barton's conclusion will suffice here. After correlating each of five measures of scholarly output and influence with teaching evaluation scores for all 623 professors in his study, Barton found "no correlation between teaching effectiveness and any of the five measures of research productivity."

As Bill Henderson observes at ELS Blog, this is the most significant effort to address the relationship between teaching and scholarship since Jim Lindgren & Allison Nagelberg, Are Scholars Better Teachers?, 73 Chi. Kent L. Rev. 823 (1998). In their study of faculty members at three law schools, Lindgren and Nagelberg found a modest correlation between citation counts and popularity on student evaluations of teaching.

Barton's striking conclusion has caught the eye of Lisa Fairfax at the Conglomerate, Brian Leiter, Dan Markel of PrawfsBlawg, Orin Kerr, and Stuart Buck. Bill Henderson and ELS Blog have hosted a forum on Barton's article, which the author himself nicely summarized. Much of the resistance to Barton's paper targets the study's heavy reliance on student evaluations as the sole measure of teaching quality.

Perhaps the most astute single observation in the entire debate to date comes courtesy of Jeff Stake. In one of his contributions to the ELS Blog's forum, Jeff concluded:
[I]t seems likely that writing and teaching are complements at low levels of writing and substitutes at high levels of writing, and determining whether any particular faculty member has increased his or her writing to the point that further increases will reduce teaching quality is a very tricky business.
With that, the debate will surely continue. MoneyLaw, with a fairly high degree of certainty, will be weighing in.

Dave Hoffman on SSRN school rankings

Over at Concurring Opinions, Dave Hoffman has fun with SSRN's law school rankings. Underneath the fun lies a serious layer of analysis. To what extent can downloads on SSRN inform, check, or even supplant other ranking systems such as the U.S. News & World Report survey?

PageRanking academic labor markets

Marko Terviö has posted a fascinating new paper, Network Analysis of Three Academic Labor Markets. Here is the abstract (also available on Marko's SSRN page):
The academic labor market is analyzed as a citation network, where departments gain citations by placing their Ph.D. graduates into the faculty of other departments. The aim is to measure the distribution of influence and the possible division into clusters between academic departments in three disciplines (economics, mathematics, and comparative literature). Departmental influence is measured by a method similar to that used by Google to rank web pages. In all disciplines, the distribution of influence is significantly more skewed than the distribution of academic placements. This is due to a strong hierarchy of departments - the strongest being in economics - in which movements are seldom upwards. It is also found that, in economics, there are clusters of departments that are significantly more connected within than with each other. These clusters are consistent with anecdotal evidence about freshwater and saltwater schools of thought, although this division appears to be on the wane. There is a similar although weaker division within comparative literature, but not within mathematics.
This is a methodologically rich paper, with much to teach those of us in the legal academy who wish to evaluate our own profession by more quantitatively rigorous means. Even its incidental findings, such as the apparently greater extent of cliquish "clustering" in economics but not in mathematics, seem to confirm Deirdre N. McCloskey's longstanding assertion that economics is more a branch of rhetoric than of mathematics. Perhaps the paper's most useful twist is its application of Google's PageRank algorithm. (Google, of course, discloses the absolute minimum about its basic operating protocol. Ian Rogers of IPR Computing Ltd. offers a far more comprehensive and informative analysis in his paper, The Google Pagerank Algorithm and How It Works.) The upshot is that the very tools used to assess influence and connectedness on the World Wide Web can and should be applied to social networks such as the academic labor market.
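To make the analogy concrete, here is a minimal sketch of PageRank run on a toy placement network, in the spirit of Terviö's approach rather than a reproduction of it. The departments, hiring counts, and the conventional 0.85 damping factor are all assumptions for illustration.

```python
# A minimal sketch of PageRank applied to a Ph.D. placement network, in the
# spirit of the paper discussed above. Departments "link" to the departments
# that trained their faculty; all data here are invented placeholders.
import numpy as np

depts = ["Dept A", "Dept B", "Dept C", "Dept D"]

# hires[i, j] = number of Dept j Ph.D.s on Dept i's faculty, i.e. a "link"
# from the hiring department i to the degree-granting department j.
hires = np.array([
    [0, 3, 1, 0],
    [2, 0, 1, 1],
    [1, 2, 0, 0],
    [1, 1, 2, 0],
], dtype=float)

n = len(depts)
damping = 0.85  # the conventional PageRank damping factor

# Column i of M spreads department i's hiring "votes" across the departments
# that trained its faculty.
M = (hires / hires.sum(axis=1, keepdims=True)).T

rank = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration
    rank = (1 - damping) / n + damping * M @ rank

for d, r in sorted(zip(depts, rank), key=lambda x: -x[1]):
    print(f"{d}: influence = {r:.3f}")
```

A department whose graduates are hired by other influential departments scores high, even if its raw placement count is modest; that is the sense in which the web-ranking machinery carries over to academic labor markets.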

I tip my hat to Dan Farber for bringing this paper to my attention.

Idolatry

With the designation of Adam Kolber as the latest Jurisdynamic Idol, a few words on the relationship between that award and MoneyLaw are in order.

In a recent post on Concurring Opinions, Daniel Solove acknowledges the legal academy's collective difference of "opinion about what factors should matter most" in the entry-level hiring market. He identifies these factors:
1. Law School Attended
2. Law Review Membership
3. Graduate Degrees
4. Work Experience and Clerkships
5. Quality of Scholarship
6. Quantity of Scholarship
7. Placement of Publications
8. Cohesive Scholarly Agenda and Vision
9. Teaching Experience
10. References
The Jurisdynamic Idol approach to this question is straightforward. I stated in my original post announcing the Idol competition: "Law school, graduate school, and clerkship credentials are nice but not terribly weighty. Actual scholarly performance is much nicer and very weighty."

Let me make this unequivocally clear. The only things that matter, in a world of Jurisdynamic Idolatry, are the quality and quantity of scholarship. A "cohesive scholarly agenda and vision" almost invariably follows actual scholarly productivity. Placement is a lazy evaluator's shortcut for assessing quality. The other factors are window dressing. A law faculty hired on the basis of clerkships, law review membership, and graduate degrees will be a law faculty filled with former clerks, former law review editors, and professors who don't suffer from Ph.D. envy. Tell me what those professors actually think and write, and perhaps then we can pretend to assess their academic caliber and that of their faculty at large.

Reforming the USN&WR Law School Rankings

[Thanks, Jim, for inviting me to contribute to the MoneyLaw blog. I here offer something I just posted on Agoraphilia.]

Earlier this summer, I began a series of posts about the U.S. News & World Report's law school rankings. (Please see below for links to each post in the series.) My research uncovered many interesting and troubling things about the rankings. I discovered errors in the data that USN&WR used for the most recent rankings and, consequently, errors in the way that it ranked several law schools. More distressingly, I discovered that almost no safeguards exist to correct or prevent such errors. I think it fair to say that, but for my peculiar obsession with the USN&WR rankings, nobody would have noticed the errors I've documented. That won't do. We cannot rely on one nutty professor to keep the rankings honest. I thus here wrap up my series about the most recent USN&WR law school rankings by describing several reforms designed to make law school rankings more accurate and open. Although I suggest all of them, implementing any one of these reforms would make errors in the rankings less likely, and surviving errors more likely to get corrected.

1. USN&WR's Questionnaire Should Mirror the ABA's

Both the ABA and USN&WR send law schools questionnaires each fall. The latter apparently wants schools to repeat their answers to the former. Judging from how it asked schools to report their median LSAT and GPA data last fall, however, USN&WR could do a better job of clarifying exactly what data it wants. To avoid honest confusion or lawyerly logic-chopping, USN&WR's questionnaire should simply ask schools to repeat exactly the same answers that they put on the ABA's questionnaire.

2. USN&WR Should Commit to Publishing Corrections and Explanations

Law schools have a strong incentive to answer the ABA's fall questionnaire accurately, as that organization controls their accreditation. USN&WR, in contrast, wields no similar threat. Furthermore, law schools have a much more powerful incentive to dissemble on the USN&WR questionnaire, as their responses directly affect their rankings. What can USN&WR do to encourage law schools to give it accurate data?

USN&WR should commit now to publishing corrections to any inaccuracies it discovers in the data it uses to rank law schools. It should do so at all events, given that students use the rankings to make very important decisions. It can do so easily, too; it need only update its website. Yet USN&WR has thus far failed to correct the erroneous data it used to (mis)rank the University of Florida College of Law and Baylor University School of Law. For shame!

Perhaps USN&WR does not want to publicly acknowledge that its law school rankings sometimes contain errors, fearing that to do so would decrease the credibility of its rankings and, ultimately, its profits. Consumers will eventually discover the errors, though. Better that USN&WR should correct the rankings when necessary and thereby reassure its customers that it sells the best data available.

In addition to promising to correct errors in its rankings, USN&WR should also promise to document the cause of any errors it discovers. That double commitment would strongly discourage law schools from misreporting data on the USN&WR questionnaire. No school wants to earn a reputation for opportunistic lying. (Nor, of course, should any school suffer the wrongful imputation that it lied if, in fact, USN&WR causes errors in the rankings.)

3. USN&WR Should Publish All the Data it Uses in Ranking Law Schools

At present, USN&WR publishes only some of the data that it uses to rank law schools. Why? It is not at all clear. Even supposing that it would constitute an unwieldy amount of information in a print format, USN&WR could easily offer all the relevant data online. Specifically, USN&WR should publish the following additional categories of data for each law school it ranks:
  • median LSAT;
  • median GPA;
  • overhead expenditures/student for the last two years, which includes
    • instruction and administration expenditures;
    • a cost-of-living index applied to the preceding sum;
    • library operations expenditures;
    • law school miscellaneous expenditures; and
    • full-time enrollments;
  • financial aid expenditures/student for the last two years, which includes
    • direct expenditures on students;
    • indirect expenditures on students; and
    • the same full-time enrollment figures used in calculating overhead expenditures/student, above; and
  • library resources.

Publishing all that data would allow others to double-check it, thereby helping to keep law schools honest and the law school rankings accurate.

4. The ABA Should Publish the Data it Collects and that USN&WR Uses to Rank Law Schools

At present, a law school must pay the ABA $1430/year to receive "take-offs" summarizing the data that the ABA has required all member schools to report. The ABA marks that data as "confidential" and forbids its unauthorized publication. As I discussed earlier, the ABA apparently treats law school data that way not to protect law schools or the ABA from public scrutiny, but rather to increase ABA revenue.

Given that revenue model, the ABA has a strong disincentive to publicly disclose all the data it collects from member schools. Fortunately, however, it need not do so in order to improve the USN&WR rankings. Rather, the ABA need only publicly disclose those few categories of data that it collects and that USN&WR uses to rank law schools. Together with the Law School Admission Council, the ABA already publishes much of that data in the Official Guide to ABA-Approved Law Schools. It remains only for the ABA to publish data in the following categories:
  • overhead expenditures/student, including
    • instruction and administration expenditures;
    • library operations expenditures; and
    • law school miscellaneous expenditures;
  • financial aid expenditures/student including
    • direct expenditures on students; and
    • indirect expenditures on students.

It would greatly help, too, if the ABA would publish, in a conveniently downloadable form, that and the other data that USN&WR uses in its rankings. The Official Guide to ABA-Approved Law Schools currently comes only in paper or PDF formats, making it necessary to scan or re-key the data needed to double-check the USN&WR rankings. That grindingly tiresome process invites the introduction of errors, throwing a needless hurdle before those of us interested in improving the law school ranking process.

As I said, adopting any one of the reforms I suggest would improve how law schools get ranked. Adopting all four would prove better still. Please note, though, that I do not promote these reforms for the sake of USN&WR. It seems quite capable of milking the rankings cash cow without my help. Rather, these reforms stand to benefit all of the rest of us—students, professors, and administrators—who live in the shadow of USN&WR's law school rankings.

By opening up public access to the data used to rank law schools, moreover, the reforms I've proposed make it more likely that alternatives to the USN&WR rankings will grow in popularity. Rankings require data, after all. In a better world, the ABA would make lots and lots of data about the law schools it accredits freely available in a convenient-to-use format. Those of us who doubt that USN&WR has discovered the one sole Truth about how to measure law schools might then easily offer the world our own, new and improved, rankings.


So ends my series of posts about the most recent USN&WR law school rankings. I thank my gracious host and co-blogger, Glen Whitman, for putting up with my often-dreary march through the necessarily statistical and administrative arcana. Readers—if any!—who share my interest in these matters may want to note that I plan to write an academic paper relating and expanding on the observations I've made here. Please feel free to drop me a line if you have any suggestions about how I might make such a paper useful to you.

Earlier posts about the 2007 USN&WR law school rankings:

The art of winning an unfair academic game

The Jurisdynamics Network is proud to introduce a new blog, MoneyLaw. Inspired by Michael Lewis's book, Moneyball: The Art of Winning an Unfair Game, many law professors have pondered the extent to which this profession can learn from Billy Beane's approach to winning baseball games for the Oakland Athletics. Four of those professors -- Jim Chen, Tom W. Bell, Paul Caron, and Ronen Perry -- will now discuss the ways in which Moneyball's emphasis on quantitative assessment of baseball-related performance can inform law school governance, academic rankings, and the overall mission of legal academia.