Image Credit: Pexels

Across disciplines, academic institutions are trying to determine how best to gauge the research impact and performance of scholars for decisions about career advancement, research funding, and more. Many institutions are working to find the right balance of qualitative and quantitative assessment factors, and quantitative metrics are among the trickiest to settle on because there is no perfect way to measure scholarly performance in aggregate. Law schools in particular are grappling with how best to approach quantitative impact measures, and some are launching initiatives to evaluate and iterate on their current research impact assessment methods.

Gary Lucas, professor of law and executive associate dean for administration and finance at Texas A&M University, knows firsthand the challenges of trying to pinpoint accurate quantitative assessment metrics for legal scholars. He was recently tasked with identifying impact metrics to assess the collective and individual performance of Texas A&M’s law faculty. In his essay published in the University of Pennsylvania Law Review Online, “Measuring Scholarly Impact: A Guide for Law School Administrators and Legal Scholars,” he chronicles his experience researching various impact metric options, the issues that arise in measuring the impact of legal scholarship compared to other disciplines, and his proposals for quantitative assessment of legal scholarship.

In the interview below, Lucas discusses his research and the recommendations in his essay, which he hopes will serve as a useful guide to measuring the impact of legal research for interested law deans and legal scholars.

Q&A with Gary Lucas

You were assigned the task of identifying scholarly impact metrics to be used in assessing the collective and individual performance of Texas A&M law faculty. Can you briefly share how this came about and how you approached the task?

GL: The central administration at Texas A&M emphasizes quantitative measures of performance. Unfortunately, the metrics used to assess the other colleges and departments within the university aren’t always useful in assessing the law school. Our dean asked me to identify metrics for assessing scholarly impact that are relevant for legal scholars. I reviewed the literature, we held discussions internally, and we settled on an approach that works for us.

Which impact metrics currently receive the most attention among legal scholars, and why?

GL: SSRN downloads receive a lot of attention among legal scholars. In part, this is because download information is readily available. Generating citation counts for legal scholars requires some work, though it’s becoming easier thanks to Google Scholar and HeinOnline. Until citation information is easy to come by, its use will remain limited.

What potential problems do you see in measuring scholarly impact based on law review placements?

GL: Within the academy, there is a widespread perception, supported by some evidence, that the reputation or ranking of the school with which an author is affiliated plays a large role in determining journal placement. There is also a perception that certain types of articles, e.g., those in specialized areas like bankruptcy, are less likely to land in top journals. More generally, it’s not clear that the student editors (mostly 2Ls) who select articles for publication can adequately assess quality, so they may rely on proxies and place too much weight on the limited set of topics they know from their brief time in law school.

You recommend ranking law school faculties by Google Scholar citation count. Why do you think this is generally a better option than the Leiter Score and other impact measurement systems?

GL: I don’t know if Google Scholar is better than the Leiter score, but it does have some significant advantages. Google Scholar picks up citations in social science journals and books, whereas the Leiter score reflects only citations in the journals and law reviews database of Westlaw. Also, once a faculty member sets up a profile, Google Scholar automatically calculates citations and adds recently published articles. It is easy to track citations over time. The Leiter method requires manual searches and a lot of labor to weed out false positives. A Google Scholar citation count is completely transparent; anyone can verify it easily. A Leiter count is more opaque.

What steps would you recommend law schools take to make citation information easier to come by?

GL: A librarian at each law school should set up a Google Scholar profile for his or her school that includes the tenured faculty. Setting up the profile is easy and only takes a few hours. I explain the steps in my article, “Measuring Scholarly Impact: A Guide for Law School Administrators and Legal Scholars,” 165 U. Pa. L. Rev. Online 165 (2017). Once the profile is created, the librarian will need to spend a little time each year to make sure the faculty list is updated. If all of the research-oriented law schools created a Google Scholar profile, we would have a wealth of information that would facilitate research on the impact of legal scholarship.

In your article, you discuss the possibility of field bias in citation counts. What steps should law schools take to avoid field bias in citations when comparing faculty at the individual and group level?

GL: Scholars who write in high-citation fields like constitutional law are more likely to be cited than those who write in low-citation fields like family law or tax law. In comparing law schools, field bias may not be that important since most law schools have faculty across the various sub-disciplines. But in assessing individual faculty, field bias is significant. You wouldn’t want to compare a family law scholar to a con law scholar. To compare apples to apples, a dean should assess a family law scholar by comparing his or her citation count to those of other family law scholars.
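As a rough illustration of the within-field comparison Lucas describes, the sketch below (in Python, using hypothetical citation counts rather than real data) computes a scholar's percentile rank among peers in the same sub-discipline instead of across all of legal scholarship. The fields and numbers are assumptions for demonstration only.

```python
from bisect import bisect_left

# Hypothetical Google Scholar citation counts by field (illustrative only).
field_citations = {
    "family law": [40, 85, 120, 150, 210, 320],
    "constitutional law": [300, 450, 700, 950, 1400, 2100],
}

def within_field_percentile(field: str, citations: int) -> float:
    """Percentile rank of a citation count among scholars in the same field."""
    peers = sorted(field_citations[field])
    # Fraction of peers with strictly fewer citations than this scholar.
    return 100 * bisect_left(peers, citations) / len(peers)

# A family law scholar with 150 citations sits mid-pack among family law peers,
# even though the raw count looks small next to constitutional law figures.
print(within_field_percentile("family law", 150))          # 50.0
print(within_field_percentile("constitutional law", 150))  # 0.0
```

The point of the sketch is only that the same raw citation count yields very different standings depending on the comparison group, which is why field-matched benchmarks matter when evaluating individuals.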

Do you think most law schools are focused on quantitative impact metrics, and should they be?

GL: I haven’t conducted a survey, so I can’t say for certain. SSRN downloads receive a lot of attention. Periodically, Greg Sisk ranks the top third of law schools based on the Leiter score, and his ranking receives a lot of attention. That said, I’m not sure that many law schools incorporate quantitative metrics into important decisions like lateral hiring. Journal placement still looms large. In my view, that’s unfortunate.