Friday, 22 June 2012

The problems with law journal rankings

There are three types of law journal rankings: those based on ‘expert opinions’ (such as the Australian Research Council [ARC] one, now discontinued), those based on ‘market data’ (eg, a citation-based ranking, such as the Washington & Lee [W&L] one), and those that combine both approaches (such as my own). However, all of them have serious problems:
Expert-based journal rankings
A frequent criticism of expert rankings is that it is impossible to determine whether a particular journal is really “better” than another one. This point can, however, be challenged as follows: assume that initially all journals are of equal quality, that an expert committee (eg, a research council) randomly picks 10% of the journals and calls them A* journals, and that financial rewards are provided for publications in these journals (by the research council, or by universities, eg through promotions). What happens then is that these A* journals get more submissions; thus, they can be more selective, and the pieces published in A* journals will be of higher quality than those in other journals. As a result, the ranking itself would create a more competitive market for publications, which may be regarded as positive.
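To make this thought experiment more tangible, here is a minimal simulation sketch in Python (all numbers, ie journal count, submission volume, slot counts and quality scores, are invented assumptions, not data): label 10% of otherwise identical journals as A*, let the reward structure push the best submissions towards them, and the average quality of what they publish rises purely because they can be more selective.

```python
import random

random.seed(1)

N_JOURNALS = 100   # hypothetical number of journals, all initially of equal quality
N_PAPERS = 2000    # hypothetical number of submissions per year
SLOTS = 15         # publication slots per journal

# The 'expert' decision is arbitrary here: 10% of journals are simply labelled A*
a_star_count = N_JOURNALS // 10

# Every paper has a quality score drawn from the same distribution
papers = sorted((random.gauss(0, 1) for _ in range(N_PAPERS)), reverse=True)

# Because publishing in an A* journal is rewarded, authors send their best work there
# first; A* journals fill their slots from the top of the pool, the rest goes elsewhere
a_star_capacity = a_star_count * SLOTS
other_capacity = (N_JOURNALS - a_star_count) * SLOTS
published_a_star = papers[:a_star_capacity]
published_other = papers[a_star_capacity:a_star_capacity + other_capacity]

print("mean quality, A* journals:    %.2f" % (sum(published_a_star) / len(published_a_star)))
print("mean quality, other journals: %.2f" % (sum(published_other) / len(published_other)))
# The gap arises purely from selectivity, not from any initial difference in quality.
```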
But, in the real world, there are two problems with this model:
  • First, given human nature, it is not unrealistic to assume that these A* journals do not decide on merit alone, but that cronyism plays a role as well. Sure, if a journal only accepted pieces from friends and family, its number of submissions would go down and the expert committee might downgrade it. But in practice things are often more mixed: eg, an editor may give half of the slots to cronies and let the other half remain competitive. Networks also matter, since academics from the same few institutions may be the editors of the A* journals, their authors, and the members of the expert committee.
  • Second, one needs to consider that there are different types of journals; in particular, we can distinguish between general and specialised journals, and between mainstream and non-mainstream ones. If the expert committee decides on the basis of majority voting, only mainstream general journals become A* journals. That would be harmful to innovation, since these journals may regard advanced research on particular issues, or new approaches, as “too exotic”. Thus, to do it properly, the expert committee would need to apply a quota system, giving A*s to a certain proportion of specialist and non-mainstream journals as well. In practice, however, that may be quite unlikely (eg, imagine a committee with ten members where only one of them supports a new and controversial method, whereas the others think it is just nonsense).
As a result, I have my doubts about expert-based journal rankings. I also feel that the two problems outlined here are apparent in the ARC ranking, since in the law list almost all of the A* journals are mainstream general journals, often affiliated with a small number of institutions.
Market-based journal rankings
I’m mainly thinking about two methods: a citation statistic (as in the W&L ranking) or the number of submissions per slot (ie, the rejection rate). How would these address the two problems identified for expert rankings?
  • The first problem was that the editors of top journals may abuse their power by favouring “friends and family”. To some extent, this may be possible here as well. Yet, if it goes beyond minor favours, it would negatively affect the number of submissions and presumably also the citations. Thus, the market approach may be a way to induce publishers and editors to maintain a high-quality review process.
  • The second problem was that specialised journals and non-mainstream approaches would be disadvantaged. Here, the same problem would arise, because, naturally, the more general and the more mainstream a journal is, the more citations and submissions it gets. That makes such rankings doubtful since they would not actually indicate quality and may hinder innovation.
So, again, my overall assessment is sceptical. Of course, a way out of the second problem may be to focus on sub-rankings only, ie rankings limited to particular areas of research (as available in the W&L ranking) and particular types of research. However, then the ranking could no longer be used for a general comparison – though comparison is exactly what a ranking aims to achieve.
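For completeness, the two ‘market’ measures mentioned above can be stated precisely. The sketch below (Python; the journal names and figures are invented for illustration, not taken from the W&L data) computes citations per article and a rejection rate for each journal and prints them side by side; even in this toy example the broad general journal comes out on top of both measures, simply because it draws on a larger pool of authors and readers.

```python
# Hypothetical per-journal counts; the names and numbers are invented for illustration
journals = {
    #  name                  (citations, articles, submissions, slots)
    "General Law Review":    (1200, 60, 900, 60),
    "Specialist Journal A":  (300, 40, 200, 40),
    "Specialist Journal B":  (150, 30, 120, 30),
}

def citations_per_article(citations, articles):
    """A simple citation statistic: citations divided by published articles."""
    return citations / articles

def rejection_rate(submissions, slots):
    """Share of submissions that cannot be accommodated in the available slots."""
    return 1 - min(slots, submissions) / submissions

ranked = sorted(journals.items(),
                key=lambda item: citations_per_article(item[1][0], item[1][1]),
                reverse=True)

for name, (cit, art, sub, slots) in ranked:
    print(f"{name:22s}  citations/article = {citations_per_article(cit, art):5.1f}"
          f"  rejection rate = {rejection_rate(sub, slots):.0%}")
```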

A comment on the following post

Looking at the statistics of my blog, the popularity (in terms of hits) of my posts on law journal rankings does not seem to fade, in particular as regards my own ranking. I’m not really happy about this, since I’m not convinced that these rankings are really meaningful. Thus, the following post summarises my criticism (drawing on earlier posts here and here); its main aim is to serve as a disclaimer that I can refer to from the earlier posts in which I discussed and presented law journal rankings.

Saturday, 16 June 2012

How close is ‘law’ to other academic disciplines?

A big question: one may try to answer it by conducting a survey, but it is also possible to develop objective criteria. For example, ‘closeness’ may be shown by establishing to what extent lawyers publish in the same journals as sociologists, economists, philosophers, etc.
How can we do this? The RAE 2008 website offers a good and easily accessible record of the journals in which UK academics from all disciplines publish (see the datasets here). Thus, I took the journal information for law (in total 3668 journal articles) and counted how many of these journals also feature in the datasets of eight other disciplines (to be precise, the intersection of the sets of journal titles). This led to absolute ‘overlaps’ ranging from 18 (law and anthropology) to 302 (law and social work). Since the total number of journal publications per unit of assessment varies considerably (eg, 636 in anthropology but 11374 in business), I translated these numbers into percentages. This leads to the following result, in terms of overlap between journal publications in law and …:
  1. sociology: 10.10%
  2. social work and social policy & administration: 8.59%
  3. politics and international studies: 5.83%
  4. business: 4.83%
  5. philosophy: 3.53%
  6. anthropology: 2.83%
  7. history: 1.79%
  8. economics: 1.33%
A surprise? Looking at the precise journals, it can be seen that the high numbers for sociology and social work are driven by criminology publications. The overlap between law and politics also makes sense, eg, considering journals on human rights or European law & politics in which academics from both disciplines publish. Next, with respect to business, it likely matters that in some of the new universities law is part of the business school: these universities may have decided to include some of their lawyers in the RAE business school submission. An interesting contrast is that economics is at the bottom of this list. Given the (modest) rise of law and economics thinking in the UK, this may be a bit of a surprise. Yet, proper economics is fairly technical; thus, it is perhaps also plausible that legal scholars hardly publish in economics journals (and economists hardly in law ones).
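For anyone who wants to replicate this kind of count, here is a minimal sketch in Python. The file names, the column name and the choice of denominator are my assumptions: the RAE output datasets would first need to be exported to CSV, and since the text above does not spell out how the percentages were normalised, the sketch simply divides the size of the title intersection by the number of entries in the other discipline’s dataset.

```python
import csv

def journal_titles(path, column="journal_title"):
    """Read a CSV export of an RAE 2008 output dataset and return the set of
    (normalised) journal titles plus the number of entries. The path and the
    column name are assumptions about the export format, not the official one."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    titles = {row[column].strip().lower() for row in rows if row.get(column)}
    return titles, len(rows)

law_titles, n_law = journal_titles("rae2008_law.csv")

for discipline in ["sociology", "anthropology", "economics"]:   # etc.
    other_titles, n_other = journal_titles(f"rae2008_{discipline}.csv")
    overlap = law_titles & other_titles          # intersection of the journal-title sets
    share = 100 * len(overlap) / n_other         # normalised by the other discipline's size
    print(f"law vs {discipline:12s}: {len(overlap):4d} shared journals ({share:.2f}%)")
```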

Friday, 8 June 2012

Are old universities better? – discussing the new THE top 100 ‘under 50’ ranking

A few days ago the Times Higher published a new ranking of universities which are younger than 50 years (available here; also here and for the methodology see here). Should this be seen as a ranking of promising but still second-tier universities, thus implying that the older a university is the ‘better’ (ie higher ranked) it is?
An initial look at the new ranking may confirm such a view, since the highest-ranked ‘under 50’ university is only ranked 53rd in the list of all universities. But then, the ‘under 50’ ranking also contains precise information on the year each university was founded. Thus, I was wondering whether the overall score of the 100 universities in the ‘under 50’ ranking is at all related to the age of these universities.
The answer is ‘no’, with the correlation between overall score and year founded being close to zero (to be precise, -0.08). But then, as always, one may wonder how these rankings are constructed anyway (previous posts eg here and here). The THE ‘under 50’ ranking mixes objective data (eg, on citations) with subjective data (ie, reputation). Overall, reputation has a 20% weight, and one may expect that here, perhaps, age does matter. Unfortunately, the reputation data are not disclosed. But as an (imperfect) proxy one can look at the research sub-ranking, where reputation has a 40% weight.
It can be seen that age matters a bit more here (the correlation is now -0.16). Thus, the overall result is that age seems to play some role for reputation, but not for objective criteria of quality.
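The correlation itself is easy to compute once the ‘under 50’ table has been copied into two columns. Here is a minimal sketch in Python; the handful of rows are made-up stand-ins for the actual THE data, which with all 100 universities gave roughly -0.08 for the overall score and -0.16 for the research sub-ranking.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Toy stand-in for the 'under 50' table: (year founded, overall score) per university
year_founded  = [1964, 1971, 1966, 1986, 1991, 1969]
overall_score = [60.1, 55.3, 57.8, 48.2, 45.0, 52.4]

print("correlation(year founded, overall score) = %.2f"
      % pearson(year_founded, overall_score))
```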
Can this be generalised? Someone from the UK may be sceptical, since even a casual glance at the various rankings (even those not based on reputation) indicates a clear negative correlation: the ‘post-1992 universities’ are, with few exceptions, at the bottom of these rankings, and the universities established in the 1960s may perform fairly well but usually not as well as the oldest universities (‘Doxbridge’ and the London ones). Of course, correlation does not imply causation. In the UK, the aim of establishing new universities in the 1960s and 1990s was to provide university education for wider parts of the population. But the ‘under 50’ ranking shows that this has been different in other countries (eg, South Korea): there may already have been many ‘ok old universities’, but the explicit aim was to create new elite institutions. Thus, country-specific strategies seem to be more important than ‘age’.