Thursday, 23 June 2011

Do Journal Rankings work? (1) – A comment on expert rankings

I have blogged about journal rankings a couple of times (e.g., here, here and here) but I have not really been explicit about my own view; thus, this post. The main distinction is between rankings by experts (such as the Australian Research Council ranking) and market-based solutions (e.g. a citation-based ranking, such as the Washington & Lee one) [a similar distinction, in an article not limited to law, here]. This post comments on expert rankings; a comment on market-based rankings will follow later.
A frequent criticism of expert rankings is that it is impossible to determine whether a particular journal is really "better" than another one. This point can, however, be challenged as follows: assume that all journals are of equal quality, that an expert committee (e.g., a research council) randomly picks 10% of the journals and calls them A* journals, and that financial rewards are provided for publications in these journals (by the research council, or by universities, e.g. through promotions). Then, what happens? These A* journals get more submissions; thus, they can be more selective, and the pieces published in A* journals will be better than those in other journals. As a result, the ranking itself would create a more competitive market for publications, and useful signals, which may be regarded as positive.
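This thought experiment is easy to simulate. The following is only a minimal sketch, and all parameters (number of journals and slots, the 80% preference for A* journals, the quality distribution) are my own illustrative assumptions rather than empirical figures:

```python
import random

random.seed(1)

N_JOURNALS = 100        # assumed: 100 journals, all of identical "true" quality
N_PAPERS = 2000         # assumed: papers submitted per round, quality drawn from the same distribution
SLOTS = 10              # assumed: each journal can publish 10 pieces

journals = list(range(N_JOURNALS))
a_star = set(random.sample(journals, k=N_JOURNALS // 10))   # committee randomly labels 10% as A*

# Because of the rewards attached to A* publications, authors are assumed to
# send 80% of their papers to a (random) A* journal and the rest anywhere.
submissions = {j: [] for j in journals}
for quality in (random.gauss(0, 1) for _ in range(N_PAPERS)):
    target = random.choice(list(a_star)) if random.random() < 0.8 else random.choice(journals)
    submissions[target].append(quality)

# Each journal publishes its best submissions, up to its number of slots.
published = {j: sorted(q, reverse=True)[:SLOTS] for j, q in submissions.items()}

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

print("mean quality in A* journals:   ", round(mean([q for j in a_star for q in published[j]]), 2))
print("mean quality in other journals:", round(mean([q for j in journals if j not in a_star for q in published[j]]), 2))
```

Even though all journals start out identical, the randomly labelled A* journals end up publishing the better pieces simply because they receive more submissions and can be more selective.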
But, in the real world, there are two problems with this model:

  • First, given human nature, it is not unrealistic to assume that these A* journals do not only decide on merit but that cronyism plays a role as well. Sure, if you only accepted pieces by friends and family, the number of submissions would go down and the expert committee would downgrade you. But in practice things are often more mixed: e.g., you may give half of the slots to your cronies and let the other half remain competitive. There may also be network effects in place, since academics from the same few institutions may be the editors of the A* journals, their authors, and the members of the expert committee.
  • Second, one needs to consider that there are different types of journals; in particular, we can distinguish between general and specialised journals, and between mainstream and non-mainstream journals. If the expert committee decides on the basis of majority voting, only mainstream general journals will become A* journals. That may be harmful to innovation, since these journals may regard advanced research on particular issues, or new approaches, as "too exotic". Thus, to do it properly, the expert committee would need to apply a quota system, giving A*s to a certain proportion of specialist and non-mainstream journals as well. In practice, however, that may be quite unlikely (e.g., imagine a committee with ten members where only one of them supports a new and controversial method, whereas the others think that it is just nonsense).

So, as a result, I have my doubts about expert-based journal rankings. I also feel that the two problems outlined here are apparent in the ARC ranking (in the law list, almost all of the A* journals are mainstream general journals, often affiliated with a small number of institutions).

Friday, 17 June 2011

How practical (or impractical) are bibliometric measures in law? A self-test of Harzing’s PoP

It’s a matter of debate whether scholarship can or should be assessed by way of citation counts or other quantitative measures (for a recent contribution see van Gestel & Vranken). In any case, I would think that any academic should be interested in how the scientific community responds to his or her research. A problem in law is, of course, how to find out where your research is cited, since not all law journals and books are easily available electronically.
Thus, occasionally, I search the relevant databases (Westlaw, Beck-Online, Google Scholar, Google Books etc.) for references to my research. To be sure, this is quite a burdensome task. I was therefore interested to learn that Harzing’s software “Publish or Perish”, freely available here (!), provides an easy way of showing your citation counts, based on Google Scholar. In this post I compare my own “hand count” with the Harzing count, always excluding self-citations. I have searched my English-language articles, omitting the very recent and very short ones; in total, this led to 32 pieces. They compare as follows, giving first the Harzing count and then, in brackets, my “hand count”:
58 (45), 37 (35), 31 (27), 30 (15), 25 (19), 21 (19), 20 (20), 15 (13), 14 (13), 12 (6), 10 (13), 8 (12), 11 (2), 8 (9), 4 (2), 7 (24), 7 (6), 2 (13), 1 (3), 4 (4), 4 (5), 2 (6), 5 (3), 4 (4), 1 (4), 4 (6), 3 (4), 3 (10), 2 (0), 0 (1), 1 (2), 0 (0) = total 354 Harzing (345 “hand count”) and correlation between Harzing and “hand count”: 0.89
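For anyone who wants to reproduce the totals and the correlation coefficient, a minimal sketch of the calculation (simply re-using the numbers above) could look like this:

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

harzing = [58, 37, 31, 30, 25, 21, 20, 15, 14, 12, 10, 8, 11, 8, 4, 7,
           7, 2, 1, 4, 4, 2, 5, 4, 1, 4, 3, 3, 2, 0, 1, 0]
hand    = [45, 35, 27, 15, 19, 19, 20, 13, 13, 6, 13, 12, 2, 9, 2, 24,
           6, 13, 3, 4, 5, 6, 3, 4, 4, 6, 4, 10, 0, 1, 2, 0]

print("total Harzing:      ", sum(harzing))                           # 354
print("total hand count:   ", sum(hand))                              # 345
print("Pearson correlation:", round(correlation(harzing, hand), 2))   # ~0.89
```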
A surprise? Yes, I would have expected the Harzing count to be lower, not higher, than my own, since my count did not rely on Google Scholar alone. Thus, I had a look at the Harzing result list to see whether the Harzing numbers are too high or mine too low. Though there are a few sources which I have indeed missed, my hand count seems to me more reliable, because the Harzing list double-counts some of my articles (where they have been cited by someone in a working paper and then again in the identical final journal article) and there may also be a few more false positives. Further, it is interesting to compare the individual figures, because there are also a few instances where the hand count is actually higher (where the bracketed number above exceeds the Harzing one). These are mainly articles and books in traditional law journals which have been cited in books and articles not included in Google Scholar. Finally, my top three articles are the first Leximetric one, the legal origins one, and the one on numerical comparative law (available here, here and here).
Now, overall, what shall we think of Harzing? Actually, its total number of cites seems to be a decent estimate, and the correlation coefficient is pretty high as well (see above). So, despite my scepticism, perhaps Harzing may even be somewhat practical in legal research.

Monday, 13 June 2011

True and perceived quality of universities: a model

I have been thinking about this for a while. First the figure:
[Figure: perceived quality of a top, a good and an average university (y-axis), plotted against the observer's knowledge of the university landscape (x-axis, from close to nothing up to the points x1, x2, x3 and x4 discussed below)]
What’s the background? Having been at different universities, I have often wondered what people think about my current, or past, affiliations. This depends not only on the university in question but also on how much someone knows about universities in this country. If you talk to someone, say your landlord, who knows close to nothing about universities, it just does not matter; for him, the name of a top university is as impressive as that of an average one; in the figure above, see the three straight lines at low values of x. Conversely, someone with close to perfect knowledge will correctly assess whether you are at a top, good or average university; see the figure above at x4.

Most interesting, or most “dangerous”, however, are the views of persons who have partial knowledge of universities. For instance, someone who knows the names of just two UK universities (guess which ones) may think that everyone at these two universities is really great, and everyone else is a bit of a loser: see the figure above at x1. Then, someone who knows the names of the top ten universities may over-appreciate anyone at these ten universities but think that everyone else hasn’t made it yet: see the figure above at x2. And only if we assume fairly good knowledge do average universities benefit as well: see the figure above at x3.
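To make the intuition concrete, here is a deliberately stylised sketch of the model; the specific numbers (0.5 for an ignorant observer, 1.0 for a recognised name, 0.2 for an unrecognised one) are purely illustrative assumptions of mine, not estimates:

```python
def perceived_quality(true_rank: int, known_top_k: int, n_universities: int = 100) -> float:
    """Perceived quality of a university (higher is better) as seen by an
    observer who can only name the `known_top_k` best universities."""
    if known_top_k >= n_universities:
        # Close to perfect knowledge (x4): assessment tracks the true ranking.
        return 1 - (true_rank - 1) / (n_universities - 1)
    if known_top_k == 0:
        # Knows close to nothing (the landlord): every university looks the same.
        return 0.5
    if true_rank <= known_top_k:
        # Partial knowledge (x1, x2): recognised names are over-appreciated ...
        return 1.0
    # ... and everyone else "hasn't made it yet".
    return 0.2

for k in (0, 2, 10, 100):   # roughly: the landlord, x1, x2 and x4
    print(f"knows top {k:3d}:", [round(perceived_quality(r, k), 2) for r in (1, 5, 30, 80)])
```

In this toy version, x3 would correspond to a value of known_top_k large enough for good and average universities to be recognised as well.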
So, is this just about vanity, i.e., what landlords, or people you meet at dinner parties etc., think about you? Of course not: student and staff recruitment crucially depends on a university’s image. Thus, a top university may be content – and actually it may benefit – if people have just limited knowledge about the university landscape, whereas an average university should be keener on marketing its quality. Moreover, universities should consider where and how they want to market their degrees since, naturally, persons who live close to a particular university may have fairly good knowledge about it (i.e. they may be at x3), whereas potential applicants from the other side of the world may be fairly ignorant (i.e. they may be at x1).
PS: I did not assume perfect knowledge, but I did assume rationality. Thus, I do not consider that an academic at a top university with good knowledge (e.g., at x3 or x4) may well be arrogant in claiming that every other institution is inferior (i.e., despite his knowledge, he actually holds the view of x1).

Monday, 6 June 2011

The New College of the Humanities – what’s the strategy?

The announcement that a new private university college is to be established has attracted a lot of attention. See here for the website of this New College of the Humanities and, e.g., here and here for the discussions in the Times Higher and the Guardian.
I blogged about private universities a while ago. To quote from my post on “Private universities as gap-fillers” (here):
(…) I was wondering why some private universities have laxer and others have higher admission and examination standards than public universities.
The best starting point may be that private universities typically charge higher fees than public ones (though there are exceptions, e.g., private universities mainly funded by altruistic donors). Why are students willing to pay higher fees? Private universities themselves would usually say that they provide a 'better product', such as more student-oriented teaching, better infrastructure etc. To some extent this may be the case. However, this is not the entire story, given that many academics (myself included) teach at both private and public universities - and, usually, they would do it in a similar way.
So, there has to be a second reason. Here, we get to the distinction between different types of private universities. In some countries public universities have relatively tough admission and examination standards. Thus, private universities fill the gap for applicants who fall below these standards. In other countries, however, public universities accept almost everyone. Here, private universities typically have tougher standards; thus, they deliberately target students who benefit from the university's elite branding and are therefore willing to pay higher fees. (…)
I would think that the New College of the Humanities would fall under the first category mentioned in the previous paragraph.
Further comment: I’m slightly puzzled by the name. The New College will offer courses in Law, Economics, History, English Literature and Philosophy. Usually, however, economics and possibly law would be regarded as belonging to the social sciences, not the humanities.