SO WHERE SHOULD WE PUBLISH? Alan Baddeley

 

Lachmann and Rowlinson... suggest that the increasing use of bibliometric analysis based on impact factors and citation counts is corrupting the peer review process.

 

Lachmann, P., & Rowlinson, J. (1997) It's not where you publish that matters. Science and Public Affairs, Winter Issue, p. 8.

 

 

Unless referees are using citation counts in preparing their referee reports, or editors are using citation counts in weighing the referee reports, it is not at all clear how citation counting corrupts peer review.

 

I suspect there is a conflation here between two forms of evaluation: (1) the direct scholarly/scientific evaluation of the quality of a submitted manuscript (including feedback on how it needs to be improved in order to meet a given journal’s standards) and (2) the “evaluation” of published journal articles (usually in the context of institutions evaluating their authors for employment or promotion – or of funders evaluating both authors and their institutions for funding).

 

Citation counts are used in (2) but not (1). They are, however, correlated with (1), because journals with higher quality standards tend to have both higher rejection rates and higher citation counts.

 

Both these correlations stand to reason, as quality is a relative matter (just as “tallness” is). So, by definition, “high quality” tends to mean the high end of the bell curve.

 

By the same token, one would expect it to be the higher-quality work that tends to be used and cited more – although there is a complication here: used and cited by whom? Raw citation counts may be misleading, because the middle of the bell curve is so much fatter.

 

This is correctable, however, by citation analysis itself, by simple, recursive algorithms that weight the citation counts by the quality of the citing article and author. (This is co-citation and hubs/authorities analysis.)
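
By way of illustration only – a minimal sketch in Python of the generic, Kleinberg-style hubs/authorities recursion alluded to above (the toy citation graph at the end is invented, and this is not any particular scientometric engine's actual algorithm): an article's "authority" grows with the "hub" scores of the articles citing it, and an article's hub score grows with the authority of the articles it cites, iterated to a fixed point.

    def hubs_and_authorities(citations, iterations=50):
        """citations: dict mapping each citing article to the list of articles it cites."""
        articles = set(citations) | {c for cited in citations.values() for c in cited}
        hub = {a: 1.0 for a in articles}
        auth = {a: 1.0 for a in articles}
        for _ in range(iterations):
            # An article is authoritative if it is cited by good hubs...
            auth = {a: sum(hub[c] for c, cited in citations.items() if a in cited)
                    for a in articles}
            # ...and a good hub if it cites authoritative articles.
            hub = {a: sum(auth[c] for c in citations.get(a, ()))
                   for a in articles}
            # Normalise so the scores do not grow without bound.
            na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
            nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
            auth = {a: v / na for a, v in auth.items()}
            hub = {a: v / nh for a, v in hub.items()}
        return hub, auth

    # Invented toy citation graph, purely for illustration.
    toy = {"A": ["C", "D"], "B": ["C"], "C": ["D"], "E": ["C", "D"]}
    hub, auth = hubs_and_authorities(toy)
    print(sorted(auth.items(), key=lambda kv: -kv[1]))

The point of the recursion is simply that a citation from a heavily weighted source counts for more than one from a lightly weighted source, which is how raw counts can be corrected for "cited by whom".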

 

To a first approximation, however, the raw citation counts are more predictive than one might expect, with the lower-citation journals citing “up” vertically to the higher-citation journals rather than horizontally. This is why the smaller upper tail of the bell curve in terms of rejection rates is nevertheless also the upper tail in citation counts.

 

So, in summary: There is no evidence that citation counts corrupt peer review; rather, they probably reflect its standards accurately, to a first approximation.

 

They suggest that there is a growing preoccupation with where a paper is published, rather than what it says.

 

This may well be an empirical fact, but it is a fact that follows from the quality-levels and track-records of journals, not from citation counts. Citation counts simply correlate with these quality-levels; hence it is quite natural to use them as predictors.

 

This has nothing to do with peer review, except that it is one of the correlates of its outcome. It is unreasonable to expect that every post-hoc evaluation of a published paper must be based on yet another cycle of peer review. Peer review, done once, properly, should be enough. Moreover, peers are a scarce resource, and referee time-lag suggests that they are already an over-harvested resource. So correlates rather than recapitulations are what must be relied upon in second-order evaluations. Apart from that, it is the ongoing literature itself – in the form of peer commentaries, critiques and elaborations – that is meant to “self-correct” any errors or oversights of peer review.

 

Bibliometric measures tend to bias publication towards US journals, where the scientific community is largest, which in turn creates problems for those journals due to overload.

 

This may be true. There is also a bias toward publishing in English. But there is not much to be done about it: quality will reside in whatever journals (and languages) attract the submission-levels and enforce the quality-levels that sustain it. The standards of the high-quality journals are not just a function of volume but of selectivity. One has little control over the submission volume, but one would expect it to follow the quality gradient. (Though this varies with the field: in some, such as physics, authors pre-select more realistically, submitting to the journal most likely to be at the right quality level for the submission; in psychology and biology, there is a tendency to shoot first for the top, and then sample downward when rejected, till the right level is found.)

 

Lachmann and Rowlinson... deplore the tendency of the Research Assessment Exercise to encourage it.

 

The RAE, being a second-order evaluation rather than peer review, has no choice but to use the correlates of the first-order evaluation (the journal’s name, reputation, and impact factor) rather than to try to recapitulate peer review itself.

 

There are much better correlates for this than journal impact factors alone (e.g. direct article citation counts – Smith & Eysenck 2002), including the co-citation and hubs/authorities analysis just mentioned – but in all cases this is still just scientometric analysis, not a re-enactment of peer review.

 

Smith, A., & Eysenck, M. (2002) The correlation between RAE ratings and citation counts in psychology. June 2002. http://psyserver.pc.rhbnc.ac.uk/citations.pdf

 

one psychology department... encourage[s] publication in 'high status' journals, which, it was suggested, means American Psychological Association journals first, Psychonomic Society and other North American journals second, and non-North American journals third. In terms of the commonly used bibliometric measures, I suspect that this is broadly true

 

The geography is a historic fact. The strategy of trying to publish in high-quality journals is rational and desirable. The use of correlates as predictors is also rational, and methodologically sensible and practical. It also has no alternatives. (Recapitulating peer review is a non-starter: What other concrete proposals are there, for the RAE? And what should departments be counseling instead of publishing in high-impact journals?)

 

APA journals [are] somewhat conservative, and inclined to reject anything that does not convince all of the, often somewhat staid, referees. The greater the pressures to publish in such journals, the greater the conservatism is likely to become.

 

This is true in every field, but especially in psychology. The intrinsic conservatism of peer review is not having a deleterious effect on physics. Perhaps psychology has intrinsic quality problems, as a discipline.

 

And these days, with open online access to all articles growing, if a high-quality article is not accepted by a high-quality journal, its quality will be discovered anyway, and much more quickly, even though it appears in a lower-quality journal. This – plus open peer commentary and citation itself – is a new online-age strengthening of the self-corrective process of learned inquiry and publication.

 

The bibliometric measures in question tend to emphasize 'impact' (citation within the first two years) and more general citation rates. Such measures will inevitably tend to favour short-term factors such as concern with the currently fashionable, and a tendency to use conventional and hence unobjectionable measures.

 

This is all true, but the tyranny of superficial fashions and short-term trends is already there anyway (along with the conservatism of peer review). It is not 2-year citation counts that keep people doing superficial, fashionable things; they do it anyway. The citation counts just follow the followers.

 

But citation counts need not be 2-year journal impact factors! They can be tracked continuously, by article (and by author) – and before them, article download counts can be tracked too. The two are correlated, and downloads today can predict citations in 6 months to two years:

 

http://citebase.eprints.org/

http://citebase.eprints.org/analysis/correlation.php
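
And, purely by way of illustration (the figures below are invented, and this is not citebase's actual method), the kind of download-to-citation correlation and prediction in question is as simple as this Python sketch:

    # Hypothetical per-article figures: early downloads vs. citations ~2 years later.
    downloads = [120, 45, 300, 10, 75, 220, 60, 150]
    citations = [14, 3, 30, 1, 6, 19, 5, 12]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Correlation between early downloads and later citations.
    r = pearson(downloads, citations)

    # Simple least-squares line: predicted citations = a + b * downloads.
    n = len(downloads)
    mx, my = sum(downloads) / n, sum(citations) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(downloads, citations))
         / sum((x - mx) ** 2 for x in downloads))
    a = my - b * mx

    print(f"download/citation correlation r = {r:.2f}")
    print(f"predicted citations for 200 early downloads: {a + b * 200:.1f}")

Tracked continuously across a whole corpus, this gives an early, article-level (and author-level) impact signal, rather than a 2-year, journal-level one.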

 

At a theoretical level, pressures for the rapid and fashionable will tend to encourage the sort of simplistic 'oh yes it is!' – 'oh no it isn't!' controversy which has been all too common in psychology over the last 30 years

 

True, but not the fault of citation counts (and probably a particular liability of certain disciplines).

 

At an empirical level it is liable to encourage the 'experimental goldmine' based on a simple paradigm allowing endless manipulations. These typically involve countless variations on an established theme

 

Again, let’s not blame the endemic problems of certain disciplines on citation counts! Bibliometrics is just the messenger…

 

So what should we do; where should we publish?

 

Has anyone got a better suggestion than: The highest-quality journal whose peer-review standards the article manages to meet?

 

It is important to accept that there are no easy solutions. It is simply not possible for assessors to read and judge all the papers included, for example, in the RAE submissions, or for that matter, all the publications of people who apply for jobs.

 

Indeed. So what alternative is being proposed, then?

 

It almost certainly is the case that publishing in high impact journals does indicate a good level of competence and diligence.

 

Again, given the confirmation of these home-truths, just what is the message here?

 

The danger, however, is that we implicitly elevate competence above originality.

 

That is a danger of peer review, and of human enterprise in general. It has nothing to do with citation-counting.

 

In doing so, European scientists lose a major advantage, namely that we do not need to become entirely part of the North American scene. Just like North Americans, we tend to read our own journals and attend talks at our own local meetings.

 

The question of which journal to submit to is a trivial one if one’s work is important. In the open-access age, all journals will be equally accessible. But what will not change is the correlation between the journal’s peer-review standards and the probability that the work will be used and cited – for that outcome ultimately depends on the quality of the work itself. The nationality of a journal will be of even less consequence than it is now.

 

Consequently it is much more acceptable for European scientists to publish in their own journals than it would be for their North American equivalents to publish here.

 

Yes, yes, but who cares, in the online, open-access age? A journal-name is just a quality-control label.

 

because the pressure of submissions is not so great, there is more scope for originality 

 

There are some advantages in submitting to a journal with a lower submission rate – but not if its quality-standards are lower too.

 

Admittedly, the papers may be less likely to be widely read in North America.

 

This will no longer be the case in the open-access age – provided quality-standards are at the same level.

 

However, any important new work is likely to generate subsequent, less controversial work which can then be published in the more conservative North American journals.

 

This is again conflating the problem of the general conservatism of peer review, the specific pre-eminence of North American journals, and the peculiar problems of psychology as a discipline. And none of it has anything to do with citation analysis, pro or contra.

 

it is unrealistic to expect the bibliometric pressures to change

 

Not only unrealistic, but unreasonable, and counterproductive.

 

 

as European scientists, we actually have an advantage in having a series of good journals that will accept novel ideas and publish them in a shorter time.

 

If so, use them. As long as they have sufficiently high quality standards, the online open-access age will be a leveler, making their articles as visible and accessible as those in all other journals.

 

As evaluators of science, we need to remind ourselves that estimates based on where something is published are at best guides to competence rather than originality.

 

So is peer review itself. So what else is new?

 

we should value our journals and try to ensure that they continue to be able to compete in terms of originality and quality with those of our more overloaded North American friends.

 

Such geographic considerations will prove to be not only parochial, but moot, in the open-access age.