Re: UK Research Assessment Exercise (RAE) review

From: Jan Velterop <>
Date: Mon, 25 Nov 2002 21:51:27 +0000

>bb> "Where an article is published is an irrelevant issue. A top
>bb> quality piece of work, in a freely available medium, should get
>bb> top marks. The issue is really that many assessment panels use
>bb> the medium of publication, and in particular the difficulty of
>bb> getting accepted after peer review, as a proxy for quality. But
>bb> that absolutely does not mean that an academic who chooses to
>bb> publish his work in an unorthodox medium should be marked down.
>bb> At worst it should mean that the panel will have to take rather
>bb> more care in assessing it."

On Monday, November 25, 2002, at 03:04 PM, Stevan Harnad wrote:

> A rather complicated statement, but meaning, I guess, that the RAE is
> assessing quality, and does not give greater weight to paper journal
> publications than to online journal publications.

Funny how the same words can be read in different ways. When Bahram
says 'medium', I read it as 'journal', 'channel of communication': a
'carrier-neutral' notion, so to speak, making no distinction at all
between 'print' and 'electronic'. His concern is that the journal, or
more particularly the journal's perceived acceptance policy after peer
review, is used as a proxy for quality. This acceptance policy can be
as strict in on-line journals as in print ones, so there is no reason
for him to equate strict policies with those employed by print journals.

> But I would be more skeptical about the implication that it is the RAE
> assessors who review the quality of the submissions, rather than the
> peer-reviewers of the journals in which they were published. Some
> spot-checking there might occasionally be, but the lion's share of the
> assessment burden is borne by the journals' quality levels and impact
> factors, not the direct review of the papers by the RAE panel!
> (So the *quality* of the journal still matters: it is the *medium* of
> the journal -- on-paper or online -- that is rightly discounted by
> the RAE as irrelevant.)

The quality of journals matters, but quality is not the same as impact
factor. Possibly, journals with the highest impact factors can be seen to
be -- in general -- of higher quality than those with low impact factors,
but, as one often sees, rankings based on differences of only a few
percentage points (e.g. IF 2.35 vs IF 2.27) are utterly meaningless.
It is a known phenomenon that impact factors are highly vulnerable
to manipulation and that in just about any given journal a minority
of articles is commonly responsible for the bulk of the citations on
which the impact factor is based. An American medical journal will
almost always have a very much higher impact factor than its European
equivalent in quality, simply because in the medical fields the culture
'dictates' that American authors publish mainly in American journals
and do not cite their European colleagues, whereas European authors
publish as much in American as in European journals and usually cite
all relevant literature, be it American or European. Quality in this
example is not easily measurable in terms of impact factors.
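The arithmetic behind this point is simple enough to sketch. The
conventional two-year impact factor is the citations a journal receives
in one year to its articles from the two preceding years, divided by
the number of citable items published in those two years. The journal
and citation counts below are invented purely for illustration, to show
how a handful of heavily cited articles can carry an entire journal's
impact factor:

```python
def impact_factor(citations_in_year, citable_items):
    """Two-year impact factor: citations received in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_in_year / citable_items

# Hypothetical journal: 100 articles over two years, with citations
# skewed so that five articles account for most of the counts.
per_article_citations = [40, 35, 30, 25, 20] + [1] * 95

total = sum(per_article_citations)            # 245 citations in all
iff = impact_factor(total, len(per_article_citations))
top5_share = sum(per_article_citations[:5]) / total

print(f"impact factor: {iff:.2f}")            # 2.45
print(f"share from top 5 articles: {top5_share:.0%}")  # 61%
```

On these made-up numbers, 5% of the articles generate over 60% of the
citations, yet the resulting 2.45 is reported as a property of every
article in the journal.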

> (Hence the suggestion that a "top-quality" work risks nothing in being
> submitted to an "unorthodox medium" -- apart from reiterating that
> the medium of the peer-reviewed journal, whether on-line or on-paper,
> is immaterial -- should certainly not be interpreted by authors as RAE
> license to bypass peer review, and trust that the RAE panel will review
> all (or most, or even more than the tiniest proportion of submissions
> for spot-checking) directly! Not only would that be prohibitively
> expensive and time-consuming, but it would be an utter waste, given
> that peer review has already performed that chore once.)

However, if 'unorthodox medium' means 'new journal with an unorthodox
publishing model' (after all, since most journals have an on-line edition
nowadays, being electronic by itself would hardly have been described by
Bahram as 'unorthodox'), then authors of top-quality work are perceived to
take a risk by publishing in them, for these unorthodox new journals will
not have an impact factor yet. This is not to say that articles in the new
open access journals are not cited as often as in conventional journals
-- on the contrary: we have strong indications at BioMed Central that they
are actually cited a great deal more often than similar articles published
conventionally. The system of impact factors, however, is stacked against
new journals and has a considerable bias toward entrenched journals
and their toll-gate models. Fortunately, this is only an irritating
but temporary problem, as the rates at which articles published in BMC
open access journals are cited will ensure high impact factors once
the Impact Factory deems the time ripe to calculate them.

>jv> HEFCE clearly recognises the flaws of the RAE methodology used
>jv> hitherto, which is the first step towards a more satisfactory
>jv> assessment system. What is not clear to me is whether your
>jv> suggested reform will indeed save time and money. It seems
>jv> to me that just adding Impact Factors of articles is indeed the shortcut
>jv> (proxy for quality) that Bahram refers to, and that anything else will
>jv> take more effort. I don't pretend to have any contribution to make
>jv> to that discussion on efficiency of the assessment methodology, though.
> I couldn't quite follow this. Right now, most of the variance in the
> RAE rankings is predictable from the journal impact factors of the
> submitted papers. That, in exchange for each university department's
> preparing a monstrously large portfolio at great time and expense
> (including photocopies of each paper!).

I agree that the preparation of large portfolios is a waste of time and
expense if all that happens is straightforwardly adding impact factors
of the journals in which the papers have been published. So it is not
so much that RAE rankings are 'predictable' from the impact factors;
they are *based on* the impact factors.

> Since I seriously doubt that Bahram meant replacing impact ranking by
> direct re-review of all the papers by RAE assessors, I am not quite
> sure what you think he had in mind! (You say "just adding Impact Factors
> of articles is indeed the shortcut" but adding them to what, how?)

Adding as in 'adding up', Stevan, 'tallying'.

> At Southampton we are harvesting the RAE 2001 returns
> into a demo -- RAEprints -- to give a taste
> of what having a global national open-access research archive would
> be like, and what possibilities it would open up for research access,
> impact, and assessment. (For a preview, see: )

We agree that open access could help, even with research impact.
