Re: UK Research Assessment Exercise (RAE) review

From: David Goodman <dgoodman_at_PHOENIX.PRINCETON.EDU>
Date: Mon, 25 Nov 2002 12:50:19 -0500

Yes, it clearly says that the impact factor (i.f.) of the journal should not
be counted. But it also says that many panels count it anyway.

This highlights the difficulty in using impact factors correctly.

As a senior administrator at a university I know of put it (a few years
back; not an exact quote): "Of course promotion committees read the
papers and make their own judgments about the quality, rather than just
seeing where a paper was published. However, in doubtful cases, they have
sometimes been known (smile)..."
I do not know how the people on the UK panels actually operate, but human
experience suggests that researchers who rely on the formally stated
criteria do so at their own risk. (It should also be possible to measure
this, both prospectively and retrospectively.)

There are three levels of judging quality:

a/ judging the quality of a particular piece of work: this is the job of
the referees, but a specialist in the field should be able to confirm the
accuracy of the referee's judgment.

b/ judging the quality of a particular researcher's work. This requires
judging the cumulative effect and trend of the body of papers, and an
extrapolation to what is likely to be produced in the future. I am not
aware of any studies of this, but I have not looked for them.
Anecdotally, we all know of people who have been rejected for tenure at a
particular university who have gone on to do brilliant work elsewhere.
This reflects the fallibility (and biases) of human judgment. It should
be possible to measure the frequency of this with the control being those
who do get tenure at the same place. It should also be possible to test
possible objective measures to see what they would have predicted.

One complication is that an important factor in the true quality of a
researcher is also the success of that person's students and postdocs.
This is measurable too, but it requires taking account of a longer time
scale, retrospectively or prospectively.

c/ judging the quality of a particular department's work. This is just the
sum of its researchers' work. However, it is even more affected by the
later careers of the department's students.
(One can similarly judge a university, a nation, etc.)


A different matter is judging the value of a field of work: i.e., whether
it is worthwhile to give money to this specialty, or to a researcher,
productive or not, working in it. Short-range impact factors have no role
here, unless one is concerned only with increasing short-term
productivity. This is the kind of judgment at which the history of science
shows people do particularly poorly. It is not merely theoretical--the
award of grants and so on is often based on such judgments. They rest on
politics and prejudice more than on science, and I know of no relevant
measurement--except the still short-term possibility Stevan mentioned, of
looking for fields that are just beginning to affect other fields.

As I understand it, to the extent that people use the i.f. in this way,
they are making the implicit judgment that fields such as classical
biology are not worth funding, and consequently that departments devoted
to them are not worth funding. I consider this a political, not a
scientific, use, and totally invalid.

On Mon, 25 Nov 2002, Stevan Harnad wrote:

> On Mon, 25 Nov 2002, Jan Velterop wrote:
> > "Where an article is published is an irrelevant issue. A top
> > quality piece of work, in a freely available medium, should get
> > top marks. The issue is really that many assessment panels use
> > the medium of publication, and in particular the difficulty of
> > getting accepted after peer review, as a proxy for quality. But
> > that absolutely does not mean that an academic who chooses to
> > publish his work in an unorthodox medium should be marked down.
> > At worst it should mean that the panel will have to take rather
> > more care in assessing it."

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
Received on Mon Nov 25 2002 - 17:50:19 GMT
