Re: UK Research Assessment Exercise (RAE) review

From: David Goodman <dgoodman_at_PHOENIX.PRINCETON.EDU>
Date: Wed, 27 Nov 2002 15:32:38 -0500

The relatively trivial thing to check is the extent to which it predicts
short-term and long-term use,
as measured by standard techniques. (There is an obvious circularity
problem here: the very fact of inclusion in F1000 will increase use.)

The practical value of F1000 is that if one trusts the reviewer, one can
use that person's guidance. The basic problem is the same
as with book reviews: to
what extent does one trust that reviewer? This is easy to decide for a
single individual: does the reviewer think important the same things I
do? It is less applicable to a professional field as a whole,
and, further, it is very difficult to state in
objective and quantifiable terms.

Actually, based on most F1000 reviews I've seen, the reviewers tend to
emphasize immediate interest rather than long-term value, and to do so
deliberately. This may well be a good policy: they are reviewing what one
should read now, and in that sense F1000 offers another dimension. The only
easy way I know of to check its validity for this purpose is
inter-reviewer consistency. Have you made any measurements of correlations
between your reviewers? The more difficult way is consistency with the
judgements of the readers. This has traditionally been measured in the
publishing field by the number of subscribers, purchasers, etc. Do you have
any usage figures? In particular, do those people who try it keep using it?

In 2002, Jan Velterop wrote:

> David,
> I'm not sure that 'accuracy' is a relevant notion in relation to Faculty of
> 1000. The faculty-members offer their opinions on papers they deem of
> interest. I quote from a response I sent earlier to one of Stevan Harnad's
> contributions to this list: The point of Faculty of 1000 is that an open,
> secondary review of published literature by acknowledged leaders in the
> field, signed by the reviewer, is seen by increasing numbers of researchers
> (measured by the fast-growing usage figures of F1000) as a very meaningful
> addition to quantitative data and a way to sort and rank articles in order
> of importance. Of course one can subsequently quantify such qualitative
> information. But what a known and acknowledged authority thinks of an
> article is to many more interesting than what anonymous peer-reviewers
> think.
> What would you have in mind with regard to accuracy in this regard?
> Jan Velterop
> > -----Original Message-----
> > From: David Goodman [mailto:dgoodman_at_PRINCETON.EDU]
> > Sent: 26 November 2002 19:36
> > Subject: Re: UK Research Assessment Exercise (RAE) review
> >
> >
> > Jan, do you have any data demonstrating the accuracy of the
> > evaluations in faculty of 1000?
> >
> > Dr. David Goodman
> > Princeton University Library
> > and
> > Palmer School of Library & Information Science, Long Island University
> >

Dr. David Goodman
Biological Sciences Bibliographer
Princeton University Library
Received on Wed Nov 27 2002 - 20:32:38 GMT