Re: Future UK RAEs to be Metrics-Based

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Mon, 27 Mar 2006 12:33:30 +0100

On Mon, 27 Mar 2006, [identity deleted] wrote:

> Conjecture 1: Any conceivable metric would distort behaviour in
> undesirable directions.

This is equally true of every evaluative/reward metric, including course
marking, IQ test results, peer review, and indeed anything but totally
unpredictable subjective whims: Any foreknowledge of the metric will
result in explicit efforts to maximise the metric itself, rather than
what the metric is meant to measure. Only a metric that also has
what psychometricians call "face validity" (i.e., it is not only
a correlate/predictor of X, but it is X itself) is immune to this:
Measures of, say, height, are not *predictive* of height: They *are*
height. Course marks, even in math problems, are not face-valid measures
of what has been learnt; they are merely correlates/predictors of it.

I doubt there are any face-valid research metrics, the Nobel prize not
excepted.

The degree to which research, once done, is also used and cited does not
have face validity, and it can certainly be manipulated and distorted,
but it is not without some predictive power. And the idea is that if,
instead of one metric (citation count), we have a multiple regression
equation with many weighted metrics (citations, downloads, hub/authority
scores, co-citation fan-in and fan-out, download/citation
latency/longevity, co-text, latent semantic analysis, download-citation
correlations, endogamy/exogamy scores [self-citation, closed citation
circles, out-citing], grants, PhD counts, etc. etc.), it becomes next to
impossible to manipulate them all, and increasingly easy to detect and
expose anomalies, naming and shaming those who attempt the manipulation
(e.g., high downloads from the same IP address, high endogamy counts not
balanced by the usual bona fide endogamy profile, citation bloats without
a preceding rise in downloads and vice versa, citation bloats without a
commensurate co-citation profile, etc. etc.).
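
To make that concrete, here is a minimal sketch (in Python, with invented
metric names, dummy data, and a purely hypothetical calibration criterion
standing in for whatever validation measure is eventually agreed) of how
such a weighted multi-metric equation might be fitted and applied:

    # Illustrative only: fit per-discipline weights for a battery of metrics
    # against a hypothetical criterion (e.g., panel rankings), then combine
    # them into a single composite score per researcher.
    import numpy as np

    metrics = ["citations", "downloads", "hub_score", "authority_score",
               "cocitation_fan_in", "download_citation_latency",
               "endogamy_score", "grants", "phd_counts"]

    X = np.random.rand(200, len(metrics))  # 200 researchers x 9 metrics (dummy)
    y = np.random.rand(200)                # hypothetical criterion scores

    weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares

    def composite_score(metric_values):
        """Weighted sum of one researcher's metric values."""
        return float(metric_values @ weights)

    print(dict(zip(metrics, np.round(weights, 3))))

The point is not this particular fitting method but that the weights are
explicit, adjustable by discipline, and open to validation and revision.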

Not all metrics are directly manipulable in concert, and they act as
controls on one another, especially in an open, OA digital database.
Like other forms of cheating (plagiarism, false priority claims),
attempts to game the metrics are more readily detected and exposed there.
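
Purely by way of illustration (the field names and thresholds below are
invented, not a proposal), this is the kind of cross-check among metrics
that an open database makes trivial to run, flagging anomalies for human
scrutiny rather than "convicting" anyone automatically:

    # Illustrative cross-checks: one metric out of line with the others
    # merely raises a flag for closer inspection.
    def anomaly_flags(paper):
        flags = []
        if paper["citations"] > 50 and paper["downloads"] < 10:
            flags.append("citation bloat without a preceding download rise")
        if paper["downloads"] > 1000 and paper["distinct_ips"] < 5:
            flags.append("high downloads from the same few IP addresses")
        if paper["self_citations"] / max(paper["citations"], 1) > 0.5:
            flags.append("endogamy out of line with the usual profile")
        return flags

    print(anomaly_flags({"citations": 80, "downloads": 3,
                         "distinct_ips": 2, "self_citations": 60}))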

> Conjecture 2: The proof will come too late.

On the contrary. There will be feedback cycles, as with viruses and spam,
where a new abuse temporarily makes some headway, and then is countered
and exposed. But unlike with anonymous viruses and spam, the abusers
will be answerable if caught, which will likewise be a deterrent.

The OA world is a very different one from the one we have been
accustomed to.

More replies:

On Mon, 27 Mar 2006, [identity deleted] wrote:

> Q: Which papers get cited most?
> A: Review papers (as well as other categories like seminal new work).
> Might a beneficial effect be that we get back to people carefully
> comparing approaches to a problem, rather than just rushing out
> "here's another way to do it" papers? And to some sorting out of
> which ideas are best? Scholarship might even come back into fashion.

I couldn't quite follow that, but hub/authority metrics can pick out
review papers, as can depth-of-citation and co-citation metrics, and
fan-in/fan-out. These will all be part of the multiple regression
equation, with adjustable weights, customised by discipline and
revisable based on feedback from abuse cycles.
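
For anyone unfamiliar with hub/authority scores, here is a minimal sketch
of the underlying iteration (Kleinberg's HITS algorithm) on a toy,
invented citation matrix; review papers, which cite many authorities,
tend to surface as strong hubs:

    # Toy HITS iteration: A[i][j] = 1 if paper i cites paper j.
    import numpy as np

    A = np.array([[0, 1, 1, 1],   # paper 0 is review-like: it cites many others
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

    hubs = np.ones(4)
    authorities = np.ones(4)
    for _ in range(50):                       # iterate until roughly stable
        authorities = A.T @ hubs              # cited by good hubs -> authority
        hubs = A @ authorities                # cites good authorities -> hub
        authorities /= np.linalg.norm(authorities)
        hubs /= np.linalg.norm(hubs)

    print("hub scores:      ", np.round(hubs, 2))
    print("authority scores:", np.round(authorities, 2))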

> On another note, the Guardian reported a few days ago that the
> Chancellor said we need not even have RAE 2008 if the Universities
> can agree on something better to replace it. If true, might it not be
> a good moment for us all to be pushing our VCs to try every way
> possible to come to this agreement, rather than us all wasting our
> times on RAE bean counting? Of course, it could just be journalistic
> misreporting...

I for one think retaining the dual funding system (RCUK/RAE) is a good
thing *if* it can be reduced to rational, non-intrusive,
non-time-wasting bean-counting of the multiple regression kind just
sketched. The alternative is just to plough the erstwhile RAE
top-slicing back into the RCUK grants, which is merely a Matthew Effect
(the already well-funded getting still more funding), not an improvement.

Stevan Harnad
American Scientist Open Access Forum
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ UNITED KINGDOM
http://www.ecs.soton.ac.uk/~harnad/