Re: "Academics strike back at spurious rankings" (Nature, 31 May)

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Sun, 3 Jun 2007 14:20:34 +0100

On Sun, 3 Jun 2007, Loet Leydesdorff wrote:

> OK. Let's assume that we need a structural equation model in which journals
> are one of the predictive variables. Since one wishes (in the Nature
> article) to compare Oxford and Cambridge with Lausanne and Leiden, nation
> should be another independent variable. You also wish to take expert
> judgement (peer review) as a predictor?
>
> But what would be the dependent (predicted) variable?

In the validation phase of developing the metric equation, one of the
external criteria to use is human rankings. That is what we will be
doing in our analyses of the UK RAE 2008 metric rankings and their
relation to the parallel panel-review rankings.

But that is not really "peer review." Peer review is done by journals,
and its outcome is acceptance or non-acceptance at that journal's level
in the journal quality (hence peer-review) hierarchy.

Other ways to validate metrics of course include cross-validating them
against other (validated) metrics and criteria.

But the objective is to develop weighted sets of metrics that have been
validated and can then provide norms and benchmarks, as well as serve
as autonomous predictors in their own right.
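
To make that validation step concrete, here is a minimal sketch (in
Python, with invented numbers: none of the metric names, values, or
weights below come from the RAE data) of the shape of the exercise:
fit weights for a small battery of metrics against panel-review
scores, then check how closely the resulting weighted ranking tracks
the panel ranking.

    import numpy as np
    from scipy.stats import spearmanr

    # Rows = research groups; columns = candidate metrics (for example
    # citations per paper, downloads, h-index) -- all values invented.
    metrics = np.array([
        [12.0, 340.0, 18.0],
        [ 9.5, 210.0, 14.0],
        [15.2, 500.0, 22.0],
        [ 4.1,  90.0,  7.0],
        [ 7.8, 150.0, 11.0],
    ])

    # External criterion: panel-review scores for the same five groups
    # (higher = better), again invented for the sketch.
    panel_scores = np.array([3.4, 2.9, 4.1, 1.6, 2.5])

    # Standardise each metric so the fitted weights are comparable.
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

    # Least-squares fit of the weights in the "metric equation".
    weights, *_ = np.linalg.lstsq(z, panel_scores - panel_scores.mean(),
                                  rcond=None)
    predicted = z @ weights

    # Validation: how well does the weighted metric ranking agree with
    # the panel ranking?
    rho, _ = spearmanr(predicted, panel_scores)
    print("fitted weights:", weights.round(2))
    print("rank correlation with panel ranking: rho = %.2f" % rho)

In practice one would use held-out data, many more groups and metrics,
and field-specific models; the point is only the shape of the
procedure: metrics in, external criterion as the target, weights out.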

Stevan Harnad

> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS_at_listserv.utk.edu] On Behalf Of Stevan Harnad
> > Sent: Sunday, June 03, 2007 1:57 PM
> > To: SIGMETRICS_at_listserv.utk.edu
> > Subject: Re: [SIGMETRICS] "Academics strike back at spurious
> > rankings" (Nature, 31 May)
> >
> > On Sun, 3 Jun 2007, Loet Leydesdorff wrote:
> >
> > > > "All current university rankings are flawed to some
> > extent; most,
> > > > fundamentally,"
> > >
> > > The problem is that institutions are not the right unit of analysis
> > > for the bibliometric comparison because citation and publication
> > > practices vary among disciplines and specialties. Universities are
> > > mixed bags.
> >
> > Yes and no. It is correct that the right unit of analysis is the
> > field or even subfield of the research being compared. But it is also
> > true that in comparing universities one is also comparing their field
> > and subfield coverage.
> >
> > The general way to approach this problem is with a rich and diverse
> > set of predictor metrics, in a joint multiple regression equation
> > that can adjust the weightings of each depending on the field, and on
> > the use to which the spectrum of metrics is being put: There can, for
> > example, be "discipline coverage" metrics (from narrow to wide) as
> > well as "field size" and "institutional size" metrics, whose
> > regression weights can be adjusted depending on what it is that the
> > equation is being used to predict, and hence to rank. The
> > differential weightings can be validated against other means of
> > ranking (including expert judgments).
> >
> > Harnad, S. (2007) Open Access Scientometrics and the UK Research
> > Assessment Exercise. Invited Keynote, 11th Annual Meeting of the
> > International Society for Scientometrics and Informetrics. Madrid,
> > Spain, 25 June 2007 http://arxiv.org/abs/cs.IR/0703131
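
As a toy illustration of "adjusting the weightings of each depending
on the field", one could fit the same metric battery separately within
each field, so that each field gets its own weight vector. The
function below is a hypothetical sketch (plain least squares in
Python), not the model used in the paper cited above.

    import numpy as np

    def fit_field_weights(metrics, criterion, fields):
        """Fit one least-squares weight vector per field.

        metrics   : (n_groups, n_metrics) numpy array of candidate metrics
        criterion : (n_groups,) numpy array, e.g. panel-review scores
        fields    : list of n_groups field labels
        """
        weights = {}
        for field in set(fields):
            rows = [i for i, f in enumerate(fields) if f == field]
            X, y = metrics[rows], criterion[rows]
            # Standardise within the field so the weights are comparable.
            Xz = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
            w, *_ = np.linalg.lstsq(Xz, y - y.mean(), rcond=None)
            weights[field] = w
        return weights

A "discipline coverage" or "institutional size" metric would simply be
another column in the metrics array; whether it helps, and how much
weight it deserves, is then an empirical question settled by the
validation step rather than a prior assumption.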
> >
> > > Our Leiden colleagues try to correct for this by normalizing on the
> > > journal set which the group uses itself, but one can also ask
> > > whether the group is using the best possible set given its research
> > > profile. Should one not first determine a journal set and then
> > > compare groups within it?
> >
> > The three things that are needed are (1) a far richer and more
> > diverse set of potential metrics, (2) assurance that like is being
> > compared with like, and (3) validation of the ranking against
> > face-valid external criteria, so that the metrics can eventually
> > function as benchmarks and norms.
> >
> > None of this can be done a priori; the methodology is similar to that
> > of validating batteries of psychometric or biometric tests: correlate
> > the joint set of metrics with external, face-valid criteria, and
> > adjust their respective weights accordingly.
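
Point (2), comparing like with like, is what the Leiden normalization
addresses. A hypothetical one-line version (the field baselines below
are invented, purely for illustration): express each group's citation
rate relative to its own field's average before any cross-field
comparison.

    # Invented field baselines, purely for illustration.
    field_mean_citations = {"physics": 22.0, "history": 3.5}

    def field_normalised(citations_per_paper, field):
        """Citations per paper divided by the field's average rate."""
        return citations_per_paper / field_mean_citations[field]

    # A history group at 5 citations/paper then scores higher than a
    # physics group at 20, since each is measured against its own field.
    print(field_normalised(5.0, "history"))   # about 1.43
    print(field_normalised(20.0, "physics"))  # about 0.91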
> >
> > It is unlikely, however, that the relevant and predictive frame of
> > reference and basis of comparison will be journal sets.
> > Breadth/narrowness of journal coverage is just one among many, many
> > potential parameters. The interest is in comparing researchers and
> > research groups or institutions, within or across fields. The journal
> > does carry some predictive and normative power in this, and it is one
> > indirect way of equating for field, but it is one among many ways
> > that one might wish to weight -- or equate -- metrics, particularly
> > in an Open Access database in which all journals (and all individual
> > articles and all individual researchers, and their respective
> > download, citation, co-citation, hub/authority, consanguinity,
> > chronometric, and many other metrics) are all available for
> > weighting, equating, and validating.
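
For instance, the hub/authority scores mentioned above come from
running Kleinberg's HITS algorithm over a citation graph; in an OA
database that graph can span every article rather than a fixed journal
set. A small sketch, assuming the networkx package and invented paper
identifiers:

    import networkx as nx

    # Directed edge A -> B means "paper A cites paper B" (toy data).
    citations = nx.DiGraph()
    citations.add_edges_from([
        ("p1", "p3"), ("p2", "p3"), ("p4", "p3"),  # p3 is heavily cited
        ("p3", "p5"), ("p1", "p5"),
        ("p4", "p1"), ("p4", "p2"),                # p4 cites broadly
    ])

    # Hubs: papers that cite many good authorities;
    # authorities: papers cited by many good hubs.
    hubs, authorities = nx.hits(citations)
    print("top authority:", max(authorities, key=authorities.get))
    print("top hub:", max(hubs, key=hubs.get))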
> >
> > What we have to remember is that the imminent Open Access (OA) world
> > is incomparably wider and richer -- and more open -- than the narrow,
> > impoverished classical-ISI world to which we were constrained in the
> > Closed Access paper-based era.
> >
> > > Furthermore, Brewer et al. (2001) made the point that one should
> > > also distinguish between prestige and reputation. Reputation is
> > > field specific; prestige is more historical. (Brewer, D. J., Gates,
> > > S. M., & Goldman, C. A. (2001). In Pursuit of Prestige: Strategy
> > > and Competition in U.S. Higher Education. Piscataway, NJ:
> > > Transaction Publishers, Rutgers University.)
> >
> > This is still narrow journal- and journal-average-centred thinking.
> > Yes, journals will still be the entities in which papers are
> > published, and journals will vary both in their field of coverage and
> > their quality, and this can and will be taken into account. But those
> > variables constitute only a small fraction of OA scientometric and
> > semiometric space.
> >
> > Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
> > Research Web: A Preview of the Optimal and the Inevitable. In:
> > Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and Economic
> > Aspects. Chandos. http://eprints.ecs.soton.ac.uk/12453/
> >
> > > Many of the evaluating teams are institutionally dependent on the
> > > contracts for the evaluations. Quis custodiet ipsos custodes?
> >
> > OA itself is transparency's, diversity's and equitability's
> > best defender.
> >
> > Stevan Harnad
>
Received on Sun Jun 03 2007 - 15:38:21 BST
