Re: Citation statistics

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Mon, 16 Jun 2008 19:46:53 +0100 (BST)

On Mon, 16 Jun 2008, Loet Leydesdorff wrote:

> If I correctly understand then your claim is that ranking results based on
> peer review at the departmental level correlate highly with MEAN
> departmental citation rates. This would be the case for psychology,
> wouldn't it?

With total citations for the departmental researchers over the
assessment interval, which amounts to much the same thing. (Charles
will be able to give the details better than I.)

> It is an amazing result because one does not expect citation rates to be
> normally distributed. (The means of the citation rates, of course, are
> normally distributed.)

No, citations are not normally distributed. The usual 20/80 rule applies:
the top 20% of articles receive about 80% of the citations. But that
rule probably holds across departments too.
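
Here is a minimal sketch, using simulated (log-normally distributed,
hence hypothetical, not real RAE) citation counts, of what that
skewness looks like -- and of why departmental means are nevertheless
well-behaved:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated per-article citation counts drawn from a heavy-tailed
    # log-normal distribution -- a common rough model for citations.
    citations = rng.lognormal(mean=1.0, sigma=1.75, size=10_000)

    # Share of all citations received by the top 20% of articles.
    sorted_c = np.sort(citations)[::-1]
    top20_share = sorted_c[: len(sorted_c) // 5].sum() / sorted_c.sum()
    print(f"Top 20% of articles receive {top20_share:.0%} of citations")

    # Departmental MEANS of the same skewed counts: by the central
    # limit theorem, means over many articles are roughly normal even
    # though the underlying per-article counts are not.
    dept_means = citations.reshape(100, 100).mean(axis=1)
    print(f"Departmental means: {dept_means.mean():.1f} "
          f"+/- {dept_means.std():.1f}")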

> In my own department, for example (in communication
> studies), we have various communities (social-psychologists, information
> scientists, political science) with very different citation patterns. But
> perhaps British psychology departments are exceptionally homogeneous
> both internally and comparatively.

Sounds like three different disciplines.

It might be useful to analyze sub-departments and their respective
citation patterns to make the like-with-like comparison even closer. I
don't know that anyone has done that. Eventually, once journals and
subject matter are better tagged, it will be possible.

> Then, you wish to strengthen this correlation by adding more indicators.
> The other indicators may correlate better or worse with the ratings.
> The former ones can add to the correlations, while the latter would
> worsen them. Or do you wish only to add indicators which improve the
> correlations with the ratings?

No, the validation process is to do multiple regression, to determine
the contribution of each metric to the prediction of the peer rank. The
non-predictive metrics would get zero weight; the anti-predictive metrics
would simply need to have their polarity flipped.
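
As a minimal sketch of that validation step (metric names and data
simulated for illustration; the real exercise would regress the actual
RAE peer ranks on the actual candidate metrics, discipline by
discipline):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 60  # departments in one discipline

    # Three hypothetical candidate metrics per department.
    metrics = rng.normal(size=(n, 3))
    # Simulated peer rank: driven by metric 1, ignoring metric 2,
    # and running *against* metric 3.
    peer_rank = 2.0 * metrics[:, 0] - metrics[:, 2] \
                + rng.normal(scale=0.5, size=n)

    # Multiple regression: least-squares beta weight per metric.
    X = np.column_stack([np.ones(n), metrics])
    betas, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)
    for name, b in zip(["intercept", "metric_1", "metric_2", "metric_3"],
                       betas):
        print(f"{name:>10}: {b:+.2f}")
    # metric_2 (non-predictive) comes out near zero; metric_3
    # (anti-predictive) comes out negative, i.e. polarity flipped.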

Once the beta weights are initialized, of course, they can still be
adjusted if we have further criterion variables (other than the peer
rankings), or further a priori criteria (such as a deliberate emphasis
on some sorts of factors: say, citation growth rate, download decay
rate, interdisciplinarity, etc.).
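
(One trivial way to picture such an adjustment -- purely illustrative,
since no particular rule is implied here -- is to scale the initialized
betas by an a priori emphasis vector:)

    import numpy as np

    fitted_betas = np.array([2.1, 0.0, -0.9])  # from the sketch above
    # Hypothetical a priori emphasis: double the weight of the third
    # factor (say, citation growth rate).
    emphasis = np.array([1.0, 1.0, 2.0])
    adjusted = fitted_betas * emphasis
    print(adjusted)  # [ 2.1  0.  -1.8]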

> I remember from a previous conversation on this subject that you have
> a kind of multi-variate regression model in mind in which the RAE
> ratings would be the dependent variable. One can make the model fit
> the rankings by estimating the parameters. One can also refine this
> per discipline. Would one expect any predictive power in such a model
> in a new situation (after 4 years)? Why?

That's exactly the approach I recommend in the paper I keep linking.
(It's also the approach for test validation in psychometrics.)

Why do I expect the correlations to replicate? Why would I expect them
not to -- unless you think the peer rankings that have been governing
the RAE for over two decades are random? In fact, in all fields and all
years tested, citation counts have correlated positively and
substantially with the peer rankings. And that's only *one* metric. (I
am recommending many.)

    Harnad, S. (2007) Open Access Scientometrics and the UK Research
    Assessment Exercise. In Proceedings of 11th Annual Meeting of the
    International Society for Scientometrics and Informetrics 11(1), pp.
    27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
    http://eprints.ecs.soton.ac.uk/13804/
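
To see concretely what replication "after 4 years" would mean, here is
a minimal sketch (simulated data throughout, not real RAE numbers):
initialize the beta weights on one assessment cycle, then test how well
they predict the peer ranks of the next one:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 60  # departments in one discipline

    def assessment_cycle(weights, noise=0.5):
        """One simulated cycle: metrics plus the peer ranks they
        (partly) determine. Illustrative only, not real RAE data."""
        m = rng.normal(size=(n, len(weights)))
        rank = m @ weights + rng.normal(scale=noise, size=n)
        return m, rank

    true_w = np.array([2.0, 0.0, -1.0])  # stable underlying structure

    # Cycle 1: initialize the beta weights by multiple regression.
    m1, rank1 = assessment_cycle(true_w)
    X1 = np.column_stack([np.ones(n), m1])
    betas, *_ = np.linalg.lstsq(X1, rank1, rcond=None)

    # Next cycle, years later: predict the new peer ranks and compare.
    m2, rank2 = assessment_cycle(true_w)
    predicted = np.column_stack([np.ones(n), m2]) @ betas
    r = np.corrcoef(predicted, rank2)[0, 1]
    print(f"Correlation of predicted with actual peer ranks: {r:.2f}")
    # The prediction replicates exactly insofar as the peer rankings
    # reflect stable structure rather than noise -- which is the point.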

Best wishes,

Stevan Harnad

> With best wishes,
>
>
> Loet
>
> ________________________________
>
> Loet Leydesdorff
> Amsterdam School of Communications Research (ASCoR),
> Kloveniersburgwal 48, 1012 CX Amsterdam.
> Tel.: +31-20-525 6598; fax: +31-20-525 3681
> loet_at_leydesdorff.net ; http://www.leydesdorff.net/
>
>
>
> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS_at_LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad
> > Sent: Monday, June 16, 2008 3:20 PM
> > To: SIGMETRICS_at_LISTSERV.UTK.EDU
> > Subject: Re: [SIGMETRICS] Citation statistics
> >
> > Administrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > On Sun, 15 Jun 2008, Loet Leydesdorff wrote:
> >
> > > > SH: But what all this valuable, valid cautionary discussion
> > > > overlooks is not only the possibility but the *empirically
> > > > demonstrated fact* that there exist metrics that are highly
> > > > correlated with human expert rankings.
> > >
> > > It seems to me that it is difficult to generalize from one setting
> > > in which human experts and certain ranks coincided to the
> > > *existence* of such correlations across the board. Much may depend
> > > on how the experts are selected. I did some research in which
> > > referee reports did not correlate with citation and publication
> > > measures.
> >
> > Much may depend on how the experts are selected, but that was just
> > as true during the 20 years in which rankings by experts were the
> > sole criterion in the UK Research Assessment Exercise (RAE). (In
> > validating predictive metrics one must not endeavor to be holier
> > than the Pope: your predictor can at best hope to be as good as,
> > but not better than, your criterion.)
> >
> > That said: All correlations to date between total departmental author
> > citation counts (not journal impact factors!) and RAE peer rankings
> > have been positive, sizable, and statistically significant for the
> > RAE, in all disciplines and all years tested. Variance there will be,
> > always, but a good-sized component from citations alone seems to be
> > well-established. Please see the studies of Professor Oppenheim and
> > others, for example as cited in:
> >
> > Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated
> > online RAE CVs Linked to University Eprint Archives: Improving the
> > UK Research Assessment Exercise whilst making it cheaper and easier.
> > Ariadne 35. http://www.ariadne.ac.uk/issue35/harnad/
> >
> > > Human experts are necessarily selected from a population of
> > > experts, and it is often difficult to delineate between fields of
> > > expertise.
> >
> > Correct. And the RAE rankings are done separately, discipline by
> > discipline; the validation of the metrics should be done that way too.
> >
> > Perhaps there is sometimes a case for separate rankings even at
> > sub-disciplinary level. I expect the departments will be able to sort
> > that out. (And note that the RAE correlations do not constitute a
> > validation of metrics for evaluating individuals: I am confident that
> > that too will be possible, but it will require many more metrics and
> > much more validation.)
> >
> > > Similarly, we know from quite some research that citation and
> > > publication practices are field-specific and that fields are not
> > > so easy to delineate. Results may be very sensitive to choices
> > > made, for example, in terms of citation windows.
> >
> > As noted, some of the variance in peer judgments will depend on the
> > sample of peers chosen; that is unavoidable. That is also why "light
> > touch" peer re-validation, spot-checks, updates and optimizations on
> > the initialized metric weights are also a good idea across the
> > years.
> >
> > As to the need to evaluate sub-disciplines independently: that
> > question exceeds the scope of metrics and metric validation.
> >
> > > Thus, I am a bit doubtful about your claims of an "empirically
> > > demonstrated fact."
> >
> > Within the scope mentioned -- the RAE peer rankings, for disciplines
> > such as they have been partitioned for the past two decades -- there
> > are ample grounds for confidence in the empirical results to date.
> >
> > (And please note that this has nothing to do with journal impact
> > factors, journal field classification, or journal rankings. It is
> > about the RAE and the ranking of university departments by peer
> > panels, as correlated with citation counts.)
> >
> > Stevan Harnad
> > AMERICAN SCIENTIST OPEN ACCESS FORUM:
> > http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
> > http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
> >
> > UNIVERSITIES and RESEARCH FUNDERS:
> > If you have adopted or plan to adopt a policy of providing Open Access
> > to your own research article output, please describe your policy at:
> > http://www.eprints.org/signup/sign.php
> > http://openaccess.eprints.org/index.php?/archives/71-guid.html
> > http://openaccess.eprints.org/index.php?/archives/136-guid.html
> >
> > OPEN-ACCESS-PROVISION POLICY:
> > BOAI-1 ("Green"): Publish your article in a suitable toll-access
> > journal
> > http://romeo.eprints.org/
> > OR
> > BOAI-2 ("Gold"): Publish your article in an open-access journal
> > if/when a suitable one exists.
> > http://www.doaj.org/
> > AND
> > in BOTH cases self-archive a supplementary version of your article
> > in your own institutional repository.
> > http://www.eprints.org/self-faq/
> > http://archives.eprints.org/
> > http://openaccess.eprints.org/
> >
>
Received on Mon Jun 16 2008 - 19:47:05 BST
