Re: UK Research Assessment Exercise (RAE) review

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Tue, 26 Nov 2002 15:15:33 +0000

For the sake of communication and moving ahead, I would like to clarify
two points of definition (and methodology, and logic) about the terms
"research impact" and "scientometric measures":

"Research impact" means the measurable effects of research, including
everything in the following range of measurable effects:

(1) browsed
(2) read
(3) taught
(4) cited
(5) co-cited by authoritative sources
(6) used in other research
(7) applied in practical applications
(8) awarded the Nobel Prize

All of these (and probably more) are objectively measurable indices of
research impact. Research impact is not, and never has been, just (4),
i.e., not just citation counts -- whether average journal citation
ratios (the ISI "journal impact factor"), individual-paper total or
annual citation counts, or individual-author total, average or annual
citation counts (though citations are certainly an important part of
this family of impact measures).

So when I speak of the multiple regression equation measuring research
impact, I mean all of the above (at the very least).
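Purely as an illustration of the kind of equation meant here (the
figures, the choice of criterion and the least-squares fit below are all
invented assumptions, not RAE methodology), a minimal sketch in Python:

    import numpy as np

    # Hypothetical per-paper impact measures (columns = measures 1-7
    # above; rows = papers). All figures invented for illustration.
    X = np.array([
        # browsed, read, taught, cited, co-cited, used, applied
        [120,  40,  2, 15,  3,  4, 1],
        [ 60,  25,  0,  4,  1,  1, 0],
        [300, 110,  5, 42,  9, 12, 3],
    ])

    # A criterion to regress against, e.g. expert peer ratings of the
    # same papers (also invented); in practice the criterion and the
    # weights would themselves be objects of scientometric validation.
    y = np.array([7.5, 3.0, 9.0])

    # Ordinary least squares: find the weights b minimizing ||Xb - y||.
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Composite impact score per paper: a weighted sum of its measures.
    print(X @ b)

The point is not this particular fit but the multivariate form: many
weighted measures jointly, rather than one citation count alone.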

"Scientometric measures" are the above measures. Scientometric analyses
also include time-series analyses, looking for time-based patterns in
the individual curves and the interrelations among measures like the
above ones -- and much more, to be discovered and designed as the
scientometric database consisting of the full text papers, their
reference list and their raw data become available for analysis online.
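As a toy example of such a time-series analysis (the monthly figures and
the lag are invented solely to illustrate the kind of pattern meant),
one might test whether a paper's early download curve predicts its later
citation curve:

    import numpy as np

    # Invented monthly series for a single paper.
    downloads = np.array([50, 80, 120, 90, 60, 40, 30, 25, 20, 15, 12, 10])
    citations = np.array([ 0,  0,   1,  2,  4,  6,  5,  4,  3,  3,  2,  2])

    # Correlate downloads against citations shifted by a 3-month lag,
    # probing the hypothesis that usage leads citation in time.
    lag = 3
    r = np.corrcoef(downloads[:-lag], citations[lag:])[0, 1]
    print(f"download -> citation correlation at {lag}-month lag: {r:.2f}")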

One of the principal motivations for the suggested coupling of the
research access agenda (open access) with the research impact assessment
agenda (e.g., the UK Research Assessment Exercise [RAE]) is that there
is a symbiosis and synergy between the two: Maximizing research access
maximizes potential research impact. Scientometric measures of research
impact can monitor and quantify and make explicit and visible the
causal connection between access and impact at the same time that they
assess it, thereby also making explicit the further all-important
connection between research impact and research funding. It is a
synergy, because the open-access full-text database also facilitates new
developments in scientometric analysis, making the research assessment
more accurate and predictive.

http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2325.html

Now on to comments on Chris Zielinski's posting:

On Tue, 26 Nov 2002 Chris Zielinski <informania_at_SUPANET.COM> wrote:

> This is being offered despite Stevan's being "braced for the predictable
> next round of attacks on scientometric impact analysis: 'Citation impact is
> crude, misleading, circular, biassed: we must assess research a better
> way!'". It remains curious why he acquiesces passively to a poor, biassed
> system based on impact analysis rather than searching for "alternative,
> nonscientometric ways of assessing and ranking large bodies of research
> output" - and indeed seeks to dissuade those who might be doing that.

Acquiescing passively to what? There are no alternatives to scientometrics
(as Chris goes on to note implicitly below), just richer, less biassed
scientometrics, which is precisely what I am recommending! Chris writes
as if I were defending the univariate ISI journal-impact factor, when I
am arguing for replacing it with a far richer multiple regression equation!

> Those of us working with developing country journals are well aware of the
> inherent biases and vicious circles operating in the world of impact
> factors.

That is, again, the ISI journal-impact factor (a subset of measure 4 in
my [partial] list of 8!).

But if there is indeed a bias against developing country journals
on measure 4, what better remedy for it than to remove the access
barriers limiting the current visibility, usage and impact of those
journals, by self-archiving their contents in OAI-compliant Eprint
Archives, thereby ensuring that they will be openly accessible to every
would-be user with access to the Web!
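("OAI-compliant" here means the archive answers the standard OAI-PMH
harvesting requests, so its metadata can be gathered by any service.
A minimal harvesting sketch in Python -- the base URL is a hypothetical
archive address, not a real one:)

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical OAI-PMH endpoint of an institutional eprints archive.
    BASE = "http://eprints.example.ac.uk/perl/oai2"

    # ListRecords with oai_dc, the Dublin Core format that every
    # OAI-PMH repository is required to support.
    url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    # Print the title of each harvested record.
    DC = {"dc": "http://purl.org/dc/elements/1.1/"}
    for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        title = record.find(".//dc:title", DC)
        if title is not None:
            print(title.text)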

> The circularity Stevan refers to is "You cannot cite what you
> haven't read, you tend not to read what is not stocked in your library (or
> readily available online), and your library tends not to stock what isn't
> cited".

Indeed. So forget about relying on your library (and the access tolls it
may or may not be able to afford) and make your research openly
accessible, free for all, by self-archiving it. And if you are in a
developing country and you need it, help in doing this is available
from the Soros Foundation's Budapest Open Access Initiative:
http://www.soros.org/openaccess/

The ISI journal-impact factor's vicious circularity is part of the
toll-access circle, which is precisely what open-access and
self-archiving are designed to free us from!

> This certainly applies to developing country journals, and there is
> literature to support this (which - paradoxically - I don't have to hand to
> cite), but it also applies everywhere to new journals, local journals and
> many open access products.

All true, and all relevant -- to the urgent need for, and the sure
benefits of, self-archiving and open access. What all these papers and
their authors need in order to maximize their visibility, uptake and
impact is web-wide, barrier-free access to their findings for all their
potential users worldwide.

> Surely those supporting open access should be against impact-factor driven
> ranking systems and be searching actively for less-biassed replacements?

Who is against research impact (in the full sense described above, as
opposed to merely the univariate ISI journal-impact factor)? Who is
against measuring research impact (scientometrics)? Who is against
maximizing research impact, by maximizing research access and
strengthening and enriching research assessment and ranking (on a range
of measures, analyzed and weighted to minimize any potential bias)? And
what is all this if not scientometric impact analysis?

> These need not be "nonscientometric", incidentally - no need for the
> suggestion of witchcraft. [Impact factors themselves are more than a tad
> sociometric - measurements of the behavioural patterns of researchers -
> rather than entirely objective. Is the reason someone cited the British
> Medical Journal rather than the Bhutan Medical Journal (assuming she had
> access to both) because the first BMJ was better, or more prestigious, than
> the second BMJ?]

I couldn't follow that: Anything-metric simply means "measured." I
assume that if we want research to have an impact, we'd also like to be
able to measure that impact. Peer reviewers are the first line of defense,
using direct human judgment and expertise as to the quality and hence
the potential impact of research. But I assume we don't want
to make full peer-review our only impact metric, and to just keep on
repeating it (e.g., by the RAE assessors). (It takes long enough to get
a paper refereed once!) So what are the other alternatives? And are any
of them "non-scientometric"? If they are objective, quantified measures,
I can't see that there is any other choice!

As to why a researcher might cite a paper in the British Medical
Journal rather than the Bhutan Medical Journal: there are many reasons
(and let's admit that sometimes some of them really do have to do with
quality differences, if only because Bhutan's researchers, far more
deprived of research-access than their British counterparts, are as a
consequence unable to do fully-informed research). But surely the fact that most
researchers are unlikely to have access to the Bhutan journal's contents
is one of the main reasons -- and open access is the remedy for it. Once
it's accessible, the BhMJ's own quality can compete for users and citers on
a more level playing field with the other BrMJ (and the playing field
will be made even more level by reciprocal access to all current
research for all researchers, levelling out the quality differences
that arise from information-deprivation caused by access-differences).

> In fact, Stevan mentions "other new online scientometric measures such as
> online usage ["hits"], time-series analyses, co-citation analyses and
> full-text-based semantic co-analyses, all placed in a weighted multiple
> regression equation instead of just a univariate correlation". Indeed,
> impact factors are very crude quasi-scientometric and subjective measures
> compared even with such simple information (easy to obtain for online media)
> as counts of usage - for example, how many articles have been read but not
> cited?

I couldn't follow this. These are all scientometric measures that I was
recommending for inclusion in the multiple regression equation for
impact, applied to the online, full-text, open-access database (mandated
by the RAE). Where is the disagreement? (And what is the "quasi-"?)
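Chris's "read but not cited" count, for example, becomes trivial to
compute once usage and citation data sit in the same open database; a
toy sketch (records and field layout invented):

    # Toy records for an open archive: (paper id, downloads, citations).
    # All figures invented; real ones would come from archive usage logs
    # and citation indexes.
    papers = [
        ("pap-001", 240, 12),
        ("pap-002",  95,  0),
        ("pap-003",  10,  0),
        ("pap-004", 310,  7),
    ]

    read_not_cited = [pid for pid, downloads, cites in papers
                      if downloads > 0 and cites == 0]
    print(f"read but never cited: {len(read_not_cited)} of {len(papers)}")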

> All these are indeed worth pursuing and, I would have thought, right on the
> agenda of the OA movement.

And, I would have thought, precisely what I was recommending!

Stevan Harnad

NOTE: A complete archive of the ongoing discussion of providing open
access to the peer-reviewed research literature online is available at
the American Scientist September Forum (98 & 99 & 00 & 01 & 02):

    http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
                            or
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/index.html

Discussion can be posted to: american-scientist-open-access-forum_at_amsci.org

See also the Budapest Open Access Initiative:
    http://www.soros.org/openaccess

the Free Online Scholarship Movement:
    http://www.earlham.edu/~peters/fos/timeline.htm

the OAI site:
    http://www.openarchives.org

and the free OAI institutional archiving software site:
    http://www.eprints.org/