Re: On Metrics and Metaphysics

From: Heather Morrison <heatherm_at_ELN.BC.CA>
Date: Sun, 19 Oct 2008 22:31:47 -0700

Stevan,

Why is it that negative results are less likely to be published in
the traditional literature? This happens regardless of the validity
of the research, and despite the importance of sharing such results
so that other researchers know what has been tried and has not
worked, and do not waste effort repeating an experiment simply
because they did not know it had already been done.

It seems obvious that negative results are less likely to be cited.
Researchers cite the work that they build on, not necessarily roads
not followed.

It follows, then, that if journals and authors are judged solely on
metrics, there will be even less incentive to publish negative
results: a journal that publishes them will tend to lower its
average citations, and hence its impact factor.
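
To see the arithmetic (the numbers here are hypothetical), recall
that the two-year impact factor is essentially a mean:

    JIF = citations received this year to articles published in the
          previous two years / citable articles published in those
          two years

Suppose a journal published 100 articles over two years and they
drew 300 citations: JIF = 300/100 = 3.0. Had it also published 20
negative-results papers that drew only 10 citations among them, the
figure would drop to 310/120, or about 2.6. The journal is penalized
for publishing them, however valuable they may be to the field.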

The same principle would apply to the scenarios quoted below. If
you pack a journal with nothing but articles on the very latest hot
topics, it will do well when evaluated metrically. If you make
decisions based on what is best for scholarship, it may not fare so
well.

Any opinion expressed in this e-mail is that of the author alone, and
does not represent the opinion or policy of BC Electronic Library
Network or Simon Fraser University Library.

Heather Morrison, MLIS
The Imaginary Journal of Poetic Economics
http://poeticeconomics.blogspot.com

On 19-Oct-08, at 8:25 PM, Stevan Harnad wrote:

> On Sun, Oct 19, 2008 at 8:16 PM, Heather Morrison
> <heatherm_at_eln.bc.ca> wrote:
>
> > Biology - species. There will always, of necessity, be a limited
> > pool of scientists studying any one species in danger of extinction.
> > Do articles and journals in these areas receive fewer citations? If
> > so, what happens if we reward scholars and journals on the basis of
> > metrics? Will these researchers lose their funding? Will journals
> > that publish articles in this area lose their status?
>
> These are nonproblems. Compare like with like, and use multiple
> metrics.

Picture two general biology journals. One evaluates articles solely
on scholarly merit; the other selects articles on whatever topics
are currently hot. Which journal fares better metrically?

>
> > Literature - authors. There are many researchers studying
> > Shakespeare. A lesser-known local author will be lucky to receive
> > the attention of even one researcher. In a metrics-based system, it
> > seems reasonable to hypothesize that this bias will increase, and
> > that the odds of studying local culture will decrease.
>
> What bias? If a lesser-known researcher does good work, it will be
> used, and this will be reflected in the metrics.

The issue presented is the lesser-known author who is the subject of
study, not the researcher studying that author.
>
> Compare like with like, and use multiple metrics.

>
> > History - the local versus the global. A reasonable hypothesis is
> > that historical articles and journals with broader potential
> > readership are likely to attract more citations than locally-based
> > historical studies. If this is correct, then local studies would
> > suffer under a metrics-based system.
>
> Compare like with like, and use multiple metrics.
>
> > Medicine - temporary importance: AIDS, bird flu, and SARS are all
> > horrible viral diseases and pandemics or potential pandemics. Of
> > course, our research communities must prioritize these threats in the
> > short term. This means many articles on these topics, and new
> > journals, receiving many citations. Great stuff: this advances our
> > knowledge and may already have prevented more than one pandemic. But
> > what about other, less pressing issues, such as the resistance of
> > bacteria to antibiotics, and about basic research? In the short term,
> > a focus on research usage metrics helps us to prioritize and focus on
> > the immediate danger. In the long term, if usage metrics lead us to
> > undervalue basic research, we could end up with more pressing dangers
> > to deal with, such as rampant and totally untreatable bacterial
> > illnesses, and less basic knowledge to help us figure out what to do.
>
> Compare like with like, and use multiple metrics: basic research with
> basic research, applied with applied, theme-driven with theme-driven.
>
> And there are other metrics besides usage metrics.

This at least suggests that a metrics-based evaluation system must be
fairly complex, and that some basic decisions need to be made before
even thinking about metrics -- for example, the decision that basic
research is important and should be supported, even if applied
research is more clearly relevant to short-term goals. On this we
agree. I only hope that no evaluation system proceeds without a
proper understanding of this complexity.

>
> > Cost-efficiency metrics, such as average cost per article, are tools
> > that can be used to examine the relative cost-effectiveness of
> > journals. In the print world, the per-article cost for the small,
> > not-for-profit society publishers has often been a small fraction of
> > that of the larger commercial for-profit publishers, often with
> > equal or better quality. If university administrators are going to
> > look at metrics, why not give thought to rewarding researchers for
> > seeking publishing venues that combine high-quality peer review and
> > editing with affordable costs?
>
> The big issue is not journal evaluation or journal cost-effectiveness,
> but research and researcher evaluation and cost-effectiveness. (Forget
> about the JIF and the rating of journals: they are just one --
> extremely blunt -- tool among many.)
>
> Stevan Harnad