Re: On Metrics and Metaphysics

From: Stevan Harnad <amsciforum_at_GMAIL.COM>
Date: Mon, 20 Oct 2008 08:43:39 -0400

On Mon, Oct 20, 2008 at 1:31 AM, Heather Morrison <heatherm_at_eln.bc.ca> wrote:

> Why is it that negative results are less likely to be published in
> the traditional literature?

(1) Negative results can be and are published in the traditional
literature, but in and of themselves, they may not be of sufficient
substance to merit a full article of their own.

(2) It is a canard that journals do not publish negative
results, but the likelihood of acceptance in a high-quality journal is
far greater if one does not simply submit a report to the effect that
"I tried and failed to replicate this study," but instead embeds the
negative result in a substantive article reporting something new.

(3) Journals could easily fill their pages cover to cover if all it
took to produce an article were trying and failing to replicate some
published effect.

> It seems obvious that negative results are less likely to be cited.
> Researchers cite the work that they build on, not necessarily roads
> not followed.

Researchers use the work that will bear the weight of being built
upon. Effects that fail to replicate are as risky to use as unrefereed
results.

> It follows, then, that if journals and authors are judged solely on
> metrics, there will be even less incentive to publish negative
> results. A journal that does this would tend to lower its average
> citations and hence its impact factor.

Both of these inferences are incorrect. Negative results can already
be published. The author who shows that an important effect is invalid
will be cited. And, as I said before, journal citation averages (the
journal impact factor, JIF) are of less and less interest. We are
talking about article and author metrics, not journal metrics.
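
To make the journal-level vs. article-level distinction concrete,
here is a minimal sketch in Python, using invented citation counts
(nothing below comes from real bibliometric data): a single
journal-wide average, which is what a JIF-style figure reports,
collapses very different article-level citation records into one
number.

    # Hypothetical citation counts for three articles in one journal.
    # (Illustrative figures only; not real data.)
    citations = {
        "article_A": 120,  # a highly cited positive finding
        "article_B": 3,    # a modestly cited study
        "article_C": 0,    # an uncited negative/replication result
    }

    # Journal-level metric: one average over all articles (JIF-like).
    journal_average = sum(citations.values()) / len(citations)
    print("Journal-level average citations: %.1f" % journal_average)

    # Article-level metrics: each article's own citation count.
    for article, count in sorted(citations.items(), key=lambda kv: -kv[1]):
        print("%s: %d citations" % (article, count))

The journal average here (41.0) says little about any individual
article, and it is the individual article and author that evaluation
is actually about.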

> The same principle would apply to the scenarios listed below. If
> you pack a journal with nothing but articles on the very latest hot
> topics, your journal will do well when evaluated metrically. If you
> make decisions based on what is best for scholarship, your journal
> may not fare so well.

These observations, apart from being focused on the minor issue of
journal impact rather than the major issue of research impact, are too
far removed from the actual practice of peer review and editing.

Stevan Harnad
Received on Mon Oct 20 2008 - 13:44:20 BST
