Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

From: Stevan Harnad <>
Date: Mon, 4 Nov 2002 18:05:03 +0000 (GMT)

On Mon, 4 Nov 2002, Andrew Odlyzko wrote:

> Fears about possible damage to the peer review system are slowing down the
> evolution of scholarly communication, and in particular the development
> of freely accessible article archives. I am convinced that these fears
> are unjustified. Although the peer review system will change substantially
> with the spread of such archives, it will change for the better.

I agree the fears are groundless, and that they are holding back
self-archiving, but I am also convinced that some of the fears concern
CHANGE to peer review, so hesitant self-archivers need to be reassured
about that too.

I am certain that online implementation will make (and already is
making) CLASSICAL peer review faster, cheaper, more efficient, and more
equitable. That can be confidently stated. But what (in my opinion) has
to be avoided at all costs is any linking whatsoever between
self-archiving (i.e., author/institution steps taken to maximize the
visibility, accessibility, usage, citation and impact of their
peer-reviewed research output) and any substantive changes in classical
peer review.

Classical peer review is merely the evaluation of the work of
specialists by their qualified fellow-specialists (peers) mediated by
and answerable to a designated qualified-specialist (the editor) who
picks the referees, adjudicates the reports, indicates what needs to be
done to revise for acceptance (if anything) and is answerable for the
results of this quality-control, error-corrective mechanism.

Untested "reforms" to this system, though possible, should not be
mentioned in the same breath as self-archiving, for any implied
coupling between self-archiving and hypothetical peer-review changes
will only work to the disadvantage of self-archiving and open access:

    "A Note of Caution About 'Reforming the System'"

    "Peer Review Reform Hypothesis-Testing"

> A good overview of the history and current state of the peer review
> system is provided by the book [1].

Does Fiona's book cover peer review in all disciplines, or just health
sciences? There is a quantitative empirical literature on this.

> This system is really a collection
> of many different systems, of varying effectiveness. They guarantee
> neither correctness nor novelty of the results, even among the most
> selective and prestigious journals.

No human (or nonhuman) judgement can guarantee that. The only relevant
question -- and it has not been asked or tested, but the default
assumption until it is tested MUST be for, not against, the causal role
of peer review in maintaining the current quality level of the research
literature -- is: How much better or worse is the literature's quality
with (1) classical peer review, (2) with hypothetical (not yet tested
and compared) alternatives, or (3) with no peer review at all (which,
by the way, is NOT tested already by existing pre-refereeing preprint
quality levels, for the invisible-hand reasons I've elaborated)?

Absent the comparative data, there is only speculation (speculation that
may well put the quality of the current refereed literature at risk if
it were implemented before successful pre-testing). This is the sort
of speculation from which I think it is so important to dissociate the
question of self-archiving, completely. Any implied coupling will simply
lose us yet another generation of potential self-archivers.

> However, traditional peer review
> (with anonymous referees evaluating submissions to a journal) does
> perform a valuable screening function.

I haven't read Fiona's book, but traditional (classical) peer review
consists of a series of (trivial) variants; the standard practice is to
make referee-anonymity optional: referees may waive it if they wish.

But almost nowhere is peer-review merely red-light/green-light
screening: Papers are not just refereed for acceptance/rejection.
Referees propose corrections and elaborations, papers are revised and
re-refereed. Peer-review is not a passive, static filter but an active,
dynamic, interactive, corrective one.

> Still, it is just a part of
> the entire communication system, and evaluation of the value of an
> article is never truly complete, as sometimes historians will revisit
> this question centuries after publication.

Yes, the peer-reviewed, accepted final draft, certified as having met
the established quality standards of a given journal, is only a
stage in the embryology of research, a milestone along the "scholarly
skywriting" continuum

    Harnad, S. (1990) Scholarly Skywriting and the Prepublication
    Continuum of Scientific Inquiry. Psychological Science 1:
    342-343 (reprinted in Current Contents 45: 9-13, November 11).
But it is a critical milestone: the one that both generates and
certifies the (probable) quality level and reliability of the findings.

Without that dynamic, answerable pre-correction, and without the
tried-and-tested quality-label of an established journal to sign-post
the skyline, I am convinced that the literature would not only quickly
decline in quality, but it would become un-navigable -- till peer review
was simply reinvented!

Yet it is precisely this doomsday scenario that is holding would-be
self-archivers back today, and I'm afraid you may just be reinforcing
their fears here, Andrew!

I sense (I am reading this sequentially in real time) that we are about to
come to the "open peer commentary" alternative to "classical peer review":

After 25 years of opportunity to compare the two professionally, I can
say with some conviction that open peer commentary is a supplement,
not a substitute, for peer review. No one should have to navigate the
raw, unfiltered manuscripts that make their way to editors' desks (and
even those are better than they would be without the "invisible hand"
effect), and no one should trust the self-appointed stalwarts who have
nothing better to do with their time than to try to do just that.
Commentary is valuable, but only after peer-review has ensured that
the paper meets the quality standards for publication.

> It is the presence of such
> self-correcting features in the entire scholarly communication system
> that makes the deficiencies of the current peer review system tolerable.
> However, it is natural to expect evolution to occur.

The self-correction in classical peer review is systematic, deliberate,
and answerable (on a journal-by-journal basis). The ad-lib
self-correctiveness of self-appointed sleuths tends more toward an
opinion poll than expert guidance.

> In the Gutenberg era of print journals, placing heavy reliance on
> traditional peer review was sensible. Printing and distributing journals
> was very expensive. Furthermore, providing additional feedback after
> publication was hard and slow. Therefore it was appropriate to devote
> considerable attention to minimizing the volume of published material,
> and making sure it was of high quality. With the development of more
> flexible communication systems, especially the Internet, we are moving
> towards a continuum of publication.

I of course agree about the continuum, but it makes no sense to call it
a "publication" continuum: At best it is a "publicizing" continuum
(though I prefer calling it "skywriting"). What is left of the classical
Gutenberg notion of "publication" in this is only the milestone of peer
review (and its accompanying quality-certification tag). Otherwise it
would simply be a non-sign-posted chaos of self-publicization,
patrolled by self-appointed vigilantes -- of unknown quality themselves
-- attesting to quality (and hence worthiness of the investment of time
to read and the risk of trying to use): The blind leading the blind.

Except for their names and prior reputations: We can of course trust
more the opinions of the qualified experts whose expertise has already
been established -- for those happy cases where it is they who happen to
be patrolling the literature for us (as they would have done in
classical peer review). But in classical peer review this matching of
expertise was systematic, and reliably, certifiably done for us in
advance. Here we would just have to hope that it happens, or will happen
(when?). And there, it was the journal's own established reputation and
concerted, answerable efforts that ensured that it would converge if the
milestone (certification of having met the journal's standards) was met.

In this new "system" we would be entrusting all of that to the four
winds.

> I have argued, starting with [2],
> that this requires a continuum of peer review, which will provide feedback
> to scholars about articles and other materials as they move along the
> continuum, and not just in the single journal decision process stage.
> We can already see elements of the evolving system of peer review in
> operation.

There always was a continuum, with informal, nonbinding feedback prior
to submission (and after publication). But formal peer review was
systematic, answerable, binding, and not self-administered (take it or
leave it); and it established a quality "tag" (the journal name) that
one could rely upon (within limits) a priori, for an article of a given
level of quality, rigor, and even importance and impact.

Do you really expect the reluctant self-archiver -- who wants only
to increase the visibility, accessibility and impact of his current,
peer-reviewed research output, such as it is, and to access the same
peer-reviewed output of others -- to set aside his worries about the
possible deleterious effects of self-archiving on peer review on the
strength of the hypothetical alternative you are evoking here?

"I wanted reassurance that if I self-archived, nothing would be lost,
nothing would change but the accessibility of my work to would-be users.
Instead, it looks as if EVERYTHING will change if I self-archive! (I'd
better just keep waiting...)"

Andrew, both of us are frustrated by the slowness with which the
research community is coming to the realization that open access is the
optimal and inevitable outcome for them, and that self-archiving is the
way to get there. But do you really believe that inasmuch as they are
being held back by fears about peer review this paper will embolden them,
rather than confirming their worst fears?

Yet it is all completely unnecessary! All that's needed for open access
is to self-archive, and leave classical peer review alone! Why imply
otherwise?

> Many scholars, including Stevan Harnad [3], one of the most prominent
> proponents of open access archives, argue for a continuing strong role
> for the traditional peer review system at the journal level. I have no
> doubt that this system will persist for quite a while, since sociological
> changes in the scholarly arena are very slow [4]. However, I do expect
> its relative importance to decline.

You may or may not be right. But before classical peer review can
decline in the open-access era, we have to bring on the open-access era,
by self-archiving. And if what is holding us back from self-archiving
is fears about the decline of peer-review, your predictions will not
hearten us, they will strengthen our reluctance to self-archive.

You are making predictions and conjectures, which is fine. But why link
them to open-access and especially the current unfortunate reluctance to
self-archive? Speculations will not relieve fears, especially not
speculations that tend to confirm them.

What would-be self-archivers need to be reassured of is the truth,
and that via facts, and the fact is that there is no causal connection
whatsoever between self-archiving and change (present or future) in peer
review to date. And for every speculation that open access may have THIS
eventual effect on peer review, there is a counter-speculation that
it may instead have THAT effect. The speculations are irrelevant, and
should be de-emphasized (in my opinion) -- at least if the objective
is to try to encourage and facilitate universal open access through
self-archiving (rather than merely to speculate about the possible future
of peer review).

> The reason is that there is a
> continuing growth of other types of feedback that scholars can rely on.
> This is part of the general trend (described in [5]) in which traditional
> journals are continuing as before, but the main action is in novel and
> often informal modes of communication that are growing much more rapidly.

There are indeed wonderful new forms of feedback in the online-era, and
there will be even more in the open-access era. But (until there is
substantive evidence to the contrary) these will be SUPPLEMENTS to peer
review, not SUBSTITUTES for it. Self-archivers need to be reassured that
classical peer review will continue intact: that it is not put at risk
in any way by self-archiving or open access. The rest is just a bonus!

> The growing flood of information does require screening. Some of this
> reviewing can be done by non-peers. Indeed, some of it has traditionally
> been done by non-peers, for example in legal scholarship, where U.S. law
> reviews are staffed by students.

The law-review case, about which I have written and puzzled before,
is an anomaly, and, as far as I know, there are many legal scholars
who are not satisfied with it (Hibbitts included). (Not only are
law-reviews student-run, but they are house organs, another anomaly in the
journal-quality hierarchy, where house-journals tend to rank low, a kind
of vanity-press.) I think it is highly inadvisable to try to generalize
this case in any way, when it is itself unique and poorly understood. In
any case, professors contemplating whether or not to self-archive will
hardly be reassured to hear that, whereas they mark their students'
essays on Tuesdays and Thursdays, their own self-archived papers may be
marked by their students on Wednesdays and Fridays, in place of the
qualified, editor-mediated peers of times past.

> The growing role of interdisciplinary
> research might lead to a generally greater role for non-peers in reviewing
> publications.

I can't follow this at all. Interdisciplinary work requires review by
peers from more disciplines, not from non-peers. ("Peer" means qualified
fellow-specialist.)

> However, in most cases only peers are truly qualified to
> review technical results. However, peer evaluations can be obtained,
> and increasingly are being obtained, much more flexibly than through the
> traditional anonymous journal refereeing process.

That is not my experience. It seems that qualified referees, an
overharvested resource, are becoming harder and harder to come by. They
are overloaded, and take a long time to deliver their reports. Is the
idea that they will be more available if approached some other way? Or
if they self-select? But what if they all want to review paper X, and no
one -- or dilettantes -- review papers A-J?

> Some can come from
> use of automated tools to harvest references to papers, in a much more
> flexible and comprehensive way than the Science Citation Index provided
> in the old days.

Now here I agree, but this falls squarely in the category of using
online resources to implement CLASSICAL peer review more efficiently and
equitably: Here, it is to help find qualified referees and to distribute
the load more evenly. But that has nothing to do with peer review
reform, nor with any of the other speculative alternatives considered
here. It goes without saying that an open-access corpus will make it
much easier and more effective to find qualified referees.

> Other, more up-to-date evaluations, can be obtained
> from a variety of techniques, such as those described in [5].

Not sure which in particular are meant, but please distinguish between
the (very desirable) ways that open access could make classical peer
review faster and more efficient and the much more speculative variants
you also allude to. They really have nothing to do with one another.

> An example of how evolving forms of peer review function is provided by
> the recent proof that testing whether a natural number is prime (that
> is, divisible only by 1 and itself) can be done fast. (The technical
> term is in "polynomial time.") This had been an old and famous open
> problem of mathematics and computer science. On Sunday, August 4, 2002,
> Manindra Agrawal, Neeraj Kayal, and Nitin Saxena of the Indian Institute
> of Technology in Kanpur sent out a paper with their astounding proof of
> this result to several of the recognized experts on primality testing.
> (Their proof was astounding because of its unexpected simplicity.)
> Some of these experts responded almost right away, confirming the validity
> of the proof. On Tuesday, August 6, the authors then posted the paper
> on their Web site and sent out email announcements. This prompted many
> additional mathematicians and computer scientists to read the paper, and
> led to extensive discussions on online mailing lists. On Thursday, August
> 8, the New York Times carried a story announcing the result and quoting
> some of the experts who had verified the correctness of the result.

The same thing could and would have happened (and probably has)
occasionally in paper: A powerful new finding can spread, and be
confirmed, faster than the sluggish, systematic peer-review process.
So what? It happens sometimes in paper and will happen sometimes
on line, but it is hardly the paradigm or prototype for research. Most
research makes little impact, has few qualified experts, and needs to be
vetted before the few potential reader/users can decide whether they
want to spend their limited time reading it, let alone trying to use
and build upon it.
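(A parenthetical gloss on "polynomial time" for readers outside the field: the cost is measured against the number of digits of n, not n itself. The sketch below is a minimal trial-division test, NOT the Agrawal-Kayal-Saxena algorithm; its running time grows exponentially in the digit count, which is exactly the barrier the AKS result broke by giving a deterministic test polynomial in the number of digits.)

```python
# Minimal illustration, NOT the AKS algorithm: trial division decides
# primality deterministically, but its cost grows with sqrt(n), i.e.
# exponentially in the number of digits of n. The AKS breakthrough was
# a deterministic test whose cost is polynomial in the digit count.

def is_prime(n: int) -> bool:
    """Deterministic primality test by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print([p for p in range(2, 30) if is_prime(p)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```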

There's another way to put all this: To a first approximation (and
forgetting about what I said about dynamic correction, revision etc.),
a journal's quality level is a function of its rejection rate: The
highest quality journals will only accept the highest quality work,
rejecting the rest. Second-tier journals will reject less, and so on,
right down to the near-vanity press at the bottom, which accepts just
about anything. This is the hierarchy of sign-posted milestones that
guides the prospective reader and user rationing his finite reading time
and his precious research resources. How is this quality triage to be done
on the model you just described (of the prime-number flurry)?

> Review by peers played a central role in this story. The authors first
> privately consulted known experts in the subject. Then, after getting
> assurance they had not overlooked anything substantial, they made their
> work available worldwide, where it attracted scrutiny by other experts.
> The New York Times coverage was based on the positive evaluations of
> correctness and significance by those experts. Eventually they did
> submit their paper to a conventional journal, where it will undoubtedly
> undergo conventional peer review, and be published. The journal version
> will probably be the main one cited in the future, but will likely have
> little influence on the development of the subject. Within weeks of the
> distribution of the Agrawal-Kayal-Saxena article, improvements on their
> results had been obtained by other researchers, and future work will be
> based mainly on those. Agrawal, Kayal, and Saxena will get proper credit
> for their breakthrough. However, although their paper will go through
> the conventional journal peer review and publication system, that will
> be almost irrelevant for the intellectual development of their area.

All I can do is repeat that this picture will not scale to all of
research. It works only for the rare, sexy special cases. And although
in general there is a tendency for the "growing edge" of science to
outpace the more plodding and inefficient formal peer-review machine
somewhat, this is nevertheless being sustained by its invisible hand;
eliminate that, and it will be hanging by its bootstraps -- with the
inevitable result.

> One can object that only potentially breakthrough results are likely
> to attract the level of attention that the Agrawal-Kayal-Saxena result
> attracted. But that is not a problem. It is only the most important
> results that require this level of attention and at this rapid a rate.
> There will be a need for some systematic scrutiny of all technical
> publications, to ensure that the literature does not get polluted by
> erroneous claims.

How much scrutiny? By whom? How will we know? And when? (Are we going to
invite referees to referee belatedly, after the fact? What shall we do
with the literature in the meanwhile? And would you find this
reassuring if you were hesitating about self-archiving because of
worries about peer review and the quality level and usability of the
literature?)

And until the erroneous-claim pollution is tested and filtered out,
how are tenure and promotion committees supposed to weight those
unrefereed self-publicizations for career advancement? By consulting
self-appointed commentators (if any) on the web, in place of the
established quality-standards and track-records of refereed journals?

> However, we should expect a much more heterogeneous
> system to evolve, in which many of the ideas mentioned in [2] will play
> a role. For example, the current strong prohibition of simultaneous
> publication in multiple journals is likely to be discarded as another
> relic of the Gutenberg era where print resources were scarce. Also,
> we are likely to see separate evaluations of significance and correctness.

It is hard to imagine how (or why!) when referees are already a scarce
and overused resource we would wish (or even be able) to ask them to
do double or even triple duty or more, refereeing yet again what has
already been refereed, by allowing or encouraging multiple publication of
the same work! Here again, one reliable milestone would have been fine
(with the rest supplemented by post-publication commentary) rather than
overgeneralizing the notion of "publication" while weakening the notion
of refereeing. (None of this will reassure reluctant self-archivers!)

The evaluations for correctness and significance are already separate
for most journals, and they establish their own levels for both. Usually
significance is the main vertical factor in the quality hierarchy.

> This note is a personal perspective on how peer review is likely to evolve
> in the future. It is based primarily on my experience in areas such as
> mathematics, physics, computing, and some social sciences.

Andrew, I'm curious: experiences as what in those areas: reader? author?
referee? editor? empirical investigator of peer-review?

It seems to me that the first two are definitely not enough to come to
an objective position on this, maybe not even the first four...

> However,
> I believe there is nothing special about those areas. Although health
> sciences have moved towards electronic publishing more slowly than the
> fields I am familiar with, I do not see much that is special about
> their needs. In particular, I believe that the frequently voiced
> concerns about need for extra scrutiny of research results that might
> affect health practices are a red herring. Yes, decision about medical
> procedures or even diet should be based on solidly established research.
> However, the extra levels of scrutiny are more likely to be obtained by
> more open communication and review systems than we have today.

And a little bit of self-poisoning by the users after the
self-publicizing by the authors, by way of self-correction?

Andrew, I'm afraid I disagree rather profoundly with the position you
are advocating here! I think it is far more anecdotal and speculative
than your other work on publishing and access. I think the conjectures
about peer review are wrong, but worse, I think they will be damaging
rather than helpful to self-archiving and open access. I think my own
article, which you cite, actually preemptively considered most of these
points already, more or less along the lines I have repeated in
commentary here. I have to say that I rather wish that you weren't
publishing this -- or at least that you would clearly dissociate it from
self-archiving, and simply portray it as the set of conjectures it is,
from someone who is not actually doing research on the peer-review
system but merely contemplating hypothetical possibilities.

Or failing that, I wish I could at least write a commentary by way of
reply.

Stevan Harnad