Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

From: Stevan Harnad <>
Date: Tue, 5 Nov 2002 21:56:29 +0000 (GMT)

Andrew Odlyzko wrote:

> Any time a major change
> takes place in method of dissemination of scholarly information,
> changes in peer review are basically unavoidable. They may only
> be changes that make what you call classical peer review better, but
> that is a very unlikely course. It is much more probable that the
> changes will be deeper.

You may or may not be right about the latter. But what I am suggesting
is that if your prediction happens to be wrong, making the prediction
anyway will have a negative, retardant effect on self-archiving and open
access. (Of course, if you are right, then concerns about these changes
will still have a negative, retardant effect on self-archiving.)

For these reasons, I would avoid making any predictions about possible
changes in peer review (other than improved efficiency) as a result of
open access and self-archiving. We are agreed that open access is the
optimal and inevitable endstate in any case. I think we both agree that
the sooner it comes, the better. If making predictions about changes in
peer-review is likely to slow rather than hasten the optimal and
inevitable, it would seem to be preferable not to venture predictions
at this time.

> In other words, self-archiving is the preeminent goal, and we should
> keep quiet about any changes it might bring to peer review in order not
> to frighten the uncommitted?


> How does this differ from somebody a decade or two ago who might have
> promised that electronic publishing would simply mean that journals
> would now be available online, but there would be no disturbing
> innovations such as scholars being confused by uncontrolled preprint
> distribution?

I can't see the point (and I'm not sure what you mean by scholars being
confused by uncontrolled preprint distribution!).

Yes, the journal transition from on-paper to on-line was also a case of
the optimal/inevitable, though a far less radical one than the transition
from toll-access to open-access. If someone, before the transition from
on-paper to on-line, had had some solid evidence or reasoning-based
predictions of untoward consequences that ought to make people think
twice about the transition, or first take some remedial measures, of
course he should have made those known (though I know of no such untoward
consequences, nor of any necessary remedial measures, in that case:
the advent of self-archiving is certainly not an untoward consequence!).

But that is not the case at all in what we are discussing here, namely,
the transition from toll-access to open-access through self-archiving
itself. The only untoward consequence I can see is that speculations that
it would induce radical changes in peer review, whether correct or not,
can only retard open-access. (Nor does it seem to me that you are making
these predictions because you are recommending that people think twice
about the transition, or first take some remedial measures.)

> ao> This system is really a collection
> ao> of many different systems, of varying effectiveness. They guarantee
> ao> neither correctness nor novelty of the results, even among the most
> ao> selective and prestigious journals.
>sh> No human (or nonhuman) judgement can guarantee that. The only relevant
>sh> question -- and it has not been asked or tested, but the default
>sh> assumption until it is tested MUST be for, not against, the causal role
>sh> of peer review in maintaining the current quality level of the research
>sh> literature -- is: How much better or worse is the literature's quality
>sh> with (1) classical peer review, (2) with hypothetical (not yet tested
>sh> and compared) alternatives, or (3) with no peer review at all (which,
>sh> by the way, is NOT tested already by existing pre-refereeing preprint
>sh> quality levels, for the invisible-hand reasons I've elaborated)?
> And as electronic publishing became a possibility, would it not have
> been natural to complain that the only way to maintain the quality of
> scholarly publication was to insist on proven techniques (thus
> ruling out self-archives, and extending the Ingelfinger rule to cover
> all journals)?

Self-archiving was tried extensively, and demonstrated to work. Now we
can confidently say it works and recommend it to everyone. Alternatives
to peer review have not been tried or demonstrated to work. Nothing
prevents people from trying to implement controlled experiments on
alternatives to peer review. But until they are done, and the outcome
known, there is no basis whatsoever for linking them to self-archiving
and open-access.

>sh> Absent the comparative data, there is only speculation (speculation that
>sh> may well put the quality of the current refereed literature at risk if
>sh> it were implemented before successful pre-testing). This is the sort
>sh> of speculation from which I think it is so important to dissociate the
>sh> question of self-archiving, completely. Any implied coupling will simply
>sh> lose us yet another generation of potential self-archivers.
> Again, by this line of reasoning, moving journals online should have been
> carefully dissociated from irresponsible talk about self-archiving and its
> "pollution" of the literature.

No, the analogy does not hold at all. The self-archivers tried an
experiment. It could have failed, in which case that would have been the
end of it, but as it happened, it was spectacularly successful. And they
did it well before the mass movement of journals online. There is, as
far as I can see, no contingency whatsoever between journals moving
online and authors posting their digital texts: It was the invention of
word processing and of the Internet that made the latter possible, not
journals going or not going online. I think this analogy is extremely
strained.

A strained version of the analogy that might work would be this:
Publishers might have said: "Let's not go online, because look at
the piracy-damage the xerox-era has done to us: Making online versions
available would invite even worse piracy." I'm rather sure that thoughts
along those lines DID slow down the journal transition to online by a few
years (until proprietary firewalls were in place) but nothing follows from
that for the case we are considering here: The continuity and causal
connection between analog and digital piracy is quite transparent,
but the transition from classical peer review to the hypothetical
alternatives you described certainly is not -- and its causal connection
with self-archiving and open-access is even less clear.

But let's take even that at face value: Suppose that (counterfactually,
to my mind) your predicted outcome, and your suggestion about its causal
connection with open access, were completely correct, and even that the
outcome you hypothesize was the optimal one, and would indeed yield a
research literature of at least the same quality and navigability as the
current one. Even if I myself believed that (which I don't at all, but
suppose it was true and I believed it true too), I still would strongly
urge you not to make that prediction at this time, because it would
not be believed that the outcome would be optimal and it would instead
only serve to confirm the very fears about peer review (wrong-headed,
in the event) that are currently holding people back from self-archiving.

The fact, though, is that no one KNOWS that your prediction is true. So
dilating on it now -- when its truth cannot even be known, and when,
on the face of it, proclaiming it will merely reinforce people's fears and
hesitations about self-archiving -- can hardly serve a useful purpose. (If
I were you, and I could not in good conscience deny my belief in the
causal connection between self-archiving, open-access, and the changes in
peer review that you described, I simply would not express my belief at
all, rather than risk voicing a fallible belief that is almost certainly
going to have a negative effect on something I regard as very positive,
but also certain.)

> sh> Peer-review is not a passive, static filter but an active,
> sh> dynamic, interactive, corrective one.
> So are (even to a greater degree) the many other stages of the
> "scholarly skywriting" continuum.

Perhaps in some cases; but peer review generates a reliable,
recognizable quality-level tag, a milestone with an established track
record along the continuum, on which the would-be user
can depend. Without that, it is not at all clear where a particular paper
stands, in quality and usability, along its own continuum...

>sh> Without that dynamic, answerable, pre-correction, and without the
>sh> tried-and-tested quality-label of an established journal to sign-post
>sh> the skyline, I am convinced that the literature would not only quickly
>sh> decline in quality, but it would become un-navigable -- till peer review
>sh> was simply reinvented!
> It is my contention that peer review is being reinvented, or more
> precisely, reshaped. I do not deny the importance of review by peers,
> but do question whether classical peer review is all that important.
> It just has too many warts!

Reinvented or reshaped where, and by whom? As we speak, whether a
self-archiver or not, not a single author of the annual 2,000,000 papers
that appear in any of the hierarchy of 20,000 peer-reviewed journals
published across all disciplines and around the world has stopped
submitting his papers to those journals. Your predictions are merely
speculations. They have not been implemented and tested, and what the
outcome would be if they were tested is not known.

I can only repeat that the occasional cases like the number-theoretic one
you gave -- in which there is a dramatic flurry of dynamic testing and
revision based on informal peer feedback well before the formal peer
review -- are far too rare to use as a model for the annual 2,000,000. Such
examples simply will not scale. They are by no means a systematic
test of paper quality levels in the absence of classical peer review (and
such examples occasionally arose in the paper era too).

Similarly, the fact that research interactions and advances often (no
one knows how often or how much) occur at the pre-refereeing preprint
exchange stage, before peer review is completed (8 months, on average,
and accelerating, in the Physics Archive)
is simply a reaffirmation of the fact that the "growth" region in
research partly predates the outcome of peer review. This is still an
effect occurring squarely inside a system that is quality-controlled by
and answerable to classical peer review. To predict that this anticipatory
effect -- and overall the quality/usability levels for the literature --
would still be there if classical peer review were not, the controlled
experiment must first actually be done (on a sufficiently large and
representative sample, and long enough to trust it will scale).

To my knowledge, the experiment has not been done. Nothing even remotely
like that has been tried.

>sh> Yet it is precisely this doomsday scenario that is holding would-be
>sh> self-archivers back today, and I'm afraid you may just be reinforcing
>sh> their fears here, Andrew!
> But what I am holding out is the promise of an improved system of
> review by peers.

You are predicting a radical change (which may not take place), and you are
predicting an alternative system (which has never been tried or tested)
that will work at least as well as the present one -- and you are
proposing these by way of assuaging people's worries about putting peer
review at risk by self-archiving.

These are indeed promises, and speculative promises. I doubt that
speculative promises -- even when they come from someone as informed
and authoritative on the economics and dynamics of online publication
as you -- will allay people's fears, if these are holding them back
from self-archiving. The only thing that could (or should) allay those
fears would be (substantial!) empirical evidence from controlled tests
of these hypothetical changes and hypothetical systems showing that the
resulting literature will be of at least the same level of quality and
usability as the current one.

No such tests have been done. No such evidence exists. (So it is best
not to speculate at all.)

>sh> I sense (I am reading this sequentially in real time) that we are about to
>sh> come to the "open peer commentary" alternative to "classical peer review":
> You sense incorrectly. In the extremely short space I had, I could
> not discuss open peer commentary in detail. It is likely to be an
> element of future review systems, but I do not venture to predict
> how important it will be.

But what I mean by open peer commentary here is precisely the
self-corrective peer feedback that you are hypothesizing will take the
place of classical peer review! Not just public comments, but also
direct emails to the author, based on open access to the raw drafts.
Isn't that precisely the substitute for classical peer review that you
are contemplating? Or do you think that in the post-peer-review era it
will simply be a matter of using the unrefereed literature exactly the
way we used the refereed literature, and reporting any problems or progress
with it only in our own (likewise unrefereed) papers? That I would find
an even more far-fetched speculation than the "open peer review" variant
most peer-review reformers have in mind (and that you certainly also
invoked in your paper)!

>sh> The self-correction in classical peer review is systematic, deliberate,
>sh> and answerable (on a journal by journal basis). The ad-lib
>sh> self-correctiveness of self-appointed sleuths tends more toward an
>sh> opinion poll than expert guidance.
> The "self-correction in classical peer review" is sadly inadequate.
> I wrote at length about this in "Tragic loss or good riddance ...,"
> and there are plenty of more systematic sources of complaints (for
> example, the recent "publish and be damned ..." by David Adams and
> Jonathan Knight in Nature, vol. 419, 24 Oct. 2002, pp. 772-776).
> The supposed gold standard of classical peer review is made of
> badly corroded pewter! The recent Bell Labs scandal with Jan
> Hendrik Schoen's fraudulent publications (many in Science and
> Nature) is just the tip of the iceberg.

If there is something seriously wrong with peer review, then
alternatives need to be tried and tested. This is the research area of
peer review testing and reform. No alternatives have been tested yet.
There are no empirical or logical or theoretical grounds for simply
ASSUMING that abandoning peer review and posting everything would remedy
the defects of peer review. Even less for assuming that self-archiving
would lead to that outcome. But there ARE prima facie grounds [defeasible
grounds, in my opinion] for worrying about it. And I am afraid that your
own empirically unsupported speculations will simply enhance those worries,
and hence retard the self-archiving.

Peer review is not a gold standard, but I'm sure you will agree that
any alternative would have to ensure at least the same standard, if
not better: Do you think there is this evidence for the promise you are
holding out above?

And while we're at it: What's the evidence that the few cases that
come to our attention are just the tip of the iceberg? Indeed, where
is the evidence that fraud is a serious problem at all? It seems to me
that the test of whether a research finding is important is whether it
leads to further research results or applications. One cannot build
further results or applications on fraud; it collapses. So by that
token, fraud -- at least important fraud -- will always come out. So
there's no iceberg there. Could the rest of the iceberg consist of
unimportant results that no one has bothered to try to build on or apply
yet? Perhaps. But is that, in turn, important -- important enough to be
called (switching metaphors) "badly corroded pewter"?

I am not saying that classical peer review cannot stand some improvement,
if improvement is possible (it is, after all, simply human expert
judgment, systematically and answerably applied by expert-appointed
experts, certified with a tag, backed up by a public track-record).
But then let us try and test improvements, not assume them a priori,
and link them causally to something else that is ostensibly quite
different, namely, the attempt to use the newfound potential of the online
medium to maximize access to the peer-reviewed literature, such as it is,
warts and all -- something that can provide huge potential increases in
research visibility, usability, citability, and impact, hence
productivity -- increases that were impossible in the toll-access era.

These potential benefits of open access are tried and true -- you
yourself have attested to them and documented them. The hypothetical
benefits of untested changes in the peer-review system that generated
the literature in question, on the other hand, are merely conjectures.

Why must we mix sure benefits with untested conjectures, especially when
the very voicing of those conjectures is likely to strengthen the worries
of those whom such worries have held back from partaking of the sure benefits?

>sh> In this new "system" we would be entrusting all of that to the four
>sh> winds!
> Hardly. We would be able to set our filters any way we wanted. We
> could choose to look only at something that had been vetted by experts
> of a top caliber (or, as an extreme example, only look at papers that
> were at least 10 years old and had been mentioned favorably in half
> a dozen survey articles in journals published by a given field's
> main professional society), or we could accept all the recent posting
> to arXiv and other archives.

I honestly can't see how you imagine this scaling to the annual
2,000,000 papers that currently appear in the classically peer reviewed
journals! Absent the peer reviewed journal, how can I know that a paper
has been "vetted by experts of a top caliber"? What tells me that (as
the journal-name currently does) for those 2,000,000 annual papers? And
what now gets the right-calibre experts vetting the right papers (as
editors formerly did, when they invited them to referee)? Do experts
voluntarily spend their precious time trawling the continuum of raw
papers on the net on their own?

As to the wait-ten-years solution: Even that (unrealistic as it is, for
a new medium that was meant to accelerate rather than retard research
communication) is hardly a sure thing. Until you have told me how it is
assured -- without the intervention of journals and editors whose job it
is to do just that -- that each paper will eventually get the vetting
it needs, I don't see it at all. I see 2,000,000 annual
papers in the sky, god knows where along their own respective
embryological continua, signposted only by links, hits, ad lib
commentaries, citations, and author-name value (which would no doubt
quickly decline in this raw flux). Is it really a sure thing here, that
if I pick a paper posted 10 years ago, it is just as reliable and usable
as it would be in a classically peer-reviewed journal? WHICH JOURNAL?

>sh> Andrew, both of us are frustrated by the slowness with which the
>sh> research community is coming to the realization that open access is the
>sh> optimal and inevitable outcome for them, and that self-archiving is the
>sh> way to get there. But do you really believe that inasmuch as they are
>sh> being held back by fears about peer review this paper will embolden them,
>sh> rather than confirming their worst fears?
> I believe it is imperative to be honest. A move to self-archiving
> will, I am convinced, lead to major changes in peer review, of the
> type I am describing. Not right away, since time scales are
> different, but eventually it will.

It is imperative to be honest with our facts and evidence. It is not at
all clear to me that it is imperative to be honest with our fallible
speculations.

>sh> Yet it is all completely unnecessary! All that's needed for open access
>sh> is to self-archive, and leave classical peer review alone! Why imply
>sh> otherwise?
> Yes, and we could have promised scholars that electronics would
> only lead to journals moving online, and that nobody would be
> allowed to take advantage of the new freedoms to self-archive
> their articles. That surely would have allayed the concerns
> of many (especially of publishers).

I think we dealt with that analogy once above. I don't think there is a
tertium comparationis between (1) the on-paper to on-line to self-archiving
transition and (2) the toll-access to open access to (nontrivial)
peer-review reform transition. Self-archiving was feasible, and
trivially predictable, on an individual basis. The peer review sea-changes
you mention here are far more hypothetical (and, in my opinion, just plain
wrong -- but in any case, untested, undemonstrated).

>sh> You are making predictions and conjectures, which is fine. But why link
>sh> them to open-access and especially the current unfortunate reluctance to
>sh> self-archive? Speculations will not relieve fears, especially not
>sh> speculations that tend to confirm them.
> I will deemphasize the link in my next revision, but will leave some
> residue of it there. Anything else, I feel, would not be responsible.

It is up to you, but I do not understand why your conscience tells you
you need to share your speculations (especially when they risk alienating
the majority who are still leery about self-archiving!). Is there not a
saying about honesty in business: that it would be a lie to deny, if
the customer asks, that someone across the street is selling your
product for half your price, but that if the customer doesn't ask,
honesty does not require you to tell him? Well, that's still not it:
Surely, if you don't know that
someone across the street is selling it for less, but you merely guess
that it is possible that he MIGHT be selling it for less, then surely
"honesty" is not quite the right descriptor for the policy of sending
every would-be customer across the street, just in case your guess
is right!

And in this case, even that isn't quite it, because I am at least as
convinced that your conjecture is false as you are that it is true,
and I think I have here given a few rather strong reasons (especially
that it is completely untested) for you to send your customers to my
side of the street instead (until you find the empirical evidence)!

>sh> The law-review case, about which I have written and puzzled before,
>sh> is an anomaly, and, as far as I know, there are many legal scholars
>sh> who are not satisfied with it (Hibbitts included). (Not only are
>sh> law-reviews student-run, but they are house organs, another anomaly in the
>sh> journal-quality hierarchy, where house-journals tend to rank low, a kind
>sh> of vanity-press.) I think it is highly inadvisable to try to generalize
>sh> this case in any way, when it is itself unique and poorly understood. In
>sh> any case, it certainly will not be reassuring to professors who are
>sh> contemplating whether or not they should self-archive, that doing so
>sh> may mean that whereas they are marking their students' essays on
>sh> Tuesdays and Thursdays, if they self-archive their own papers, their
>sh> students may be marking them on Wednesdays and Fridays, instead of the
>sh> qualified editor-mediated peers of times past.
> The law review case may be "poorly understood," but so is the whole
> classical peer review system. It does, however, serve as a counterexample
> to many extreme claims about what kind of review is needed. That many
> scholars are not satisfied with it is nothing special. The same can
> be said of classical peer review.

The classical peer-review system has 20,000 journals and 2,000,000
articles annually attesting to the (hierarchical) quality-levels it
delivers. Alternatives have to have evidence that they can deliver at
least the same quality levels. The few hundred college law reviews are a
special case. They are not peer reviewed (hence not counted in the 20K
above); they are house-journals rather than independent ones; and there
are very specific grumbles about their quality -- in explicit comparison
with peer-reviewed journals of legal and related scholarship with which
they do not compare favorably (except perhaps for the most elite law
schools, but that is only because, being the house organs, they get the
top house scholars).

So, no, I would say it does not serve as a counterexample at all. What
would serve as a counterexample would be taking, say, a top, middle and
low level journal in your field (and in mine) and passing over its "peer
review" to students, and seeing whether that would maintain the same
quality level across the years -- and then to phase out the students
altogether, and let raw submissions all appear on the web, for
self-selected vetters to patrol, and see what that does to quality,
and navigability, and usability, and impact...

>ao> The growing role of interdisciplinary
>ao> research might lead to a generally greater role for non-peers in reviewing
>ao> publications.
>sh> I can't follow this at all. Interdisciplinary work requires review by
>sh> peers from more disciplines, not from non-peers. ("Peer" means qualified
>sh> expert.)
> If I, as a mathematician, need to rely on some results from physics,
> I may end up criticizing the presentation and methodology of a
> physics paper even without understanding all the physics that is
> involved.

So who IS qualified to judge the soundness of an interdisciplinary
math/physics paper then, a chemist? A sports coach?

> It is a weak analogy I would not want to push too far, but note that
> many music teachers and sports coaches are very successful, and
> train top stars in their areas, without being able to perform at
> their students' level.

This all seems rather weak and unrepresentative when what it must scale
up to is all 20,000 journals in all fields, whether interdisciplinary or
not. As editor I have consulted referees who do not publish much but are
known masters of their field. It is not the referee's publication count or
impact factor that matters but their expertise.

> ao> However, in most cases only peers are truly qualified to
> ao> review technical results. However, peer evaluations can be obtained,
> ao> and increasingly are being obtained, much more flexibly than through the
> ao> traditional anonymous journal refereeing process.
>sh> That is not my experience. It seems that qualified referees, an
>sh> overharvested resource, are becoming harder and harder to come by. They
>sh> are overloaded, and take a long time to deliver their reports. Is the
>sh> idea that they will be more available if approached some other way? Or
>sh> if they self-select? But what if they all want to review paper X, and no
>sh> one -- or dilettantes -- review papers A-J?
> You help make my case. Classical peer review typically is too slow,
> and it is getting harder to run. Self-selection is a major antidote.

It doesn't scale! There are a few papers many people would be happy to
referee, and there are many papers few people would be willing to
referee -- unless specifically asked by a trusted editor, for an
established journal, and with a high presumption that the author will be
made answerable and the effort not wasted. Do you really imagine
self-selection for the annual 2,000,000? Are you not too focussed on a
small and anecdotal sample? What is the "force" that will ensure that
the 2,000,000 get their due via self-selection? And when? And how will
we know it?

> Yes, it is not ideal, as indeed, interests of potential referees
> won't be uniformly distributed, but I will settle for that if I can't
> get anything better.

You can get something better now, with classical peer review. The burden
on you is to show that this alternative would be at least as good. Would
it? How?

> As the primality example later on shows, it
> is the most important articles that are likely to get the fastest
> and most thorough scrutiny, and that is as it should be.

Indeed. But alas it does not scale to the annual 2,000,000, by
definition. (And it is the cases where the name-value of the author, or
even of the title, is not a sufficient "cue" that it belongs in the circle
of the "most important": those are the real test cases. How does the
anarchic self-selection system pick that up? Otherwise, you are simply
generalizing from the highly unrepresentative sample of the known elite:
If they were the only ones we had to worry about, maybe we wouldn't need
peer review at all. But what percentage of the 2,000,000 do you think
that covers?)

> If I am looking for something in
> psychology, an area I know very little about, and find a relatively
> recent archives paper that has not been published, but is referenced
> favorably by Stevan Harnad and several other famous figures, should
> I not be willing to accept it as of good quality?

What I want to know is how Stevan Harnad found that paper, among the
2,000,000 (and to avoid infinite regress, we must not assume that he
was guided by a still more famous figure!) and decided it was worth his
time to read, and the risk to use and cite it?

> I would dispute the claimed strong correlation between rejection
> rates and quality. Having served on the editorial board of what
> is usually regarded as one of the three most prestigious journals
> in mathematics, I can say that its rejection rate was actually
> lower than that of several lower quality journals I have served on.
> The reason was self-selection.

I am aware of that. But look at what constrains that self-selection and
makes it possible: That journal has an established track record of
publishing all and only the highest quality research. It WOULD have
rejected the papers that appear in 2nd tier journals if they had been
foolishly submitted to the journal in question.

Now, I ask you, in a system where the only "self-selection" is to put
all 2,000,000 raw drafts willy-nilly up in the sky, how is quality
supposed to sort itself out? How do authors find referees at the right
level? And how will that level be sign-posted?

Yet THAT is the real test that you do not actually even consider as a
thought-experiment in these speculations based on tiny, elite subsets,
and positive evidence only! Not only will none of that scale, but even
for THAT effect, the invisible hand of peer review had to be there.
Maybe I am conceding too much if I agree that the elite don't really
need the constraint of being answerable to classical peer review. Maybe
they do (or did earlier in their careers). But in any case, surely the
rest of us do!

> Aside from a moderate fraction
> of crank submissions (something like 10 to 20%), the overwhelming
> majority were of very high quality. Authors knew of the journal
> standards, and did not bother to submit run-of-the-mill papers.
> This is just one anecdotal piece of evidence, but from what
> I have heard from other editors, it is not all that atypical.

Not atypical? How can a self-selection at the very top of the hierarchy be
typical of the whole hierarchy? Do people have an equally unerring
sense that their destiny is tier 2 rather than tier 3? And if, mirabile
dictu, they do, do their prospective self-selecting vetters also have this
unerring matching sense (especially bearing in mind that the hierarchy
gets fatter as you go down)?

>sh> Or failing that, I wish I could at least write a commentary by way of
>sh> rebuttal!
> Why don't you propose it to the editors? (BTW, mine is just one of
> several short contributions they have solicited. I have not seen
> any of the others.)

I'll be happy to propose it if you allow me. May I send the editor a copy
of these two exchanges, by way of a sketch of where we differ? (Which
journal is it, by the way, and what is the email of the editor?)

Stevan Harnad
Received on Tue Nov 05 2002 - 21:56:29 GMT
