Re: Alma Swan: The OA citation advantage

From: Stevan Harnad <amsciforum_at_GMAIL.COM>
Date: Wed, 17 Mar 2010 15:05:53 -0400

On Mon, Mar 15, 2010 at 9:53 PM, Philip Davis <> wrote:
> Stevan,
> First of all, I did not state in my critique of the Swan report
> that meta-analysis was Alma's idea, but that this was your
> suggestion (as posted to liblicense-l, sigmetrics-l, and other
> listservs).
> Secondly, you keep trying to divert criticism of your colleague's
> work by critiquing my own work, as if *"your best defense is a
> good offense."* You've posted 5 rapid responses to the BMJ 2008
> paper and another rapid response to the BMJ editorial.
[...]
> I've responded to your concerns and have better things to do than
> engage in an endless discussion with you when there is
> absolutely no hope of changing your mind. You can continue to
> plaster the Internet with your critiques and astonishment that I
> haven't responded if this makes you feel better. I have
> students to teach and a dissertation to write.
> --Phil Davis

The message below is forwarded from David Wilson, with permission
[references and links added]:

[See also (thanks to Peter Suber for spotting this study!): Wagner, A.
Ben (2010) Open Access Citation Advantage: An Annotated Bibliography.
Issues in Science and Technology Librarianship. 60. Winter 2010]

        Date: March 17, 2010 11:17:10 AM EDT (CA)
        From: David Wilson dwilsonb --
        Subject: Re: Comment on Meta-Analysis
        To: harnad --


Interesting discussion. Phil Davis has a limited, albeit common, view of
meta-analysis. Within medicine, meta-analysis is generally applied to a
small set of highly homogeneous studies. As such, the focus is on the
overall or pooled effect with only a secondary focus on variability in
effects. Within the social sciences, there is a strong tradition of
meta-analyzing fairly heterogeneous sets of studies. The focus is clearly
not on the overall effect, which would be rather meaningless, but rather on
the variability in effect and the study characteristics, both methodological
and substantive, that explain that variability.
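The social-science approach Wilson describes — pooling heterogeneous studies and then examining the variability itself — can be sketched with a DerSimonian-Laird random-effects meta-analysis. This is a minimal illustration, not an analysis of any actual OA citation data: the effect sizes and variances below are invented.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird),
# showing the pooled effect alongside the heterogeneity statistics
# (Q, tau^2, I^2) that Wilson says should be the real focus.
# The (effect_size, variance) pairs are hypothetical, for illustration only.
import math

studies = [(0.40, 0.02), (0.25, 0.01), (0.55, 0.04), (0.10, 0.015), (0.35, 0.03)]

# Fixed-effect (inverse-variance) weights and the Q heterogeneity statistic
w = [1.0 / v for _, v in studies]
fixed = sum(wi * es for wi, (es, _) in zip(w, studies)) / sum(w)
Q = sum(wi * (es - fixed) ** 2 for wi, (es, _) in zip(w, studies))
df = len(studies) - 1

# DerSimonian-Laird estimate of between-study variance tau^2,
# then re-pool with random-effects weights 1 / (v_i + tau^2)
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1.0 / (v + tau2) for _, v in studies]
pooled = sum(wi * es for wi, (es, _) in zip(w_re, studies)) / sum(w_re)

# I^2: percentage of total variation attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled effect = {pooled:.3f}, Q = {Q:.2f} (df={df}), "
      f"tau^2 = {tau2:.4f}, I^2 = {I2:.1f}%")
```

A large I² would signal exactly the situation Wilson describes: the overall effect is less informative than the study characteristics that explain the spread.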

I don't know enough about this area to ascertain the credibility of his
criticism of the methodologies of the various studies involved. However, the
one study that he claims is methodologically superior in terms of internal
validity (which it might be) is clearly deficient in statistical power. As
such, it provides only a weak test. Recall that a statistically
nonsignificant finding is a weak finding -- a failure to reject the null,
not an acceptance of the null.
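Wilson's point about underpowered null results can be made concrete with a quick power calculation. The sketch below uses a normal-approximation power formula for a two-sided two-sample test; the effect size and sample sizes are illustrative assumptions, not figures from any study under discussion.

```python
# Illustration of why a nonsignificant result from a small study is weak
# evidence: approximate power of a two-sided two-sample z-test to detect
# a modest standardized effect d. All numbers here are hypothetical.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d, n_per_group, z_alpha=1.959964):
    """Normal-approximation power for standardized effect d, two-sided alpha=.05."""
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality of the test statistic
    return 1.0 - normal_cdf(z_alpha - ncp) + normal_cdf(-z_alpha - ncp)

# For a modest effect (d = 0.2), small samples give little chance of
# detection -- so "p > .05" mostly reflects lack of power, not absence
# of the effect.
for n in (25, 100, 400):
    print(f"n per group = {n:4d}  power = {power_two_sample(0.2, n):.2f}")
```

With small n the test would miss a real modest effect most of the time, which is why a null finding there cannot rebut the positive effects reported by larger studies.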

Meta-analysis could be put to good use in this area. It won't resolve the
issue of whether the studies that Davis thinks are flawed are in fact flawed.
But it could explore the consistency in effect across these studies and
whether the effect varies by the method used. Both would add to the debate
on this issue.
    [Lipsey, MW & Wilson DB (2001) Practical Meta-Analysis. Sage.]
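The moderator analysis Wilson proposes — testing whether the effect varies by study method — is conventionally done with a between-groups Q statistic. This sketch assumes two invented groups of studies (the groupings and effect sizes are hypothetical, not real OA results):

```python
# Sketch of a moderator (subgroup) analysis: does the pooled effect differ
# between methodologically distinct groups of studies? Q_between is compared
# against a chi-square distribution with (number of groups - 1) df.
# All effect sizes and variances below are invented for illustration.

def fixed_effect(studies):
    """Inverse-variance pooled effect and total weight for (es, var) pairs."""
    w = [1.0 / v for _, v in studies]
    pooled = sum(wi * es for wi, (es, _) in zip(w, studies)) / sum(w)
    return pooled, sum(w)

# Hypothetical groups: observational comparisons vs. randomized designs
observational = [(0.45, 0.02), (0.50, 0.03), (0.40, 0.025)]
randomized = [(0.05, 0.05), (0.10, 0.06)]
groups = [observational, randomized]

pooled_all, w_all = fixed_effect(observational + randomized)

# Q_between: weighted squared deviation of each group's mean from the grand mean
Q_between = sum(
    wg * (pg - pooled_all) ** 2
    for pg, wg in (fixed_effect(g) for g in groups)
)
print(f"grand mean = {pooled_all:.3f}, Q_between = {Q_between:.2f} "
      f"on {len(groups) - 1} df")
```

A significant Q_between would indicate that study method explains part of the variability in effect — precisely the question that would advance the debate over which study designs are trustworthy here.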

Best, Dave

David B. Wilson, Ph.D.
Associate Professor
Chair, Administration of Justice Department
George Mason University
10900 University Boulevard, MS 4F4
Manassas, VA  20110-2203
Phone/Fax: 703.993.4701
> Stevan Harnad wrote:
>> Phil,
>> Thanks for the helpful feedback.
>> I'm afraid you're mistaken about meta-analysis. It can be a
>> perfectly appropriate statistical technique for analyzing a large
>> number of studies, with positive and negative outcomes, varying
>> in methodological rigor, sample size and effect size. It is a way
>> of estimating whether or not there is a significant underlying
>> effect.
>> I think you may be inadvertently mixing up the criteria for (1)
>> eligibility and comparability for a meta-analysis with the
>> criteria for (2) a clinical drug trial (for which there rightly
>> tends to be an insistence on randomized control trials in
>> biomedical research).
>> Now I would again like to take the opportunity of receiving this
>> helpful feedback from you to remind you about some feedback I
>> have given you repeatedly on your own 2008
>> study -- the randomized control trial that you suggest has been
>> the only methodologically sound test of the OA Advantage so far:
>> You forgot to do a self-selection control condition. That would
>> be rather like doing a randomized control trial on a drug -- to
>> show that the nonrandom control trials that have reported a
>> positive benefit for that drug were really just self-selection
>> artifacts -- but neglecting to include a replication of the
>> self-selection artifact in your own sample, as a control.
>> For, you see, if your own sample was too small and/or too brief
>> (e.g., you didn't administer the drug for as long an interval, or
>> to as many patients, as the nonrandom studies reporting the
>> positive effects had done), then your own null effect with a
>> randomized trial would be just that: a null effect, not a
>> demonstration that randomizing eliminates the nonrandomized drug
>> effect. (This is the kind of methodological weakness, for
>> example, that multiple studies can be weighted for, in a
>> meta-analysis of positive, negative and null effects.)
>> [I am responding to your public feedback here, on the liblicense
>> and SERIALST lists, but not also on your SSP Blog, where you
>> likewise publicly posted this same feedback (along with other,
>> rather shriller remarks) because I am assuming
>> that you will again decline to post my response on your blog, as
>> you did the previous time that you publicly posted your feedback
>> on my work both there and elsewhere --
>> refusing my response on your blog on the grounds that it had
>> already been publicly posted elsewhere!...]
>> -- Stevan Harnad
>> PS The idea of doing a meta-analysis came from me, not from Dr.
>> Swan.
Received on Wed Mar 17 2010 - 19:14:22 GMT

This archive was generated by hypermail 2.3.0 : Fri Dec 10 2010 - 19:50:07 GMT