
From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>

Date: Tue, 19 Sep 2006 18:48:36 +0100

On Tue, 19 Sep 2006 l.hurtado_at_ED.AC.UK wrote:

> --Charles Oppenheim insists that he's able to prove a significant
> statistical correlation of some sort between RAE results in a variety
> of fields, including Humanities subjects. It would be good to verify
> this. I presume that online publication(s) are . . . available
> somewhere or will be?

Several are (all *should* be! Charles?). Look in OAIster and Google Scholar.

> --It will be interesting to see the specifics behind the claims. I'm
> not clear what is meant by this "correlation". Does it mean that those
> depts given a 5* also show up with . . . what, a much higher number of
> their publications getting cited in the selected venues, or the
> individuals in the dept being cited more frequently, or . . . whatever?

Correlation (positive) means high goes with high and low goes with low.

So the higher-ranked departments have a higher total number of citations

of the papers published by their submitted researchers, and the lower-ranked

departments have a lower total.

In a word, the papers of higher RAE-ranked departments are cited more.

> And what does "significant" correlation mean? (And please, I hope
> that any publications don't give me that "regression to the mean"
> technospeak, as I'm not a statistician, but I can follow logic. Give
> me in plain English what is being counted and compared and how.)

The statistical significance of the correlation (which is *not* the same as its

size or importance) is the probability that the correlation just happened by

chance. If this probability is less than 0.001, it is fairly safe to assume

that it is not just a chance accident. And since the same correlation keeps being

found, across fields, the likelihood that these are all happening by accident is

negligible.
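The significance test can be sketched numerically. Here is a minimal, purely illustrative Python sketch (not from the studies discussed; the department figures are made up): it estimates, by repeatedly re-pairing the two lists at random, how often a correlation at least as large as the observed one would arise by chance alone.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def chance_probability(xs, ys, trials=10000, seed=42):
    """Fraction of random re-pairings whose correlation is at least
    as large (in absolute value) as the observed one: a permutation test."""
    observed = abs(pearson_r(xs, ys))
    rng = random.Random(seed)
    ys = list(ys)  # work on a copy so the caller's data is untouched
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Made-up figures: total citations and RAE-style scores for 12 departments.
citations = [320, 290, 260, 240, 210, 190, 160, 140, 120, 95, 70, 50]
rae_score = [5.5, 5.0, 5.0, 4.5, 4.5, 4.0, 3.5, 3.0, 3.0, 2.5, 2.0, 1.5]

r = pearson_r(citations, rae_score)
p = chance_probability(citations, rae_score)
print(f"r = {r:.3f}, estimated chance probability = {p:.4f}")
```

With data this neatly ordered, hardly any random re-pairing matches the observed correlation, so the estimated chance probability comes out near zero.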

The size of the effect is another question: Correlation coefficients vary from -1

to +1. A correlation of 0 is no correlation at all. A simple way to think of the

size of the correlation is to square the correlation coefficient. That tells you

what percentage of the variation in the values of one variable is predictable from

the variation of the values in the other variable. For example, if height and

weight have a (statistically significant) correlation of 0.5 (and the average

height is 5 feet, standard deviation 6 inches, and the average weight is 150

pounds, standard deviation 50 pounds), then if you tell me that someone is 6 feet

tall (two standard deviations above average), my best prediction is that he weighs

200 pounds (the average plus 0.5 x 2 standard deviations of weight), and that

prediction accounts for about 25% of the variation in weight (because 0.5 squared

is 0.25, or 25%). If the correlation were instead 0.9, the prediction would be 240

pounds and would account for 81% of the variation.

Another example is the correlation between barometric pressure and rain. If the

correlation is 0.5, then tell me the barometric pressure and I can predict how much

it will rain, with 25% accuracy; if the correlation is 0.9, I can predict with 81%

accuracy. (Another way to express the accuracy is by putting a +/- range

around the predicted value: that range is broader with lower correlations and

narrower with higher correlations.)
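That +/- range can be made concrete. A minimal sketch (my illustration, not the poster's): the spread of the prediction error is the outcome's standard deviation scaled by sqrt(1 - r^2), so the window shrinks as the correlation grows.

```python
import math

def prediction_window(sd_outcome, r):
    """Standard deviation of the error left over after predicting one
    variable from another with correlation r: sd * sqrt(1 - r**2)."""
    return sd_outcome * math.sqrt(1.0 - r * r)

sd_weight = 50.0  # pounds, as in the height/weight example above

for r in (0.5, 0.9):
    explained = r * r  # share of the variation that is predictable
    window = prediction_window(sd_weight, r)
    print(f"r = {r}: {explained:.0%} of variation explained, "
          f"prediction good to about +/- {window:.1f} pounds")
```

For r = 0.5 the window is about +/- 43 pounds; for r = 0.9 it narrows to about +/- 22 pounds, matching the point that higher correlations give tighter predictions.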

And that's how it is with citations as predictors of RAE ranks.

> --I also note a somewhat different tone in Stevan's comments, which
> seem to me to admit more forthrightly that "we ain't there yet" when it
> comes to the wherewithal actually to conduct an across-the-board
> analysis of the kind being mooted. Charles seems to suggest that he's
> able to do this now. Or do I misunderstand things?

I think Charles and I are in 100% agreement:

(1) All evidence so far is that RAE ranks can be predicted from citation

counts in every discipline tested.

(2) There are still disciplines to be tested.

(3) Citation counts are not the only possible metrics.

(4) RAE panels should definitely be scrapped in favour of metrics in all

fields except those where no metric can be shown to correlate sufficiently

closely with the RAE rankings.
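The kind of analysis in point (1) can be sketched in a few lines. Since Charles (quoted below) mentions Spearman as well as Pearson coefficients, here is a self-contained Spearman rank correlation on hypothetical department data; the figures are invented for illustration only.

```python
def ranks(values):
    """Rank values from 1 (smallest) upward, averaging tied groups."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rk = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group
        for k in range(i, j + 1):
            rk[order[k]] = avg
        i = j + 1
    return rk

def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sxx = sum((a - mx) ** 2 for a in rx)
    syy = sum((b - my) ** 2 for b in ry)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical departments: total citation counts and RAE grades.
citations = [410, 350, 300, 220, 180, 120, 90, 60]
rae_grade = [5.0, 5.0, 4.0, 4.0, 3.5, 3.0, 3.0, 2.0]

rho = spearman_rho(citations, rae_grade)
print(f"Spearman rho = {rho:.2f}")
```

Using ranks rather than raw counts makes the comparison robust to the fact that citation totals and RAE grades sit on entirely different scales.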

> In any case, I hope that all parties understand the importance of
> making sure that all of us affected by any metrics approach understand
> it and can see its superiority and full feasibility. I certainly ain't
> there yet on any of these matters, but let's see what rolls out. If
> Stevan and Charles can put it together and show the rest of us how it
> works well, I'll go for it. I just want to be shown (I originated from
> Missouri).

Y'all hold onto yir hats; ya ain't seed nothin' yet!

Stevan Harnad

> Quoting "C.Oppenheim" <C.Oppenheim_at_LBORO.AC.UK>:
>
> > My answer is that it is statistically significantly correlated with the RAE
> > results, based upon long, intensive peer group assessment by a group of
> > experts. As Stevan Harnad has frequently commented, what is remarkable is
> > that despite the fact that journals are relatively unimportant in the
> > humanities, the correlation still works.
> >
> > I stress that my approach is purely pragmatic. I'm not suggesting a cause
> > and effect, simply that there is a strong correlation.
> >
> > Charles
> >
> > Professor Charles Oppenheim
> > Head
> > Department of Information Science
> > Loughborough University
> > Loughborough
> > Leics LE11 3TU
> >
> > Tel 01509-223065
> > Fax 01509-223053
> > e mail C.Oppenheim_at_lboro.ac.uk
> > ----- Original Message -----
> > From: <l.hurtado_at_ED.AC.UK>
> > To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
> > Sent: Tuesday, September 19, 2006 3:22 PM
> > Subject: Re: Future UK RAEs to be Metrics-Based
> >

> > Sorry, Charles (if I may), but your response betrays a misunderstanding
> > of my concern and question. Please try to hear me before giving an
> > answer:
> > You're assuming that picking up items that happen to be cited in a
> > selection of journals is somehow adequate, and it's THIS that I'm
> > concerned about. What is your BASIS for your assumption about my field?
> > Yes, of course, you can collect everything cited in a given set of
> > journals, in principle. But is that the same thing as a representative
> > picture of how scholarship is being treated in a field such as mine, in
> > which journals are not necessarily the principal medium in which
> > scholarship is established or exhibited? This readily illustrates my
> > concern about what assumptions go into experiments or "empirical"
> > studies before they are run.
> >
> > So, QUESTION: What is your empirical basis for the assumption that
> > simply monitoring a given set of journals is sufficient for any/all
> > fields? You haven't addressed this yet.
> > Larry

> >
> > Quoting "C.Oppenheim" <C.Oppenheim_at_LBORO.AC.UK>:
> >
> >> The question betrays a misunderstanding of how citation indexes work.
> >> Citation indexes scan the journal literature for citations to all media,
> >> not just other journals. So it makes no difference what vehicle the
> >> humanities scholar disseminated his/her output in; the item will get
> >> picked up by the citation index.
> >>
> >> The notion that a group of informed scholars could come up with a ranking
> >> list over 30 minutes is an appealing one, but the fact remains that the
> >> UK's RAE takes about a year to collect and analyse the data, together
> >> with many meetings of the group of scholars, before decisions are made.
> >> One reason for this tedious approach is to avoid legal challenges that
> >> the results were not robustly reached.
> >>
> >> Charles
> >>
> >> Professor Charles Oppenheim
> >> Head
> >> Department of Information Science
> >> Loughborough University
> >> Loughborough
> >> Leics LE11 3TU
> >>
> >> Tel 01509-223065
> >> Fax 01509-223053
> >> e mail C.Oppenheim_at_lboro.ac.uk
> >> ----- Original Message -----
> >> From: <l.hurtado_at_ED.AC.UK>
> >> To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
> >> Sent: Tuesday, September 19, 2006 12:34 PM
> >> Subject: Re: Future UK RAEs to be Metrics-Based
> >>
> >>

> >> My scepticism (which is capable of being satisfied) is not toward the
> >> *idea* that there may well be a correlation between the frequency with
> >> which scholarly work is cited and the wider estimate of that
> >> scholar/dept. I am dubious that it has been demonstrated that
> >> conducting such an analysis *can be done* for all disciplines,
> >> particularly at least some Humanities fields. My scepticism rests upon
> >> the bases I've iterated before. I apologize if I seem to be replaying a
> >> record, but it's not yet clear to me that my concerns are effectively
> >> engaged.
> >> --Humanities scholarly publishing is more diverse in venue/genre than
> >> in some other fields. Indeed, journals are not particularly regarded
> >> as quite so central, but only one among several respected and
> >> frequented genres, which include multi-author books, and (perhaps
> >> particularly) monographs.
> >>
> >> QUESTION: Are the studies that supposedly show such meaningful
> >> correlations actually drawing upon the full spread of publication
> >> genres appropriate to the fields in view? (I'd be surprised but
> >> delighted were the answer yes, because I'm not aware of any mechanism
> >> in place, such as ISI in journal monitoring, for surveying and counting
> >> in such a vast body of material.)
> >>
> >> I'm not pushing at all for the labor-intensive RAE of the past.
> >> Indeed, if the question is not how do individual scholars stack up in
> >> comparison to others in their field (which the RAE actually wasn't
> >> designed to determine), but instead how can we identify depts into
> >> which a disproportionate amount of govt funding should be pumped, then
> >> I think in almost any field a group of informed scholars could readily
> >> determine the top 5-10 places within 30 minutes, and with time left
> >> over for coffee.
> >>
> >> I'm just asking for more transparency and evidence behind the
> >> enthusiasm for replacing RAE with "metrics".
> >>
> >> Larry
> >>

> >> Quoting "C.Oppenheim" <C.Oppenheim_at_LBORO.AC.UK>:
> >>
> >>> The correlation is between number of citations in total (and average
> >>> number of citations per member of staff) received by a Department over
> >>> the RAE period (1996-2001) and the RAE score received by the Department
> >>> following expert peer review. Correlation analyses are done using
> >>> Pearson or Spearman correlation coefficients. The fact that so few
> >>> humanities scholars publish journal articles does not affect this
> >>> result.
> >>>
> >>> A paper on the topic is in preparation at the moment.
> >>>
> >>> What intrigues me is why there is so much scepticism about the notion.
> >>> RAE is done by peer review experts. Citations are also done by
> >>> (presumably) experts who choose to cite a particular work. So one would
> >>> expect a correlation between the two, wouldn't one? What it tells us is
> >>> that high quality research leads to both high RAE scores AND high
> >>> citation counts.
> >>>
> >>> I do these calculations (and I've covered many subject areas over the
> >>> years, but not biblical studies - something for the future!) in a totally
> >>> open-minded manner. If I get a non-significant or zero correlation in
> >>> such a study in the future, I will faithfully report it. But so far, that
> >>> hasn't happened.
> >>>
> >>> Charles
> >>>
> >>> Professor Charles Oppenheim
> >>> Head
> >>> Department of Information Science
> >>> Loughborough University
> >>> Loughborough
> >>> Leics LE11 3TU
> >>>
> >>> Tel 01509-223065
> >>> Fax 01509-223053
> >>> e mail C.Oppenheim_at_lboro.ac.uk

> >>> ----- Original Message -----
> >>> From: <l.hurtado_at_ED.AC.UK>
> >>> To: <AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM_at_LISTSERVER.SIGMAXI.ORG>
> >>> Sent: Monday, September 18, 2006 8:37 PM
> >>> Subject: Re: Future UK RAEs to be Metrics-Based
> >>>
> >>>
> >>> Well, I'm all for empirically-based views in these matters. So, if
> >>> Oppenheim or others have actually soundly based studies showing what
> >>> Stevan and Oppenheim claim, then that's to be noted. I'll have to see
> >>> the stuff when it's published. In the meanwhile, a couple of further
> >>> questions:
> >>> --Pardon me for being out of touch, perhaps, but more precisely what is
> >>> being measured? What does journal "citation counts" refer to?
> >>> Citation of journal articles? Or citation of various things in journal
> >>> articles (and why privilege this medium?)? Or . . . what?
> >>> --What does "correlation" between RAE results and "citation counts"
> >>> actually comprise?
> >>>
> >>> Let me lay out further reasons for some skepticism. In my own field
> >>> (biblical studies/theology), I'd say most senior-level scholars
> >>> actually publish very infrequently in refereed journals. We do perhaps
> >>> more in earlier years, but as we get to senior levels we tend (a) to
> >>> get requests for papers for multi-author volumes, and (b) to devote
> >>> ourselves to projects that best issue in book-length publications. So,
> >>> if my own productivity and impact were assessed by how many journal
> >>> articles I've published in the last five years, I'd look poor (even
> >>> though . . . well, let's say that I rather suspect that wouldn't be the
> >>> way I'm perceived by peers in the field).
> >>> Or is the metric to comprise how many times I'm *cited* in journals?
> >>> If so, is there some proven correlation between a scholar's impact or
> >>> significance of publications in the field and how many times he happens
> >>> to be cited in this one genre of publication? I'm just a bit
> >>> suspicious of the assumptions, which I still suspect are drawn (all
> >>> quite innocently, but naively) from disciplines in which journal
> >>> publication is much more the main and significant venue for scholarly
> >>> publication.
> >>> And, as we all know, "empirical" studies depend entirely on the
> >>> assumptions that lie at their base. So their value is heavily framed
> >>> by the validity and adequacy of the governing assumptions. No
> >>> accusations, just concerns.
> >>> Larry Hurtado
> >>>

> >>> Quoting Stevan Harnad <harnad_at_ECS.SOTON.AC.UK>:
> >>>
> >>>> On Mon, 18 Sep 2006, Larry Hurtado wrote:
> >>>>
> >>>>> Stevan and I have exchanged views on the *feasibility* of a metrics
> >>>>> approach to assessing research strength in the Humanities, and he's
> >>>>> impressed me that something such *might well* be feasible *when/if*
> >>>>> certain as-yet untested and undeveloped things fall into place. I note,
> >>>>> e.g., in Stevan's addendum to Oppenheim's comment that a way of
> >>>>> handling book-based disciplines "has not yet been looked at", and that
> >>>>> a number of other matters are as yet "untested".
> >>>>
> >>>> Larry is quite right that the (rather obvious and straightforward)
> >>>> procedure of self-archiving books' metadata and cited references in
> >>>> order to derive a comprehensive book-citation index (which would
> >>>> of course include journal articles citing books, books citing books,
> >>>> and books citing journal articles) had not yet been implemented or
> >>>> tested.
> >>>>
> >>>> However, the way to go about it is quite clear, and awaits only OA
> >>>> self-archiving mandates (to which a mandate to self-archive one's book
> >>>> metadata and reference list should be added as a matter of course).
> >>>>
> >>>> But please recall that I am an evangelist for OA self-archiving, because
> >>>> I *know* it can be done, that it works, and that it confers substantial
> >>>> benefits in terms of research access, usage and impact.
> >>>>
> >>>> Insofar as metrics are concerned, I am not an evangelist, but merely an
> >>>> enthusiast: The evidence is there, almost as clearly as it is with the
> >>>> OA impact-advantage, that citation counts are strongly correlated with
> >>>> RAE rankings in every discipline so far tested. Larry seems to pass over
> >>>> this evidence in his remark about the as yet incomplete book citation
> >>>> data (ISI has some, but they are only partial). But what does he have to
> >>>> say about the correlation between RAE rankings and *journal article
> >>>> citation counts* in the humanities (i.e., in the "book-based"
> >>>> disciplines)? Charles will, for example, soon be reporting strong
> >>>> correlations in Music. Even without having to wait for a book-impact
> >>>> index, it seems clear that there are as yet no reported empirical
> >>>> exceptions to the correlation between journal article citation metrics
> >>>> and RAE outcomes.
> >>>>
> >>>> (I hope Charles will reply directly, posting some references to his and
> >>>> others' studies.)
> >>>>
> >>>>> This being the case, it is certainly not so a priori to say that a
> >>>>> metrics approach is not now really feasible for some disciplines.
> >>>>
> >>>> Nothing a priori about it: A posteriori, every discipline so far tested
> >>>> has shown positive correlations between its journal citation counts and
> >>>> its RAE rankings, including several Humanities disciplines.
> >>>>
> >>>> The advantage of having one last profligate panel-based RAE in parallel
> >>>> with the metric one in 2008 is that not a stone will be left unturned.
> >>>> If there prove to be any disciplines having small or non-existent
> >>>> correlations with metrics, they can and should be evaluated otherwise.
> >>>> But let us not assume, a priori, that there will be any such
> >>>> disciplines.
> >>>>
> >>>>> I emphasize that my point is not a philosophical one, but strictly
> >>>>> whether as yet a worked-out scheme for handling all Humanities
> >>>>> disciplines rightly is in place, or capable of being mounted without
> >>>>> some significant further developments, or even thought out adequately.
> >>>>
> >>>> It depends entirely on the size of the metric correlations with the
> >>>> present RAE rankings. Some disciplines may need some supplementary forms
> >>>> of (non-metric) evaluation if their correlations are too weak. That is
> >>>> an empirical question. Meanwhile, the metrics will also be growing in
> >>>> power and diversity.
> >>>>
> >>>>> That's not an antagonistic question, simply someone asking for the
> >>>>> basis for the evangelistic stance of Stevan and some others.
> >>>>
> >>>> I evangelize for OA self-archiving of research and merely advocate
> >>>> further development, testing and use of metrics in research performance
> >>>> assessment, in all disciplines, until/unless evidence appears that there
> >>>> are exceptions. So far, the objections I know of are all only in the
> >>>> form of a priori preconceptions and habits, not objective data.
> >>>>
> >>>> Stevan Harnad
> >>>>

> >>>>> > Charles Oppenheim has authorised me to post this on his behalf:
> >>>>> >
> >>>>> > "Research I have done indicates that the same correlations between
> >>>>> > RAE scores and citation counts already noted in the sciences and
> >>>>> > social sciences apply just as strongly (sometimes more strongly)
> >>>>> > in the humanities! But you are right, Richard, that metrics are
> >>>>> > PERCEIVED to be inappropriate for the humanities and a lot of
> >>>>> > educating is needed on this topic."
> >>>>

> >>>
> >>> L. W. Hurtado, Professor of New Testament Language, Literature & Theology
> >>> Director of Postgraduate Studies
> >>> School of Divinity, New College
> >>> University of Edinburgh
> >>> Mound Place
> >>> Edinburgh, UK. EH1 2LX
> >>> Office Phone: (0)131 650 8920. FAX: (0)131 650 7952
> >>
> >> L. W. Hurtado, Professor of New Testament Language, Literature & Theology
> >> Director of Postgraduate Studies
> >> School of Divinity, New College
> >> University of Edinburgh
> >> Mound Place
> >> Edinburgh, UK. EH1 2LX
> >> Office Phone: (0)131 650 8920. FAX: (0)131 650 7952
> >
> > L. W. Hurtado, Professor of New Testament Language, Literature & Theology
> > Director of Postgraduate Studies
> > School of Divinity, New College
> > University of Edinburgh
> > Mound Place
> > Edinburgh, UK. EH1 2LX
> > Office Phone: (0)131 650 8920. FAX: (0)131 650 7952
>
> L. W. Hurtado, Professor of New Testament Language, Literature & Theology
> Director of Postgraduate Studies
> School of Divinity, New College
> University of Edinburgh
> Mound Place
> Edinburgh, UK. EH1 2LX
> Office Phone: (0)131 650 8920. FAX: (0)131 650 7952

Received on Tue Sep 19 2006 - 18:57:18 BST


This archive was generated by hypermail 2.3.0 : Fri Dec 10 2010 - 19:48:30 GMT