Poynder interview

Continuing evolution in the world of scientific journal publishing

Richard Poynder wonders whether there is still a place for traditional print publishing in the world of scientific research.

Scientific journal publishing is not a comfortable business to be in today. With constant journal price inflation, growing frustration over the time it takes to negotiate the peer-review and publication processes and, some feel, an unsympathetic hearing from publishers, scientists are in revolt.

Who can blame them? The US-based Association of Research Libraries (ARL) estimates that serials spending was 152 per cent higher in 1998 than a dozen years earlier — despite a seven per cent decline in the number of titles that libraries are able to get for their money.

In effect, claims Stevan Harnad, professor of Cognitive Science at the University of Southampton, academic research is being "held hostage" by an ever smaller group of commercially driven publishers.

As journal prices reach the point where libraries can no longer afford to renew all their subscriptions, and the internet offers ever more attractive alternative publishing solutions, the whole raison d’être of the traditional journal publishing model has, for many, become open to debate.

The pre-print phenomenon

To address the time-to-publish issues, scientists have increasingly been turning to pre-print servers. Here they can post early drafts of their articles online, making them available to colleagues in a more timely fashion, and at no cost.

The most successful pre-print server, the Los Alamos National Laboratory Physics Archive (www.arxiv.org), was launched in August 1991. Founded by physicist Paul Ginsparg, arXiv.org today hosts over 140,000 papers, is mirrored in 15 countries, and adds some 2,500 articles each month.

Since then hundreds of other subject-specific pre-print servers have been set up, notably Harnad's CogPrints archive (www.cogprints.soton.ac.uk), which publishes papers in the cognitive sciences; NCSTRL (www.cs-tr.cs.cornell.edu), covering computer science research reports; NDLTD (www.ndltd.org), a digital library of electronic theses and dissertations; and RePEc (www.repec.org), an economics initiative.

But as dissatisfaction over journal subscriptions continues to grow, a number of new publishing initiatives are shifting the emphasis from posting pre-print articles exclusively to hosting peer-reviewed papers too.

Pre-print to post-print

Last year, for instance, the US National Institutes of Health created PubMed Central (www.pubmedcentral.nih.gov). Covering the life sciences, the mission of PubMed Central is to "archive, organise and distribute peer-reviewed reports from journals, as well as reports that have been screened but not formally peer-reviewed."

With some publishers proving reluctant to participate, however, services like PubMed Central could be hard-pressed to offer breadth of coverage. Moreover, publishers’ ability to withhold peer-reviewed articles from such initiatives is a painful reminder to authors of the ‘Faustian pact’ they have to engage in: giving up copyright in order to be published.

"Using the principles of copyright and ‘intellectual property’ to the point where ideas cannot be sensibly used for the good of all is a real issue," says Heinz-Rüdiger Oberhage, a physicist at the University of Essen, in Germany.

A number of recent initiatives, therefore, aim to side-step publishers entirely. Last year ARL founded the Scholarly Publishing and Academic Resources Coalition, or SPARC (www.arl.org/sparc/home/index.asp?page=0), charged with encouraging the creation of new journals (print and online) offering lower subscription rates than traditional titles.

"Research used to be gifted to societies by authors and returned to the community in the form of low-cost journals," proclaims the SPARC web site. "Now researchers — whose work is paid for by the university or the government — increasingly give away their research to commercial journals, which then charge universities hefty subscription fees in order to buy it back."

Likewise, last May, the Current Science Group founded BioMed Central (www.biomedcentral.com), a new, internet-only publishing house intent not only on posting refereed papers on the web but also on managing the entire peer-review process.

Eprint: the do-it-yourself solution

Harnad, however, espouses a different approach. He has long exhorted scientists to continue publishing in traditional journals, but to retain copyright in their articles, and publish them in self-administered eprint archives as well. "What this is all about is freeing the current refereed literature," he says. "Taking the literature that has been behind a financial firewall for 100 years in the paper era, and removing that firewall."

Until recently this was a more idealistic than practical proposal, since finding self-published papers on the web was no small task. However, the creation of the Open Archives Initiative (OAI) in 1999 has made Harnad's plan more achievable.

Founded by a group of librarians and computer scientists, including Paul Ginsparg; Richard Luce, research library director and Library Without Walls project leader at the Los Alamos National Laboratory; and Herbert Van de Sompel, head of library automation at Ghent University, OAI aims to develop and promote the technical tools and organisational structures needed to make eprint archives interoperable. Researchers, authors and educational institutions will then be able to self-archive their papers, and link them into one global, distributed, multi-disciplinary archive.

"Open Archives is a standard protocol for tagging text in such a way that it becomes interoperable," says Harnad. "So all the Open Archives protocol-compliant archives, wherever they are physically, will be harvested every night into one big virtual archive."
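To make the harvesting idea concrete, here is a minimal Python sketch of the kind of metadata harvester Harnad describes. It follows the OAI-PMH conventions the initiative went on to standardise (an HTTP ListRecords request returning Dublin Core records); the endpoint URL and function name are hypothetical, for illustration only.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical OAI endpoint; every participating archive exposes its own.
BASE_URL = "https://archive.example.org/oai"

# XML namespaces used in OAI-PMH responses.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_record_titles(base_url):
    """Request the archive's records and yield each paper's title."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        if title is not None:
            yield title.text

if __name__ == "__main__":
    for title in list_record_titles(BASE_URL):
        print(title)

A production harvester would also honour the protocol's resumption tokens to page through large archives, and would poll many such endpoints nightly to assemble the single virtual archive Harnad envisages.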

The publishers’ perspective

Derk Haank, CEO of Elsevier Science, agrees that publishers have got themselves into a difficult situation. "In the good old days, when you had an academic journal, everybody had their own subscription. But journals became bigger and bigger, and every time they became bigger the price was increased. And every price increase lost us a few subscribers. It was a spiral with no winners."

However, he insists that far from heralding their demise, the internet will prove the saviour of journal publishers. Why? Because it allows them to increase the pace of publishing dramatically, and to provide desktop access to every scientist and researcher. "Instead of charging higher prices to fewer and fewer people, we can sell electronically to more and more people, at lower prices. We can allow our customers to access many more journals than they had in the paper world, for the same price."

"False, false, false," responds Harnad peremptorily. "Reed Elsevier is saying ‘we will lower costs, and we will pass it on to the consumer, and in the end you will be paying a lot less’. But you will still be paying. We, on the other hand, are talking about zero costs: the user should not have to pay a penny to access the literature."

Future challenges

Clearly, however, success is not guaranteed for the new-style publishing initiatives. For a start, there is a credibility hurdle. "An article in a respected print journal still provides greater value than one in an electronic-only journal," points out Heinz-Rüdiger Oberhage.

Moreover, says Mark Doyle, manager for product development at the American Physical Society, funding is problematic. "Who should support the open archives and make sure they remain viable in the future?" he asks rhetorically. "The US Government has had a hard time coming up with a long-term funding solution for arXiv.org, even though it requires a small amount of money and is of immense utility."

For Haank, the proposition that academics could manage their own peer-review is risible. "We have 1100 journals," he says. "So we deal with some 22,000 strong-willed, high-profile individuals with big egos who do not automatically co-operate with each other. Editorial boards can get very violent. Our main task is making sure nobody gets killed."

But if the various new initiatives prove successful, what future can journal publishers expect? Redundancy, predicts Eberhard Hilf, of the Institute for Science Networking at the Carl von Ossietzky University in Oldenburg. "Today all our articles are first sent to the arXiv.org central pre-print database, and then to a publisher for refereeing, but there is really no need for the latter function any more. The model for the future is: publish first, and the refereeing and evaluation will then be done by the public."

The problem for publishers, argues Detlef Görlitz, a physicist at the University of Hamburg, is that there is now a contradiction at the heart of their business. "The organisation of a good peer-reviewing system is the central expertise that publishers provide. But the marketing and distribution of journals via subscriptions hinders free scientific communication."

Harnad's solution is simple: down-size the publishers, turning them into service bureaux offering peer-review and editing services alone. "Today the world is paying approximately US$2000 per article in subscriptions and licence fees. The portion of that which is necessary for the peer-review is $200. So give back the $2000 to libraries, and let them re-channel 10 per cent of it to pay for the peer-review costs."
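His arithmetic is easy to restate; a trivial sketch using Harnad's own figures (the variable names are ours):

# Harnad's figures: US$2000 per article in subscriptions and licences,
# of which roughly US$200 covers peer review alone.
subscription_cost = 2000
peer_review_cost = 200
share = peer_review_cost / subscription_cost
print(f"Peer review is {share:.0%} of current per-article spending")  # 10%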

On the contrary, says Haank, the growth of pre-print and eprint servers will allow publishers to extend their role further down the chain of creation. "Our vision is one in which there will be a seamless flow of articles from draft, to work in progress, to nearly finished article, to accepted article. Whether we stop at the peer-review stage, or get involved in the pre-prints — or even at the academic workbench stage where the article is written — will depend on our coming up with a service that everybody appreciates."

Grim for secondary publishers

But while primary publishers face an uncertain future, the situation is even more grim for secondary publishers, says Doyle. "The primaries are being forced to introduce new services that have traditionally been handled by the secondaries as more and more material becomes available in open archives. With automated protocols for sharing information, secondaries will have less of a claim to unique data, or to being the only multi-journal portals."

David Goodman, biology librarian at Princeton University, agrees. "Services that do not add access beyond that provided by the title and abstracts are useless now the abstracts are online. I do not see, for instance, how a citation indexing company like ISI can long continue in its current form, especially at its current cost."

Whatever the future holds, Doyle expects big changes. The main concern, he says, "is that the (inevitable) transition to fully open archives will happen too quickly, and that publishers’ revenue streams will dry up before we have been able to stabilise a new business model. Transformations may proceed so quickly that traditional peer-review disappears with nothing to replace it."

Richard Poynder can be contacted at: richard.poynder@journalist.co.uk