
Getting rid of publishers

31 May 2018

A new model for publishing without publishers - the blockchain journal

The problem

There are growing problems with publishers. Publishing has become very expensive for scientists, and for the public who fund science and who unwittingly also fund many publishers. I've discussed the history of scientific publishing before (here), and explained how commercial publishers have had a role right from the very beginning. This relationship continues to the present day, but while the origins saw learned societies commissioning publishers and then distributing their content, today the publishers have climbed into the driving seat, conceiving and owning many of the current journal titles. This is not to say that there are no scholarly societies that still dictate terms to publishers (the British Ecological Society, BES, do this very well), but they are few and far between, and even when society journals are involved, the goliath publishing companies easily outweigh any control that the societies might once have possessed (and I'm speaking from personal experience – see here and here).

The result is that publishers have become gigantic corporations that now dictate to the scientists who produce, edit and review all of the content. They charge incredibly high fees to anyone who might want to read the publicly funded content, and the budgets of university libraries run into the hundreds of thousands of US dollars just to access it. Many universities consequently can't afford all of the content that their researchers need. By publicly funded content, I mean that you, dear reader, pay through your taxes for the original science (yes, you all pay taxes, even if it's only VAT), and then you pay a second time for the researchers (who themselves produced the content) to access it. Who benefits from the fact that you pay twice? The publishers. Why aren't you upset about this? Probably because you are unaware. But if you are upset, then join in the discussion to decide how to emancipate ourselves from the publishers who are merrily munching their way through this publicly funded cash cow.

What do publishers do?

The publishers would claim that they do an awful lot. All they really do is pay for the layout and printing of journals. These days 'printing' really means hosting electronic pdfs, as very little paper is actually printed, and you can be sure that paper subscribers now pay any additional costs themselves. The layout from the manuscript (most often an MS Word doc) into a pdf does take some skill and talent, although nothing like what you might expect given how much the publishers charge. You can be sure that they don't pay much for this service, as almost all layout is outsourced to India, Bangladesh, Sri Lanka, etc. Quality can be good, but quality control is more often completely lacking. Most authors have stories of how manuscripts have come back mangled, although my own impression is that the worst days appear to be over.

What is publishing then?

Once publishers have ‘printed’ the manuscript, they ‘publish’ it by placing it behind a paywall on their website. I would be the first to admit that there are massive costs in doing this properly, and big journal companies have invested a lot to do this very well. The electronic hosting of journals is (in my opinion) truly excellent, except for that paywall. However, once they’ve set this system up, adding another 10 or 20 journals comes at practically no cost compared to the revenue that each one can be expected to gain.

To get behind that paywall, university libraries need to subscribe. Publishers bundle journals together and sell subscriptions at very high prices. If you are inside the university IP address, this access should be seamless. If you are outside, you might need to log in through your university’s library.

So far, the publisher hasn't produced any content. The scientific content has been produced by the scientists at the cost of the public purse. The editing and peer review (see here) have also been done by scientists, again at the cost of the public purse but completely free to the publisher. OK, so there are some small costs associated with the manuscript-handling software subscriptions that the publishers normally pay. The publisher has also paid for the typesetting (although they've done this as cheaply as possible – see above), and they've paid for the servers that distribute the pdfs and maintain that all-important paywall. What else? Nothing else. Now they simply charge everyone to look at the content, and because it's by subscription, they actually charge everyone whether or not they look at the content.

'Open access is one of the best scams that publishers have come up with' 

But don't take my word for it. Read this excellent 2017 article in the Guardian by Stephen Buranyi: Is the staggeringly profitable business of scientific publishing bad for science? In it, Buranyi makes the point that the profit margins of academic publishers are in excess of 40% – something that even drug dealers, pimps and the mafia struggle to achieve. This situation seriously needs to change.

Open Access

Yes, there’s a new scam in town. Open access appeared to be a great initiative that acknowledged that everything should be free to view. Neither scientists nor the public that fund them should be barred from accessing the knowledge that they produce. What could be wrong with this?

So what is open access?

Open access is probably one of the best scams that the publishers have come up with to date. Now the scientists pay for making their own content open for anyone to read. They pay a one-off fee to the publishers to typeset the manuscript and host it on their site without a paywall. And how much do the publishers want for this service? Prices start from USD 1000 and go up to around USD 10 000.

The money comes from funds that would otherwise be reserved for conducting science. So now the money for research goes directly into the pockets of the publishers up front. Money that ultimately comes from you as taxpayers goes straight to publishers. Still happy?

So does that mean that these journals are now free?

No. Some articles in the journals are free, but universities are expected to subscribe to those same journals at ever increasing prices because much of the rest of the content is still behind the paywall. This is because most authors cannot afford to pay the exorbitant fees charged by the journals (some countries now make this payment mandatory, but they and their scientists are still in a minority). There are some journals that are entirely open access. This is all well and good (see here for the PeerJ model). But paying for open access has not reduced the cost of access to scientific journals for libraries; this cost constantly goes up. Open access was a brilliant scam by the publishers, because for much of this content we pay not twice but thrice!

The blockchain journal - a solution?

So we need a new way, without publishers – what do I suggest?

First, we have to forget about the layout. There is a real cost to typesetting scientific papers, and if we are to get rid of the publishers, then I think we need to start forgetting about the fancy layout to which we've become accustomed. This will mean putting some vanity on the back burner, or allowing individuals to make their own manuscripts attractive. It's actually not that hard to do this with many of the LaTeX tools freely available online (find out more here). It is even the sort of thing that we could pay our own graduate students to do.
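To give a sense of how little machinery this needs, here is a minimal sketch (entirely my own illustration, not part of any existing workflow) that writes a bare-bones LaTeX article and compiles it to a pdf. It assumes only a standard TeX installation with pdflatex on the path; the file name and template contents are placeholders.

```python
# Minimal sketch: write a bare-bones LaTeX article and compile it to pdf.
# Assumes a standard TeX installation with pdflatex available on the path.
import subprocess
from pathlib import Path

TEMPLATE = r"""\documentclass[11pt]{article}
\usepackage[margin=2.5cm]{geometry}  % sensible page margins
\title{An example manuscript}
\author{First Author \and Second Author}
\begin{document}
\maketitle
\begin{abstract}
Abstract text goes here.
\end{abstract}
\section{Introduction}
Body text goes here.
\end{document}
"""

def build_pdf(stem: str = "manuscript") -> None:
    """Write the template to <stem>.tex and compile it to <stem>.pdf."""
    Path(f"{stem}.tex").write_text(TEMPLATE)
    subprocess.run(["pdflatex", "-interaction=nonstopmode", f"{stem}.tex"], check=True)

if __name__ == "__main__":
    build_pdf()
```

That is the entire pipeline: a plain-text template and one compilation step, which is exactly the kind of job a graduate student (or the authors themselves) could handle.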

Next, we need to work out the distribution issues. This is actually really simple. Our libraries already have everything that we need to distribute our manuscripts. They maintain and handle servers. They handle thousands of requests from users every day, distributing their own content (e.g. theses) to users all over the world. So our own university libraries could become our distributors.

I also suggest that to make the distribution truly international, in the same way that we've become used to journals being global, we could use blockchain technology so that all participating university libraries host all of the papers for a particular journal (I suggested this back in October 2017, and the idea has been growing on me). The buy-in could come at the level of the journal editors' home institutions, allowing for local, national and international titles depending on how the editors are distributed. Reputation then sits with the editors of the content and with the institutions that employ them and host their content.
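To make the idea concrete, here is a minimal sketch of what such a shared record might look like (entirely my own illustration, with hypothetical DOIs and placeholder hashes, not a specification): each accepted paper becomes a block whose hash depends on the previous block, so any library holding a full copy of the chain can verify that nothing has been removed or altered after acceptance.

```python
# Minimal sketch of a hash-chained register of accepted papers that every
# participating library could replicate and verify independently.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class PaperBlock:
    doi: str              # identifier of the accepted manuscript (hypothetical below)
    pdf_sha256: str       # hash of the typeset pdf being distributed
    editor: str           # accepting editor, and by extension their institution
    previous_hash: str    # hash of the preceding block in the journal's chain
    timestamp: float = field(default_factory=time.time)

    def block_hash(self) -> str:
        """Hash of this block's contents, including the link to its predecessor."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def verify_chain(chain: list) -> bool:
    """Check that every block correctly references the hash of its predecessor."""
    return all(
        chain[i].previous_hash == chain[i - 1].block_hash()
        for i in range(1, len(chain))
    )

# Example: a library node appending a newly accepted paper to its local copy.
genesis = PaperBlock("10.0000/example.0001", "placeholder-hash", "Editor A",
                     previous_hash="0" * 64)
chain = [genesis]
chain.append(PaperBlock("10.0000/example.0002", "placeholder-hash", "Editor B",
                        previous_hash=genesis.block_hash()))
print(verify_chain(chain))  # True while no record has been tampered with
```

The point is not the technology itself but the property it gives us: no single library, editor or publisher controls the record, yet every participant can check its integrity.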

Lastly, we have to take back control. We need to start this new model moving. How to do this? I think it will take a society that is already in a commanding position to leave the publishers and use the money that it has earned to set up the basis of the blockchain journal. It will take some initial investment, but organisations like the BES have already made so much money from their deals with publishers that they can well afford to leave them behind.

Once the revolution has started, watch the scientists leave the publishers behind. We really don't need them. They have been taking your money for far too long. They have had their good times. Once we don't depend on them, we may even be able to go back to using their services, but at a more reasonable rate that doesn't cost us the price of our own research.

  Lab  Writing

Time spent at UNICAMP with the Toledo Lab

23 May 2018

Last stop in Brazil is UNICAMP

This week, I'm visiting the lab of Prof. Felipe Toledo: LaHNAB at the University of Campinas, UNICAMP.

I was invited to give a talk in their Bioforum series

Thanks to Raquel Salla for this image.

I even got certificated!

It's been great fun to meet all of his students and postdocs who've really looked after me well in true Brazilian fashion.

  Frogs  Lab  meetings

The future of peer review is a comments page?

22 May 2018

We need to change our current publishing model

I have written about publishing a few times (here and here), but recent conversations with colleagues have prompted me to write again. In essence, we need to change the way we publish science to pull back from the overt greed of publishers, which is consuming funds for research. The public funds scientific endeavour through their taxes, while scientists produce the content and carry out the editing and peer review. Then, somehow, publishers have convinced us that they should charge for this content and even own it. This morally reprehensible situation must change, but how?

We need to explore all options, and in this blog I consider what science would be like if we went entirely preprint.

  

Do we really need peer review?

As far back as 2002, William Arms suggested that openly soliciting comments on the web might be an alternative to peer review of scholarly articles. Sixteen years later, this has come to pass in the form of preprints.

There is a growing trend for publishing preprints. Preprints are simply manuscripts that are submitted to an online server and made available for all to view. Physicists were the first to embrace this phenomenon, but biologists have been hot on their heels, and there are now a number of prominent preprint archives, including BioRxiv and PeerJ Preprints.

Each of these sites hosts the open access manuscripts and allows other users to post comments (partial or complete peer reviews) on them.

There are problems in the world of publishing, mostly related to the greed of publishers who demand large sums of money for access to content that they do not pay to produce. We must look for alternatives to the current model, so could we replace publishing with an open platform like BioRxiv?

Could these comments pages really replace peer review?

Peer review is held up as a gold standard in scientific publishing, and there's certainly a lot to that. It ensures that published material has been read and its contents assessed independently. But peer review is fallible, because scientists are human:

  • Not all reviewers can assess all parts of a paper, especially papers that cover several disciplines.
  • Not all editors will choose reviewers who are independent and objective. Depending on the framework set up for the journal, friendly reviewers can be chosen or critical reviews discarded. Perhaps the inverse is more common, although you are less likely to see those manuscripts published.
  • Poor peer review is a growing problem.
  • Lastly, and not least, it is increasingly difficult to find peers who are prepared to review manuscripts (see the Perry et al. editorial "The Peer in Peer Review", published in 2012, which was a plea to the herpetological community to accept reviewing as a necessary duty).

In 2003, Stefano Mizzaro proposed changing peer review to the format that we now see on preprint servers: let every reader become a reviewer.

In the preprint model, the first three problems above might all be overcome, as no one chooses the reviewers. Instead they choose themselves and are motivated to do the work. Their competence to cover all aspects of the manuscript is not assured, but one assumes that independently motivated reviewers will only comment on the parts that they are able to assess.

  

All of this is very good, but will people actually read and comment?

A quick look at the sites will tell you a lot about the level of reviewing that is currently going on in bioscience preprints. Browsing the top 10 articles in the BioRxiv zoology section confirmed my suspicions: plenty of tweets about the articles, but not one of them had any comments. Indeed, a further trawl through PeerJ Preprints also found no comments.

Further, I'd suggest that a greater move to this culture might produce comments for well-known labs, and a certain amount of trolling for labs with ongoing disputes or rivalries, making this kind of comment review a sort of trial by popularity. But I don't see a situation where potential reviewers will take time out once a week (for example) and hunt for manuscripts that have received no comments. It seems far more likely that authors will have reciprocal agreements with other groups to review each other's manuscripts. This nepotistic tendency then puts us back into the territory of peer review that we've been working hard to move away from for some time (double-blind reviewing, editors' codes of ethics, etc.).

   

Are preprints published?

As they each have a DOI (Digital Object Identifier), they are in their own way already published.
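As a concrete illustration (my own, not something the preprint servers themselves provide): the standard doi.org content-negotiation service will return citation metadata for a preprint DOI just as it does for a journal article, which is part of what makes preprints citable objects in their own right. The DOI in the sketch below is a placeholder.

```python
# Minimal sketch: resolve any DOI (preprint or journal article) to a BibTeX
# record via doi.org content negotiation. The DOI shown is a placeholder.
import requests

def fetch_bibtex(doi: str) -> str:
    """Return a BibTeX record for the given DOI via doi.org content negotiation."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(fetch_bibtex("10.1101/000000"))  # hypothetical preprint DOI
```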

Another point is that these articles are picking up citations, and there is growing concern that they continue to be cited even when they subsequently become available through a published journal. This is one of my personal concerns with using a preprint service. I'm happy to put a paper out there for public comment, but the idea that it will remain there and that readers won't necessarily be redirected to the peer-reviewed version does concern me.

Another question is what happens to manuscripts that are placed on preprint servers and then sent out for review but never published because they are fundamentally flawed. It's not as if the reviews are not made, but there is no automatic link from the preprint to the reviews held by the journal that conducted them.

Whether or not there is paper inflation (see blog here), there is certainly an ever-increasing number of papers. The rejection rate is not insignificant, and while many papers are not rejected because they are flawed (they may well go on to be published in another journal), there are certainly a lot of manuscripts out there with fundamental flaws. These are often sent for peer review, but the reviews pointing out the errors won't necessarily make it back to the comments page on the preprint server. I think that this is a serious problem. The reviewers have spent time and effort, and the very reason they do this is so that manuscripts with fundamental flaws don't find their way into the literature. However, preprint servers have, perhaps unwittingly, created a loophole that gives manuscripts that are not scientifically robust a backdoor to citations.

  

But if they are fundamentally flawed, shouldn’t everyone be able to spot it?

No. Reviewers are chosen with great care precisely because the manuscript falls within their particular domain. They have insights that not everyone will have, and these are an important part of the purpose of peer review.

I edit for the journal PeerJ. Although there can be various reasons for rejection from PeerJ, it normally means that the paper is not scientifically sound. As PeerJ has no selection for impact, rejection does not normally mean that the manuscript can simply be submitted to another journal. I have noticed that manuscripts that I have rejected from PeerJ are still available as PeerJ Preprints without any comment on their failure to pass peer review. In my opinion, this is not good, as it essentially ignores the input given by both reviewers and editors. The article appears as if it has had no comments or attention, when this is not the case. In a system where we move to relying more on preprints, why would we want to ignore the chosen peer reviewers for whom this article was within their specialist area?

Moreover, I note that the preprint in question is also receiving citations (according to Google Scholar), again raising concerns that rejection by peer review is not a hurdle to entering the scientific literature.

    

In my opinion, comments pages won’t replace peer review.

If we end up abandoning our current way of publishing in favour of a comments page, I think that we’ll all be worse off.

Having said this, I acknowledge that the current system is broken and that we need to find a new solution.

My feeling is that the system we have is not faulty up to the point where the publishers become involved. We simply need to overcome the vanity of having our manuscripts laid out in a pretty way. Once we're happy to accept unformatted manuscripts as the way science is presented, I think we'll be able to move on without the involvement of the publishers and the exorbitant costs that they extort from us.

  Lab  Writing

A few days with Laboratório de Ecofisiologia e Fisiologia Evolutiva

18 May 2018

Time at Laboratório de Ecofisiologia e Fisiologia Evolutiva at the University of São Paulo

Great to spend some time in the lab of Carlos Navas: Laboratório de Ecofisiologia e Fisiologia Evolutiva at the University of São Paulo. I gave a talk to the lab, "Does an invasion come from a single population?", and in turn was treated to some insights into the current research in his lab.

 

It was a great few days at USP, which also saw me connect with Fernando Ribiero-Gomes, Vânia Regina Pivello and Taran Grant.

  Frogs  Lab

Gio's JEB paper gets lots of attention

11 May 2018

Gio's paper gets featured in a Journal of Experimental Biology 'inside JEB' feature

A great write-up from Kathryn Knight in 'inside JEB' this month on Gio's new JEB paper: Rapid adaptive response to a Mediterranean environment reduces phenotypic mismatch in a recent amphibian invader.

Of the many remarkable things about this invasion of toads in South Africa's extreme southwest, this paper emphasises the very short time over which adaptive plasticity occurs: within two decades. 

Creative Commons Licence
The MeaseyLab Blog is licensed under a Creative Commons Attribution 3.0 Unported License.