
Volunteering to remove invasive plants

13 July 2021

Impacts of volunteers on invasive plants

As a volunteer, it can sometimes be very disheartening to work clearing alien invasive plants, because they pop back up so quickly and the task seems so much bigger than you are. But, in a recently published study by Nolwethu Jubase, we show that not only do volunteers make a significant impact on the problem, but they get a lot more from it than just cutting down aliens. 

Being faced with a never-ending barrage of invasive plants might seem enough to dampen your spirits and make you give up. But in the Western Cape, volunteer groups are strong and derive great satisfaction from ridding the area of invasive species. Nolwethu estimated that their work amounts to nearly 5 300 ha of land cleared of aliens per year, with labour costs equivalent to ZAR 5.1 million! This is a significant input into the fight against invasive plants, but the groups could do with better support. Currently, volunteers put their own money into work needed at the sites, but need training in many aspects of plant removal and herbicide use.

Of the many problems that the groups face, support from authorities seems particularly lacking. There is a great need to coordinate the groups so that their activities fit into the bigger picture, a role that needs to be taken on by a national authority, such as SANBI. A particular problem faced by the groups is poor coordination with government-organised alien clearing: sometimes land cleared by volunteers is cleared again by government workers the next week.

On the upside, volunteers get a lot out of their time spent removing invasive woody aliens from the fynbos of the Western Cape. They learn from and are inspired by others in the group, and gain feelings of satisfaction, happiness and a sense of achievement.

Read more about the study:

Jubase, N., Shackleton, R.T., Measey, J. (2021) Motivations and contributions of volunteer groups in the management of invasive alien plants in South Africa’s Western Cape province. Bothalia - African Biodiversity & Conservation 51(2), a3. pdf


Invaded - Klein Swartberg

01 July 2021

10 years of monitoring the Rough Moss Frog

It was 2011 when I first visited the Klein Swartberg, near Caledon, to carry out acoustic spatial capture-recapture (aSCR) recordings with Andrew Turner from CapeNature, to measure the density of calling male Rough Moss Frogs, Arthroleptella rugosa. Some of you will remember the visit of Debra Stark in 2015. Debra also conducted recordings of Rough Moss Frogs on the mountain (see here).

Since then, the mountain burned and recovered, and we have been back most years to measure density of the frogs. In 2013, the mountain looked the best I've ever seen it. The fire that went through in February 2012 had taken out most of the large pines, and the seepage where the frogs call was free of invaders (see top image below). 

Last week we were back with Oliver Angus (lower image), who is looking at the aSCR data from all years for his Honours project in the MeaseyLab. Not only was that same seep invaded by pines, Pinus pinaster, but the frogs were no longer calling from what had been a stronghold for them.

Happily, they are still calling at other sites on the mountain. 

In this image you see a male Rough Moss Frog that was ~20 mm long (snout-vent length, SVL). For their small size, they make quite a loud call, which we can use to measure their population size.

Stay tuned for more news on this project...


Recycling text

29 June 2021

Recycling text - new guidelines clarify a thorny issue

I have written elsewhere on this blog about plagiarism. Plagiarism is when you copy text from a source document that somebody else has written and paste it into your own document. You will be aware that plagiarism is not acceptable, either for documents that you hand in at the University or for anything that you want to publish (see blog entry on plagiarism). 

  • But what if the source document that you want to copy from is something that you've written yourself?
  • Does this still count as plagiarism?
  • Or is it text recycling?

In recently published guidelines, Hall et al. (2021) help demystify text recycling in its different forms and explain what is permissible, when, and why.

Text recycling comes in four forms:

  • Developmental recycling: reusing text that you have written but not published, for example in your proposal or thesis.
  • Generative recycling: reusing already published text that becomes more obscure when you attempt to reword it, such as technical settings in your methods.
  • Adaptive recycling: using text published in one format on the same subject but for a different audience; for example, using text published in a paper for a popular article.
  • Duplicate recycling: repeating published text wholesale with the intention of publishing again for the same audience.

Developmental recycling is when you reuse text that you have written, for example between your proposal and something you intend for publication, or in an ethics application that you also want to use in your thesis. All of this sort of developmental recycling is permitted and actually encouraged. I would further encourage you to use the opportunity of recycling this text to develop and refine it further, condensing and improving where you can. 

Generative recycling is where you take pieces of already published text, for example from the methods, when rewording the text to avoid plagiarism does not make sense or actually makes it more obscure. In my experience this doesn't amount to more than a few sentences describing technical settings on equipment; however, this will depend strongly on your subject area and may amount to larger chunks of text. In my previous advice on this topic I suggested that it is usually possible to reword most of the methods sections of papers. I reiterate here that this is the most preferable outcome, and that you should avoid text recycling altogether where you can: only recycle generatively when you cannot avoid it, that is, in situations where the text becomes more obscure through your attempts to reword it. There are some extra guidelines for generative recycling: you should have been an author, preferably the lead author, on the original text; you should make it transparent to readers that the text has been recycled (via a citation); you may also want to declare it in your cover letter if there are no journal guidelines; and make sure that any co-authors are aware.

Adaptive recycling is where you use published text, for example from a paper, as the basis for content in a popular article online, in a magazine, or in an op-ed. I think that this kind of text recycling is quite unnecessary, because you almost certainly need to reword your text for a different audience. There may be times, such as figure legends, where you need to reuse text that was already published. If you do find yourself in such a position, then check with the copyright owner of the material that you are able to reuse the text you want without legal issues.

Duplicate recycling is where large tracts of text are essentially the same, for the same message and the same audience. This is never likely to be sanctioned, as it suggests that you are attempting to publish the same work twice. It is neither legal nor ethical.

Read More:

Hall, S., Moskovitz, C. & Pemberton, M. (2021) Understanding Text Recycling: A Guide for Researchers. Text Recycling Research Project.


Agamas bite hard

22 June 2021

What is the relationship between bite force, morphology, and diet in southern African agamids?

If you've ever picked up an agama, then you know that they bite hard. Back in January 2017, Nick Tan came to the MeaseyLab on a European Commission Erasmus Mundus Masters Course (International Master in Applied Ecology) to conduct fieldwork for his MSc, submitted in July of the same year.

Nick has already published two papers from his thesis (see here), but this latest paper, out today in BMC Ecology & Evolution, looks at the relationship between bite force, morphology, and diet across a number of agamids in southern Africa. He found that although head morphology and bite force relate to each other, they don't have strong or expected relationships with the ecology of the species. For example, rock agamas, which have particularly flat heads for fitting under rocks, actually bite very hard. In general, species with greater in-levers for jaw closing have a greater bite force, which is associated with a higher proportion of hard prey in the diet.

If you've never conducted any performance work, then it's worth watching this video and seeing how much work goes into getting every datapoint. Especially when the animals bite hard...

The bigger they are - the harder they bite!

This is a video by Nick showing what happens when Giovanni was bitten. 

Nick's work built on fieldwork done by Anthony Herrel, Bieke Vanhooydonck and the reptile team back in 2008.

Great to see this work being published!

Read Nick's work here:

Tan, W.C., Measey, J., Vanhooydonck, B. & Herrel, A. (2021) The relationship between bite force, morphology, and diet in southern African agamids. BMC Ecology and Evolution 21, 126.

Tan, W.C., Vanhooydonck, B., Measey, J. & Herrel, A. (2020) Morphology, locomotor performance, and habitat use in southern African agamids. Biological Journal of the Linnean Society. PDF

Tan, W.C., Herrel, A. & Measey, J. (2020) Dietary observations of four southern African lizards (Agamidae). Herpetological Conservation and Biology  15(1), 69-78 pdf


Prediction markets

14 May 2021

Solving the attractiveness of incredible results with prediction markets

The idea that science is in crisis has been building for at least two decades. Evidence for this crisis revolves around studies that demonstrate publication bias, and especially the lack of repeatability of high-profile studies. The findings appear puzzling because all studies are subjected to peer review before being published, and therefore go through some kind of quality check. This would suggest that it should not be easy to pick which studies are replicable and which are not. Yet this does not appear to be the case.

A study that gave subjects the possibility of predicting which studies were replicable and which were not found that replicability could be predicted in advance of any replication studies being made. Moreover, once replication studies were made, their results followed the prediction market (Dreber et al. 2015). This suggests that individuals in peer review are not particularly good at determining whether or not a study is replicable, but that a prediction market is.

Incredible results

Humans have a bias toward wanting to believe significant results (Trivers 2011), even when the potential for these to be the result of a Type I error is quite high. Perhaps the positive feedback that incredible results receive from the media (traditional and social) gives the impetus that drives this selection. But a new study suggests that, all else being equal (including gender bias, author seniority, etc.), these incredible results also generate more citations (Serra-Garcia & Gneezy 2021). 

As we are aware, citations are a form of currency in present-day science. Increased citations to journals (within 2 years of publication) give them higher Impact Factors, which in turn allow them to leverage better manuscripts and higher APCs. Increased citations to authors allow them to compete in a competitive job market, opening the door to tenure, grants and awards. We should also be aware of the increasing number of retractions associated with fraud, which has placed the perpetrators in advantageous jobs.

The research by Serra-Garcia & Gneezy (2021) is of particular note as they only selected publications from two journals, Nature and Science, meaning that the journal playing field was very similar. They used the dataset from Dreber et al. (2015), allowing them to see which of the studies were actually repeatable, with those that weren't being literally incredible. They found that incredible studies received more citations, even after the replication studies (see Dreber et al. 2015) showed that they lacked credibility. Moreover, after the failure to replicate, only 12% of these additional citations reported their incredible nature; hence the increased citations are not generated by those that report on the failure to replicate. 

The recognition that incredible results are attractive to high-ranking journals, and to those who cite research in their own fields, helps to lift the veil on the way in which today's science has a positive feedback for chancers and crooks. We are susceptible to scientific fraud because we appear to be drawn to the incredible, presumably because the credible simply doesn't seem exciting enough. Given that as individuals we perform poorly, can we use prediction markets to give us the edge over our inbuilt biases?

Finding a use for prediction markets 

Prediction markets are simply crowdsourcing to determine the outcome of a particular event, in this case whether or not a study is replicable. However, there is a gambling twist that borrows from the stock market. For example, you might think that there is an 85% chance that the study is replicable, and this is how you enter the market. Once all participants have placed their predictions, a consensus prediction is reached, and the trading begins. If the consensus prediction is at 0.62 and you really believe that the chance is 0.85, you should buy stock valued at 0.62, because if you are right you will make money. However, if the consensus price is 0.95, then you would be better off selling at that price, since you believe there is a 0.15 chance the replication will fail. Like the real market, there is no reason for this market to be static. For example, one of the authors of the original study could give a talk, during which participants start buying or selling their stock as extra confidence or skepticism is gained. Likewise, during the questions, an astute member of the audience may rattle the author, resulting in a fall of the 'price', or consensus outcome. 
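The buy-or-sell logic described above reduces to a simple expected-value rule: a binary contract pays 1 if the study replicates, so a share is underpriced whenever your personal probability exceeds the market price. Here is a minimal sketch of that rule (the function names and the small tolerance threshold are my own, purely for illustration, not part of any real market platform):

```python
def expected_profit(belief: float, price: float) -> float:
    """Expected profit per share of a binary contract bought at `price`,
    which pays 1 if the event occurs (with subjective probability `belief`)."""
    return belief - price

def trade_decision(belief: float, price: float, tol: float = 0.01) -> str:
    """Buy when the market underprices your belief, sell when it overprices it,
    and hold when the two roughly agree."""
    ev = expected_profit(belief, price)
    if ev > tol:
        return "buy"
    if ev < -tol:
        return "sell"
    return "hold"

# The two scenarios from the text, for a trader who believes
# the replication is 85% likely:
decision_low = trade_decision(0.85, 0.62)   # consensus at 0.62 -> buy
decision_high = trade_decision(0.85, 0.95)  # consensus at 0.95 -> sell
```

As new information arrives (a convincing talk, a rattled author), each trader's `belief` shifts and the consensus price moves with the resulting trades.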

Potential uses of prediction markets

If individual reviewers aren't good at spotting incredible results, perhaps this task should be passed to a crowdsourcing platform to determine whether or not high-profile studies should be published in high-profile journals. As all editorial board members have expertise in the journal's content, perhaps they could make up the panel of experts that judge the replicability of each issue's content. Over time, each board member's predictive record would accumulate, and their 'usefulness' might be quantitatively valued. 

Prediction markets are likely to work well when complex decisions have to be taken, by replacing a small committee of people with limited information with a much larger group that has better collective experience. 

Choosing student projects to fund

Each year, a number of people apply for a limited number of bursaries to conduct projects in invasion biology at the CIB; the number of bursaries available is between a tenth and a quarter of the number of applicants. The committee (of 5 people) examines each application on a number of criteria, including qualities of the student, the project, its focus, and the past performance of the advisor. The panel's knowledge is imperfect, as they don't know all of the information behind each application. Occasionally, phone calls are made during decision meetings to fill in blanks, but decisions are made by scoring each project, with the top-scoring projects getting funded. 

Enter the prediction market. Now a larger number of people can get involved: this could be the entire Core Team of the CIB, or the core team and all existing students. Some will have much better information than the original panel and will have some impetus to trade with greater confidence; those with less information are less likely to participate, or will buy less stock. Advisors who have several student applications are similarly forced either to split their stakes evenly, or to back a preferred application over one they consider less likely to succeed. Once the projects are funded (based on the outcome of the prediction market), their success or otherwise will be gauged by whether the student gains the degree within the time allotted. Students that fail to finish in time (or to meet any set of milestones) will be regarded as having failed, and the payout will go to those holding stock that predicted this. 
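The settlement step described above can be pictured as follows: each contract pays 1 if the student finishes on time and 0 otherwise, so a trader's profit is their (signed) share count times the difference between that payoff and the price they paid. This is a toy sketch; the trader names, prices, and share counts are invented for illustration:

```python
# Each position: (signed shares of the 'student finishes on time' contract,
#                 price paid per share). Negative shares = sold short,
#                 i.e. a bet that the student will NOT finish on time.
positions = {
    "advisor_A": (10, 0.70),  # confident the project succeeds
    "peer_B": (-5, 0.70),     # skeptical: sold 5 shares at the same price
}

def settle(positions: dict, success: bool) -> dict:
    """Profit per trader once the outcome is known: a share pays 1 on
    success, 0 on failure; short positions profit when the bet fails."""
    payoff = 1.0 if success else 0.0
    return {name: round(shares * (payoff - price), 2)
            for name, (shares, price) in positions.items()}

# If the student misses the deadline, the skeptic is paid out:
results = settle(positions, success=False)
```

If the degree is not gained in time, `advisor_A` loses the full stake while `peer_B` collects; the payoffs reverse when the student succeeds, which is exactly the incentive to trade honestly on one's private information.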

Other potential uses

A similar position is faced by anyone looking at applications from stakeholders with imperfect knowledge, but where a community of experts exists with better collective knowledge. Outcomes of proposed projects need to have clear milestones, on a timescale whereby the participants are still likely to be around to see the return on their knowledge investment. There are a lot of potential uses within the academic environment, including grant applications, hiring committees, etc.

Recognition of weakness

Once we have recognised where we are likely to perform badly at decision making, we should be willing to look to other solutions to improve our performance. There is clearly some distaste for the idea of a money market for making decisions, so could the stake instead be the reputation of those who take part? Would you be willing to wager your scientific reputation on the outcome of a hire, a student bursary or a grant application?


Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., Nosek, B.A. & Johannesson, M. (2015) Using prediction markets to estimate the reproducibility of scientific research. Proceedings of the National Academy of Sciences 112(50), 15343–15347.

Measey, J. (2021) How to Write a PhD in Biological Sciences: A Guide for the Uninitiated.

Serra-Garcia, M. & Gneezy, U. (2021) Nonreplicable publications are cited more than replicable ones. Science Advances 7(21), eabd1705.

Trivers, R. (2011) The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. 1st edition. New York, NY: Basic Books.

Want to read more?

This blog is the basis of two books currently in press, but you can read them free online now: 

Creative Commons Licence
The MeaseyLab Blog is licensed under a Creative Commons Attribution 3.0 Unported License.