
Fire toadlets

30 January 2021

After the fire: how Rose's Mountain toadlets survive fire in the fynbos

One persistent riddle in this part of the world is how animals survive the fires that regularly engulf and rejuvenate the native fynbos vegetation. It's been known for a long time that the plants have evolved sophisticated techniques to survive fire or to repopulate burnt areas. But most surveys after fires suggest that many animals perish. So how do populations survive?

The 2015 fire on the Cape Peninsula allowed us the opportunity to visit our long-term monitoring site for Rose's mountain toadlets, Capensibufo rosei (see previous blog posts on this species here and here). Within 10 minutes of arriving at the winter breeding site, we spotted a toadlet hopping through the ashes. As I filmed it (see below), it suddenly disappeared into a hole. Stripping away all the vegetation allowed us to see that the toadlets use underground burrows of Cape gerbils to shelter from the harsh Mediterranean summer sun. Presumably, being in these holes during a quick fire allows them to survive being burnt to a crisp. 

In a note published this week (Measey et al., 2021), we record what we found that day, together with a survey conducted by Francois Becker, who was doing his MSc on these toadlets at the time (see also Becker et al., 2018). The results present a startling view of just how localised these toadlets are around their breeding site: all within ~200 m. This is very surprising and provides some insight into their enigmatic decline (see Cressey et al., 2015). 

In addition to invaluable life-history information on this IUCN Critically Endangered species, we explain the importance of making natural history observations following extreme events. 

Read the full article here:

Measey, J., Becker, F. & Tolley, K.A. (2021) After the fire: assessing the microhabitat of Capensibufo rosei (Hewitt, 1926). Herpetology Notes 14: 169–175. pdf


Transparency from your proposal onwards

22 January 2021

Starting out transparent

The scientific method requires us to pose a falsifiable hypothesis, design a controlled, rigorous and repeatable methodology, and report and interpret our results honestly. In theory, it’s all pretty straightforward. I’d like to think that in a world where science is adequately funded, we’d have already reached a situation where transparency was endemic in the scientific system. However, science has not been so lucky, and where there are shortcuts, some people will take them in order to get ahead of others. One of the objectives of this blog is to demystify what happens in biological sciences. In other words, the demystification in this blog is only really needed because so much of what happens does so behind closed doors, and out of the gaze of less privileged scientists. As you read through this blog, you will see countless examples of where things are not as they should be. I have always tried to insert solutions when I highlight problems, but these are admittedly piecemeal, rarely proven, and even in a best-case scenario might be best thought of as temporary solutions. The logical antidote to all these shady scientific dealings is to turn to transparency.

We would do better moving to open and transparent science. The open science movement is gaining traction (Sullivan et al 2019), and I hope that it will become the norm in the very near future. Those of you who are reading this now are likely to be students during an interim period prior to open science becoming mainstream. Therefore, you will be instrumental in adopting this process as soon as possible to ensure that science becomes a fairer place for scientists from all backgrounds, and with all results. 

I try to consistently encourage you toward transparent and open practices in science. Here my aim is to introduce this topic and provide an overview of the main areas in which you can currently make a difference by opening up your research. As the Centre for Open Science explains (COS: see here), this will take a shift in the culture of the scientific community, which is why you need to understand and adopt a transparent approach from the outset of your research career. There are much better and more comprehensive guides and online resources available, so I encourage you to look through the literature cited here. Although I advocate hard for the COS, please be aware that this is not the only set of guidelines.


You will spend a considerable amount of time at the beginning of your PhD studies planning what you want to do. In many institutions, this will take the form of a formal proposal. It will be agreed with your advisor, and may well pass the inspection of a committee. You will put a lot of effort into reading the literature in order to pose the best hypotheses possible. You will design your experiments with rigor and control, and potentially redesign them after your advisor and committee comment. All that effort put into your proposal is wasted if it’s never looked at again. Yet, this document is of historical importance, because it says what you think before ever doing the experiment and collecting or analysing the data. Thus, it can in future prove that you were not succumbing to confirmation bias, for example by HARKing: rewriting your hypothesis after getting your results in order to claim a significant result was predicted (Kerr 1998; Forstmeier et al 2017). 

Confirmation bias is a problem in science because of the way that science is published (see part 4). In brief, articles for journals with higher impact are regularly selected based on significant results. If you don’t have that significance, they are unlikely to want to take your manuscript. For this reason, many scientists have sought to have significant results to report, and this pressure feeds confirmation bias. Confirmation bias is bad because it violates the assumption that we are testing the hypothesis that we started with, or it can lead scientists to manipulate their data until a significant result is achieved. Another form of this is p-hacking: repeating tests with different approaches in order to obtain a significant result. Rubin (2020) gives a good list and set of explanations. As there are a lot of known ways in which researchers have been thought to deliberately or accidentally report false-positive findings, one solution can be archiving your intentions, referred to as preregistration (Forstmeier et al 2017). If you register your proposal (or any research plan), you can present this historical document to a journal (probably 4 to 5 years later) to show that you have done what you planned to do. 
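Why repeated testing is so dangerous can be shown with a little arithmetic: each honest test of pure noise has a 5% chance of a false positive, but twenty tries give roughly a 1 − 0.95²⁰ ≈ 64% chance of at least one. A minimal simulation (pure Python, illustrative numbers only, not any specific study's analysis):

```python
import random

def false_positive_rate(n_tests, alpha=0.05, trials=10_000):
    """Estimate the chance of at least one 'significant' result when
    running n_tests independent tests on pure noise (no real effect).
    Under the null hypothesis, a p-value is uniform on (0, 1)."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

print(false_positive_rate(1))   # close to 0.05: one honest test
print(false_positive_rate(20))  # close to 0.64: twenty tries at significance
```

The point is not the exact numbers but the shape of the problem: every extra unreported test quietly inflates the odds of a spurious "discovery", which is exactly what preregistration makes visible.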

What have you got to lose? With a good proposal that you feel confident in, you have nothing to lose. By registering your proposal, you are most likely to gain, especially in the future if more journals require preregistered hypotheses and methods in order to submit to them. What this does mean is that you will need to do a good job of your proposal. This can be daunting when starting a new PhD, especially with respect to the analyses if you are not from a strong analytical background. Your proposal period should be the time to make sure that your knowledge of the analyses you will do is sufficient. I would suggest that the best way to make sure that you are proficient is to obtain, during your proposal period, a working knowledge of how to handle a dataset like the one your experiment will generate. If your lab already has similar datasets, borrow one. If not, generate some data that you can use. 

What if new analyses appear during your studies, or you are forced to change your experimental protocol? Certainly, most analytical software will be updated over the course of your studies, and some might be superseded. None of this is a real problem for your preregistered content. Logging the software that you intend to use will demonstrate the approach and type of analyses that you intend to take. While it is unlikely that you will ever be held to account for minor changes to equipment or protocol, things do change along the way. It doesn’t mean that you won’t be able to report your results, or that your research won’t be viable. But it does mean that you will need to be transparent about why it changed. For this reason, it is a good idea to document the changes to your proposed research plan and why they happened. It is surprisingly easy to forget! There are some great tools on different platforms for adding this information, together with a timestamp so that it’s clear when it was done. 

Will preregistration of research eliminate the bias from science? It is probably too early to tell (Rubin 2020), but it is certainly a good place to start (Nosek et al 2018). The more researchers know and subscribe to transparency in their research, the more it will shift the culture in science for the better (Forstmeier et al 2017).

What Platform to use?

When choosing where to archive your proposal (or any of your data, analyses, etc.) there are lots of platforms to choose from: Bitbucket, Figshare, GitHub, OSF, Zenodo, and the list will undoubtedly grow. Deciding what to use now doesn't mean you must stick with the same platform for your entire career, but there are some considerations that you should take on board:

  • What do people in your lab and institution use? It’ll be easier to use the same platform as your advisor and other lab members.
  • Some platforms require a subscription, check whether your institution is a member.
  • Avoid using any platform that is tied to a publisher. 
    • Although it’s not possible to future-proof this (publishers have been shown to buy up anything they think will help them control the academic market), you can check how a platform is set up and opt for those run by non-profit organisations. 
    • Other important aspects are being open source, and free to use and access. 
  • Can the platform function for more than one aspect of transparency?
    • As the need for transparency in science grows, it will become important to have more aspects of projects archived. Does the platform that you’ve chosen cover all the stages from conception to publication?
  • Is it easy to use?
    • Some of the platforms will be more intuitive to use, while others have a steep learning curve. Consider how friendly they may be to other collaborators, older advisors, etc.
  • Are the archives easily compatible with other platforms?
    • Working across platforms might be important for your project, especially if you start with collaborators that are already using different platforms. 

The platform you choose should work for you (rather than having you work for it). If you are someone who loves to have everything ordered and organised, then you’ll love seeing this all laid out. If you aren’t, then these platforms are going to be a massive help in getting all of your plans sorted. Make updating your platform a habit. For example, you could make sure that notes taken during meetings with your advisor on different projects are logged onto the platform. This way you both have a record of what decision was made when. Remember that you can choose what you make accessible to the outside world. 

Transparency as you move forwards

There are a whole lot more transparency criteria that you will need to be aware of later on in your PhD when it comes to publishing. You will come across these in part 4 of this blog. Becoming familiar with the entire process now will be to your advantage, so I encourage you to read more about open science. Sullivan et al (2019) provide a nice overview about how to get started, but be sure to consult the documentation at OSF.


Forstmeier, W., Wagenmakers, E.J. and Parker, T.H., 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews, 92(4), 1941-1968.

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196-217.

Nosek, B.A., Ebersole, C.R., DeHaven, A.C. and Mellor, D.T., 2018. The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), pp.2600-2606.

Rubin, M. (2020). Does preregistration improve the credibility of research findings? The Quantitative Methods in Psychology, 16(4), 376–390.

Sullivan, I., DeHaven, A. and Mellor, D., 2019. Open and reproducible research on open science framework. Current Protocols Essential Laboratory Techniques, 18(1), p.e32.


Welcome Siya!

08 January 2021

The MeaseyLab welcomes Siya Aggrey to start his PhD on invasive House Crows

In our current state of turmoil surrounding the global pandemic of COVID-19, it seems amazing that Siya Aggrey has managed to make it from his home in Uganda to South Africa for the start of the academic year. Siya was recruited at the end of 2019 to receive a CIB bursary to study the invasion of House Crows in Cape Town. Days after getting his visa at the start of 2020, Siya was prevented from travelling as the pandemic spread across the globe and the university was closed. 

But Siya managed to survive the year of COVID (2020) by conducting field studies in rural Uganda on COVID, AIDS and Ebola. He's also managed to remain productive by adding to his increasing number of publications in 2020.

Siya will be working on invasive House Crows, which have also been having a great year by expanding their distribution vastly in South Africa. We are really looking forward to getting some more information on this situation as we move forward on their eradication programmes.


Finally I know my Erdős number

04 January 2021

My Erdős number is below 6 degrees of separation

Paul Erdős was a Hungarian mathematician who authored or co-authored hundreds of publications across a wide variety of disciplines. He was so prolific that a certain level of prestige is associated with having co-authored a paper with him. The idea for the Erdős Number came from another mathematician, Casper Goffman. This has now turned into a kind of science cult, in the same way that the Bacon number (actually a newer concept stemming from the Erdős Number) has in acting (see here). Those who have co-authored with Erdős are given an Erdős number of 1. Someone who has never written a paper with Erdős, but who has co-authored a paper with one of his co-authors, is given an Erdős number of 2, and so on with increasing degrees of separation. As noted by Lenski, while the ranks of those with 6 degrees of separation from Erdős and Bacon are quite large, very few people have an Erdős-Bacon Number. 

For a long time, I've been pestering my mathematically inclined collaborators to find out what their Erdős number is so that I can work out my own. It turns out that I need not have bothered, as my line is more direct than I had thought. 

My first attempt equalled six degrees of separation:

Daniel Kleitman published many papers with Erdős, and has an Erdős number of 1

Lior Pachter has an Erdős number of 2, having co-authored a publication with Daniel Kleitman.

Richard Lenski has an Erdős number of 3, having co-authored a publication with Lior Pachter.

Jonathan Losos has an Erdős number of 4, having co-authored a publication with Richard Lenski.

Anthony Herrel has an Erdős number of 5, having co-authored a publication with Jonathan Losos.

I therefore have an Erdős number of 6, having co-authored a publication with Anthony Herrel.

Thus my collaborators have an Erdős number of 7 or lower (lower if they have a more direct line to Erdős). It could be that my own Erdős number is lower, or will get lower. Stay tuned for any updates!

Here are the relevant publications:

Beerenwinkel, N., Pachter, L., Sturmfels, B., Elena, S.F. and Lenski, R.E., 2007. Analysis of epistatic interactions and fitness landscapes using a new geometric approach. BMC Evolutionary Biology 7(1), 1-12.

Blount, Z.D., Lenski, R.E. and Losos, J.B., 2018. Contingency and determinism in evolution: Replaying life’s tape. Science 362(6415).

Erdős, P. and Kleitman, D.J., 1968. On coloring graphs to maximize the proportion of multicolored k-edges. Journal of Combinatorial Theory 5(2), 164-169.

Kleitman, D. and Pachter, L., 1998. Finding convex sets among points in the plane. Discrete & Computational Geometry 19(3), 405-410.

Measey, G.J. & Herrel, A., 2006. Rotational feeding in caecilians: putting a spin on the evolution of cranial design. Biology Letters 2, 485-487.

But then my friend [and collaborator] Ben Stevenson [Erdős number 5] pointed out that the American Mathematical Society has their own Erdős number calculator.

It turns out that David [who was the first person I asked, many years back] has an Erdős number of 4. As David and I have published together a few times, my Erdős number is actually 5.

Here are the relevant publications:

Antille, A., Kersting, G., Zucchini, W. (1982) Testing symmetry. J. Amer. Statist. Assoc. 77, 639–646.

Borchers, D. L., Zucchini, W., Heide-Jørgensen, M. P., Cañadas, A., Langrock, R. 2013 Using hidden Markov models to deal with availability bias on line transect surveys. Biometrics 69 (3), 703–713.

Erdős, P., Janson, S., Łuczak, T., Spencer, J. (1996) A note on triangle-free graphs. Random discrete structures (Minneapolis, MN, 1993), 117–119, IMA Vol. Math. Appl., 76, Springer, New York.

Janson, S., Kersting, G. 2011 On the total external length of the Kingman coalescent. Electron. J. Probab. 80, 2203–2218.

Measey, G.J., Stevenson, B., Scott, T, Altwegg, R and Borchers, D. (2017) Counting chirps: acoustic monitoring of cryptic frogs. Journal of Applied Ecology 54: 894–902 pdf
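The two chains above are simply paths in a co-authorship graph, and an Erdős number is the length of the shortest such path from Erdős. As a minimal sketch, here is a breadth-first search over a toy graph built only from the co-authorship links in the publications listed in this post (surnames only; a real co-authorship network is vastly larger):

```python
from collections import deque

# Toy co-authorship graph from the two chains in this post (undirected).
coauthors = {
    "Erdős": ["Kleitman", "Janson"],
    "Kleitman": ["Erdős", "Pachter"],
    "Pachter": ["Kleitman", "Lenski"],
    "Lenski": ["Pachter", "Losos"],
    "Losos": ["Lenski", "Herrel"],
    "Herrel": ["Losos", "Measey"],
    "Janson": ["Erdős", "Kersting"],
    "Kersting": ["Janson", "Zucchini"],
    "Zucchini": ["Kersting", "Borchers"],
    "Borchers": ["Zucchini", "Measey"],
    "Measey": ["Herrel", "Borchers"],
}

def erdos_number(graph, author, root="Erdős"):
    """Shortest co-authorship distance from `root`, via breadth-first search."""
    seen = {root: 0}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        for neighbour in graph.get(current, []):
            if neighbour not in seen:
                seen[neighbour] = seen[current] + 1
                queue.append(neighbour)
    return seen.get(author)  # None if no chain of co-authors exists

print(erdos_number(coauthors, "Measey"))  # 5: the Borchers route beats the Herrel one
```

Breadth-first search explores co-authors in rings of increasing distance, which is why the first time it reaches a name is guaranteed to be along a shortest chain, just as the AMS calculator found the shorter route through David.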


Two talks at SICB2021

02 January 2021

Two very different talks for Society for Integrative and Comparative Biology (SICB) 2021

I’m co-author on two very different talks at this year’s SICB conference, held online. 

The first talk by Carla Madelaire takes the work she and Adriana did on CORT back in 2019 to another level by asking whether saliva assays work as well as blood. This work has particular relevance for repeated measurements of CORT levels in individuals. Great to see this being presented!

The second study uses some data collected by myself and Anthony Herrel many years back. We collected burrowing data on various caecilian species. Now Aurélien Lowie, from Ghent University, has combined this with other data to look at how a number of different caecilian species perform in vivo in relation to their skull shape. 

See more on the SICB 2021 website (links below!).

Corticosterone levels in the saliva as a measure of stress in toads

CB Madelaire, D Dillon, AMG Barsotti, J Measey, FR Gomes, CL Buck

Glucocorticoids have been widely used as a physiological marker of stress, and elevated baseline glucocorticoids levels in vertebrates have been associated with environmental changes. The use of minimally invasive sampling techniques and analysis of non-traditional sample types to monitor stress in wild populations has increased due to the importance of understanding how animals respond to environmental disturbances. The use of saliva samples can be a powerful tool to monitor both endocrine shifts and responses to stressors in wild populations. This sampling method does not require a large amount of manipulation and it can be used to sample smaller species, contributing to an increase of studies in environmental endocrinology and conservation efforts of understudied species. This study validated corticosterone (CORT) measurements in the saliva of the guttural toad (Sclerophrys gutturalis) using samples collected in the field and after a standardized stress protocol. We show that small amounts of saliva (0.018±0.028 g) are sufficient to quantify CORT. Salivary CORT levels were higher after exposure to a standardized stress protocol when compared to field levels of CORT, indicating that saliva samples can reflect biologically meaningful levels of CORT in the guttural toad. Because levels of salivary and plasma CORT were not correlated in either the field sampled animals or following exposure to acute stress, we conclude that CORT in the saliva and plasma might show different response dynamics to stimuli.


conservation physiology, glucocorticoid, acute stress, anura, salivary glucocorticoids

9:47 AM - 9:48 AM SAST on Saturday, January 2

Under pressure: the relationship between cranial shape and in vivo maximal burrowing force in caecilians (Gymnophiona)

Co-Author(s): A Lowie, A Herrel, B De Kegel, M Wilkinson, GJ Measey, JC O'Reilly, N Kley, P Gaucher, J Brecko, T Kleinteich, D Adriaens

Caecilians are elongate and limbless amphibians. Except one aquatic family, they all have an at least partially fossorial lifestyle. It has been suggested that they evolved sturdy compact skulls with fusion of ancestrally separate bones and tight sutures as an adaptation for head-first burrowing. Although their cranial osteology is well described, relationships between form and function remain poorly understood. In this study, we report data on in vivo burrowing forces for more than 120 specimens belonging to 13 different species. Over 80 caecilians were µCT-scanned and their skulls segmented. Using fixed and semi-sliding anatomical landmarks, we performed 3D geometric morphometrics to quantify skull variability across species. Finally, using correlation tests, linear models and two-blocks partial least squares, we investigated the relationships between the overall cranial shape and in vivo burrowing force in caecilians. Surprisingly, results show that despite differences in the head morphology across species, there is no relation between overall skull shape and this particular measure of burrowing performance. Although a phylogenetic signal may partly obscure the results, our conclusions join previous studies using biomechanical models and suggest that any differences in their degree of fossoriality are not driving the correlated adaptive evolution of head shape and maximal burrowing force. As the cranium has multiple functions such as feeding, and houses major sensory organs, or respiratory systems, further studies are needed to fully understand the selective pressures shaping the evolution of skull form.


amphibian, burrowing, geometric morphometrics, gymnophiona, skull, performance, head-first burrowers, head shape

10:05 AM - 10:06 AM SAST on Saturday, January 2

Creative Commons Licence
The MeaseyLab Blog is licensed under a Creative Commons Attribution 3.0 Unported License.