Can an AI write my PhD?
The emergence of artificial intelligence (AI) platforms brings with it new issues around writing generally, including writing your PhD thesis. These bots were already widely available in early 2023, and there are already publications that list ChatGPT as an author (Mijwil, Aljanabi & ChatGPT, 2023; O’Connor & ChatGPT, 2023) in acknowledgement that a large amount of the text therein was generated by the bot.
More AI writing tools will come online in the near future: Google’s LaMDA and Microsoft’s Bing are very close to release. Microsoft plan to integrate their Bing platform directly into their office suite, meaning that as an author you may unwittingly use AI-generated suggestions inside your word processor. Thus, you should be interested in knowing whether such tools can help you in your own writing, if you haven’t already tried them (Owens, 2023), with the tantalising possibility that they could replace your need to write at all.
In this blog post (and in a chapter in How to Write a PhD) I set out some ideas about how you might use AI to help you write, as well as some things that you should consider if you do go that route.
What is ChatGPT?
I decided to ask ChatGPT to contribute to this chapter by describing itself. My input text: “I am writing a chapter for a book on academic writing and I want you to describe ChatGPT in your own words”. Here’s what it said:
As a language model based on the GPT-3.5 architecture, ChatGPT is a sophisticated artificial intelligence program that has been trained on a massive amount of text data from various sources. It is designed to understand natural language and generate responses that are contextually relevant and coherent.
ChatGPT uses deep learning algorithms to analyze text inputs and predict the most probable next word or phrase, based on patterns and associations it has learned from its training data. This allows it to generate human-like responses to a wide range of questions and prompts, from simple queries to complex conversations.
One of the unique features of ChatGPT is its ability to understand and generate text in multiple languages, making it a valuable tool for communication and collaboration across different cultures and regions. Additionally, it has been used for a wide variety of applications, including language translation, content creation, and customer service.
Overall, ChatGPT represents a significant breakthrough in artificial intelligence research and has the potential to transform the way we communicate and interact with technology.
Output from ChatGPT cannot be trusted
The first, and perhaps most important, issue with using ChatGPT outputs in your own writing is that you cannot trust it to generate text that is factually correct or unbiased. Indeed, OpenAI, who own and administer ChatGPT, readily admit that their bot can produce factually incorrect statements and is often biased in its output (Owens, 2023).
Because ChatGPT draws on data from the internet to generate its text, it is prone to providing the internet consensus on popular topics, and studies have already shown that there is a bias in what is produced (McGee, 2023). OpenAI have acknowledged this, and say they are working to reduce bias with user feedback (Owens, 2023).
This means that you should check any AI-generated text before you add it to your chapter or manuscript.
Plagiarism by AI
There is a chance that your AI tool will plagiarise text that it finds on the internet, and so you may unwittingly be pasting the words of another author into your own text. Some authors consider that even if the AI is programmed to avoid plagiarism, using this text in any way should be considered plagiarism (Salvagno, Taccone & Gerli, 2023). However, on balance, my own view is that any attempt to be overly restrictive on the use of AI-generated text is bound to fail as AI becomes increasingly sophisticated and widespread. Instead, I share the view (Lim et al., 2023) that it is better to embrace AI and learn how to use it as a tool.
Perhaps a more fundamental consideration is whether AI will require us to redefine what we mean by plagiarism. Certainly, this is a philosophical consideration that right now (at the start of 2023) you will need to decide for yourself, although as more AI tools come online and are more commonly available, I think that many journal guidelines will be prescriptive on their use.
Plagiarism can be thought of as taking someone else’s writing without attribution. Another definition of plagiarism is pretending that someone else’s work is your own. While these two definitions may appear interchangeable at first sight, when considering the use of text from an AI you might only infringe the first (attribution) while the AI is programmed to commit the second (pretence). In other words, if the AI is not a person, then under the first definition of plagiarism you have not taken someone else’s writing. However, you might consider that, as the AI itself is the product of someone else’s work, they are the person (or group of people) who generated that text, and so the work should be attributed to them. This brings you to the second definition of plagiarism. But what if that group of people don’t ask for any attribution or acknowledgement? Should you still list them as authors or put them into the acknowledgements? Moreover, can this team really be said to have generated the text when they created software that searches text written by others on the internet and then collates and rewrites it? The huge number of content creators on the internet can never be acknowledged individually.
Another way of thinking about this might be to consider human interactions in writing text. Suppose, for example, that I help some colleagues with the English text of a manuscript, they offer to place my name in the acknowledgements, and I tell them that there is really no need. Should they still acknowledge me even though I have told them not to?
A further example might be the use of different layers in GIS that have been generated by different people. Many journals now insist on attribution of these layers in the figure legend or in the acknowledgements. We might legitimately ask whether this is necessary when those layers are freely available under a non-attribution Creative Commons licence.
Clearly, the definition of plagiarism will need some work in the light of new abilities for AIs to write text.
To me, the possibility that by using AI-generated text verbatim you could be using someone else’s words means that you should avoid doing so. The same goes whether the text comes from a large language translation platform or a new chat-bot-style AI. Using such platforms as tools is certainly acceptable, while using their text verbatim is probably an example of false attribution.
Ethics
In addition to the attribution of authorship, and potential plagiarism issues, you should also be aware of the ethical component of using text from an AI tool.
The kinds of ethics points that come up are similar to those raised by paper mills (Salvagno, Taccone & Gerli, 2023). This issue is covered comprehensively elsewhere: see the chapter on when you should be an author in Measey (2022).
Positive aspects of using AI
It will not come as any surprise that many authors often find it difficult to get started. The classic image is staring at a blank piece of paper, or these days a flashing cursor on an empty screen. In this book I provide a lot of suggestions about how to get started with your writing task, but can ChatGPT provide another opportunity when you don’t know how to start?
Will others detect ChatGPT if I use it?
Some people claim that they can detect the output from ChatGPT, as it lacks the depth and insight that original authors usually have. In other words, the aim of ChatGPT is to produce words (in a grammatically correct manner), while the aim of an author is to transmit an idea to the reader. As you might expect and hope, AI is not yet at the point where it can generate the same intent to communicate.
There will likely be better AI chat bots in the future, and ChatGPT itself is constantly getting feedback from users that should improve its own output.
Transparency
However you use an AI platform, you should be transparent with your advisor and any colleagues you publish with about the exact level of use involved in generating your text. As a general rule in writing and academia, transparency is the best policy. Some authors have already called for regulations (Salvagno, Taccone & Gerli, 2023). Be aware that journals update their instructions to authors, so you may need to look for statements on AI text generation. Similarly, familiarise yourself with your institution’s requirements for thesis submission, and be careful that you do not transgress any recently added rules.
COPE have already provided guidelines
The Committee on Publication Ethics (COPE) have already provided some guidelines to their members (mostly publishers) on the use of AI (COPE, 2023). In general, these guidelines emphasise the importance of oversight and transparency in the use of AI tools in decision making.
Publishers using AI bots
Publishers are very interested in using the text from AI bots to generate lay summaries of articles that are published on their platforms. Certainly, there is evidence that scientists are already using the bots to create their own summaries (Owens, 2023). This is certainly a possible creative use for AIs, but I would be concerned that without careful curation they may be prone to producing factually incorrect or misleading content. If you plan to use AI software to popularise your own science, then I suggest that you carefully read anything that is created and ensure that the text is correctly attributed when you use it.
Last note
This text was written in RMarkdown, without the use of suggestive prompts. AI generated text is written as quotes in the above text. All other text is my own.
It will be interesting to look back on this chapter in a decade and see the changes that have emerged in that time.
I would like to thank OpenAI and ChatGPT for generating the quoted text in this chapter. I would also like to acknowledge TurnItIn for their plagiarism check.
COPE. 2023. Artificial intelligence (AI) in decision making.
Lim WM, Gunasekara A, Pallant JL, Pallant JI, Pechenkina E. 2023. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education 21:100790. DOI: 10.1016/j.ijme.2023.100790.
McGee R. 2023. Capitalism, Socialism and ChatGPT. DOI: 10.13140/RG.2.2.30325.04324.
Measey J. 2022. How to publish in Biological Sciences: A guide for the uninitiated. Boca Raton, Florida: CRC Press.
Mijwil M, Aljanabi M, ChatGPT. 2023. Towards Artificial Intelligence-Based Cybersecurity: The Practices and ChatGPT Generated Ways to Combat Cybercrime. Iraqi Journal For Computer Science and Mathematics 4:65–70. DOI: 10.52866/ijcsm.2023.01.01.0019.
O’Connor S, ChatGPT. 2023. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice 66:103537. DOI: 10.1016/j.nepr.2022.103537.
Owens B. 2023. How Nature readers are using ChatGPT. Nature 615:20. DOI: 10.1038/d41586-023-00500-8.
Salvagno M, Taccone FS, Gerli AG. 2023. Can artificial intelligence help for scientific writing? Critical Care 27:1–5. DOI: 10.1186/s13054-023-04380-2.
Seghier ML. 2023. ChatGPT: Not all languages are equal. Nature 615:216. DOI: 10.1038/d41586-023-00680-3.
This text is an excerpt from the book How to Write a PhD in Biological Sciences, which is available Open Access.