Should scientists be allowed to use ChatGPT to write their research papers or reports – and, if so, should the tool be properly cited?
Amid the recent hype surrounding ChatGPT – a large language model (LLM) that is openly accessible for public use – Nature recently set some ground rules for authors using such tools in their research (1). The publisher stated that it won’t allow any LLM to be credited as an author on a research paper because AI tools cannot take accountability for the work, which is required for authorship. Nature also stipulated that researchers who use these tools must document that use in the methods or acknowledgements sections.
Nature’s decision has, for some scientists, raised more questions than answers: “What prompts this decision? Is there any sort of a concerted effort such as a meeting of the minds to investigate these tools to identify issues and weaknesses? Or is this a decision made based on the fear of something new and unknown? Is this a temporary ban, put in place until more information is gathered?” asks Rebecca A. Millius, Deputy State Medical Examiner at Oregon State Police, USA.
ChatGPT has sparked debate amongst scientists and researchers: some argue that transparency in the scientific process demands that the tool’s use be disclosed, while many others see no need to cite ChatGPT at all – either as a listed author or in the acknowledgements. For example, patient advocate Michele Mitchell “wholeheartedly supports the use of ChatGPT,” seeing no need to reference it because “it is a tool just like the ‘review’ feature in Word.”
Michal Tzuchman, Co-Founder and CMO at Kahun, also sees ChatGPT simply as a tool. “The authors hold the ultimate responsibility for the text they publish, regardless of its source,” she says. “The integrity of the scientific and academic community is built on trust and ethics, and I do not believe that using language models poses a threat to these values. I do not see a need to include [ChatGPT] in the list of authors, because the authors of the paper are the researchers who conducted the study – not the tools used to write its conclusions.”
Several schools and universities worldwide have banned the use of ChatGPT over fears of plagiarism – should we hold academics to the same standards? Offering a perspective from outside scientific circles, a contributor wishing to be referenced as G.R. Whale suggests that honesty may be the best policy as more AI tools become available to us. He says he wouldn’t mind researchers using these tools “so long as they are properly cited – allowing the reader to draw their own conclusions.” He believes this can be achieved by putting proper processes in place and updating existing guidelines. “It might be more important to have the scientific community agree on how to cite AI, if it is not already in whichever style guide(s) your community usually uses.”
The current debate over ChatGPT’s place in science seems centered on whether it should be used to write papers and help with other administrative tasks – but might it also threaten creativity? “For someone who has a 30+ year career in public health, research, and academia, I see where ChatGPT – or similar tools – may provide outstanding assistance when it comes to checking for accuracy on references, computer language coding, or other similar tasks,” says Rodney E. Rohde, Regents’ Professor at Texas State University System, USA. “However, I have concerns about how tools may be used to help generate actual ‘creative processes.’ Don’t get me wrong, I’m happy to have more outstanding technology become a digital fact-checker on my projects, but I need to understand that research creativity will not be impacted by this type of tool. We will also need human oversight of regulations regarding things like academic honesty for a thesis, dissertation, research grants, patent development, or other intellectual property.”
On doctor’s orders
Although tools like ChatGPT have only recently taken off in the public eye, AI has been a hot topic of conversation in many medical professions for years. For example, pathologists and laboratory medicine professionals have access to a range of AI technologies to support routine diagnostic decisions and prognostication.
“I recently brought up a few discussions regarding ChatGPT during clinical service with some colleagues and we sampled its AI in drafting a few pathology reports – answering some parallel questions and making point clarifications during routine pathology sign-out. None of it was used as an actual diagnostic pathology resource; however, anecdotally it was impressive,” says Constantine E. Kanakis, Resident Physician and Educator in Loyola Medicine’s Department of Pathology and Laboratory Medicine, USA.
I asked Tzuchman what ethical implications arise from using ChatGPT in science and medicine, but she feels the problem isn’t necessarily ethical. “It’s more that ChatGPT-like tools weren’t built to perform clinical reasoning and mimic clinical thinking. Rather, they are language-based models that know how to predict based on words and context. Therefore, although these tools consist of a lot of data, they lack the clinical know-how of diagnosing or treating a patient.”
Kanakis agrees with this sentiment – recognizing the value of ChatGPT, but also its limitations. “I read this amazing walkthrough of the AI software model used in GPT (the current version) – it’s a fascinating evolution (2),” he says. “Personally, I think the implications are very promising as long as we remember that it doesn’t ‘know’ anything – and nor is it a search engine per se. It is merely a tool to organize available, already accessible information. It might even impact science communication; for example, ‘translate this pathology report to a US 5th grade reading level.’”
Maybe ChatGPT is not a be-all and end-all tool or a threat to replace skilled professionals, but it at least has the potential to support communication between doctors and patients. “Medical information can often be presented in a manner that is challenging to comprehend. Tools like ChatGPT can aid in making the information more accessible, bridging the gap between the scientific jargon used by doctors and the terminology used by patients,” says Tzuchman. “Using such tools can improve communication with patients and enhance outcomes when combined with clinically proven digital technologies.”
However, caution should still be exercised when using the tool; ChatGPT can produce misleading medical information and advice, and one well-known magazine has already let misinformation slip through the cracks in its first AI-generated article (3). For use as a decision support tool, Tzuchman warns that “future regulations [will be] needed to determine how to validate the accuracy of these models.”
I understand this debate is multifaceted – so if you have a different opinion from those offered here, I’d love to hear from you. Send your thoughts to olivia.gaskill@texerepublishing.com