Scholarly Writing: Ethical Concerns Persist with Generative A.I.

     ChatGPT, an artificial intelligence chatbot from the company OpenAI, came into the spotlight in 2022. ChatGPT is one of a few generative text aggregators available to the public (Dehouche, 2021; Rutter & Mintz, 2023).

     Generative text renderers such as ChatGPT can generate collections of information, and some schools are banning the tool from their devices and networks altogether (Korn & Kelly, 2023).

     Some ways that generative text can be used include the following (Dehouche, 2021; Korn & Kelly, 2023; Rutter & Mintz, 2023; Washburn, 2023):

  • Biographical references
  • Bibliography citations
  • Lesson plan creation
  • Student assessment
  • Definitions of terms and explanations of challenging concepts
  • Solutions to math equations
  • Course syllabi
  • Exploration of debate topics through theoretical lenses
  • Written text rendered in various styles, including descriptive and argumentative
  • Writing samples for job application packets
  • Research reports
  • Speeches
  • Medical reports

     Some educators, scientists, and other professionals are questioning the ethical practices of using generative A.I. (Hagendorff, 2020; Korn & Kelly, 2023; Kozma, 2024; Kozma et al., 2023; Mollick, 2023). Generative A.I. may function best in idea generation, brainstorming, and turbo-charged searching.

      A measured, research-based approach is needed to counter emerging research suggesting that authenticity may be missing (Williams, 2024). “AI has a tendency to deceive us, even when there are guardrails in place…This should give us pause and an opportunity to reflect on the morality of our everyday transactions and discourse. Are we so focused, as a people, on self-interest that deception is a foundational feature of our culture?” warns Dr. Robert Kozma, Emeritus Principal Scientist at SRI International and author (Kozma, 2024).

     Ethical concerns persist in the area of academic writing (Kozma, 2023; Mollick, 2023; Teague, 2023). Although the marketing for ChatGPT and its variants indicates that it generates original writing, it does not; the tool is solipsistic, existing only within itself and therefore not reflecting peer-reviewed sources (Teague, 2023).

      Instead, artificial intelligence chatbots such as ChatGPT assemble and render content from sources indexed online, guided by the prompts they are given. The process is similar to compiling a playlist or mixtape. The sources used in compilation may or may not be copyright-free, and they may not be peer-reviewed. Sometimes the claims and sources composed by A.I. do not exist at all, a phenomenon known as hallucination (Alkaissi & McFarlane, 2023; Athaluri et al., 2023; Emsley, 2023; Salvagno et al., 2023).

     The lack of peer-reviewed source citation is a pivotal concern. Accuracy and methodical review are necessary components of scholarly writing. Continued advocacy and research are needed to inform potential ethical practices. The hallucinations composed by generative A.I. indicate a disturbing ethical concern of deliberate counterfeit writing, replete with falsifications (Dell’Acqua, 2022; Teague, 2024). In Dell’Acqua’s pertinent caution, “A fundamental mistake I see people building AI information retrieval systems making is the assumption that, if they provide links to original documents as part of the AI answer, people will check sources & correct hallucinations. Our work shows that doesn’t happen, if the AI is generally good, people ‘fall asleep at the wheel’ and just trust the AI answers” (2022).


Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).

Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4).

Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics in Science and Environmental Politics, (2), 17–23.  

Dell’Acqua, F. (2022). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. 

Emsley, R. (2023). ChatGPT: these are not hallucinations–they’re fabrications and falsifications – Editorial. Schizophrenia, 9(1), 52.

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.

Korn, J. & Kelly, S. (2023). New York City public schools ban access to AI tool that could help students cheat. CNN Business.

Kozma, R. (May, 2024). AI systems are getting better at tricking us. [Shared Content]. LinkedIn.

Kozma, R., Alippi, C., Choe, Y., & Morabito, F. C. (Eds.). (2023). Artificial intelligence in the age of neural networks and brain computing. Academic Press.

Mollick, E. (2023). Centaurs and cyborgs on the jagged frontier. One Useful Thing.

Quora (2023). Etymology of the word solipsism.

Rutter, M.P. & Mintz, S. (2023). ChatGPT: Threat or menace? Higher Ed Gamma.

Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Artificial intelligence hallucinations. Critical Care, 27(1), 180.

Teague, H. (June, 2023). The Solipsism of generative AI. 10RepLearning blog.

Washburn, B. (2023). How teachers can use ChatGPT to assess students and provide feedback.

Williams, R. (May, 2024). AI systems are getting better at tricking us. MIT Technology Review.


Citation for this blog post: Teague, H. (May, 2024). Scholarly writing: Ethical concerns persist with generative A.I. 10RepLearning blog.

Original Post May 29, 2024; Updated June 1, 2024