10-Rep Learning ~ Teague's Tech Treks

Learning Technology & Tech Observations by Dr. Helen Teague

Scholarly Writing: Ethical Concerns Persist with Generative A.I.

     ChatGPT, an artificial intelligence chatbot from the company OpenAI, came into the spotlight in 2022. ChatGPT is one of a few generative text aggregators available to the public (Dehouche, 2021; Rutter & Mintz, 2023). 

     Generative text renderers such as ChatGPT can generate collections of information, and some schools are banning the tool from their devices and networks altogether (Korn & Kelly, 2023).

     Some ways that generative text can be used include the following (Dehouche, 2021; Korn & Kelly, 2023; Rutter & Mintz, 2023; Washburn, 2023):

  • Biographical references 
  • Bibliography citations
  • Lesson plan creation
  • Student assessment 
  • Definitions of terms and explanations of challenging concepts
  • Solutions to math equations
  • Course syllabi 
  • Debate topics explored through theoretical lenses 
  • Written text rendered in various styles, including descriptive and argumentative 
  • Writing samples for job application packets
  • Research reports
  • Speeches
  • Medical reports

     Some educators, scientists, and other professionals are questioning the ethics of using generative A.I. (Hagendorff, 2020; Korn & Kelly, 2023; Kozma, 2024; Kozma, et al., 2023; Mollick, 2023). Generative A.I. may function best in idea generation, brainstorming, and turbo-charged searching. 

      A measured, research-based approach is needed to counter emerging research suggesting that authenticity may be missing (Williams, 2024). “AI has a tendency to deceive us, even when there are guardrails in place…This should give us pause and an opportunity to reflect on the morality of our everyday transactions and discourse. Are we so focused, as a people, on self-interest that deception is a foundational feature of our culture?” warns Dr. Robert Kozma, Emeritus Principal Scientist at SRI International and author (2024).  

     Ethical concerns persist in the area of academic writing (Kozma, 2023; Mollick, 2023; Teague, 2023). Although the marketing for ChatGPT and its variants indicates that it generates original writing, it does not, because it is solipsistic: existing only within itself and therefore not reflecting peer-reviewed sources (Teague, 2023). 

      Instead, artificial intelligence chatbots such as ChatGPT assemble and render content from sources indexed online, in response to the prompts they are given. The process is similar to compiling a playlist or mixtape. The sources used in compilation may or may not be copyright-free, and they may not be peer-reviewed. Sometimes the claims and sources composed by A.I. do not exist at all, a phenomenon known as hallucination (Alkaissi & McFarlane, 2023; Athaluri, et al., 2023; Emsley, 2023; Salvagno, et al., 2023).
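One purely illustrative first-pass screen for hallucinated references is a syntactic check of a citation's DOI: fabricated citations often carry malformed DOI strings, while real ones follow a predictable shape. The pattern below is a simplifying assumption, not the full DOI grammar, and passing it proves nothing about whether the source exists; only resolving the DOI and reading the work can confirm that.

```python
import re

# A DOI begins with the "10." directory prefix, a registrant code,
# a slash, and a suffix. This simplified pattern is an assumption,
# not the full DOI specification.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_looks_valid(doi: str) -> bool:
    """Return True if the string is syntactically DOI-shaped.

    This is only a screen for obviously malformed citations; it cannot
    confirm that the cited work actually exists.
    """
    return bool(DOI_PATTERN.match(doi.strip()))

# The Dehouche (2021) DOI from this post's reference list, versus the
# kind of malformed string hallucinated citations sometimes produce.
print(doi_looks_valid("10.3354/esep00195"))    # a well-formed DOI
print(doi_looks_valid("doi:fake-reference"))   # malformed
```

A check like this catches only the crudest fabrications; the "falling asleep at the wheel" finding quoted below is precisely that readers must still open and evaluate the sources themselves.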

     The lack of peer-reviewed source citation is a pivotal concern. Accuracy and methodical review are necessary components of scholarly writing. Continued advocacy and research are needed to inform ethical practice. The hallucinations composed by generative A.I. indicate a disturbing ethical concern of deliberate counterfeit writing, replete with falsifications (Dell’Acqua, 2022; Teague, 2024). As Dell’Acqua pertinently cautions, “A fundamental mistake I see people building AI information retrieval systems making is the assumption that, if they provide links to original documents as part of the AI answer, people will check sources & correct hallucinations. Our work shows that doesn’t happen; if the AI is generally good, people ‘fall asleep at the wheel’ and just trust the AI answers” (2022).

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).

Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4).

Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics in Science and Environmental Politics, (2), 17–23. https://doi.org/10.3354/esep00195  

Dell’Acqua, F. (2022). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. https://www.almendron.com/tribuna/wp-content/uploads/2023/09/falling-asleep-at-the-whee.pdf 

Emsley, R. (2023). ChatGPT: these are not hallucinations–they’re fabrications and falsifications – Editorial. Schizophrenia, 9(1), 52. https://www.nature.com/articles/s41537-023-00379-4.pdf  https://doi.org/10.1038/s41537-023-00379-4

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and machines, 30(1), 99-120.

Korn, J. & Kelly, S. (2023). New York City public schools ban access to AI tool that could help students cheat. CNN Business. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html

Kozma, R. (May, 2024). AI systems are getting better at tricking us. [Shared Content]. LinkedIn. https://www.linkedin.com/groups/4376214?q=highlightedFeedForGroups&highlightedUpdateUrn=urn%3Ali%3AgroupPost%3A4376214-7196217781191675906&lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base_recent_activity_content_view%3BWPMd%2FaksR7yYD2tmBhdIow%3D%3D

Kozma, R., Alippi, C., Choe, Y., & Morabito, F. C. (Eds.). (2023). Artificial intelligence in the age of neural networks and brain computing. Academic Press.

Mollick, E. (2023). Centaurs and cyborgs on the jagged frontier. One Useful Thing. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged

Quora (2023). Etymology of the word solipsism. https://www.quora.com/What-is-the-etymology-of-the-word-solipsism

Rutter, M.P. & Mintz, S. (2023). ChatGPT: Threat or menace? Higher Ed Gamma.

Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Artificial intelligence hallucinations. Critical Care, 27(1), 180.

Teague, H. (June, 2023). The Solipsism of generative AI. 10RepLearning blog. https://4oops.edublogs.org/2023/06/27/the-solipsism-of-generative-ai/

Washburn, B. (2023). How teachers can use ChatGPT to assess students and provide feedback. BrittanyWashburn.com. https://brittanywashburn.com/2023/03/how-teachers-can-use-chatgpt-to-assess-students-and-provide-feedback/#:~:text=ChatGPT%20is%20an%20AI%2Dbased,provide%20feedback%20efficiently%20and%20accurately.

Williams, R. (May, 2024 ). AI systems are getting better at tricking us. MIT Technology Review. https://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/

 

Citation for this blog post: Teague, H. (May, 2024). Scholarly writing: Ethical concerns persist with generative A.I. 10RepLearning blog. https://4oops.edublogs.org/2024/05/29/scholarly-writing-concerns-persist-with-generative-a-i/

Original Post May 29, 2024; Updated June 1, 2024

The Solipsism of generative AI

 

In some of my graduate classes, we have been reading about virtual and digital learning and tools to use in instructional practice. 

ChatGPT, an artificial intelligence chatbot from the company OpenAI, came into the spotlight in 2022. ChatGPT is one of a few generative text aggregators available to the public (Dehouche, 2021; Rutter & Mintz, 2023). 

Generative text renderers such as ChatGPT can generate collections of information, and some schools are banning the tool from their devices and networks altogether (Korn & Kelly, 2023).

Some of the ways that generative text can theoretically be used include the following (Dehouche, 2021; Korn & Kelly, 2023; Rutter & Mintz, 2023; Washburn, 2023)… but is this ethical?

  • Biographical references 
  • Bibliography citations
  • Lesson plan creation
  • Student assessment 
  • Definitions of terms and explanations of challenging concepts
  • Solutions to math equations
  • Course syllabi 
  • Debate topics explored through theoretical lenses 
  • Written text rendered in various styles, including descriptive and argumentative 
  • Writing samples for job application packets
  • Research reports
  • Speeches
  • Medical reports

Although the ChatGPT marketing indicates that it generates “original” writing, it does not, because it is solipsistic: existing only within itself and therefore not reflecting peer-reviewed sources (Teague, 2023). Instead, artificial intelligence chatbots such as ChatGPT assemble and render content from sources indexed online, in response to the prompts they are given. The process is similar to compiling a playlist or mixtape. The sources used in compilation may or may not be copyright-free, and they may not be peer-reviewed.

References

Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics in Science and Environmental Politics, (2), 17–23. https://doi.org/10.3354/esep00195  

Korn, J. & Kelly, S. (2023). New York City public schools ban access to AI tool that could help students cheat. CNN Business. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html

Quora (2023). Etymology of the word solipsism. https://www.quora.com/What-is-the-etymology-of-the-word-solipsism

Rutter, M.P. & Mintz, S. (2023). ChatGPT: Threat or menace? Higher Ed Gamma.

Washburn, B. (2023) How Teachers can use ChatGPT to assess students and provide feedback. Brittany Washburn.com blog. https://brittanywashburn.com/2023/03/how-teachers-can-use-chatgpt-to-assess-students-and-provide-feedback/#:~:text=ChatGPT%20is%20an%20AI%2Dbased,provide%20feedback%20efficiently%20and%20accurately.

 

To cite this post: Teague, H. (2023). The Solipsism of generative AI. 10RepLearning blog. https://4oops.edublogs.org/2023/06/27/the-solipsism-of-generative-ai/
