Generative artificial intelligence (AI) is changing the way we discover, create, and reference information. This page serves as a resource for our Walden community, providing insights into the scope and limitations of AI tools for information retrieval, as well as guidance for their use in academic writing.
Generative AI produces impressive writing; its research skills, though, leave much to be desired.
Artificial intelligence (AI) tools such as ChatGPT are built on "large language models" (LLMs). Essentially, the model is trained on enormous amounts of text (millions of pages from websites such as Wikipedia and Reddit) and learns to generate responses based on patterns in what it has read. It is vital to understand any limitations a tool might have. For example, the version of ChatGPT launched in November 2022 was trained on content extending only to 2021, so any question that requires information from after 2021 will produce faulty answers.

Microsoft and Google connect their generative AI tools to web search, which allows those tools to use current information in their responses. Even with current information, though, AI tools lack the ability to critically evaluate sources and cannot fact-check. Any AI response will need to be carefully proofread and fact-checked to avoid embarrassing errors.
Experts in AI and in information science are recognizing these issues, as well as the potential for intentional misuse of information. As new tools are created, how information is used and shared will be vitally important. For more, see:
When generating text from a prompt, an AI tool may assemble what it "believes" to be a source, such as a journal article. These sources often turn out not to be real. Because AI tools generate text by predicting likely word sequences based on their training data, they can "hallucinate" a new reference by stitching together journal names, article titles, and citation details from real items. To avoid potential embarrassment and loss of credibility, it is important to validate any source generated by an AI tool. Too often this validation is skipped by users working under short deadlines, increasing the risk of faulty research.
One quick way to validate a source is to search Google Scholar for the article's title or other citation information. The Walden Library's Google Scholar search is a convenient option; if the article is real, it also provides access to the full text when available in the Walden Library.
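For readers comfortable with a little scripting, the same validation idea can be automated as a rough first filter. The sketch below is an illustration only, not a Walden Library tool: it compares an AI-supplied article title against titles returned by a scholarly search and reports the closest match with a similarity score. The `candidate` title and `search_results` list are hypothetical; in practice the results would come from a search you run yourself.

```python
from difflib import SequenceMatcher

def best_title_match(candidate: str, found_titles: list[str]) -> tuple[str, float]:
    """Return the closest title from a scholarly search and its similarity (0-1).

    A low best score suggests the AI-generated citation may be hallucinated
    and needs manual checking before it is used in a paper.
    """
    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    best = max(found_titles, key=lambda t: similarity(candidate, t))
    return best, similarity(candidate, best)

# Hypothetical AI-supplied citation title and hypothetical search results:
candidate = "Nature exposure duration and mental health outcomes"
search_results = [
    "The effects of urban noise on sleep quality",
    "Nature exposure duration and mental-health outcomes: a review",
]

title, score = best_title_match(candidate, search_results)
# A score of roughly 0.8 or higher works as a rough first filter;
# a match is never a substitute for opening and reading the article.
print(title, round(score, 2))
```

Even when a title matches, the citation's authors, journal, and year still need to be checked by hand, since hallucinated references often attach a real title to the wrong details.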
AI tools work far better if the user has already identified legitimate sources. These sources can then be “fed” to the AI as part of the prompt. AI tools seem much more adept at summarizing an existing source or incorporating the source's information into a response.
Use of generative AI in the scholarly research process requires:
AI tools can be helpful in the research and writing processes. If the use of an AI tool informs your writing or if you use the output of an AI tool in your writing for Walden assignments, it will be important to acknowledge the role of the AI tool.
The American Psychological Association (APA) Style staff recommends that if you use an AI tool in your research and writing process, you should disclose and explain the use of that tool (McAdoo, 2023). Additionally, individual AI tools may have use guidelines that students need to be aware of. For example, OpenAI (2022), the creator of ChatGPT, provided the following conditions for the use of its tools:
In academic writing, we ensure clarity and transparency by citing and referencing when we use others’ ideas in our writing. Similarly, we should be clear and transparent when we use the output of an AI tool in our writing. As a Walden student, if you use an AI tool in the writing process, follow APA’s guidelines as well as the individual AI tool’s guidelines, as applicable, in describing and citing its use.
Keep in mind that we generally use and cite sources in our writing to establish and present authoritative and credible evidence for a position; because of that, citing the output of an AI tool in a Walden assignment may be rare, as AI tools are not always trained using legitimate and credible source material.
For more on the topic, see How do I Evaluate Sources to See if They Are Credible or Reliable?
References
McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt
OpenAI. (2022, November 14). Sharing & publication policy. https://openai.com/policies/sharing-publication-policy
Many generative AI tools create outputs that are only available to the user of the tool. As McAdoo (2023) noted in an APA Style Blog post, “Quoting ChatGPT’s text from a chat session is, therefore, more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation” (para. 4), and this will similarly be the case with other AI tools (e.g., Bard, GrammarlyGO).
Author. (Date). Title (Version) [Description]. Source.
OpenAI. (2023). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/chat
Parenthetical citation: (OpenAI, 2023)
Narrative citation: OpenAI (2023)
When Walden students cite and reference an AI tool, they should also include an AI Engagement Reference, which is a log or transcript of the interaction with the AI tool, in an appendix on a new page after the reference(s) list. See formatting tips on APA Appendices.
For this week’s discussion post, in one or two paragraphs, describe your process for identifying and narrowing your research topic, as well as your plan to continue the research process for your topic into next week. Include your plan to use Walden’s research and prewriting resources referenced in the course materials.
My research interest is the relationship between time spent in nature and mental health. However, I was struggling to narrow my focus because, while I have seen and heard a lot of informal and anecdotal evidence, I am not yet an expert on the scholarly research. To help me brainstorm possibilities, I used OpenAI’s ChatGPT tool, asking it to provide examples of how to narrow the topic of mental health and time spent in nature. It provided a list of possible topics, including this one: “Exploring the relationship between duration and frequency of time spent in nature and mental health outcomes” (OpenAI, 2023; see Appendix for full transcript). This focus on duration and frequency in nature is interesting to me because...
Reference
OpenAI. (2023). ChatGPT (May 3 version) [Large language model]. https://chat.openai.com/chat
Appendix
AI Engagement Reference
Transcript from May 8, 2023, chat session with ChatGPT
Me
I'm writing a paper on mental health and time spent in nature. Please help me narrow my research topic.
ChatGPT
Certainly, I'd be happy to help you narrow your research topic on mental health and time spent in nature.
Here are a few potential research topics to consider:
These are just a few potential research topics to consider. To further narrow your focus, you might want to consider your research question, available resources, and the scope of your paper.
Me
Thank you. Help me find peer-reviewed sources about exploring the relationship between duration and frequency of time spent in nature and mental health outcomes.
ChatGPT
Certainly, here are a few peer-reviewed sources that explore the relationship between duration and frequency of time spent in nature and mental health outcomes:
These sources should provide a good starting point for your research on the relationship between duration and frequency of time spent in nature and mental health outcomes. You can use these articles to find additional sources by looking at their reference lists and using keywords to search for related articles.