
In an era dominated by technological advancements, generative artificial intelligence (AI) is revolutionizing the way we discover, create, and reference information. This page serves as a resource for our Walden community, providing insights into the scope and limitations of AI tools for information retrieval as well as guidance for their use in academic writing.

Generative AI produces impressive writing; its research skills, though, leave a lot to be desired.


Scope of Information Searching

Artificial intelligence (AI) tools such as ChatGPT generate their responses using a “large language model” (LLM). Essentially, the model is trained on a massive corpus of text (millions of pages of websites, including Wikipedia, Reddit, etc.) and learns to generate responses based on patterns in what it has read. It’s vital to understand any limitations the tool might have. For example, the version of ChatGPT launched in November 2022 was trained on data extending only to 2021, so any question requiring information from after 2021 produced faulty answers. Microsoft and Google now offer generative AI tools connected to their web search engines, which allows those tools to use current information in responses. Even with current information, though, AI tools lack the ability to critically evaluate sources and cannot fact-check. Any AI response will need to be carefully proofread and fact-checked to avoid embarrassing errors.

Experts in AI and in information science recognize these issues, as well as the potential for intentional misuse of information. As new tools are created, the question of how information is used and shared will be vitally important.

Hallucination of Information Sources

In generating text answers from a prompt, an AI tool may assemble what it “believes” to be a source (such as a journal article). These sources often turn out not to be real. AI tools construct sources by predicting likely text based on the large language model on which they are trained; a hallucinated item is created when the AI builds a new reference out of fragments of journal titles, article titles, and citation information. To avoid embarrassment and loss of credibility, it is important to validate any source generated by an AI tool. Too often, users working under short deadlines skip this validation, increasing the risk of faulty research.

One quick way to validate a source is to search Google Scholar for the article’s title or other citation information. The Walden Library’s Google Scholar search is a convenient option; if the article is real, it also links to the full text when available in the Walden Library.
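For readers comfortable with a bit of scripting, the title check above can be turned into a search URL automatically. This is a minimal sketch; the `scholar_search_url` helper is hypothetical, not a Walden Library or Google tool:

```python
from urllib.parse import quote_plus

def scholar_search_url(citation_title: str) -> str:
    """Build a Google Scholar search URL for an exact article title.

    Opening the returned URL in a browser shows whether the title
    matches a real, indexed publication; no results is a strong hint
    that the citation was hallucinated.
    """
    # Quote the title so Scholar searches for the exact phrase.
    query = quote_plus(f'"{citation_title}"')
    return "https://scholar.google.com/scholar?q=" + query

# Example: check a citation produced by an AI tool.
url = scholar_search_url("A study that may not exist")
```

The same pattern works for any search engine that accepts a query string; only the base URL changes.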

AI tools work far better if the user has already identified legitimate sources. These sources can then be “fed” to the AI as part of the prompt. AI tools seem much more adept at summarizing an existing source or incorporating the source's information into a response.

Recommendations for Use

Use of generative AI in the scholarly research process requires:

  1. Validation of any sources produced by the tool in a response
  2. Critical evaluation of the response, including statistics and conclusions

Writing With AI Tools


AI tools can be helpful in the research and writing processes. If the use of an AI tool informs your writing or if you use the output of an AI tool in your writing for Walden assignments, it will be important to acknowledge the role of the AI tool.

The American Psychological Association (APA) Style staff recommends that if you use an AI tool in your research and writing process, you should disclose and explain the use of that tool (McAdoo, 2023). Additionally, individual AI tools may have use guidelines that students need to be aware of. For example, OpenAI (2022), the creator of ChatGPT, provided the following conditions for the use of its tools:

  • The published content is attributed to your name or company. (para. 4)
  • The role of AI in formulating the content is clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand. (para. 4)
  • People should not represent API-generated content as being wholly generated by a human or wholly generated by an AI, and it is a human who must take ultimate responsibility for the content being published. (para. 5)

In academic writing, we ensure clarity and transparency by citing and referencing when we use others’ ideas in our writing. Similarly, we should be clear and transparent when we use the output of an AI tool in our writing. As a Walden student, if you use an AI tool in the writing process, follow APA’s guidelines as well as the individual AI tool’s guidelines, as applicable, in describing and citing its use. 

Citing the output of an AI tool may be rare

Keep in mind that we generally use and cite sources in our writing to establish and present authoritative and credible evidence for a position; because of that, citing the output of an AI tool in a Walden assignment may be rare, as AI tools are not always trained using legitimate and credible source material.

For more on the topic, see How do I Evaluate Sources to See if They Are Credible or Reliable?

References

McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt

OpenAI. (2022, November 14). Sharing & publication policy. https://openai.com/policies/sharing-publication-policy

APA Citations and References for AI Tools

Many generative AI tools create outputs that are only available to the user of the tool.  As McAdoo (2023) noted in an APA Style Blog post, “Quoting ChatGPT’s text from a chat session is, therefore, more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation” (para. 4), and this will similarly be the case with other AI tools (e.g., Bard, GrammarlyGO).

Reference List Entry

Reference Elements

Author. (Date). Title (Version) [Description]. Source.

Notes

  • The author is the organization that created the tool or model.
  • The date is the publication year of the version used, which will often be the year in which you used the tool, since the tools are regularly updated.
  • The title is the name of the tool or model.
  • Add the tool's version date in parentheses after the title, followed by a description in brackets.
    • To identify the version, which may be presented with a number (e.g., Version 2) or a date (e.g., May 24 version), check the tool’s website or supplementary information. For example, when using ChatGPT, the version information can be seen below the textbox where users input prompts.
    • The description should align with how the publisher or maker of the tool describes it and should be a brief phrase, such as [Large language model] or [Large multimodal model].
  • The source is the direct link to the tool or model.

Example

OpenAI. (2023). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/chat
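The element order in the template above can also be illustrated with a small script. This is an illustrative sketch only; the `apa_ai_reference` helper is hypothetical, not an official APA tool:

```python
def apa_ai_reference(author: str, year: int, title: str,
                     version: str, description: str, source: str) -> str:
    """Assemble the reference elements into a single APA-style string:

        Author. (Date). Title (Version) [Description]. Source.
    """
    return f"{author}. ({year}). {title} ({version}) [{description}]. {source}"

# Reproduces the ChatGPT example entry from this guide.
entry = apa_ai_reference(
    author="OpenAI",
    year=2023,
    title="ChatGPT",
    version="May 24 version",
    description="Large language model",
    source="https://chat.openai.com/chat",
)
# → OpenAI. (2023). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/chat
```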

Citations

Parenthetical citation: (OpenAI, 2023)

Narrative citation: OpenAI (2023)

AI Engagement Reference in an Appendix

When Walden students cite and reference an AI tool, they should also include an AI Engagement Reference, which is a log or transcript of the interaction with the AI tool, in an appendix on a new page after the reference(s) list. See formatting tips on APA Appendices.