Generative AI: Supporting AI Literacy, Research, and Publishing

How Large Language Models Work

In a simple sense, Large Language Models (LLMs) work by statistically predicting the next most likely words or output, and they prioritize fluency over accuracy. While GPT-4 is an improvement over previous models, it's still not perfect. Although LLMs generate text, they don't actually understand it, nor do they know anything about the real world. Remember, these are "stochastic parrots" (Bender et al.), not all-knowing magic beings or anything like a human intelligence, no matter how fluent they sound!
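To make "statistically predicting the next most likely word" concrete, here is a toy sketch. It uses a simple bigram word-count model rather than a neural network, so it is an illustration of the *idea*, not of how real LLMs are actually built (they use neural networks over subword tokens and far larger training data), but the core mechanic is the same: pick a likely continuation based on patterns in the training text, with no understanding of what the words mean.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word follows which in a tiny
# corpus, then predict by picking the statistically most common follower.
# This is a stand-in illustration, not how production LLMs are implemented.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Most frequent continuation. The output can sound fluent, but the
    # model has no idea what a "cat" is -- it only knows word statistics.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat": it follows "the" twice, vs. once for "mat" or "fish"
```

Scaled up by many orders of magnitude, this is why LLM output is fluent but not guaranteed to be true: the model outputs what is statistically plausible, not what is verified.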

The following video from Gartner articulates this, as well as the power, and other pitfalls, of these tools.

Use Cases for Generative AI

Generative AI tools can be useful for:

  • Defining well-known terms and topics
  • Starting research, when initial accuracy isn't critical (you just need some keywords and ideas), or on topics that you know well or that are well documented (but beware of "hallucinations" - basically, all LLMs lie) - FACT CHECK!
  • Identifying top scholars in a field or subject area, especially if you’re looking for diverse scholars
  • Brainstorming topic ideas, keywords, subject headings or possible databases to search 
  • Narrowing down your research question. The more specific you are with your prompts, the better
  • Summarizing and simplifying dense text (though its accuracy and utility may vary)
  • Translating between common languages
  • Generating audiovisuals and code
  • Classifying or analyzing large datasets or organizing info

They are not recommended for:

  • Use if prohibited by your instructor, PI or faculty mentor, the journals you or your collaborators plan to submit to, or other stakeholders
  • Current (or future) events or things that change quickly, unless the tool is connected to the internet (e.g., ChatGPT is only trained on data up to Sept. 2021, but Bing Chat will give you search links)
  • Topics that you don't know well, or where accuracy is critical, without verifying against multiple sources (because of "hallucinations" - basically, all LLMs lie) - FACT CHECK!
  • Topics that are ill-defined, obscure, non-mainstream, non-Western, or personal - basically, things that don't have much information online about them or that AI can't "know" (How to Get an AI to Lie To You)
  • Sensitive or potentially harmful or offensive topics
  • Anything private or confidential
  • Generating whole papers
  • Generating citations and bibliographies

Potential Benefits and Harms

Advocates and critics of artificial intelligence have identified a host of potential benefits and harms of LLMs, which should be considered when using Generative AI tools. Although some of the following items originated from a Lifelong Literacy Institute (LILi) webinar, Using ChatGPT to Engage in Library Instruction? Challenges and Opportunities, this list reflects a broader understanding of the potential benefits and harms:

Potential Benefits:

  • Translation capabilities and support for multilingual learners
  • Ability to break down concepts and improve reading comprehension (or simplify complicated text)
  • Ability to quickly generate text, images, code, etc.
  • Ability to automate time-consuming or tedious work
  • Support for those with disabilities
  • Usefulness in getting started and getting feedback
  • "Force multiplier" for effort

Potential Harms:

  • May encourage reliance on technology rather than critical thinking, and discourage critical learning, researching, and writing
  • Privacy, copyright, and other ethical issues
  • Tendency to "hallucinate" or lie
  • Potential for widespread misinformation, disinformation, and harmful or derogatory speech
  • Bias in results - absence or removal of material about marginalized groups
  • Could encourage academic dishonesty

Because we're in the Wild West of Generative AI, and we are all still learning about the impacts of this technology, this list of benefits and harms is likely to expand over time.

Academic Integrity Considerations

William & Mary's Honor Code is a time-honored standard built upon the premise of mutual trust: that community members will not "lie, cheat, or steal, either in...academic or personal life." While the 2022-23 Student Handbook doesn't explicitly mention Generative AI, schools and faculty may have their own standards for how - or if - ChatGPT and other Generative AI tools can be used to complete assignments. To ensure compliance with the Honor Code (and avoid any student conduct violations), all students should:

  1. Review the syllabus for each course to determine if and/or to what extent Generative AI is permitted; AND
  2. Request and gain permission from individual faculty members prior to using Generative AI to complete assignments or conduct research.

Students who are engaging in undergraduate research, or who are working as a Teaching Assistant, Graduate Assistant, or work-study student, should consult with their direct supervisor and Faculty Advisor to determine any limitations of using Generative AI for these purposes.