Librarians Sandy Hervieux and Amanda Wheatley, both of McGill University, created "The ROBOT Test," a framework for evaluating AI tools, especially as these technologies and their outputs are presented by the media. When encountering new information, the AI-literate citizen interrogates the tool, its outputs, and how it is represented by the media and field experts, asking questions about its Reliability, Objective, Bias, Ownership, and Type (ROBOT).
Being AI-literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles.
You can use the ROBOT Test when reading about AI applications to help you consider the legitimacy of the technology.
__________
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
ChatGPT and other Generative AI tools have been known to "hallucinate"; that is, they can produce false information and can do so convincingly. For example, GPT-3, GPT-3.5, and GPT-4 have been known to fabricate citations to works that do not exist. The unsuspecting user then incorporates these fake citations into their own work and unknowingly promotes misinformation.
To be AI-literate, users must constantly question and fact-check the information that Generative AI tools like ChatGPT produce. While this can be cumbersome, not exercising due diligence can have serious repercussions.
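For users comfortable with a bit of scripting, one way to fact-check a suspicious reference is to look it up in an open scholarly index such as Crossref. The short Python sketch below is an illustration rather than an endorsed workflow: the function name and the example citation are hypothetical, and the script simply queries Crossref's public REST API for works whose bibliographic data resembles the citation text. If nothing plausible comes back, the reference deserves extra scrutiny before it is reused.

```python
# Minimal sketch: spot-check a citation produced by a generative AI tool
# against the public Crossref REST API (https://api.crossref.org).
# The citation string and function name below are hypothetical examples.
import requests


def find_candidate_works(citation_text: str, rows: int = 5) -> list[dict]:
    """Search Crossref for works whose bibliographic data matches the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            # Crossref returns titles as a list; fall back if it is missing.
            "title": (item.get("title") or ["<untitled>"])[0],
            "DOI": item.get("DOI"),
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
        }
        for item in items
    ]


if __name__ == "__main__":
    # Hypothetical citation text copied from a chatbot's answer.
    suspect = "Smith, J. (2021). Large language models and library instruction."
    matches = find_candidate_works(suspect)
    if matches:
        for m in matches:
            print(f"{m['year']}  {m['DOI']}  {m['title']}")
    else:
        print("No close matches found; treat the citation as unverified.")
```

Note that the absence of a match does not prove a citation is fabricated (the work may simply not be indexed in Crossref), so a check like this should inform, not replace, human judgment.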
Although multiple tools currently on the market advertise the ability to distinguish content produced by Generative AI from content written by humans, no tool can do so reliably at this time without human intervention and critical thinking. Additionally, in a paper submitted to the International Journal for Educational Integrity (not yet peer-reviewed), researchers from multiple European research institutions studied the efficacy of these tools in detecting AI-generated content and concluded "that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text."
That said, the following resources may be useful, in conjunction with critical thinking and information literacy skills as well as the ROBOT Test, in helping users gauge the likelihood of AI involvement.