
Generative AI: Supporting AI Literacy, Research, and Publishing

Ethical Issues

Generative AI has already radically changed the information and technology landscape. Yet multiple ethical issues related to access, biases, copyright and intellectual property, data integrity and transparency, privacy, misinformation, and research and open science must be considered when using, or requiring the use of, these tools.

Further Reading

Access and Privilege

Many of the most current Generative AI tools from vendors such as OpenAI, Microsoft, and Google require paid subscriptions, which can be costly for individuals. This pay-to-play model for common Generative AI tools like GPT-4 and DALL-E creates a barrier for people who cannot afford the monthly subscription cost and who do not have access to the most up-to-date tools through their institutions.

Further Reading


Biases

All information, and all people, carry biases, but how those biases will surface in the outputs of ChatGPT and similar tools is not yet well understood. We do know that Generative AI training data relies on corpora and datasets curated from across the open web, which is driven largely by content from North American and other Western countries. Additionally, given the lack of diversity in the technology industry, these tools can reproduce social, cultural, and political biases that reflect not only their underlying training data but also the biases of their developers. This can result in the erasure of people from marginalized groups, including those from the majority world.

Watch this video from Kriti Sharma, an AI technologist, to learn more about AI biases related to gender and race.

Further Reading

Copyright & Intellectual Property

One of the ongoing issues related to Generative AI involves the unauthorized ingestion of both open access and copyrighted works into training data. Several lawsuits have been filed by Getty Images and by collectives of authors and publishers, who allege that their works were included without authorization and that they are not receiving credit (or royalties) when those works surface in Generative AI outputs. There are also challenges in determining authorship and eligibility for copyright protection, a right that has traditionally been extended only to humans. All of this can have academic, legal, and financial implications for corporations, institutions, and individual content creators, who may unknowingly plagiarize or create derivative works from copyrighted material via Generative AI.

Further Reading

Economics & Labor

While AI can automate repetitive tasks and free employees to spend more time on collaboration and creative problem-solving, Generative AI use in industry raises potential harms. The first involves market competition and the potential devaluing of skill sets (e.g., graphic design) that, until now, were highly specialized. This devaluation could lead to job loss and loss of livelihood as human workers are replaced with Generative AI technology. There are also concerns about labor conditions in the development of Generative AI tools, particularly for workers in the Global South who were paid low wages and experienced psychological harm from the explicit, graphic content they were required to view as part of the training process.

Further Reading


Privacy

Since ChatGPT's release, many individuals have expressed concerns about personal privacy and about how usage data are stored and used by Generative AI tools to learn and produce new content. Privacy implications include the potential for increased corporate and digital surveillance, negative impacts on individuals in marginalized groups, and problems stemming from a lack of consent.

Further Reading

Research & Open Science

In January 2023, Nature, widely considered the premier journal in science and technology, published an editorial highlighting ChatGPT's threat to open science. Notably, the editorial remarked, "The big worry in the research community is that students and scientists could deceitfully pass off LLM-written text as their own, or use LLMs in a simplistic fashion (such as to conduct an incomplete literature review) and produce work that is unreliable." The editors also noted several submitted articles that had listed ChatGPT as a co-author, highlighting the lack of accountability Generative AI can take for the content it generates. Finally, Nature now requires that any scholar who uses Generative AI tools document that use in the introduction, methods, and/or acknowledgements sections.


Further Reading