Generative AI: Supporting AI Literacy, Research, and Publishing

What is Artificial Intelligence?

There's no agreed-upon definition of artificial intelligence, though our modern concept of artificial intelligence is often attributed to mathematician Alan Turing, who in 1950, as part of his seminal work Computing Machinery and Intelligence, posed the question, "Can machines think?" Professor John McCarthy, an expert in artificial intelligence and professor at Stanford University, defines artificial intelligence as "the science and engineering of making intelligent machines."

While definitions of artificial intelligence vary by academic discipline, one shared truth remains: ideas about machine learning have evolved over time into myriad areas of study, research, application, innovation, literature, and film that celebrate - and scrutinize - artificial intelligence and its impact on human intelligence.

Narrow and General AI

There are two different categories of AI: Narrow AI and General AI.

Narrow AI:

  • Designed to perform a single or limited task very well
  • Does not have generalized intelligence
  • Examples:
    • Chatbots
    • Self-driving cars
    • Face/speech recognition
    • Games like chess or Go

General AI:

  • Has broad intellectual abilities like humans
  • Can reason, make judgements, solve problems across a wide range of domains
  • Does not yet exist; predictions of when (or whether) it might arrive vary widely
  • If achieved, examples could be:
    • Robots that can perform a variety of physical and mental tasks
    • Truly intelligent virtual assistants
    • Computer systems that can reason and think at human levels

ChatGPT and other generative AI tools sit closer to Narrow AI on the spectrum: they excel at specific tasks but have difficulty transferring that ability to other tasks without human intervention. Yet they possess far broader capabilities than previous chatbots and may pave the way toward General AI, which remains elusive.

[The text above was generated with the assistance of Anthropic Claude. 7/12/23]

Related Terms - Glossary

In order to interrogate and ethically use AI, it's essential to have a basic vocabulary. The following are common terms in the machine learning and AI landscape.

 

Algorithms
"An algorithm is a set of instructions for solving a problem or accomplishing a task" (Investopedia). Algorithms are replicable processes that involve set rules. Examples of common algorithms include: Google's search algorithm, recipes, and the process for tying your shoes.
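As an illustration of "a set of instructions for solving a problem," the short Python sketch below implements a classic algorithm, binary search; it is purely illustrative and not drawn from the guide's sources.

```python
def binary_search(items, target):
    """Search a sorted list for target, halving the search range each step."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid          # found: return its position
        elif items[mid] < target:
            low = mid + 1       # target must be in the upper half
        else:
            high = mid - 1      # target must be in the lower half
    return -1                   # not found

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # prints 3
```

Like a recipe or the process for tying your shoes, the same fixed rules are followed every time, which is what makes an algorithm replicable.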

Automation
"Automation is the use of technology to perform tasks where human input is minimized" (IBM). There are different types of automation: Basic automation, process automation, and intelligent automation. Examples of automation include: Household thermostats, 3D printing, creating email routing rules, and robotic process automation (RPA).

Deep Learning
"Deep learning is a subset of machine learning (ML), where artificial neural networks—algorithms modeled to work like the human brain—learn from large amounts of data" (Oracle). Automated YouTube captioning is an example of deep learning technology. 

Large Language Models
"Large language models (LLMs) are deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets" (Nvidia). ChatGPT and Google Bard are examples of generative AI tools built on large language models. (Image generators such as DALL-E rely on related deep learning techniques rather than on LLMs alone.)

Machine Learning
"The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data" (Oxford Languages). Machine learning is fundamental to automation, deep learning, and large language models.
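To make "drawing inferences from patterns in data" concrete, the illustrative Python sketch below fits a straight line to a handful of invented data points using ordinary least squares, a simple statistical model in the sense of the definition above.

```python
# Fit a line y = a*x + b to example points, then use it to predict.
# The data below are made up for illustration (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Ordinary least squares: slope from the covariance of x and y,
# intercept so the line passes through the mean point.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))   # slope near 2, intercept near 0
print(round(a * 6 + b, 1))        # predicted y for a new input x = 6
```

The model was never told the rule "y is about twice x"; it inferred the pattern from the data, which is the core idea behind machine learning.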

Natural Language Processing
"Natural language processing (NLP) is a form of artificial intelligence (AI) that allows computers to understand human language, whether it be written, spoken, or even scribbled" (Coursera). To be effective, NLP systems are trained on colossal text corpora, often exceeding 500,000 words. Common natural language techniques include sentiment analysis (determining whether something is positive, negative, or neutral), text summarization (creating a summary of a large body of text for readers), and topic modeling (grouping texts with similar topics).
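The sentiment analysis technique mentioned above can be sketched with a toy word-list approach; real NLP systems learn from huge corpora, and the word lists below are invented purely for illustration.

```python
# Toy lexicon-based sentiment analysis: score a text by counting
# words from small, hand-made positive and negative word lists.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great library"))   # positive
print(sentiment("The ending was terrible"))     # negative
```

Modern NLP replaces these hand-made word lists with patterns learned from data, but the goal is the same: mapping raw text to a judgment a computer can act on.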

Neural Network
"A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously" (Amazon).
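The "interconnected nodes in a layered structure" described above can be sketched in a few lines of Python: two inputs flow through one hidden layer of two neurons to a single output. The weights are arbitrary numbers chosen for illustration; a real network would learn them from training data.

```python
import math

def sigmoid(x):
    """Squash any number into the range 0..1, like a neuron 'firing'."""
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: inputs -> hidden layer -> single output."""
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

out = forward([0.5, 0.8],                              # two input values
              hidden_weights=[[0.1, 0.4], [-0.2, 0.3]],  # arbitrary weights
              output_weights=[0.6, -0.5])
print(round(out, 3))   # a value between 0 and 1
```

"Learning" in a real network means nudging those weights, layer by layer, until the outputs match known examples; deep learning simply stacks many such layers.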

Robotics
"Technology dealing with the design, construction, and operation of robots in automation" (Merriam-Webster). Everyday examples can include robot vacuums, robotic process automation (RPA) bots, and virtual personal assistants (e.g., Siri, Alexa).

Training Data
The underlying dataset used to train an algorithm. The content and biases of this data, the ways in which it is sourced, and the biases of the programmer (as humans, we all have biases) all permeate the machine learning model. It's important to be aware of the training data, its sourcing, and its limitations when evaluating machine learning models.

Pop Culture Representations of AI

The following films, TV series, and works of literature are notable for their inclusion of and focus on artificial intelligence. This is a growing list of materials that are available through William & Mary Libraries.

Literature

 

Film & Television

Further Reading