Reed Library

AI Literacy

Critical Thinking With AI

Being AI-literate means applying critical thinking whenever you interact with AI content. AI can be persuasive: it produces confident answers, realistic images, and human-like text. This section offers strategies to evaluate AI-generated information, scrutinize and verify AI outputs, and avoid being misled.

Hallucinations and Verification

One of the biggest pitfalls of tools like ChatGPT is that they sometimes produce incorrect information that sounds perfectly plausible. These mistakes are often called “hallucinations”: the AI literally makes things up. For instance, ChatGPT might generate a fake quote or a bogus reference that looks real. Researchers have found that AI tools such as ChatGPT and Copilot often provide fabricated data that appears authentic (MIT Sloan Teaching & Learning Technologies, 2024). Never assume an AI’s answer is 100% correct. Double-check facts against reliable references (textbooks, credible websites, scholarly articles). If an AI provides a citation, try to find the original source – often you’ll discover the AI cited something that doesn’t actually support its statement, or even a source that doesn’t exist!

AI text may contain other inaccuracies. Verify specific details such as dates, statistics, and names; AI might, for example, swap historical dates or mix up two different research studies. If something looks slightly off or unfamiliar, investigate further. Developing a healthy skepticism of AI output is key: apply general information literacy practices, such as cross-verifying unusual claims against trusted sources and checking where citations come from.

When using artificial intelligence, it is important to evaluate the tool itself and the tool’s output critically. Ask yourself these questions:

What is the purpose of the tool?
How is this tool funded? Does the funding impact the credibility of the output?
What, if any, ethical concerns do you have about this tool? 
Does the tool ask you to upload existing content, such as an image or paper? If so, are there copyright concerns? Is there a way to opt out of including your uploaded content in the training corpus?
What is the privacy policy? If you are assigning this tool in a class, be sure to consider any FERPA concerns.
What corpus or data was used to train the tool or is the tool accessing? Consider how comprehensive the data set is (for example, does it consider paywalled information like that in library databases and electronic journals?), if it is current enough for your needs, any bias in the data set, and algorithmic bias.
If reproducibility is important to your research, does the tool support it?
Is the information the tool creates or presents credible? Because generative AI produces content as well as, or instead of, returning search results, it is important to read across sources to determine credibility.
If any evidence is cited, are the citations real or "hallucinations"?

Images and Media

AI can generate not just text but also images, audio, and video. Deepfakes, or very convincing media designed to impersonate a real person, can be particularly challenging to identify. If you see a sensational image or video (like a “photo” of an event that never happened), consider that it could be AI-generated. Look for signs like unnatural details (extra fingers on a hand in an AI image, or strange reflections) and corroborate with other sources. Don’t share or trust shocking content until you verify it through reliable news outlets. Being AI-literate means asking questions like: who created this? For what purpose?

When Not to Use AI

A part of critical engagement is knowing when not to use AI. For example, AI might be great at generating a quick summary of a topic, but not very good at giving deep insight or accurate references. It's best to use AI as a supplement, not a crutch. Always maintain a mindset of “trust, but verify” with any AI-provided information.

Always confirm with your instructor which uses or tools are acceptable for any given assignment.

MIT Sloan Teaching & Learning Technologies. (2024, November 12). When AI gets it wrong: Addressing AI hallucinations and bias. https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/