Before you use AI-generated content, ask yourself:

- Fact-check the information. Look up key facts, dates, and claims in reliable sources such as library databases or trusted websites.
- Check for missing or fake sources. AI often hallucinates citations, creating fake articles or authors. If AI gives you a source, verify that it actually exists and says what the AI claims it says.
- Review for bias or unfair language. Ask yourself whose perspective is represented and whose is missing.
- You don't have to fact-check alone. Try using library tools like TSTC Library's ONEsearch, Google Scholar, or a librarian's help.
AI tools can generate text, images, and summaries in seconds, but that doesn't mean the content is accurate, fair, or appropriate for academic use.
AI-generated content is often inaccurate, biased, or overly generic. Here are some issues to watch for when using AI tools:
| Problem | What it Means |
|---|---|
| Inaccuracies & Hallucinations | The AI confidently gives information that is false or made up. |
| Missing Sources | AI tools often don't include real citations, or may invent them. |
| Bias & Stereotypes | AI may repeat biased or unfair perspectives based on its training data. |
| Generic Responses | AI outputs may lack depth, originality, or critical thinking. |
| Mismatched Tone or Voice | AI writing may not match your usual writing style or assignment expectations. |
AI tools can make factual errors, especially when asked about recent events, niche topics, or precise details such as dates and statistics.
Generative AI tools like ChatGPT can produce many kinds of content, from quick answers to a question to cover letters, poems, short stories, outlines, essays, and reports. However, that output often contains errors, false claims, or plausible-sounding but completely incorrect or nonsensical answers, so take the time to verify the content and catch these problems.
In addition to text errors, AI can create realistic but fake images, videos, and voices. These are called deepfakes.
Deepfakes use AI technology to make it look like a real person said or did something they never actually did. AI tools can also generate impersonated writing, making it appear that a specific person wrote something when they didn't.
Why this matters in academic work: using or citing fabricated media can spread false information, mislead your readers, and violate academic integrity.
AI can unintentionally spread fake or harmful information. It's important to know the difference between misinformation, disinformation, and malinformation:
| Misinformation | Disinformation | Malinformation |
|---|---|---|
| Definition: False or inaccurate information that is created and spread accidentally, without intention to deceive or harm. | Definition: False information that is created and spread purposefully, with the intention to hide the truth, mislead, and manipulate a person, social group, organization, or country. | Definition: Information that is based in reality but used out of context to inflict harm on a person, organization, or country. |
| Example: Unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or satire taken seriously. | Example: Fabricated or deliberately manipulated audio/visual content; intentionally created conspiracy theories or rumors. | Example: Editing a video to remove important context in order to harm or mislead. |
Some AI-generated content may include manipulated language designed to pressure you, play on your emotions, or trick you into acting against your own interests.
This is called social engineering. Be cautious if AI-generated content urges you to act immediately, asks for personal information, or appeals to fear or other strong emotions.
Never trust AI content blindly.
AI tools are trained on internet data, which often includes social, cultural, and political biases.
As a result, AI-generated content may repeat stereotypes, favor dominant viewpoints, or leave out important perspectives.
Ask yourself: Who is represented in this content, and who is missing? Does it treat all groups fairly?
You are responsible for recognizing and addressing bias.
AI responses are often too simple, too short, or missing key information. Before you use AI content, check that it fully answers the question, includes enough detail, and holds up against other sources.
AI is not a substitute for research, analysis, or deep understanding.
AI tools are trained on information up to a certain date. They are not updated in real time.
For example, the free version of ChatGPT may only know information from before its 2022 or 2023 training cutoff.
AI content often does not provide real sources, or worse, it may invent fake ones. If AI gives you a citation, verify that it exists before using it.
Not crediting the sources of information you use and creating fake citations are both plagiarism, and therefore breaches of academic integrity. Be sure to check TSTC Library's ONEsearch and/or Google Scholar to verify whether the sources are correct or even exist.
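If a citation includes a DOI, one quick extra check is to see whether that DOI actually resolves. The short Python sketch below (my illustration, assuming the third-party `requests` library is installed) queries Crossref's public REST API, which returns a 404 for DOIs that don't exist. Note that this only confirms a record exists; you still need to read the source to confirm it says what the AI claims.

```python
# Minimal sketch: check whether a DOI from an AI-generated citation
# actually exists in Crossref's registry. A fabricated DOI returns 404.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

if __name__ == "__main__":
    print(doi_exists("10.1038/nature14539"))       # a real published article: True
    print(doi_exists("10.9999/made.up.citation"))  # a fabricated DOI: False
```

Checking ONEsearch or Google Scholar by hand works just as well; the point is simply that fake citations are easy to catch once you look.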
AI tools do not typically explain how they generate content or what data they were trained on. You usually don't know where the information came from, how current it is, or whether the underlying sources are reliable.
AI-generated content may use or remix information from copyrighted sources without giving credit.
For example, there have been several lawsuits against tech companies that used images found on the internet to train their AI tools. One such lawsuit was filed by Getty Images, which accuses Stability AI, the maker of Stable Diffusion, of using millions of pictures from Getty's library to train its AI tool. Getty is claiming damages of US $1.8 trillion.
Some AI platforms can also claim ownership of the content they generate. Before using AI content, check the platform's terms of service to see who owns the output and how you are allowed to use it.
A study by Shumailov et al. (2023) found that the inclusion of AI-generated content in training datasets led to model collapse, which is "a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time" (p. 2). Model collapse happens when future AI models are trained mostly on AI-generated content instead of human-created content. Over time, this can cause models to forget rare or unusual information, lose variety in their outputs, and drift further and further from reality.
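To see the intuition, here is a toy Python sketch (my own illustration, not the experiment from Shumailov et al.) in which each "generation" of a very simple model is fitted to samples drawn from the previous generation's output rather than from real data. Small sampling errors compound, and the fitted spread tends to drift toward zero, a miniature version of forgetting the true distribution.

```python
# Toy illustration of model collapse: each generation fits a normal
# distribution to the PREVIOUS generation's synthetic samples, then
# generates new samples from that fit. Because no real data re-enters
# the loop, estimation errors accumulate and the spread decays.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=20)  # generation 0: "real" data

for gen in range(1, 201):
    mu, sigma = samples.mean(), samples.std()   # "train" a simple model
    samples = rng.normal(mu, sigma, size=20)    # next gen sees only AI output
    if gen % 40 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.3f}")
```

With only 20 samples per generation the decay is fast; with more data it is slower but drifts the same way. Keeping genuine human-created data in the training mix is what anchors the model to the true distribution.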