Artificial Intelligence: A Guide for Students

Provides a general overview of uses, tools, and issues with GenAI (generative artificial intelligence).

Why you should always double-check AI output

AI tools can generate text, images, and summaries in seconds, but that doesn't mean the content is accurate, fair, or appropriate for academic use.

AI-generated content is often:

  • Incorrect or misleading
  • Incomplete or lacking sources
  • Biased or one-sided
  • Too generic to meet your assignment requirements

Here are some issues to watch for when using AI tools:

  • Inaccuracies & Hallucinations: The AI confidently gives information that is false or made up.
  • Missing Sources: AI tools often don't include real citations, or may invent them.
  • Bias & Stereotypes: AI may repeat biased or unfair perspectives based on its training data.
  • Generic Responses: AI outputs may lack depth, originality, or critical thinking.
  • Mismatched Tone or Voice: AI writing may not match your usual writing style or assignment expectations.

Accuracy

AI tools can make factual errors, especially when: 

  • They were trained on outdated or incorrect information.
  • They try to "fill in the blanks" by guessing.
  • They misunderstand the question or prompt.

Generative AI tools like ChatGPT can produce many kinds of content, from quick answers to a question to cover letters, poems, short stories, outlines, essays, and reports. That output often contains errors, false claims, or plausible-sounding but completely incorrect or nonsensical answers, so take the time to verify the content and catch these problems.

Deepfakes and Impersonation

In addition to text errors, AI can create realistic but fake images, videos, and voices. These are called deepfakes.

Deepfakes use AI technology to make it look like a real person said or did something they never actually did. AI tools can also generate impersonated writing, making it appear that a specific person wrote something when they didn't.

Why this matters in academic work:

  • Deepfakes and impersonation can easily spread false information.
  • You may accidentally share or cite something that looks real but is fake.
  • Generated content can be exploited, leading to reputational damage or financial harm to individuals or organizations.

Fact-checking is your responsibility. If you can't verify that something is real, don't use it in your academic work.

Misinformation, disinformation, & malinformation

AI can unintentionally spread fake or harmful information. It's important to know the difference:

Misinformation

  • Definition: False or inaccurate information that is created and spread accidentally, without intention to deceive or harm.
  • Example: Unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or satire taken seriously.

Disinformation

  • Definition: False information that is created and spread purposefully, with the intention to hide the truth, mislead, and manipulate a person, social group, organization, or country.
  • Example: Fabricated or deliberately manipulated audio/visual content, or intentionally created conspiracy theories and rumors.

Malinformation

  • Definition: Information that is based in reality but used out of context to inflict harm on a person, organization, or country.
  • Example: Editing a video to remove important context in order to harm or mislead.

Phishing & social engineering

Some AI-generated content may include manipulated language designed to:

  • Trick you into sharing personal information
  • Get you to click dangerous links
  • Pressure you into unethical behavior

This is called social engineering. Be cautious if AI-generated content:

  • Asks for your passwords or personal information
  • Includes suspicious links
  • Tries to create fear or urgency

Never trust AI content blindly. 
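
If you are curious how such red flags can be checked mechanically, here is a minimal sketch in Python. The phrase list, the regular expression, and the example message are all illustrative assumptions for this sketch; real phishing detection is far more involved, and this is no substitute for your own judgment.

```python
import re

# Illustrative red-flag phrases; these are assumptions for this sketch,
# not a complete or reliable phishing detector.
URGENCY_PHRASES = [
    "act now",
    "urgent",
    "immediately",
    "account will be closed",
    "verify your password",
]

# Links pointing at a raw IP address instead of a domain name are a classic warning sign.
RAW_IP_LINK = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}")


def red_flags(text: str) -> list[str]:
    """Return the social-engineering warning signs found in the text."""
    found = []
    lowered = text.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            found.append(f"urgency or credential request: {phrase!r}")
    if RAW_IP_LINK.search(text):
        found.append("link uses a raw IP address instead of a domain name")
    return found


message = "URGENT: verify your password at http://203.0.113.7/login or your account will be closed."
for warning in red_flags(message):
    print("warning:", warning)
```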

Bias & discrimination

AI tools are trained on internet data, which often includes social, cultural, and political biases.

As a result, AI-generated content may:

  • Leave out important voices or perspectives
  • Reinforce harmful stereotypes
  • Favor dominant cultural views

Ask yourself:

  • Whose voices are missing?
  • Is this content fair and respectful?
  • Could this language hurt or exclude someone?

You are responsible for recognizing and addressing bias.

Comprehensiveness

AI responses are often too simple, too short, or missing key information. Before you use AI content, check:

  • Does it fully answer the question?
  • Does it include all relevant details?
  • Are important perspectives or sources missing?

AI is not a substitute for research, analysis, or deep understanding.

Currency

AI tools are trained on information up to a certain date. They are not updated in real time.

For example, ChatGPT's free version may only know information from before 2022 or 2023.

If you need recent news, current research, or up-to-date information, don't rely on AI! Instead, use current, credible resources in your research.

Sources

AI content often does not provide real sources, or worse, it may invent fake ones. If AI gives you a citation:

  • Double-check that the source exists
  • Verify that it actually says what the AI claims
  • Make sure it's appropriate for academic work

Failing to credit the sources of information you use and creating fake citations are both forms of plagiarism, and therefore breaches of academic integrity. Be sure to check TSTC Library's ONEsearch and/or Google Scholar to verify that the sources are correct and actually exist.
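
One way to start checking a citation programmatically is to query a public scholarly index. The sketch below uses Crossref's open REST API (api.crossref.org) to search for a title; treat it as a hedged starting point that complements, rather than replaces, checking ONEsearch or Google Scholar.

```python
import requests  # third-party: pip install requests


def crossref_lookup(title: str, rows: int = 3) -> list[dict]:
    """Search Crossref's public catalog for scholarly works matching a title."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]


# Check a citation an AI tool produced before trusting it: if no result
# matches the claimed title, authors, and DOI, treat the citation as suspect.
for item in crossref_lookup("The Curse of Recursion: Training on Generated Data Makes Models Forget"):
    title = item.get("title", ["<untitled>"])[0]
    print(title, "| DOI:", item.get("DOI"))
```

If the returned titles, authors, and DOIs don't match what the AI gave you, assume the citation was fabricated.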

Transparency

AI tools do not typically explain how they generate content or what data they were trained on. You usually don't know:

  • What sources were included in the training data
  • If the information is complete or biased
  • Whether private or copyrighted content was used without permission

Copyright

AI-generated content may use or remix information from copyrighted sources without giving credit.

For example, there have been several lawsuits against tech companies that used images found on the internet to train their AI tools. In one such lawsuit, Getty Images accuses Stability AI, the maker of Stable Diffusion, of using millions of pictures from Getty's library to train its AI tool, and is claiming damages of US $1.8 trillion.

Some AI platforms can also claim ownership of the content they generate. Before using AI content:

  • Check the tool's copyright policies
  • Make sure you have the right to use, modify, and share the content
  • Always cite the AI tool in your work when required

Model Collapse

A study by Shumailov et al. (2023) found that including AI-generated content in training datasets led to model collapse, "a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time" (p. 2). In other words, model collapse happens when future AI models are trained mostly on AI-generated content instead of human-created content. Over time, this can cause:

  • Lower quality information
  • Recycled errors
  • Less diverse and creative content

Academic work should promote original thinking, not AI recycling. The short simulation below illustrates why.
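
A toy simulation makes the mechanism easier to see. In this sketch (a hedged illustration, not the authors' code), each "model" is simply a normal distribution fitted to data sampled from the previous model; with no fresh human data, the fitted distribution drifts away from the original one across generations.

```python
import random
import statistics

# Toy illustration of model collapse, not a reproduction of the paper's
# experiments. The "model" here is just a normal distribution fitted to its
# training data; each generation trains only on the previous model's output.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # original human-created data

for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: mean = {mu:+.3f}, stdev = {sigma:.3f}")
    # The next generation never sees the original data, only synthetic samples,
    # so estimation errors compound and the fitted distribution drifts.
    data = [random.gauss(mu, sigma) for _ in range(50)]
```

Because each generation estimates its parameters from a small synthetic sample, errors accumulate and information about the original distribution is gradually lost, which is the same dynamic Shumailov et al. describe at the scale of large language models.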