I can't find the citations that ChatGPT gave me. What should I do?
ChatGPT (the free version) makes up citations that don't exist.
ChatGPT might give you articles by an author who usually writes about your topic, or even name a journal that has published on your topic, but the titles, page numbers, and dates are completely fictional. This is because the free version of ChatGPT is not connected to web search, so it has no way of identifying actual sources.
You can check whether any are real by searching Library Search on our home page or Google Scholar, but chances are the sources do not exist.
It's better to use ChatGPT for tasks like brainstorming, outlining, summarizing, and other writing and text-related tasks.
It's not designed to be a search engine. Use Library Search, Google Scholar, or databases for your discipline instead.
Another option for searching the web is Perplexity AI. It combines a language model with a search engine and provides links to its sources, so you can fact-check. It doesn't include all the scholarly resources you would find in Library Search or Google Scholar, but it can be a complementary tool for finding web search results with natural language.
How can I fact-check the information that ChatGPT and other language models give me?
If you are using a model that links to its sources (like Copilot, Perplexity, or Gemini), follow the links and read the original pages. Make sure the AI-generated summary aligns with the content of the page it came from. And make sure the page content is relevant to the task you asked the model to do.
If you are using the free version of ChatGPT (without links to sources), you will want to do a quick web search to find out whether what it’s saying is true. Look for more than one source to verify the information. Wikipedia can be helpful, as can mainstream news sites that employ fact-checkers.
Because the free version of ChatGPT doesn’t have an understanding of facts, it’s often better to use a model that links to its sources, like Perplexity or Copilot. This makes it easier to fact-check.
Since websites can also contain misinformation, try using the SIFT Method: Stop, Investigate the source, Find better coverage, and Trace claims to the original context.
What is hallucination (in models like ChatGPT)?
Hallucination is the term for when models like ChatGPT output false information as if it were true. Even though the AI may sound very confident, sometimes its answers are just plain wrong.
Why does this happen? AI tools like ChatGPT are trained to predict what words should come next in the conversation you are having with it. They are really good at putting together sentences that sound plausible and realistic.
However, these AI models don't understand the meaning behind the words. They lack the logical reasoning to tell whether what they are saying actually makes sense or is factually correct. They were never designed to be search engines. Instead, they might be thought of as “wordsmiths”: tools for summarizing, outlining, brainstorming, and the like.
So we can't blindly trust that everything they say is accurate, even if it sounds convincing. It's always a good idea to double-check important information against other reliable sources.
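If you're curious what “predicting the next word” looks like in practice, here is a minimal sketch in Python. It assumes the Hugging Face transformers library and uses the small, openly downloadable GPT-2 model purely as a stand-in; ChatGPT works on the same principle but is far larger.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# "transformers" library and the openly downloadable GPT-2 model.
# (ChatGPT works on the same principle but is much larger.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The study was published in the Journal of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# The model only ranks plausible continuations; nothing here checks
# whether a continuation names a journal that actually exists.
probs = logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>20}  probability={p.item():.3f}")
```

Whichever continuation scores highest is what gets generated, whether or not it corresponds to a real source. That is the mechanism behind hallucinated citations.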
Here’s a tip: models that are grounded in an external source of information (like web search results) hallucinate less often. The model searches for relevant web pages, bases its answer on what it finds, and links to the pages each part of the answer came from, which makes the result easier to fact-check.
Examples of grounded models are Microsoft Copilot, Perplexity, and ChatGPT Plus (the paid version).
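To make the idea of grounding concrete, here is a deliberately simplified Python sketch. Both helper functions are hypothetical stand-ins, not real APIs for Copilot, Perplexity, or any other product; the point is only the order of operations: search first, answer from the retrieved text, and keep the links.

```python
# A simplified illustration of "grounding." Both helpers below are
# hypothetical placeholders, not calls to any real service.

def search_web(query):
    # Hypothetical stand-in for a real search API.
    return [("https://example.org/article", "Text retrieved from the page ...")]

def ask_llm(question, context):
    # Hypothetical stand-in for a language-model call that is instructed
    # to answer ONLY from the supplied context.
    return f"Answer to {question!r}, based on the retrieved text."

def grounded_answer(question):
    pages = search_web(question)                      # 1. retrieve real pages
    context = "\n\n".join(text for _, text in pages)  # 2. hand them to the model
    answer = ask_llm(question, context)               # 3. answer from that text
    sources = [url for url, _ in pages]               # 4. keep links for fact-checking
    return answer, sources

print(grounded_answer("When was the journal first published?"))
```

Because the answer is tied to retrieved pages, you can follow each link and verify the claims yourself, which is exactly what the fact-checking advice above recommends.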
Is using ChatGPT for coursework considered cheating?
It depends. At the University of Central Missouri, it’s up to each instructor to set policies for classroom use. So if your instructor doesn’t allow it, using ChatGPT is cheating, especially if you use it to write an academic paper.
However, many instructors do allow it for specific assignments or tasks. Always ask your instructor what their policies are, and ask before you begin the assignment so you don't have to start over if generative AI isn't allowed.
Even if your instructor does allow it for particular uses, it’s still against the principles of academic integrity to represent a paper entirely written by ChatGPT as your own.
How can I protect my privacy when using ChatGPT?
First, don’t enter any private or confidential information into ChatGPT and similar tools. Developers may review your entries to improve the next version of their model.
If you want to make sure your inputs aren't used to improve the model, you can turn off that feature in the settings. Click your name, then Settings, then Data controls, and turn off the switch labeled “Improve the model for everyone.”
Another option is the feature called “Temporary chat.” At the top of the page, click the menu that says “ChatGPT,” then choose “Temporary chat.” Your chat won’t appear in your history, and ChatGPT won’t save anything from your conversation.
You can do the same in other tools. In Perplexity’s settings, the switch is called “AI data retention.” In Google’s Gemini, click the “Activity” button in the lower left; at the top of the page that opens, click the menu that says “Turn off.” Claude doesn’t use your inputs for training unless you opt in.