AI "hallucination"
The official term in the field of AI is "hallucination": these systems sometimes "make stuff up." They do so because they are probabilistic, not deterministic; each word is generated by sampling from a probability distribution rather than by looking up verified facts.
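To see why a probabilistic system can give different (and sometimes wrong) answers to the same question, consider this toy sketch. The prompt, the candidate words, and their probabilities are all invented for illustration; real models sample from distributions over tens of thousands of tokens.

```python
import random

# Toy illustration: a language model picks each next word by sampling
# from a probability distribution, not by looking facts up in a database.
# The words and probabilities below are invented for demonstration.
prompt = "The theory of general relativity was developed by"
next_word_probs = {
    "Einstein": 0.40,  # correct
    "Newton": 0.35,    # plausible-sounding but wrong
    "Hawking": 0.25,   # plausible-sounding but wrong
}

words, weights = zip(*next_word_probs.items())

# Run the same prompt several times: the sampled answer can differ each run.
for _ in range(5):
    print(prompt, random.choices(words, weights=weights, k=1)[0])
```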
ChatGPT often makes up fictional sources
One area where ChatGPT frequently gives fictional answers is when it is asked to create a list of sources: it can produce citations that look plausible but do not exist. It cannot reliably distinguish between reliable and unreliable sources, and it does not assess the credibility of the sources it names.
Since we've had many questions from students about this, we offer this FAQ:
I can’t find the citations that ChatGPT gave me. What should I do?
ChatGPT may have invented them. Search for each title in Google Scholar, a library database, or the library catalog; if a citation turns up nothing, it most likely does not exist and should not be used. Ask a librarian if you are unsure.
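If you have several citations to check, you can also look them up programmatically. The sketch below queries Crossref's free public REST API (https://api.crossref.org/works), which indexes a large share of published scholarly works; the example citation is invented, and any match still needs human inspection, since fuzzy matching can return near misses and not every real work is in Crossref.

```python
import requests

def find_citation_matches(citation_text: str) -> list[dict]:
    """Search Crossref for records resembling a pasted citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Paste in a citation ChatGPT gave you (this one is made up).
for item in find_citation_matches(
    "Smith, J. (2021). Imaginary advances in widget theory."
):
    title = (item.get("title") or ["(untitled)"])[0]
    print(f"Candidate match: {title} (DOI: {item.get('DOI', 'n/a')})")
```

If no candidate resembles the citation, treat it as fabricated until you can verify it elsewhere.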
There is progress in making these models more truthful
Researchers are making these systems more truthful by grounding them in external sources of knowledge. Microsoft Copilot and Perplexity AI, for example, use internet search results to ground their answers. The internet sources themselves can still contain misinformation or disinformation, but at least Copilot and Perplexity link to the sources they used, so you can begin verifying their answers.
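In outline, these tools follow a retrieve-then-generate pattern: fetch relevant documents first, then instruct the model to answer only from them and to cite what it used. The sketch below shows the pattern only; `web_search` and `llm_complete` are hypothetical placeholders that you would wire to a real search API and a real model API.

```python
# Sketch of "grounding" (retrieval-augmented generation).
# `web_search` and `llm_complete` are hypothetical placeholders.

def web_search(query: str, k: int = 3) -> list[dict]:
    """Hypothetical: return top-k results as {'url': ..., 'snippet': ...}."""
    raise NotImplementedError("Wire this to a real search API.")

def llm_complete(prompt: str) -> str:
    """Hypothetical: return a language model's completion for a prompt."""
    raise NotImplementedError("Wire this to a real model API.")

def grounded_answer(question: str) -> str:
    results = web_search(question)
    sources = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below, and "
        "cite them like [1]. If the sources do not contain the answer, "
        "say so instead of guessing.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

Because the answer comes back with numbered citations pointing to the retrieved pages, a reader can follow each link and check the claim, which is exactly the verification that Copilot and Perplexity make possible.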
Scholarly sources as grounding
There are also systems that combine language models with scholarly sources. One example is Consensus, a search engine that uses AI to search for and surface claims made in peer-reviewed research papers. Ask a plain-English research question, and get word-for-word quotes from research papers related to your question. The source material used in Consensus comes from the Semantic Scholar database, which includes over 200 million papers across all domains of science.
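Semantic Scholar itself exposes a free public Graph API, so you can query the same underlying database directly. Here is a minimal sketch; the endpoint and field names follow Semantic Scholar's documented API, unauthenticated requests are rate-limited, and the example question is just an illustration.

```python
import requests

def search_papers(question: str, limit: int = 5) -> list[dict]:
    """Search the Semantic Scholar Graph API for papers matching a question."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": question,
            "limit": limit,
            "fields": "title,year,url",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in search_papers("Does exercise improve memory in older adults?"):
    print(paper.get("year"), paper.get("title"))
    print(paper.get("url"))
```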