What are AI Hallucinations? And How You Can Avoid Them
AI is great. You can get detailed answers and explanations to pretty much anything in a matter of seconds. But what happens when AI doesn't know what you're talking about? It often just fakes a response with such confidence that it's hard to tell what's legit and what's not. Here are a few ways to reduce hallucinations when using tools like ChatGPT.

Justin Gluska
Updated September 14, 2023

Artificial Intelligence is rapidly permeating our lives, and while it has brought fantastic advancements, it also has some peculiarities.
One such peculiarity is AI hallucinations.
No, your devices aren't starting to have dream-like visions or hear phantom sounds, but sometimes, AI technology will produce an output that seems 'pulled from thin air'.
Confused? You're not alone.
Let's explore what AI Hallucinations mean, the challenges they pose, and how you can avoid them.
The term "AI hallucinations" entered common usage around 2022 with the deployment of large language models like ChatGPT. Users reported that these chatbots seemed to be sneakily embedding plausible-sounding but false data into their content.
This unsettling, undesired quality came to be known as hallucination because of its faint resemblance to human hallucinations, although the two phenomena are pretty distinct.
So, What are AI Hallucinations?
For humans, hallucinations typically involve false perceptions. AI hallucinations, on the other hand, are concerned with unjustified responses or beliefs.
Essentially, it is when an AI confidently spews out a response that is not backed up by the data it was trained on.
If you asked a hallucinating chatbot for a financial report for Tesla, it might randomly insist that Tesla's revenue was $13.6 billion, even though that figure is made up. These AI hallucinations can cause some serious misinformation and confusion, and I see it happen super frequently with ChatGPT.
Why Do AI Hallucinations Happen?
AI performs its tasks by recognizing patterns in data. It predicts future information based on the data it has 'seen' or been 'trained' on.
Hallucinations can happen for a variety of reasons: insufficient training data, encoding and decoding errors, or biases in the way the model encodes or recollects knowledge.
For chatbots like ChatGPT, which generate content by producing each subsequent word based on prior words (including the ones it generated earlier in the same conversation), there is a cascading effect of possible hallucinations as the generated response lengthens.
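To make that cascading effect concrete, here's a back-of-the-envelope sketch in Python. It assumes a fixed, independent per-token error rate, which is not how real language models behave, but the compounding intuition carries over: small per-word risks stack up fast as an answer gets longer.

```python
# Toy model of cascading error, NOT how an LLM actually works: assume each
# generated token independently has a small chance p of being wrong, and ask
# how likely an n-token response is to contain at least one bad token.

def chance_of_at_least_one_error(p: float, n: int) -> float:
    """P(at least one bad token in n tokens) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

for n in (10, 100, 500):
    print(f"{n:>3}-token response: {chance_of_at_least_one_error(0.005, n):.1%}")

# Output:
#  10-token response: 4.9%
# 100-token response: 39.4%
# 500-token response: 91.8%
```

Even a half-percent chance of error per token turns into better-than-even odds of a mistake somewhere in a long answer, which is why rambling responses are where hallucinations tend to sneak in.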
While most AI hallucinations are relatively harmless and honestly somewhat amusing, some cases can bend more towards the problematic side of the spectrum.
In November 2022, Meta's Galactica produced an entire academic paper that quoted a non-existent source: the generated content cited a fabricated paper and attributed it to a real author in the relevant field!
Similarly, OpenAI's ChatGPT, upon request, created a complete report on Tesla's financial quarter – but with completely invented financial figures.
And these are just a couple of examples of AI hallucinations. As ChatGPT continues to gain mainstream traction, it's only a matter of time until we see these pop up even more frequently.
How Can You Avoid AI Hallucinations?
AI hallucinations can be combated through carefully engineered prompts and by making use of applications like Zapier, which has developed guides to help users avoid AI hallucinations. Here are a few strategies based on their suggestions that you might find handy:
1. Fine-Tune & Contextualize with High-Quality Data
Importance of Data: It is often said that an AI is only as good as the data it's trained on. By fine-tuning ChatGPT or similar models with high-quality, diverse, and accurate datasets, the instances of hallucinations can be minimized. Obviously you can't re-train the model if you aren't OpenAI, but you can fine-tune your input or requested output when asking direct questions.
Implementation: Regularly updating training data is the most effective way of reducing hallucinations, and having human reviewers evaluate and correct the model's responses during training further increases reliability. If you don't have access to fine-tune the model (as is the case with ChatGPT), you can ask questions with simple "yes" or "no" answers to limit hallucinations. I've also found that pasting in the context of what you're asking lets ChatGPT answer questions a lot better, as in the sketch below.
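Here's a minimal sketch of that context-pasting idea using OpenAI's `openai` Python package (v1+ client). The model name and the pasted figures are placeholders, so swap in whatever model and source material you actually have; the point is to hand the model verified context and constrain it to a yes/no answer.

```python
# A minimal sketch, assuming the `openai` Python package (v1+) and an
# OPENAI_API_KEY set in your environment. Figures below are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Paste context you've verified yourself, instead of trusting the model's memory.
context = (
    "Excerpt from a source I trust:\n"
    "Tesla total revenue, Q2 2023: $24.9 billion\n"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use any chat model you have access to
    messages=[{
        "role": "user",
        "content": "Using ONLY the context below, answer yes or no: "
                   "was Tesla's Q2 2023 revenue above $20 billion?\n\n" + context,
    }],
)
print(response.choices[0].message.content)  # should answer from the pasted context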
2. Provide User Feedback
Collective Improvement: Go ahead and tell ChatGPT it was wrong, or direct it to explain where it went off track. ChatGPT can't retrain itself based on what you say, but flagging a response is a great way of letting the company know the result is wrong and should be something else.
3. Assign a Specific Role to the AI
Before you begin asking questions, tell the AI what it's supposed to be. If you set up the role at the start of the conversation, the rest of the walk becomes a lot easier. While this doesn't always translate to fewer hallucinations, I've noticed you'll get fewer overconfident answers. Make sure to double-check all the facts & explanations you get, though.
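If you're calling the API rather than chatting in the browser, the same trick maps to the system message. A minimal sketch, again assuming the `openai` v1+ Python client and a placeholder model name:

```python
# A minimal sketch of role assignment via the system message, assuming the
# `openai` Python package (v1+) and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # The role framing, plus explicit permission to admit uncertainty,
        # tends to dampen overconfident answers.
        {"role": "system", "content": (
            "You are a careful financial analyst. Only state figures you are "
            "sure of, and say 'I don't know' when you are not."
        )},
        {"role": "user", "content": "How has Tesla's revenue trended since 2020?"},
    ],
)
print(response.choices[0].message.content)
```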
4. Adjust the Temperature
While you can't change the temperature directly within ChatGPT, you can adjust it on the OpenAI Playground. Temperature is what gives the model more or less variability: the more variable, the more likely the model is to get off track and start saying just about anything. Keeping the model at a reasonable temperature will keep it in tune with whatever conversation is at hand.
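Outside the Playground, the same knob is a single API parameter. A minimal sketch (same assumptions as above: `openai` v1+ client, placeholder model name) comparing a low and a high setting:

```python
# A minimal sketch of the temperature parameter, assuming the `openai`
# Python package (v1+). OpenAI's chat API accepts temperatures from 0 to 2.
from openai import OpenAI

client = OpenAI()

for temperature in (0.2, 1.8):
    response = client.chat.completions.create(
        model="gpt-4",            # placeholder model name
        temperature=temperature,  # low = focused, high = more likely to wander
        messages=[{"role": "user", "content": "Give me one fact about Tesla."}],
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

For fact-oriented questions, staying near the bottom of that range is the safer default.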
5. Do Your Own Research!
As silly as it sounds, fact-checking the results you get from an AI model is the only surefire way to know whether they're trustworthy. This doesn't really reduce hallucinations, but it can help you differentiate fact from fiction.
AI Is Not Perfect
While these techniques can significantly help to curtail AI hallucinations, it's important to remember that AI is not foolproof!
Yes, it can crunch enormous amounts of data and provide insightful interpretations within seconds. However, like any technology, it does not possess consciousness or the visceral ability humans have to differentiate between what's true and what's not.
AI is a tool, dependent on the quality and reliability of the data it's been trained on, and on the way we use it. And while AI has caused a revolution in technology, it’s important to be aware and wary of these AI hallucinations.
I do have a lot of confidence that things will get better as these models are retrained and updated, but we'll probably always have to deal with the fake confidence these tools spew when they really don't know what they're talking about. Skepticism is key. Let's not let our guard down, and let's keep using our intuition.
Want to Learn Even More?
If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.