Namaste Yogis. Welcome to the Blockchain & AI Forum, where your blockchain and artificial intelligence technology questions are answered, mostly correctly! Here no question is too mundane, and as a bonus, a proverb is included. Today’s question was submitted by Aaron from Dallas, who wants to know: is it true that artificial intelligence can hallucinate?
Aaron, you came to the right place. The short answer is yes, artificial intelligence can and does “hallucinate,” but don’t panic, it doesn’t mean your AI has taken, or was given, LSD! Ha, ha. Before we go any further, let’s take a moment to define what it means for AI to hallucinate.

Within the context of AI, hallucinations are outputs, e.g., words, phrases, or images, generated by the AI model (the program) that are nonsensical, grammatically garbled, or factually wrong. In other words, the model made a mistake. The error can be simple, e.g., a misspelled word, or consequential. For example, the model may have generated “facts” out of thin air. Put another way, it made the facts up. Or it may have combined pieces of data that are individually correct but, when fused together, are wrong or nonsensical. Holy computer glitch, Batman!
Why would an AI model hallucinate, you ask? I offer four reasons. First, the model may not have been trained on enough data, or on recent enough data. Did you know that when GPT-4 launched, its training data only ran through late 2021? Second, perhaps the model was trained on noisy or dirty data. The old garbage in, garbage out explanation! Here is a third reason: maybe the model was given insufficient context. Suppose you were told only the punch line of a joke. In its proper context it might be the most hilarious statement in Western civilization, but without the setup it falls flat. Last, an AI model may hallucinate if it is not given enough constraints, or parameters, to narrow down the answer. Imagine being asked to guess a number between 1 and 10 versus a number between 1 and 1,000. Would the difference affect your accuracy? We both know the answer.
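For the tinkerers among you, here is a tiny sketch in Python that illustrates reason number one. It is my own toy example, not how ChatGPT or any real chatbot works: a character-level Markov chain “trained” on just two short sentences. Because the toy model has seen so little data, its output quickly drifts into grammatical-looking gibberish, which is the homespun version of a hallucination.

```python
import random

# A toy character-level Markov chain, "trained" on a tiny corpus.
# With so little data the model has seen very few contexts, so its
# output quickly wanders into nonsense: a crude, homemade stand-in
# for reason #1 above (not enough training data). This is NOT how
# ChatGPT works; it's only an illustration.

def build_chain(text, order=2):
    """Map every 'order'-character context to the characters that followed it."""
    chain = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        chain.setdefault(context, []).append(text[i + order])
    return chain

def generate(chain, seed, length=80, order=2):
    """Grow the text by repeatedly sampling a next character for the latest context."""
    out = seed
    for _ in range(length):
        options = chain.get(out[-order:])
        if not options:  # the model never saw this context during "training"
            break
        out += random.choice(options)
    return out

tiny_corpus = "the cat sat on the mat. the dog ate the cat food."
chain = build_chain(tiny_corpus)
print(generate(chain, "th"))  # often prints plausible-looking nonsense
```

Run it a few times and you will see it stitch fragments of those two sentences into statements nobody ever wrote, which is, in miniature, exactly what a hallucination is.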
Now you know that AI models may “hallucinate,” and why. With this new understanding, remember to use common sense. AI isn’t perfect; it is subject to errors and to bias. Nevertheless, it is a great leap forward. In keeping with tradition, I will end with today’s proverb, which comes from Jordan and describes people who won’t stop talking: “He swallowed a radio.”
Until next time.
Yogi Nelson
