


LLM Hallucinations: Why AI Makes Things Up and How to Fix It - indian - 03-21-2026

If you have used ChatGPT or any other LLM, you have probably run into hallucinations - confident-sounding responses that are completely made up. Understanding why this happens and how to mitigate it is crucial for anyone working with AI.

Why do LLMs hallucinate?
- LLMs are pattern-matching machines, not knowledge databases
- They predict the most likely next token based on training data patterns (see the sampling sketch after this list)
- They have no concept of truth - only statistical probability
- Training data may contain errors, contradictions, or outdated information
- They are designed to always produce an answer, even when they should say 'I don't know'
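
To make the "statistical probability" point concrete, here is a minimal sampling sketch in plain Python/NumPy. The vocabulary and logit values are invented for illustration; a real model scores tens of thousands of tokens, but the selection logic is the same: tokens are chosen by probability, and nothing in the loop checks whether the output is true.

Code:
import numpy as np

def next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick a next-token id from raw model scores (logits).

    The model only ever sees scores - there is no truth check anywhere.
    """
    # Temperature scaling: lower T sharpens the distribution (more conservative),
    # higher T flattens it (more varied outputs).
    scaled = logits / max(temperature, 1e-8)
    # Softmax turns scores into probabilities.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Sample by probability; a plausible-but-wrong token can easily win.
    return int(np.random.choice(len(probs), p=probs))

# Toy continuation of "The capital of Australia is ..."
vocab = ["Canberra", "Sydney", "Melbourne"]
logits = np.array([2.1, 2.0, 0.5])   # made-up scores for illustration
print(vocab[next_token(logits, temperature=0.7)])

With these made-up scores the model usually says Canberra, but Sydney is almost as likely, and nothing penalises it for being wrong. The temperature division is also the knob referred to in point 3 of the mitigation list below.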

Types of hallucinations:
1. Factual errors: Wrong dates, statistics, or attributions
2. Fabricated sources: Citing papers or articles that do not exist
3. Logical inconsistencies: Contradicting themselves within a response
4. Confident nonsense: Presenting completely made-up information with high confidence

How to reduce hallucinations:

1. Retrieval-Augmented Generation (RAG): Ground responses in verified external data (a minimal sketch follows this list)
2. Chain-of-Thought reasoning: Force the model to show its work step by step (see the prompt template below)
3. Temperature control: Lower temperature settings produce more conservative outputs (as in the sampling sketch above)
4. Prompt engineering: Ask the model to cite sources or express uncertainty (both appear in the templates below)
5. Human-in-the-loop: Always verify critical information
6. RLHF/RLVR: Training with human feedback and verifiable rewards improves factual accuracy
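
For point 1, here is a minimal RAG sketch. The embed() function is a placeholder stand-in (any sentence-embedding model would do), and the prompt wording is just one plausible template, not a standard API. The idea is to retrieve the passages most relevant to the question and instruct the model to answer only from them.

Code:
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding - swap in a real sentence-embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embed(question)
    scored = []
    for doc in documents:
        d = embed(doc)
        scored.append((float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))), doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Build a prompt that grounds the answer in retrieved text and allows 'I don't know'."""
    context = "\n\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below. Quote the passage you relied on. "
        "If the context does not contain the answer, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

Note that the grounded prompt also bakes in two ideas from point 4: ask for the source and explicitly permit "I don't know".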
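
For points 2 and 4, the template below asks for step-by-step reasoning, sourced claims, and explicit uncertainty. The wording is only one reasonable phrasing, not a canonical recipe.

Code:
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a prompt that asks for explicit reasoning and uncertainty."""
    return (
        "Work through the problem step by step before giving a final answer.\n"
        "For every factual claim, either name a source or mark it 'unverified'.\n"
        "If you are not confident, say so instead of guessing.\n\n"
        f"Question: {question}\n\n"
        "Reasoning:"
    )

print(chain_of_thought_prompt("When was the first transatlantic telegraph cable completed?"))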

Reinforcement Learning from Verifiable Rewards (RLVR) is a key approach in 2026. Instead of rewarding the model for sounding convincing, it is rewarded for producing results that can be objectively verified as correct. DeepSeek-R1 demonstrated how reasoning can emerge purely through these reward signals.
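
As a toy illustration of the verifiable-reward idea (this is not DeepSeek-R1's actual training code), the reward below is 1 only when the final answer can be checked programmatically against a known result; how convincing the reasoning sounds contributes nothing.

Code:
import re

def verifiable_reward(model_output: str, expected_answer: str) -> float:
    """Toy RLVR-style reward: 1.0 if the final answer verifies, else 0.0.

    Real setups verify with unit tests, math checkers, or exact-match graders;
    the point is that the reward comes from verification, not persuasiveness.
    """
    # Take the last non-empty line as the model's final answer.
    lines = [line.strip() for line in model_output.splitlines() if line.strip()]
    if not lines:
        return 0.0
    # Light normalisation so "Answer: 42." and "42" both count.
    final = re.sub(r"[^0-9a-zA-Z.\- ]", "", lines[-1]).lower()
    return 1.0 if expected_answer.lower() in final else 0.0

print(verifiable_reward("Step 1 ... so the answer is 42", "42"))   # 1.0
print(verifiable_reward("I am absolutely certain it is 41", "42")) # 0.0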

Always verify LLM outputs for critical decisions. What strategies do you use to handle hallucinations?