Latest News and Trends

OpenAI attributes AI hallucinations to LLMs trained to guess answers

OpenAI has reported that hallucinations in large language models (LLMs) stem primarily from existing training and evaluation frameworks, which incentivize models to guess answers rather than admit uncertainty. The company advocates revising benchmark testing to improve the reliability and trustworthiness of these AI systems.
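
The incentive problem can be illustrated with a small, hypothetical scoring sketch (not OpenAI's actual evaluation code; the penalty values are assumptions): under accuracy-only grading, an abstention scores the same as a wrong answer, so guessing always has a non-negative expected payoff, whereas a scheme that penalizes confident errors makes abstaining the better strategy when the model is likely to be wrong.

```python
# Illustrative sketch only: why accuracy-only grading rewards guessing.
# Under binary accuracy, abstaining ("I don't know") scores 0, the same as a wrong
# answer, so a score-maximizing model should always guess. A grading scheme that
# penalizes wrong answers (the -1 penalty below is an assumed value) makes
# abstaining preferable whenever the chance of being correct is low.

def expected_score(p_correct: float, abstain: bool, wrong_penalty: float) -> float:
    """Expected score of answering vs. abstaining, given probability of being correct."""
    if abstain:
        return 0.0  # abstentions earn no credit but incur no penalty
    return p_correct * 1.0 + (1.0 - p_correct) * wrong_penalty

for p in (0.1, 0.3, 0.7):
    # Binary accuracy: wrong answers cost nothing, so guessing always dominates abstaining.
    guess_binary = expected_score(p, abstain=False, wrong_penalty=0.0)
    # Penalized grading: wrong answers cost -1, so abstaining wins when p < 0.5.
    guess_penalized = expected_score(p, abstain=False, wrong_penalty=-1.0)
    print(f"p(correct)={p:.1f}  binary: guess={guess_binary:.2f} vs abstain=0.00  "
          f"penalized: guess={guess_penalized:.2f} vs abstain=0.00")
```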