Latest News and Trends

AI hallucinations remain a challenge despite advances in LLMs, says NTU professor

Despite rapid progress in the development of large language models (LLMs), artificial intelligence (AI) hallucinations—instances where AI generates plausible but factually incorrect responses—continue to pose significant challenges. Professor Yun-Nung Chen of National Taiwan University's Department of Computer Science highlighted the need for greater user participation, and for AI systems to express uncertainty and cite sources, in order to enhance trustworthiness.