Show simplified item record
dc.contributor.author | Touameur, Ouissem
dc.date.accessioned | 2025-03-17T10:24:57Z
dc.date.available | 2025-03-17T10:24:57Z
dc.date.issued | 2024
dc.identifier.uri | http://depot.umc.edu.dz/handle/123456789/14533
dc.description.abstract |
Corresponding author email: ouissem.touameur@gmail.com
From Hallucinations to Truth: Strategies for Improving Chatbot
Accuracy and Trustworthiness
Touameur, Ouissem*; Harrag, Fouzi
Farhat Abbas, Setif University
Abstract
Hallucinations in chatbot systems, instances where the model generates
inaccurate or entirely fabricated information, pose a significant challenge to the
reliability and trustworthiness of AI-driven communication. This paper provides
a comprehensive review of the state of the art in tackling hallucinations,
detailing their underlying causes, varied manifestations, and the landscape of
existing mitigation strategies. We critically examine leading approaches,
including knowledge augmentation models, advanced fine-tuning methods,
automated fact-checking systems, and the emerging role of explainable AI in
identifying and reducing hallucinations. |
fr_FR |
dc.title |
From Hallucinations to Truth: Strategies for Improving Chatbot Accuracy and Trustworthiness |
fr_FR |
dc.type |
Article |
fr_FR |
File(s) in this item
This item appears in the following collection(s)