DeepScience
Artificial Intelligence · Partial

Hallucination elimination and grounding

Language models confidently generate plausible but factually incorrect statements, a phenomenon known as hallucination or confabulation. Retrieval-augmented generation (RAG) reduces but does not eliminate the problem, since models can ignore or misrepresent retrieved context. Reliable attribution, calibrated uncertainty estimation, and detection of knowledge conflicts between parametric and contextual knowledge are all active research areas. There is a fundamental tension between eliminating hallucination and preserving the generative fluency and creativity of language models. A minimal sketch of one of these ingredients, calibration measurement, follows below.
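The sketch below is not from the source; it illustrates one standard way to quantify calibrated uncertainty estimation, the expected calibration error (ECE), which compares a model's stated confidence in its answers against how often those answers are actually correct. The function name, bin count, and toy numbers are illustrative assumptions.

```python
# Hypothetical sketch: expected calibration error (ECE) over a set of model
# answers, each with a confidence score and a correctness flag. A perfectly
# calibrated model (confidence matches empirical accuracy) scores near 0.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by confidence and average |confidence - accuracy| per bin.

    confidences: confidences in [0, 1], one per answer.
    correct:     0/1 flags, whether each answer was factually correct.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # First bin is closed at 0 so confidences of exactly 0.0 are counted.
        mask = (confidences <= hi) & ((confidences > lo) if lo > 0 else (confidences >= lo))
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()  # what the model claimed
        avg_acc = correct[mask].mean()       # what actually held up
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy usage with made-up numbers (assumed, not from the source).
conf = [0.95, 0.90, 0.60, 0.55, 0.30]
ok = [1, 1, 1, 0, 0]
print(f"ECE = {expected_calibration_error(conf, ok):.3f}")
```

The same per-answer confidences could come from token-level log-probabilities, self-reported confidence, or an external verifier; ECE only measures whether those numbers track factual accuracy, not where they come from.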

Research Domains

foundations, systems

Keywords

hallucination, factuality, grounding, retrieval augmented generation, RAG, attribution, citation, calibration, uncertainty estimation, knowledge conflict, confabulation

Last updated: April 8, 2026

Recent Papers (Artificial Intelligence)

Detecting Rare Cortical Connectivity Around the Human Central Sulcus: A Deep Learning Analysis of 37,000+ Tractographies

April 8, 2026 · openalex

Multi-Map Fusion for Weakly Supervised Disease Localization from Globally Assigned Diagnostic Labels in Brain MRI

April 8, 2026 · openalex

Evaluating Segmentation Using Betti-1 Topological Metric: Application to Nasal Cavities in the Context of Airflow Simulation

April 8, 2026 · openalex

Faster 4D Flow MRI Scan with 3D Arbitrary-Scale Super-Resolution

April 8, 2026 · openalex

Iterative confidence-based pseudo-labeling for semi-supervised lung cancer segmentation under annotation scarcity

April 8, 2026 · openalex

FALCON: Unfolded Variational Model for Blind Deconvolution and Segmentation in 3D Dental Imaging

April 8, 2026 · openalex

Diffusion-Based Fourier Domain Deconvolution with Application to Ultrasound Image Restoration

April 8, 2026 · openalex