Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation
Sirui Xia | Xintao Wang | Jiaqing Liang | Yifei Zhang | Weikang Zhou | Jiaji Deng | Fei Yu | Yanghua Xiao
Findings of the Association for Computational Linguistics: NAACL 2025
Retrieval-Augmented Generation (RAG) has been widely adopted to enhance Large Language Models (LLMs) in knowledge-intensive tasks. To improve the credibility and verifiability of RAG systems, Attributed Text Generation (ATG) has been proposed, which adds citations to retrieved knowledge in LLM-generated responses. Prior methods mainly adopt coarse-grained attribution, with passage-level or paragraph-level references or citations, which falls short in verifiability. This paper proposes ReClaim (Refer & Claim), a fine-grained ATG method that alternates the generation of references and answers step by step. Unlike previous coarse-grained attribution, ReClaim provides sentence-level citations in long-form question-answering tasks. Through extensive experiments, we verify the effectiveness of ReClaim across various settings, achieving a citation accuracy rate of 90%.
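As a rough illustration of the interleaved generation loop the abstract describes, the sketch below alternates a Refer step (quote one supporting sentence from the retrieved passages) and a Claim step (write one answer sentence that cites it). The `llm` callable, the prompts, the `DONE` sentinel, and the bracketed citation format are all hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of interleaved reference-claim generation, assuming a
# generic `llm` callable (prompt -> text). Everything below is illustrative,
# not the authors' code.
from typing import Callable

def reclaim_answer(
    question: str,
    passages: list[str],
    llm: Callable[[str], str],
    max_steps: int = 8,
) -> str:
    """Alternate a Refer step (select one supporting sentence) and a Claim
    step (write one answer sentence citing it), yielding an answer with
    sentence-level citations."""
    context = "\n".join(passages)
    parts: list[str] = []
    for _ in range(max_steps):
        # Refer: quote one verbatim passage sentence supporting the next claim.
        reference = llm(
            f"Question: {question}\nPassages:\n{context}\n"
            f"Answer so far: {' '.join(parts)}\n"
            "Quote ONE passage sentence that supports the next claim, "
            "or reply DONE if the answer is complete."
        ).strip()
        if reference == "DONE":
            break
        # Claim: generate one answer sentence grounded in that reference.
        claim = llm(
            f"Reference: {reference}\n"
            "Write one answer sentence supported only by this reference."
        ).strip()
        parts.append(f"{claim} [{reference}]")  # attach sentence-level citation
    return " ".join(parts)
```

A real system would back `llm` with an actual model call, and would likely verify each quoted reference against the retrieved passages before accepting the paired claim.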