A Self-Reflective Retrieval Augmented Generation System to Eliminate Hallucination in LLM Generation

Authors

Haotong Hu¹ and Garret Washburn², ¹China, ²California State Polytechnic University, USA

Abstract

In a Retrieval Augmented Generation (RAG) system, hallucination is an ever-present risk: it has become increasingly common for an AI model to produce inaccurate generations for a given input, and the damage compounds in a RAG system because output is passed from node to node. To address this problem, we introduce a series of self-reflective nodes, yielding a Self-RAG model. Our program consists of a series of nodes that use ChatGPT for generation, connected together with the LangChain Python library and its workflow functions. A major challenge during the planning and development stages was implementing the self-reflective nodes within the workflow. To test the accuracy and capacity of the proposed Self-RAG model, we carried out multiple experiments, and the results confirm quality generation. This model is worth adopting because it avoids misleading and inaccurate answers.

Keywords

RAG, LLM, Self-RAG, Self-Reflective

Full Text | Volume 14, Number 14