author | Aditya <bluenerd@protonmail.com> | 2025-02-03 15:12:10 +0530 |
---|---|---|
committer | Aditya <bluenerd@protonmail.com> | 2025-02-03 15:12:10 +0530 |
commit | 573bebf0b709507b09ae6d21616b675dcac08d69 (patch) | |
tree | 7c1d680e8466c16b2ee05b8f212cc04da97be895 /sources.md | |
parent | 91733b3eb48860c8ed5354082d86d40b9d4f7a22 (diff) | |
add summary
Diffstat (limited to 'sources.md')
-rw-r--r-- | sources.md | 18 |
1 file changed, 18 insertions, 0 deletions
@@ -350,3 +350,21 @@ The paper presents a novel approach called Forward-Looking Active Retrieval Augm
 - FLARE did not provide significant gains on certain datasets like Wizard of Wikipedia and ELI5.
 - The Wizard of Wikipedia dataset involves relatively short outputs, making multiple retrievals unnecessary.
 - ELI5 requires in-depth answers to open-ended questions, which presents challenges in grounding generation in retrieval.
+
+# Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge
+**Domain**: PEFT + RAG
+
+**Relevance Score**:
+
+## Abstract
+Language Models (LMs) memorize a vast amount of factual knowledge, exhibiting strong performance across diverse tasks and domains. However, it has been observed that performance diminishes when dealing with less popular or low-frequency concepts and entities, for example in domain-specific applications. The two prominent approaches to enhance the performance of LMs on low-frequency topics are Retrieval Augmented Generation (RAG) and fine-tuning (FT) over synthetic data. This paper explores and evaluates the impact of RAG and FT on customizing LMs for handling low-frequency entities in question answering tasks. We conduct extensive experiments on twelve LMs of varying size and type, using different fine-tuning, data augmentation, and retrieval models. Our findings indicate that while FT boosts performance across entities of varying popularity, RAG surpasses FT by a large margin, particularly for the least popular factual knowledge. Additionally, the success of both RAG and FT approaches is amplified by improving retrieval and data augmentation techniques. Fine-tuning, while beneficial for small LMs, requires extensive resources. To address this issue, we propose the new Stimulus RAG approach, which surpasses the effectiveness of fine-tuning-based approaches, thereby eliminating the need for the costly data augmentation and fine-tuning step for enriching LMs with less popular factual knowledge. The code is available at https://github.com/informagi/RAGvsFT
+
+## Summary
+The paper investigates the effectiveness of two prominent approaches, Retrieval Augmented Generation (RAG) and Fine-Tuning (FT), for enhancing the performance of language models (LMs) on less popular or low-frequency knowledge. The authors conduct extensive experiments on twelve different LMs, exploring various fine-tuning methods, data augmentation techniques, and retrieval models. The findings reveal that while fine-tuning improves performance across entities of varying popularity, RAG significantly outperforms FT, especially for the least popular factual knowledge. The success of both approaches is further amplified by better retrieval and data augmentation techniques. The study also highlights that fine-tuning, although beneficial for smaller models, requires substantial resources, which motivates the proposed Stimulus RAG (SRAG) method that eliminates the need for costly fine-tuning and data augmentation.
+
+The research emphasizes the importance of customizing LMs for less-resourced domains, particularly in applications such as question answering systems that require accurate responses about specialized knowledge. The results indicate that while fine-tuning can improve the accuracy of LMs, RAG provides a more effective way to integrate less popular knowledge. The paper concludes that the SRAG approach not only surpasses the performance of fine-tuned models but also offers a cost-effective alternative for enriching LMs with factual knowledge, thereby addressing the challenges associated with data scarcity in specialized domains.
+
+## Limitations
+- **Resource intensity of fine-tuning**: Fine-tuning requires significant computational resources and extensive training data, which may not be feasible for all applications, particularly in less-resourced domains.
+
+- **Complexity of implementation**: The proposed Stimulus RAG (SRAG) method, while effective, may introduce additional implementation complexity compared to plain fine-tuning or standard RAG.
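As a rough illustration of the two setups compared in the summary above, the sketch below contrasts closed-book QA, where a (possibly fine-tuned) LM answers from parametric memory alone, with RAG, where retrieved passages are prepended to the prompt. The toy corpus, the word-overlap `retrieve()` function, and the `generate()` stub are all hypothetical stand-ins, not the pipeline from the paper's repository.

```python
# Minimal sketch, assuming a toy word-overlap retriever and a placeholder LM call.
# Not the paper's implementation; see https://github.com/informagi/RAGvsFT for that.
from collections import Counter

# Toy stand-in for a passage index; a real system would index Wikipedia or a
# domain corpus and use a dense or sparse retriever.
CORPUS = [
    "Marie Curie was awarded Nobel Prizes in both Physics and Chemistry.",
    "The Riemann zeta function has trivial zeros at the negative even integers.",
]


def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the question and return the top k."""
    q_terms = Counter(question.lower().split())

    def overlap(passage: str) -> int:
        return sum((Counter(passage.lower().split()) & q_terms).values())

    return sorted(corpus, key=overlap, reverse=True)[:k]


def generate(prompt: str) -> str:
    """Placeholder for a language-model call (fine-tuned or off-the-shelf)."""
    return f"<answer conditioned on: {prompt!r}>"


def closed_book_answer(question: str) -> str:
    # FT route: the model must recall the fact from its parameters.
    return generate(f"Question: {question}\nAnswer:")


def rag_answer(question: str) -> str:
    # RAG route: ground the answer in retrieved evidence about the entity.
    context = "\n".join(retrieve(question, CORPUS))
    return generate(f"Context: {context}\nQuestion: {question}\nAnswer:")


if __name__ == "__main__":
    question = "In which fields did Marie Curie win Nobel Prizes?"
    print(closed_book_answer(question))
    print(rag_answer(question))
```

The two routes differ only in what the model is conditioned on: the parametric route must already contain the fact, while the RAG route can supply it at inference time, which is where the paper reports the largest gains for less popular entities.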