Retrieval-Augmented Generation (RAG) vs. LLM context


I am still learning the concepts behind RAG, but I was wondering: a lot of references explain RAG by saying that it lets you increase an LLM's knowledge by augmenting it with new knowledge from an external retrieval system.

But how is that different from just adding context to the model prompt (even a long one, since some LLMs have huge window sizes)?


There is 1 answer

Karthik Soman

LLMs excel as in-context learners, performing tasks from examples provided in the prompt in a few-shot manner. For instance, enabling an LLM for sentiment analysis requires example sentences and their corresponding classes. You can think of this as a teacher guiding a student on how to approach a particular type of question. The examples need not be direct answers; they serve as instructive instances.
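As a concrete illustration of this idea, here is a minimal sketch of building a few-shot sentiment prompt. The example sentences, labels, and the `build_few_shot_prompt` helper are all hypothetical; the point is only that the examples teach the model the task format.

```python
# Hypothetical few-shot examples: instructive instances, not answers
# to the actual query.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    # Assemble instruction, worked examples, then the unanswered query.
    lines = ["Classify the sentiment of each sentence as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Sentence: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Sentence: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

prompt = build_few_shot_prompt("The service was excellent.")
print(prompt)
```

The model then continues the prompt after the final `Sentiment:`, having inferred the task from the two worked examples alone.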

On the other hand, RAG employs an external database to introduce prompt-specific context. You can think of this as a teacher providing relevant background for a specific question, enabling the student to answer it successfully; the provided background is tailored to that question. RAG therefore still relies on the in-context learning ability of LLMs, but it automates the extraction of prompt-specific context from the external database.

Here is an example of a RAG system (called KG-RAG) that makes use of a knowledge graph to augment the generative capability of an LLM.