Issue with Passing Retrieved Documents to Large Language Model in RetrievalQA Chain

I'm currently enrolled in a course on Coursera where I'm learning to implement a retrieval-based question-answering (RetrievalQA) system in Python. The course provides code that utilizes the RetrievalQA.from_chain_type() method to create a RetrievalQA chain with both a large language model (LLM) and a vector retriever.

Reviewing the provided code (below), relevant documents are retrieved from the vector store with vectordb.similarity_search(), but the resulting docs are never used again: the RetrievalQA chain is only given the retriever, and there is no visible step that passes the retrieved documents to the LLM for question-answering.

My understanding is that in a typical RetrievalQA pipeline, the documents retrieved from the vector store are subsequently passed to the LLM (usually by inserting them into the prompt), so the model can ground its answer to the user's query in the retrieved content.
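
For reference, this is how I would expect that hand-off to look if it were done explicitly: a minimal sketch using load_qa_chain with the "stuff" chain type, where the retrieved documents are supplied by hand (llm, docs, and question refer to the variables in the code further down):

from langchain.chains.question_answering import load_qa_chain

# Explicitly hand the retrieved documents to the LLM along with the question
qa = load_qa_chain(llm, chain_type="stuff")
explicit_result = qa({"input_documents": docs, "question": question})
explicit_result["output_text"]

What I can't tell is whether RetrievalQA.from_chain_type() does something equivalent internally.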

I'm seeking clarification on how the retrieved documents are actually handed to the LLM inside the RetrievalQA chain, and whether an explicit integration step is needed. Any insights, suggestions, or code examples would be greatly appreciated. Thank you for your assistance!

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

# Load the persisted Chroma vector store with OpenAI embeddings
persist_directory = 'docs/chroma/'
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)

# Manual similarity search -- note that `docs` is never used again afterwards
question = "What are major topics for this class?"
docs = vectordb.similarity_search(question, k=3)
len(docs)

from langchain.chat_models import ChatOpenAI
llm_name = "gpt-3.5-turbo"  # defined earlier in the course notebook; shown here so the snippet runs
llm = ChatOpenAI(model_name=llm_name, temperature=0)

from langchain.chains import RetrievalQA

# The chain only receives the retriever; no documents are passed in explicitly
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectordb.as_retriever()
)
result = qa_chain({"query": question})
result["result"]