How do I see the complete prompt (retrieved_relevant_context + question) after qa_chain is run?

qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectordb.as_retriever(),
)
result = qa_chain({"query": question})
Two more options for printing out the full chain, including the prompt:

verbose and debug

from langchain.globals import set_verbose, set_debug

set_debug(True)
set_verbose(True)
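To make concrete what the global flags change, here is a minimal stand-in sketch (not LangChain itself): a module-level verbose flag that, when enabled, makes the chain print the fully formatted prompt just before the LLM call. The function names mirror LangChain's, but the chain and template here are hypothetical.

```python
import io
from contextlib import redirect_stdout

# Stand-in for langchain.globals' internal flag.
_VERBOSE = False

def set_verbose(value: bool) -> None:
    """Toggle the global verbose flag (stand-in for langchain.globals.set_verbose)."""
    global _VERBOSE
    _VERBOSE = value

def run_qa_chain(context: str, question: str) -> str:
    """Formats a 'stuff'-style prompt; prints it when verbose is on."""
    prompt = (
        "Use the following pieces of context to answer the question.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    if _VERBOSE:
        print(prompt)          # this is what the verbose/debug output surfaces
    return "<llm answer>"      # a real chain would call the LLM here

set_verbose(True)
run_qa_chain("retrieved_relevant_context", "What is X?")
```

With the real library, `set_debug(True)` is the noisier of the two: it logs every chain and LLM event, including the final prompt.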
StdOutCallbackHandler

from langchain.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler()
qa_with_sources_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    chain_type_kwargs={"prompt": PROMPT},
    retriever=vectorstore.as_retriever(
        # search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.01, "k": 8},
        # search_kwargs={"k": 8},
    ),
    callbacks=[handler],
    # return_source_documents=True,
)
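The mechanism the handler plugs into can be sketched without LangChain at all: the chain hands every registered handler the final, fully formatted prompts right before the LLM call. Everything below (`PrintingHandler`, `ToyRetrievalQA`) is a hypothetical stand-in, not the library's implementation.

```python
class PrintingHandler:
    """Minimal stand-in for StdOutCallbackHandler: print and remember prompts."""
    def __init__(self):
        self.last_prompt = None

    def on_llm_start(self, prompts):
        for p in prompts:
            self.last_prompt = p
            print(p)

class ToyRetrievalQA:
    """Hypothetical chain: stuffs retrieved docs into the prompt, fires callbacks."""
    def __init__(self, docs, callbacks):
        self.docs = docs
        self.callbacks = callbacks

    def __call__(self, inputs):
        context = "\n".join(self.docs)
        prompt = f"Context:\n{context}\n\nQuestion: {inputs['query']}"
        for handler in self.callbacks:
            handler.on_llm_start([prompt])   # handlers see the complete prompt
        return {"result": "<llm answer>"}

handler = PrintingHandler()
chain = ToyRetrievalQA(["doc one", "doc two"], callbacks=[handler])
chain({"query": "What does doc one say?"})
```

A handler that stores the prompt (as above) is handy when you want the text programmatically rather than just on stdout.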
You can also see your complete prompt by setting the verbose parameter to True, as mentioned above.
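If you would rather reconstruct the prompt yourself than rely on logging, you can also run the retriever directly and format the documents into the same kind of template the "stuff" chain uses. A minimal sketch, assuming a stand-in retriever (`FakeRetriever` replaces `vectordb.as_retriever()`, and the template text only approximates LangChain's default, it is not the exact string):

```python
# Approximation of the default "stuff" QA template -- not the library's exact text.
STUFF_TEMPLATE = (
    "Use the following pieces of context to answer the question at the end.\n\n"
    "{context}\n\nQuestion: {question}\nHelpful Answer:"
)

class FakeRetriever:
    """Stand-in for vectordb.as_retriever(); returns canned page contents."""
    def get_relevant_documents(self, query):
        return ["First retrieved chunk.", "Second retrieved chunk."]

def build_full_prompt(retriever, question):
    """Rebuild the complete prompt: retrieved context joined, then the question."""
    docs = retriever.get_relevant_documents(question)
    context = "\n\n".join(docs)          # the "stuff" chain concatenates docs
    return STUFF_TEMPLATE.format(context=context, question=question)

print(build_full_prompt(FakeRetriever(), "What is in the chunks?"))
```

With a real vector store you would pass the actual retriever and your own `PROMPT` template instead.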