Langchain : How do input variables work, in particular how is "context" replaced by what I want in the Template?


I'm learning about langchain and I had trouble understanding how templates work.

from langchain.prompts import PromptTemplate
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

Here there is a context input variable. What is this "context"? Conceptually I know what it is, but how does it work? Why does it correspond exactly to some parts of the documents, since the function only uses an embeddings database? Why is it between {}? Digging into the source code I also found a 'summaries' variable.

Thanks in advance :)


1 answer

curiouscat

context and question are placeholders that are set when the LLM agent is run with an input. This is why they are specified as input_variables when the PromptTemplate instance is created.

These placeholders are keys in the input dictionary fed to the langchain chain instance.
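You can see the substitution mechanism without any LLM involved: with the default f-string template format, PromptTemplate behaves essentially like Python's own str.format. The {braces} mark named slots, and formatting replaces them with the values you pass in. A minimal plain-Python sketch of what happens to your template:

```python
# The same template string as in the question; {context} and {question}
# are named placeholders, exactly as in Python's str.format.
prompt_template = (
    "Use the following pieces of context to answer the question at the end.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "Answer in Italian:"
)

# Formatting substitutes the values for the placeholders; this is the
# string that ultimately gets sent to the LLM.
filled = prompt_template.format(
    context="Italy is home to some of the finest grapes in the world",
    question="What are the top three wine exports from Italy?",
)
print(filled)
```

PromptTemplate adds validation on top (it checks that the input_variables you declare match the placeholders in the template), but the substitution itself is just this.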

As an example, if I were to use the prompt template from the original post to create an instance of an LLMChain, I would specify context and question in the input to this LLMChain:

from langchain.chains import LLMChain
from langchain.llms import OpenAI 

chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=PROMPT)

# The keys of this dict match the input_variables of the template;
# their values replace {context} and {question} before the prompt is sent.
response = chain.run({
           "context": "Italy is home to some of the finest grapes in the world",
           "question": "What are the top three wine exports from Italy?"
           })

print(response)

'Chianti, Prosecco e Barolo.'
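As for the part of the question about documents: in a retrieval chain (for example RetrievalQA with the default "stuff" strategy), you don't supply context yourself. The chain queries the embeddings database for the chunks most similar to the question and concatenates their text into the context value before formatting the prompt. A simplified sketch of that step, where retrieved_chunks is a stand-in for the real retriever's output, not actual langchain API:

```python
# Stand-in for what the retriever returns: the text of the document
# chunks most similar to the question (found via the embeddings database).
retrieved_chunks = [
    "Italy produces Chianti in Tuscany.",
    "Prosecco is made in the Veneto region.",
]

# The "stuff" documents chain joins the chunk texts into one string,
# which is then substituted for {context} in the prompt template.
context = "\n\n".join(retrieved_chunks)
print(context)
```

That is why the context "corresponds exactly to some parts of the documents": the chain looks up the relevant chunks for you and stuffs their raw text into the placeholder.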