I'm trying to implement RAG with the Mistral 7B LLM in Google Colab, but when I run a query I get an error.
Here's my code:
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("my question")
The last line gives me this error:
KeyError Traceback (most recent call last)
<ipython-input-22-8e2dbdba5aa9> in <cell line: 1>()
----> 1 response=query_engine.query("my question")
40 frames
/usr/local/lib/python3.10/dist-packages/llama_index/core/prompts/base.py in format(***failed resolving arguments***)
194
195 mapped_all_kwargs = self._map_all_vars(all_kwargs)
--> 196 prompt = self.template.format(**mapped_all_kwargs)
197
198 if self.output_parser is not None:
KeyError: 'query'
The LlamaIndex docs show this exact usage pattern, so I don't know where the problem is.
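To understand the traceback, I reproduced the failing step in isolation. The KeyError comes from plain str.format: it is raised when a template string contains a placeholder that isn't among the supplied keyword arguments. This is a minimal sketch with a hypothetical template (not my actual prompt), just to show the mechanism:

```python
# Hypothetical template containing a placeholder named 'query'.
template = "Answer the question: {query}"

try:
    # LlamaIndex fills templates with keys like 'query_str', not 'query',
    # so a template written with {query} never receives that kwarg.
    template.format(query_str="my question")
except KeyError as e:
    print(e)  # prints: 'query'
```

So KeyError: 'query' suggests that somewhere a prompt template contains a literal {query} placeholder that LlamaIndex never fills in, but I don't see where that would come from in my code.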