When I create a VectorStoreIndex object, why does it require ChatGPT or some other LLM, given that the LLM is only needed at query time?
Is there a way to use LlamaIndex on the indexing side only? Is there no clean separation between the query and indexing pipelines in llama-index?
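For context, here is a minimal sketch of what I'm trying to achieve: build the index with an embedding model only, then retrieve raw nodes without any LLM synthesis step. (This assumes the post-0.10 `llama_index.core` imports; the `HuggingFaceEmbedding` model choice and the `./data` path are just placeholders.)

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Indexing itself should only need an embedding model, not a chat LLM.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = None  # ideally, no LLM configured at all

# "./data" is a placeholder directory of documents to index.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Pure retrieval: fetch the top-k matching nodes, no LLM call involved.
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("my query")
for node in nodes:
    print(node.score, node.node.get_content())
```

Is this retriever-only pattern the intended way to keep indexing and retrieval free of any LLM dependency, or does VectorStoreIndex still expect one somewhere?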