QA_Chain from LangChain does not recognize Azure OpenAI 'engine' or 'deployment_id'


I am retrieving results from my internal DB, but for this example I have added an open URL. I am using Azure OpenAI and LangChain together to build this retrieval engine. I checked in the Azure Portal that the deployment is successful, and I am able to run a standalone prompt.

The last query throws this error:

InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>.

As you can see, I have already supplied a deployment_id above. What am I missing?

Here is the entire code:

from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.document_loaders import DirectoryLoader  
loader = TextLoader(r'./RawText.txt', encoding='utf-8')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=70, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

#create the DB

persist_directory = 'db'
#embedding= OpenAIEmbeddings()
embeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002", model="text-embedding-ada-002", chunk_size=1)
vectordb = Chroma.from_documents(documents=texts,
                                 embedding=embeddings,
                                 persist_directory=persist_directory)

After this, I create a retriever as below:

vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
retriever = vectordb.as_retriever()
docs = retriever.get_relevant_documents("Databricks")

#Creating a Chain:

from langchain.chains import RetrievalQA
import openai

#Specify the name of the engine you want to use

engine = "test_chat"
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(),
                                       chain_type="stuff",
                                       retriever=retriever,
                                       return_source_documents=True)

#test_chat here for reference is text-embedding-ada-002
#Cite Source

def process_llm_responses(llm_response):
    print(llm_response['result'])
    print('\n\nSources:')
    for source in llm_response["source_documents"]:
        print(source.metadata["source"])

#full retrieval in process

query = "What is a medallion architecture"
llm_response = qa_chain(query)
process_llm_responses(llm_response)

1 Answer

Answered by ZKS (accepted answer):

Since I don't know your LangChain version, first try the code below:

embeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002", model="text-embedding-ada-002", engine="text-embedding-ada-002", chunk_size=1)

If that doesn't work, try the following:

embeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002", model="text-embedding-ada-002", openai_api_type='azure', chunk_size=1)
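One more observation, as a hedged suggestion: the error message mentions `openai.api_resources.completion.Completion`, which points at the completion LLM rather than the embeddings. The `OpenAI()` instance passed to `RetrievalQA.from_chain_type` never receives the Azure deployment, so that call is likely the one missing `engine`/`deployment_id`. A minimal sketch, assuming the `AzureOpenAI` wrapper that ships with older LangChain releases, the `retriever` built earlier in the question, and the questioner's deployment name `test_chat`:

```python
# Sketch: point the completion LLM at the Azure deployment explicitly,
# instead of relying on a bare OpenAI() instance. Assumes langchain's
# AzureOpenAI wrapper and a deployment named "test_chat" in the Azure portal.
from langchain.llms import AzureOpenAI
from langchain.chains import RetrievalQA

# deployment_name is what gets sent as engine/deployment_id to Azure
llm = AzureOpenAI(deployment_name="test_chat")

qa_chain = RetrievalQA.from_chain_type(llm=llm,
                                       chain_type="stuff",
                                       retriever=retriever,
                                       return_source_documents=True)
```

If this is the cause, the `engine = "test_chat"` line in the question has no effect: it is a plain Python variable and is never passed into the chain.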