I'm using LangChain with Azure OpenAI and Azure Cognitive Search. Currently I'm using the Azure OpenAI text-embedding-ada-002 model to generate embeddings, but I would like to use an embedding model from Hugging Face if possible, because the Azure OpenAI API does not let me send documents in batches, so I have to make several calls and keep hitting the rate limit.
I tried using this embedding in my code:
from langchain.embeddings import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(
    model_name="all-mpnet-base-v2",
)
Instead of:
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    ...
)
The problem I'm facing is that when I use AzureSearch's aadd_texts method I get this error:
The vector field 'content_vector' dimensionality must match the field definition's 'dimensions' property. Expected: '1536'. Actual: '768'. (IndexDocumentsFieldError)
98: The vector field 'content_vector' dimensionality must match the field definition's 'dimensions' property. Expected: '1536'. Actual: '768'.
Code: IndexDocumentsFieldError
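For reference, the vector store is set up roughly like this (endpoint, key, and index name below are placeholders); the index was presumably created with 1536-dimension vectors back when OpenAIEmbeddings was still in place:

from langchain.vectorstores.azuresearch import AzureSearch

vector_store = AzureSearch(
    azure_search_endpoint="https://<my-search-service>.search.windows.net",  # placeholder
    azure_search_key="<admin-key>",                                          # placeholder
    index_name="my-index",  # existing index whose content_vector field has dimensions=1536
    embedding_function=embeddings.embed_query,  # now returns 768-dim vectors with all-mpnet-base-v2
)

texts = [...]  # my document chunks

# Called from async code; fails with the IndexDocumentsFieldError above,
# because the stored field definition still expects 1536-dimensional vectors:
await vector_store.aadd_texts(texts)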
I'm pretty lost. Has anyone used an open source embeddings model with Cognitive Search? How?
The Azure OpenAI API does allow sending documents in batches.
input (string or array): Input text to get embeddings for, encoded as an array or string. The number of input tokens varies depending on what model you are using. Only text-embedding-ada-002 (Version 2) supports array input.
Please check https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings.
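For example, with LangChain's OpenAIEmbeddings you can hand over a whole list of texts and control how many are sent per request via chunk_size. A rough sketch (deployment name, endpoint, key, and API version are placeholders for your own Azure OpenAI resource):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="<your-ada-002-deployment>",                       # placeholder
    openai_api_type="azure",
    openai_api_base="https://<your-resource>.openai.azure.com/",  # placeholder
    openai_api_version="2023-05-15",
    openai_api_key="<your-key>",                                  # placeholder
    chunk_size=16,  # texts per request; ada-002 (Version 2) accepts an array of inputs
)

texts = ["first document", "second document", "third document"]

# One request per chunk_size texts instead of one request per text,
# which makes far fewer calls and helps stay under the rate limit.
vectors = embeddings.embed_documents(texts)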