Is there a way via the API to know which AI model a given Azure OpenAI endpoint (deployment) is configured to use?


I'm working on an application where end users must bring their own Azure OpenAI subscription for it to work. Because of this, I can't check in advance which models they have configured on their Azure OpenAI deployment. Is there a way, via an API call, to determine what the deployment supports?

I was originally going to use the /models endpoint, but that applies to the whole resource. Since my endpoint is deployment-specific, a resource-level listing doesn't tell me which model the deployment actually uses.


1 Answer

SAGExSDX

It looks like, at least for Azure OpenAI, you can pass any value for the model when making a chat completion request, and the response will report whichever model the deployment actually uses.

So if the deployment only supports gpt-4 (8k context) and you send gpt-4-32k in the chat completion request, the response comes back with model set to gpt-4. I use that value to determine what the max token count should be.
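For reference, here's a minimal sketch of that probe using the openai Python package (v1.x). The endpoint, API key, and deployment name are placeholders the end user would supply; note that with this SDK the model argument is the Azure deployment name, while the response's model field reports the underlying model.

```python
# Minimal sketch: probe an Azure OpenAI deployment to see which model backs it.
# Assumes the openai Python package >= 1.x; endpoint/key/deployment are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # user-supplied endpoint
    api_key="<user-api-key>",                                   # user-supplied key
    api_version="2024-02-01",
)

# Send a throwaway chat completion against the user's deployment.
response = client.chat.completions.create(
    model="<deployment-name>",  # deployment name, not the model name
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

# Azure fills response.model with the model actually backing the deployment,
# e.g. "gpt-4", which can then be mapped to a max token count.
print(response.model)
```

The cost of the probe is negligible since max_tokens is capped at 1; the only purpose of the call is to read the model field off the response.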