What is the correct way to use async httpx client with langchain for OpenAI?


While trying to invoke LangChain runnables asynchronously using ainvoke(), I am facing an AttributeError.

Here is the snippet:


import httpx
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate

llm = OpenAI(model_name="gpt-3.5-turbo-instruct", async_client=httpx.AsyncClient(verify=False))
prompt = PromptTemplate(
    input_variables=["text"],
    template=template,
)

runnable = prompt | llm

result = await runnable.ainvoke({"text": text})

I am facing this error: AttributeError: AsyncClient doesn't contain the method create()
