When making API calls to OpenAI and streaming the response back to the client, there are two possible error cases:
1. The error occurs during the initial request, before the stream between OpenAI and my backend begins. For this case there is good documentation by Vercel: https://sdk.vercel.ai/docs/guides/providers/openai#guide-handling-errors (tl;dr: wrap it in a try/catch (duh)); see the sketch after this list.
2. The error occurs while the stream is ongoing. In this edge case the try/catch block does not trigger. It could be a problem with SvelteKit, though?
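For reference, case 1 looks roughly like this (a minimal sketch following the Vercel docs; the status code and error response are my own choices, not from the docs):

```ts
// Case 1: errors thrown before the stream starts are caught by try/catch,
// e.g. a bad API key, invalid request parameters, or a rate limit.
try {
  const response = await openai.chat.completions.create({
    model,
    stream: true,
    messages
  });
  return new Response(OpenAIStream(response));
} catch (err) {
  // Fires only for failures during the initial request,
  // not for errors that happen once streaming has begun.
  return new Response('Upstream request failed', { status: 502 });
}
```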
Is there a way I can add an event listener or a "middleware" to the OpenAI response stream, so that whenever an error does occur I can handle it accordingly? My current setup:
```ts
import OpenAI from 'openai';
import { OpenAIStream } from 'ai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await openai.chat.completions.create({
  model,
  stream: true,
  messages
});

// Convert the response into a friendly text-stream
const stream = OpenAIStream(response);

// Respond with the stream
return new Response(stream);
```
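What I have in mind for case 2 is something like wrapping the stream so that an error thrown mid-read can be intercepted before the response dies. A sketch of that idea, using the standard Web Streams API (the `withErrorHandling` wrapper and its `onStreamError` callback are hypothetical names of my own, not an SDK API):

```ts
// Sketch: wrap a stream so errors thrown while reading chunks
// (i.e. after the response has already started) can be intercepted.
function withErrorHandling(
  stream: ReadableStream<Uint8Array>,
  onStreamError: (err: unknown) => void
): ReadableStream<Uint8Array> {
  const reader = stream.getReader();
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      try {
        const { done, value } = await reader.read();
        if (done) {
          controller.close();
        } else {
          controller.enqueue(value);
        }
      } catch (err) {
        // A mid-stream error lands here instead of in the outer try/catch.
        onStreamError(err);
        controller.error(err);
      }
    },
    cancel(reason) {
      return reader.cancel(reason);
    }
  });
}

// Usage:
// return new Response(withErrorHandling(stream, (err) => console.error(err)));
```

Is something along these lines the right approach, or is there a built-in hook for this?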