Is it possible to determine model input at runtime with Seldon?


I'm thinking of deploying ML models with Seldon Core on Kubernetes. Seldon provides components for pre-processing, post-processing, predicting, combining and routing, but as far as I can tell these all assume the input data is fixed up front. Is the input for an entire Seldon graph fixed, or can the calls to models be determined at runtime? In other words, is it possible to use the output of one model to decide which other model should be called?
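For the simple "output decides which model" case, my understanding is that Seldon's Python wrapper lets you write a ROUTER component whose `route` method returns the index of the child to call based on the data that reaches it. A minimal sketch of what I have in mind (the routing rule here is purely illustrative):

```python
import numpy as np


class ThresholdRouter:
    """Sketch of a Seldon ROUTER component: returns the index of the
    child model that should handle the request."""

    def route(self, features, feature_names):
        # Illustrative rule only: send "small" inputs to child 0,
        # everything else to child 1.
        return 0 if float(np.mean(features)) < 0.5 else 1
```

But routing only selects one child per request, which doesn't cover the case below where the number of downstream calls itself varies.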

What I'm trying to do is run one model that produces a variable number of outputs (say, an image instance segmentation model) and then run a second model on each of those outputs (say, an image classification model). In this case both the number of calls to the second model and its input depend on the output of the first model.
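The only workaround I can think of is to do the fan-out myself inside a single custom MODEL component and call the second model over Seldon's REST prediction endpoint from within `predict`. A rough sketch of that idea (the endpoint URL, deployment names and the `segment` stand-in are placeholders, not a working setup):

```python
import numpy as np
import requests

# Placeholder endpoint for a separately deployed classifier; the host,
# namespace and deployment name here are assumptions, not a real setup.
CLASSIFIER_URL = (
    "http://seldon-gateway/seldon/default/image-classifier/api/v1.0/predictions"
)


def segment(image):
    """Stand-in for the instance segmentation model: returns a variable
    number of crops (here just a dummy left/right split)."""
    mid = image.shape[1] // 2
    return [image[:, :mid], image[:, mid:]]


class SegmentThenClassify:
    """One custom Seldon MODEL component that does the fan-out itself:
    segment the input, then call the classifier once per detected crop."""

    def predict(self, X, feature_names=None):
        crops = segment(np.asarray(X))
        labels = []
        for crop in crops:
            payload = {"data": {"ndarray": crop.tolist()}}
            resp = requests.post(CLASSIFIER_URL, json=payload).json()
            labels.append(resp["data"]["ndarray"])
        return np.array(labels)
```

This works around the static graph rather than using it, so I'd rather know if there is a native way.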

Is this supported by Seldon, or is there another way to achieve it with Seldon Core?


There are 0 answers