Why are there two services for one Seldon deployment?

I noticed that whenever I deploy a model, two services are created, e.g.

kubectl get service -n model-namespace 
NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
iris-model-default              ClusterIP   10.96.82.232   <none>        8000/TCP,5001/TCP   8h
iris-model-default-classifier   ClusterIP   10.96.76.141   <none>        9000/TCP            8h

I wonder why we have two instead of one.

What are the three ports (8000, 9000, 5001) for, respectively? Which one should I use?

The manifest YAML is:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: model-namespace
spec:
  name: iris
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/iris
      name: classifier
    name: default
    replicas: 1

from https://docs.seldon.io/projects/seldon-core/en/v1.1.0/workflow/quickstart.html
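For completeness, this is roughly how I applied it (a sketch; iris.yaml is just my local copy of the manifest above):

kubectl create namespace model-namespace
kubectl apply -f iris.yaml                 # iris.yaml = the SeldonDeployment manifest above
kubectl rollout status deploy/iris-model-default-0-classifier -n model-namespace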

The CRD definition appears to be here in case it's useful.

k describe service/iris-model-default
Name:              iris-model-default
Namespace:         model-namespace
Labels:            app.kubernetes.io/managed-by=seldon-core
                   seldon-app=iris-model-default
                   seldon-deployment-id=iris-model
Annotations:       getambassador.io/config:
                     ---
                     apiVersion: ambassador/v1
                     kind: Mapping
                     name: seldon_model-namespace_iris-model_default_rest_mapping
                     prefix: /seldon/model-namespace/iris-model/
                     rewrite: /
                     service: iris-model-default.model-namespace:8000
                     timeout_ms: 3000
                     ---
                     apiVersion: ambassador/v1
                     kind: Mapping
                     name: seldon_model-namespace_iris-model_default_grpc_mapping
                     grpc: true
                     prefix: /(seldon.protos.*|tensorflow.serving.*)/.*
                     prefix_regex: true
                     rewrite: ""
                     service: iris-model-default.model-namespace:5001
                     timeout_ms: 3000
                     headers:
                       namespace: model-namespace
                       seldon: iris-model
Selector:          seldon-app=iris-model-default
Type:              ClusterIP
IP:                10.96.82.232
Port:              http  8000/TCP
TargetPort:        8000/TCP
Endpoints:         172.18.0.17:8000
Port:              grpc  5001/TCP
TargetPort:        8000/TCP
Endpoints:         172.18.0.17:8000
Session Affinity:  None
Events:            <none>
k describe service/iris-model-default-classifier 
Name:              iris-model-default-classifier
Namespace:         model-namespace
Labels:            app.kubernetes.io/managed-by=seldon-core
                   default=true
                   model=true
                   seldon-app-svc=iris-model-default-classifier
                   seldon-deployment-id=iris-model
Annotations:       <none>
Selector:          seldon-app-svc=iris-model-default-classifier
Type:              ClusterIP
IP:                10.96.76.141
Port:              http  9000/TCP
TargetPort:        9000/TCP
Endpoints:         172.18.0.17:9000
Session Affinity:  None
Events:            <none>
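If it helps, my understanding is that the first service can be exercised directly with a port-forward (a sketch, assuming the standard Seldon Core v1 REST path and the iris example's four features):

# expose the orchestrator service locally
kubectl port-forward -n model-namespace svc/iris-model-default 8000:8000 &

# Seldon v1 REST protocol: POST /api/v1.0/predictions with an ndarray payload
curl -s -X POST http://localhost:8000/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}'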
k get pods --show-labels
NAME                                               READY   STATUS    RESTARTS   AGE   LABELS
iris-model-default-0-classifier-579765fc5b-rm6np   2/2     Running   0          10h   app.kubernetes.io/managed-by=seldon-core,app=iris-model-default-0-classifier,fluentd=true,pod-template-hash=579765fc5b,seldon-app-svc=iris-model-default-classifier,seldon-app=iris-model-default,seldon-deployment-id=iris-model,version=default

So only one pod is involved; it looks like these ports are exposed by different containers:

k get pods -o json | jq '.items[].spec.containers[] | .name, .ports'
"classifier"
[
  {
    "containerPort": 6000,
    "name": "metrics",
    "protocol": "TCP"
  },
  {
    "containerPort": 9000,
    "name": "http",
    "protocol": "TCP"
  }
]
"seldon-container-engine"
[
  {
    "containerPort": 8000,
    "protocol": "TCP"
  },
  {
    "containerPort": 8000,
    "name": "metrics",
    "protocol": "TCP"
  }
]
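Cross-referencing the services' targetPorts against those container ports lines things up (a quick jq sketch):

# service name -> (name, port, targetPort) triples, to match against the container ports above
k get svc -n model-namespace -o json \
  | jq '.items[] | {service: .metadata.name, ports: [.spec.ports[] | {name, port, targetPort}]}'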

A more Seldon-specific question: why are so many ports needed?

1 Answer

Answer by Rico:

Yes, looks like you have 2 containers in your pod.

The first service:

iris-model-default ➡️ seldon-container-engine HTTP: 8000:8000 and GRPC: 5001:8000

The second service:

iris-model-default-classifier ➡️ classifier HTTP: 9000:9000 (port 6000 looks like it's used internally for metrics)
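So external traffic should go through iris-model-default; the Ambassador mapping in your describe output rewrites /seldon/model-namespace/iris-model/ to / on port 8000. A request through Ambassador would look roughly like this (a sketch; <ambassador-host> depends on how your ingress is exposed):

# prefix is rewritten to "/" and forwarded to iris-model-default:8000
curl -s -X POST http://<ambassador-host>/seldon/model-namespace/iris-model/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}'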

You didn't mention it, but it sounds like you deployed the classifier with something like this:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: seldon
spec:
  name: iris
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/iris
      name: classifier
    name: default
    replicas: 1

If you'd like to find out the rationale behind the two containers/services, you might have to dig into the operator itself.
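As a starting point, you can list everything the operator stamped with the deployment's label (both services above carry it) and look at the controller logs (a sketch; the controller name/namespace assume a default Helm install of Seldon Core):

# resources labeled for this SeldonDeployment
kubectl get svc,pods -n model-namespace -l seldon-deployment-id=iris-model --show-labels
# operator/controller logs
kubectl logs -n seldon-system deploy/seldon-controller-manager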