How to fix knative-serving activator pod stuck in CrashLoopBackOff?


I have a k8s cluster with Calico as the networking layer. When I run kubectl get pods -n knative-serving I get:

NAME                          READY   STATUS             RESTARTS          AGE
activator-5ccbd798b6-rkn5v    0/1     CrashLoopBackOff   440 (4m50s ago)   40h
autoscaler-848d88dbb-64btf    1/1     Running            0                 40h
controller-856747f668-wtwfp   1/1     Running            0                 40h
webhook-55cff8d4f9-79lsl      1/1     Running            0                 40h

kubectl describe pod activator -n knative-serving

Name:             activator-5ccbd798b6-rkn5v
Namespace:        knative-serving
Priority:         0
Service Account:  activator
Node:             server/ip
Start Time:       Thu, 19 Oct 2023 19:38:11 +0300
Labels:           app=activator
                  app.kubernetes.io/component=activator
                  app.kubernetes.io/name=knative-serving
                  app.kubernetes.io/version=1.11.2
                  pod-template-hash=5ccbd798b6
                  role=activator
Annotations:      cni.projectcalico.org/containerID: e36bdd4c6f963f3c193aa2e38de3d91a0917b20903183e758fda5c49d2bdd013
                  cni.projectcalico.org/podIP: 192.168.214.135/32
                  cni.projectcalico.org/podIPs: 192.168.214.135/32
Status:           Running
IP:               192.168.214.135
IPs:
  IP:           192.168.214.135
Controlled By:  ReplicaSet/activator-5ccbd798b6
Containers:
  activator:
    Container ID:    docker://316e47d76f5b91ece432dc107348e20484df464d78e9b19e791bd452682b3ded
    Image:           gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:ba44f180d293dcbe00aa62cde4c1e66bc7bfce7e6a4392c6221e93d5c6042d60
    Image ID:        docker-pullable://gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:ba44f180d293dcbe00aa62cde4c1e66bc7bfce7e6a4392c6221e93d5c6042d60
    Ports:           9090/TCP, 8008/TCP, 8012/TCP, 8013/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Sat, 21 Oct 2023 10:15:48 +0300
    Last State:      Terminated
      Reason:        Completed
      Exit Code:     0
      Started:       Sat, 21 Oct 2023 10:12:50 +0300
      Finished:      Sat, 21 Oct 2023 10:15:48 +0300
    Ready:           False
    Restart Count:   424
    Limits:
      cpu:     1
      memory:  600Mi
    Requests:
      cpu:      300m
      memory:   60Mi
    Liveness:   http-get http://:8012/ delay=15s timeout=1s period=10s #success=1 #failure=12
    Readiness:  http-get http://:8012/ delay=0s timeout=1s period=5s #success=1 #failure=5
    Environment:
      GOGC:                       500
      POD_NAME:                   activator-5ccbd798b6-rkn5v (v1:metadata.name)
      POD_IP:                      (v1:status.podIP)
      SYSTEM_NAMESPACE:           knative-serving (v1:metadata.namespace)
      CONFIG_LOGGING_NAME:        config-logging
      CONFIG_OBSERVABILITY_NAME:  config-observability
      METRICS_DOMAIN:             knative.dev/internal/serving
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zptz7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-zptz7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                      From     Message
  ----     ------     ----                     ----     -------
  Warning  BackOff    8m37s (x5223 over 38h)   kubelet  Back-off restarting failed container activator in pod activator-5ccbd798b6-rkn5v_knative-serving(a4092055-82fa-419a-9498-49d022130a1c)
  Warning  Unhealthy  3m39s (x16657 over 38h)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
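To see what is actually failing behind those 500s, the activator's logs can be pulled, including the previously terminated instance (pod name taken from the output above):

kubectl logs -n knative-serving activator-5ccbd798b6-rkn5v
kubectl logs -n knative-serving activator-5ccbd798b6-rkn5v --previous

The readiness probe keeps returning 500 until the activator reports healthy, so the underlying error should show up in these logs.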

The server is Ubuntu 22.04.2 LTS. The cluster was initialized with:

sudo kubeadm init --cri-socket=/var/run/cri-dockerd.sock --pod-network-cidr=192.168.0.0/16

Then I installed Calico:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
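To confirm Calico came up before installing anything on top, its pods can be checked (with the Tigera operator install they land in the tigera-operator and calico-system namespaces):

kubectl get pods -n tigera-operator
kubectl get pods -n calico-system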

Knative Serving:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.2/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.2/serving-core.yaml
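Before moving on, it's worth waiting for all Serving deployments to report Available (the timeout value here is arbitrary):

kubectl wait --for=condition=Available deployment --all -n knative-serving --timeout=300s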

Istio:

kubectl apply -l knative.dev/crd-install=true -f https://github.com/knative/net-istio/releases/download/knative-v1.11.1/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.11.1/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.11.1/net-istio.yaml
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "LoadBalancer", "externalIPs":["<server ip>"]}}'
kubectl label namespace default istio-injection=enabled
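To verify the Istio side, both the pods and the patched ingress gateway service can be checked:

kubectl get pods -n istio-system
kubectl get svc istio-ingressgateway -n istio-system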

I don't know why I cannot install Knative successfully on this k8s cluster, while on minikube it works correctly.

I tried to modify the activator pod's memory and CPU limits, but I'm not able to change those fields via Lens.
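Editing the pod itself can't stick anyway: it is owned by ReplicaSet/activator-5ccbd798b6, so any change to the pod spec is reverted. The limits have to be changed on the Deployment instead. A minimal sketch with kubectl rather than Lens (the new values are illustrative, not recommendations):

kubectl -n knative-serving set resources deployment activator \
  --requests=cpu=300m,memory=100Mi \
  --limits=cpu=1,memory=1Gi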

1 answer

Dmitry Mustache (best answer):

I upgraded my server to 6 CPUs and 6 GB of RAM, and that fixed the error.
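For anyone hitting the same thing: you can check whether the node is over-committed by comparing pod requests against the node's allocatable capacity (node name is yours; kubectl top requires metrics-server to be installed):

kubectl describe node <node-name> | grep -A 10 'Allocated resources'
kubectl top nodes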