Kubernetes cluster: why does etcd shut down?


I have a problem. I set up a Kubernetes cluster on a VM (the specs are fine), but when I start it, it works for about one minute and then the apiserver becomes unreachable. After a few hours of debugging I realized that it is etcd that shuts down after one minute. If anyone has an idea of why it stops, please share it; I've been stuck on this for two days, so I'm exploring every avenue!
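
What I plan to check next, in case it points someone toward the cause (this assumes my kubeadm setup with containerd): whether etcd exits by itself or is killed from outside, and what the kubelet says at that exact moment, since the kubelet is what restarts static pods:

```
# Does the etcd container exit on its own, or is it killed from outside?
sudo crictl ps -a | grep etcd

# Follow the kubelet around the moment etcd dies; the kubelet restarts
# static pods, e.g. after a failed liveness probe:
sudo journalctl -u kubelet -f | grep -iE 'etcd|probe|killing'
```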

There is no firewall installed on this VM (Debian 11 Bullseye)! If this version of Linux ships a hidden firewall, please tell me!
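
For reference, this is how I verify there is no hidden firewall (Debian 11 defaults to the nftables backend, so I check both it and legacy iptables):

```
# No firewall is enabled by default on Debian 11, but rules can still
# exist via nftables or legacy iptables:
sudo nft list ruleset
sudo iptables -L -n -v

# And confirm the control-plane ports are listening while the cluster is up:
sudo ss -tlnp | grep -E '2379|2380|6443'
```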

If any information is missing, ask me! Thanks for reading and for the attention given to this problem :D

Etcd container logs

WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
{"level":"info","ts":"2022-07-26T22:41:12.770Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.247.136:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://192.168.247.136:2380","--initial-cluster=debian=https://192.168.247.136:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.247.136:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.247.136:2380","--name=debian","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2022-07-26T22:41:12.770Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"info","ts":"2022-07-26T22:41:12.770Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.247.136:2380"]}
{"level":"info","ts":"2022-07-26T22:41:12.770Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, cli                             ent-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-26T22:41:12.771Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.247.136:2379"]}
{"level":"info","ts":"2022-07-26T22:41:12.771Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"0452feec7","go-version":"go1.16.15","go-os":"linux","go-arch":"                             amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"debian","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":                             false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.247.136                             :2380"],"listen-peer-urls":["https://192.168.247.136:2380"],"advertise-client-urls":["https://192.168.247.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.247.136:2379"],"listen-me                             trics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial                             -corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-                             check-interval":"5s"}
{"level":"info","ts":"2022-07-26T22:41:12.773Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"1.372861ms"}
{"level":"info","ts":"2022-07-26T22:41:12.780Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2022-07-26T22:41:12.789Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"532b763a3016b","local-member-id":"f0a214efcbf3ac55","commit-index":1300}
{"level":"info","ts":"2022-07-26T22:41:12.789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 switched to configuration voters=()"}
{"level":"info","ts":"2022-07-26T22:41:12.789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 became follower at term 15"}
{"level":"info","ts":"2022-07-26T22:41:12.789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f0a214efcbf3ac55 [peers: [], term: 15, commit: 1300, applied: 0, lastindex: 1300, lastterm                             : 15]"}
{"level":"warn","ts":"2022-07-26T22:41:12.790Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-07-26T22:41:12.800Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1102}
{"level":"info","ts":"2022-07-26T22:41:12.801Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 G                             B"}
{"level":"info","ts":"2022-07-26T22:41:12.802Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"f0a214efcbf3ac55","timeout":"7s"}
{"level":"info","ts":"2022-07-26T22:41:12.803Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"f0a214efcbf3ac55"}
{"level":"info","ts":"2022-07-26T22:41:12.803Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f0a214efcbf3ac55","local-server-version":"3.5.3","cluster-version":"to_be_                             decided"}
{"level":"info","ts":"2022-07-26T22:41:12.803Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-07-26T22:41:12.806Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.ke                             y, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-26T22:41:12.806Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.247.136:2380"}
{"level":"info","ts":"2022-07-26T22:41:12.806Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0a214efcbf3ac55","initial-advertise-peer-urls":["https://192.168.247.                             136:2380"],"listen-peer-urls":["https://192.168.247.136:2380"],"advertise-client-urls":["https://192.168.247.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.247.136:2379"],"listen                             -metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-26T22:41:12.806Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.247.136:2380"}
{"level":"info","ts":"2022-07-26T22:41:12.806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 switched to configuration voters=(17339444535481314389)"}
{"level":"info","ts":"2022-07-26T22:41:12.807Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"532b763a3016b","local-member-id":"f0a214efcbf3ac55","added-peer-id":"f0a214efcbf3ac55"                             ,"added-peer-peer-urls":["https://192.168.247.136:2380"]}
{"level":"info","ts":"2022-07-26T22:41:12.807Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"532b763a3016b","local-member-id":"f0a214efcbf3ac55","cluster-version":"                             3.5"}
{"level":"info","ts":"2022-07-26T22:41:12.807Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-26T22:41:12.807Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-26T22:41:14.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 is starting a new election at term 15"}
{"level":"info","ts":"2022-07-26T22:41:14.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 became pre-candidate at term 15"}
{"level":"info","ts":"2022-07-26T22:41:14.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 received MsgPreVoteResp from f0a214efcbf3ac55 at term 15"}
{"level":"info","ts":"2022-07-26T22:41:14.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 became candidate at term 16"}
{"level":"info","ts":"2022-07-26T22:41:14.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 received MsgVoteResp from f0a214efcbf3ac55 at term 16"}
{"level":"info","ts":"2022-07-26T22:41:14.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0a214efcbf3ac55 became leader at term 16"}
{"level":"info","ts":"2022-07-26T22:41:14.493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0a214efcbf3ac55 elected leader f0a214efcbf3ac55 at term 16"}
{"level":"info","ts":"2022-07-26T22:41:14.503Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f0a214efcbf3ac55","local-member-attributes":"{N                             ame:debian ClientURLs:[https://192.168.247.136:2379]}","request-path":"/0/members/f0a214efcbf3ac55/attributes","cluster-id":"532b763a3016b","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-26T22:41:14.503Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-26T22:41:14.504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-26T22:41:14.505Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-26T22:41:14.506Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-26T22:41:14.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.247.136:2379"}
{"level":"info","ts":"2022-07-26T22:41:14.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-26T22:41:21.463Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-07-26T22:41:21.463Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"debian","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.247.136:2380"],"ad                             vertise-client-urls":["https://192.168.247.136:2379"]}
WARNING: 2022/07/26 22:41:21 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp                              127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/07/26 22:41:21 [core] grpc: addrConn.createTransport failed to connect to {192.168.247.136:2379 192.168.247.136:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while diali                             ng dial tcp 192.168.247.136:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-07-26T22:41:21.469Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f0a214efcbf3ac55","current-leader                             -member-id":"f0a214efcbf3ac55"}
{"level":"info","ts":"2022-07-26T22:41:21.471Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.247.136:2380"}
{"level":"info","ts":"2022-07-26T22:41:21.474Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.247.136:2380"}
{"level":"info","ts":"2022-07-26T22:41:21.474Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"debian","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.247.136:2380"],"adv                             ertise-client-urls":["https://192.168.247.136:2379"]}

Apiserver logs

WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
I0727 15:02:59.827907       1 server.go:558] external host was not specified, using 192.168.247.136
I0727 15:02:59.829798       1 server.go:158] Version: v1.24.3
I0727 15:02:59.829842       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0727 15:03:00.941962       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0727 15:03:00.942665       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0727 15:03:00.942686       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0727 15:03:00.944169       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0727 15:03:00.944190       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0727 15:03:00.948187       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:01.944180       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:01.949432       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:02.945356       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:03.615633       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:04.623473       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:05.849105       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:07.325303       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:09.227793       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:11.099972       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:15.788729       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0727 15:03:16.531054       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0727 15:03:20.950165       1 run.go:74] "command failed" err="context deadline exceeded"
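
So the apiserver fails with "command failed" err="context deadline exceeded" simply because nothing answers on 127.0.0.1:2379 once etcd is gone. To confirm that ordering (etcd dies first, the apiserver follows), a health probe against etcd; this assumes etcdctl is installed on the host, and the certificate paths are the ones from the etcd logs above:

```
# With etcd down the apiserver cannot start; verify etcd directly:
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```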

systemctl status kubelet

     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Tue 2022-07-26 23:34:32 CEST; 17h ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 3035 (kubelet)
      Tasks: 16 (limit: 2285)
     Memory: 69.5M
        CPU: 54min 30.388s
     CGroup: /system.slice/kubelet.service
             └─3035 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=k8s.gcr.io/pause:3.7

juil. 27 16:53:04 debian kubelet[3035]: E0727 16:53:04.693716    3035 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
juil. 27 16:53:04 debian kubelet[3035]: E0727 16:53:04.853359    3035 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.247.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/debian?timeout=10s": dial tcp 192.168.247.136:6443: connect: connection refused
juil. 27 16:53:07 debian kubelet[3035]: I0727 16:53:07.033077    3035 scope.go:110] "RemoveContainer" containerID="6f65d156c6136ab6a2e63852144ebd61218e1f07ba7533dee626de6d07e224c5"
juil. 27 16:53:07 debian kubelet[3035]: E0727 16:53:07.033997    3035 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-debian_kube-system(138153a41e76259b600c4262853e5e9e)\"" pod="kube-system/kube-controller-manager-debian" podUID=138153a41e76259b600c4262853e5e9e
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.932320    3035 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"debian\": Get \"https://192.168.247.136:6443/api/v1/nodes/debian?resourceVersion=0&timeout=10s\": dial tcp 192.168.247.136:6443: connect: connection refused"
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.932985    3035 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"debian\": Get \"https://192.168.247.136:6443/api/v1/nodes/debian?timeout=10s\": dial tcp 192.168.247.136:6443: connect: connection refused"
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.934204    3035 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"debian\": Get \"https://192.168.247.136:6443/api/v1/nodes/debian?timeout=10s\": dial tcp 192.168.247.136:6443: connect: connection refused"
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.934813    3035 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"debian\": Get \"https://192.168.247.136:6443/api/v1/nodes/debian?timeout=10s\": dial tcp 192.168.247.136:6443: connect: connection refused"
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.937656    3035 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"debian\": Get \"https://192.168.247.136:6443/api/v1/nodes/debian?timeout=10s\": dial tcp 192.168.247.136:6443: connect: connection refused"
juil. 27 16:53:08 debian kubelet[3035]: E0727 16:53:08.937719    3035 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
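
Note that the kubelet itself stays up the whole time; it is the control-plane containers (etcd, kube-controller-manager, ...) that keep getting killed and backed off. One common cause of exactly this pattern on Debian 11 with containerd is a cgroup driver mismatch: kubeadm configures the kubelet for the systemd driver while containerd defaults to cgroupfs. This is only a guess from the symptoms, but it is cheap to check (if the SystemdCgroup key is absent entirely, generate a default config with `containerd config default` first):

```
# Both should agree: kubelet on "systemd", containerd with SystemdCgroup = true:
grep cgroupDriver /var/lib/kubelet/config.yaml
grep SystemdCgroup /etc/containerd/config.toml

# If containerd says "SystemdCgroup = false", flip it and restart
# (back up /etc/containerd/config.toml first):
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd kubelet
```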


There are 0 answers