I have a problem with my Kubernetes cluster on Fedora servers: one master and two nodes. The configuration of etcd, flannel, Docker and Kubernetes works fine.
I run
kubectl run busybox --image=busybox --port 8080 \
-- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
and this works fine, as does exposing it:
kubectl expose deployment busybox --type=NodePort
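For reference, what the busybox loop above does can be sketched locally in Python (illustration only; the cluster runs the nc loop, not this script):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostnameHandler(BaseHTTPRequestHandler):
    """Answer every GET with this machine's hostname, like the nc loop."""
    def do_GET(self):
        body = socket.gethostname().encode() + b"\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def serve_hostname(port=0):
    """Start the server on a background thread; return (server, bound port)."""
    srv = HTTPServer(("127.0.0.1", port), HostnameHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv, srv.server_address[1]
```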
Now I add an autoscaler:

kubectl autoscale deployment busybox --min=1 --max=4 --cpu-percent=20
deployment "busybox" autoscaled
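One thing to check besides the metrics pipeline: a --cpu-percent target is a percentage of the container's CPU *request*, and kubectl run as used above sets no request, which by itself leaves the target at <unknown> even once metrics-server works. A strategic-merge patch like the following could add one (the container name busybox and the 100m value are assumptions, not taken from this cluster):

```yaml
# cpu-request.yaml -- sketch; gives the HPA's percentage target
# a CPU request to compute against.
spec:
  template:
    spec:
      containers:
      - name: busybox
        resources:
          requests:
            cpu: 100m
```

applied with something like kubectl patch deployment busybox --patch "$(cat cpu-request.yaml)".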
But when I describe the HPA, the metrics show <unknown>:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
busybox Deployment/busybox <unknown>/20% 1 4 1 1h
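For context, the replica count the controller is failing to compute here follows the documented HPA rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A minimal sketch:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float) -> int:
    """Documented HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# e.g. 1 replica at 60% CPU against the 20% target above scales to 3
```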
I tried https://github.com/kubernetes-incubator/metrics-server:
git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl create -f metrics-server/deploy/1.8+/
but the metrics-server pod status is CrashLoopBackOff:
kubectl logs metrics-server-6fbfb84cdd-5gkth --namespace=kube-system
I0618 18:23:36.725579 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0618 18:23:36.741334 1 heapster.go:72] Metrics Server version v0.2.1
F0618 18:23:36.752641 1 heapster.go:112] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
and:
kubectl describe hpa busybox
Name: busybox
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 18 Jun 2018 12:55:28 -0400
Reference: Deployment/busybox
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 20%
Min replicas: 1
Max replicas: 4
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 1h (x13 over 1h) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 49m (x91 over 1h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 44m (x9 over 48m) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas 33m (x13 over 39m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 4m (x71 over 39m) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Note: I had deleted ServiceAccount from KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver.
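That matches the token error above: with the ServiceAccount admission plugin disabled, pods are not given the /var/run/secrets/kubernetes.io/serviceaccount/token mount that metrics-server reads. A sketch of the env-file fragment with ServiceAccount kept in the list (list contents are the usual packaged default, shown as an assumption; the apiserver also needs --service-account-key-file, and the controller-manager --service-account-private-key-file, for tokens to be issued):

```
# /etc/kubernetes/apiserver (fragment) -- keep ServiceAccount in the list
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
```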
On Fedora 28!
Config Files
Service Files
Generate the SSL config files
nano openssl.cnf
nano worker-openssl.cnf
And generate the cert files
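The original config and cert-generation snippets are missing from this post; a self-contained sketch of what this step typically looks like (CoreOS-style recipe; every CN, SAN and IP below is a placeholder, not taken from this cluster, and the worker cert follows the same pattern using worker-openssl.cnf with the worker IP as SAN):

```shell
set -e

# Placeholder openssl.cnf -- SANs the apiserver cert must carry.
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.254.0.1
IP.2 = 192.168.0.10
EOF

# Cluster CA, then the apiserver key/cert signed by it.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
  -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out apiserver.pem -days 365 \
  -extensions v3_req -extfile openssl.cnf
```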
Now
AND
AND
References
Comment by floreks: Cluster configuration
Thank you, Sebastian Florek and Nick Rak!