AKS - Pods created by HPA trigger are getting terminated immediately after they are created


When we had a look into the events in AKS, we observed the below errors for all the pods which were created and terminated:

2m47s       Warning   FailedMount         pod/app-fd6c6b8d9-ssr2t                         Unable to attach or mount volumes: unmounted volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc], unattached volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc]: timed out waiting for the condition

We already have 2 replicas running for the application, so we don't think that the error is due to the AccessModes of the volumes.

Below is the HPA config:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: app-cpu-hpa
  namespace: namespace-dev
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageValue: 500m

Below is the deployment config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
    group: app
    obs: appd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/app: runtime/default
      labels:
        app: app
        group: app
        obs: appd
    spec:  
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true  
            runAsNonRoot: true
            runAsUser: 1000
            runAsGroup: 2000
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
           limits:
             cpu: {{ .Values.app.limits.cpu }}
             memory: {{ .Values.app.limits.memory }}
           requests:
             cpu: {{ .Values.app.requests.cpu }}
             memory: {{ .Values.app.requests.memory }}
          env:
          - name: LOG_DIR_PATH
            value: /opt/apps/
          volumeMounts:
          - name: log-volume
            mountPath: /opt/apps/app/logs
          - name: config-volume
            mountPath: /script/start.sh
            subPath: start.sh              
          - name: log4j2
            mountPath: /opt/appdynamics-java/ver21.9.0.33073/conf/logging/log4j2.xml
            subPath: log4j2.xml
          - name: secrets-app-inline
            mountPath: "/mnt/secrets-app"
            readOnly: true
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /actuator/info
              port: {{ .Values.metrics.port }}
              scheme: "HTTP"
              httpHeaders:
              - name: Authorization
                value: "Basic XXX50aXXXXXX=="
              - name: cache-control
                value: "no-cache"   
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            initialDelaySeconds: 60  
          livenessProbe: 
            httpGet: 
              path: /actuator/info
              port: {{ .Values.metrics.port }}
              scheme: "HTTP"
              httpHeaders:
              - name: Authorization
                value: "Basic XXX50aXXXXXX=="
              - name: cache-control
                value: "no-cache"    
            initialDelaySeconds: 300 
            periodSeconds: 5
            timeoutSeconds: 1 
            successThreshold: 1 
            failureThreshold: 3                                   
      
      volumes:
      - name: log-volume
        persistentVolumeClaim:
          claimName: {{ .Values.apppvc.name }}
      - name: config-volume
        configMap:
          name: {{ .Values.configmap.name }}-configmap          
          defaultMode: 0755    
      - name: secrets-app-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "app-kv-secret"
          nodePublishSecretRef:
            name: secrets-app-creds
      - name: log4j2
        configMap:
          name: log4j2          
          defaultMode: 0755    
      
      restartPolicy: Always
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}

Can someone please let me know where the config might be going wrong?

There are 0 answers
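
An added example (not part of the original question, and not AKS or kubelet code): it parses the FailedMount message quoted in the question, maps each listed volume to its source in the posted Deployment, and evaluates the documented PVC access-mode semantics for one more replica. The node names and access modes passed to `pvc_mount_allowed` are assumptions, because the question gives neither.

```python
import re

# Message text copied from the FailedMount event quoted in the question.
EVENT = (
    "Unable to attach or mount volumes: unmounted volumes=[log-volume config-volume "
    "log4j2 secrets-app-inline kube-api-access-z49xc], unattached volumes=[log-volume "
    "config-volume log4j2 secrets-app-inline kube-api-access-z49xc]: "
    "timed out waiting for the condition"
)

# Volume name -> source kind, copied from the Deployment's `volumes:` list.
DECLARED = {"log-volume": "persistentVolumeClaim", "config-volume": "configMap",
            "secrets-app-inline": "csi", "log4j2": "configMap"}

def parse_failed_mount(message):
    """Return {'unmounted': [...], 'unattached': [...]} from a FailedMount message."""
    out = {}
    for key in ("unmounted", "unattached"):
        m = re.search(rf"{key} volumes=\[([^\]]*)\]", message)
        out[key] = m.group(1).split() if m else []
    return out

def classify(names, declared):
    """Map each volume named in the event to its source kind in the manifest."""
    kinds = {}
    for name in names:
        if name in declared:
            kinds[name] = declared[name]
        elif name.startswith("kube-api-access-"):
            # Service-account token volume injected by Kubernetes, so not in the manifest.
            kinds[name] = "projected (injected)"
        else:
            kinds[name] = "unknown"
    return kinds

def pvc_mount_allowed(access_modes, nodes_in_use, new_pod_node):
    """Documented access-mode semantics for one more pod using the same claim.
    Actual enforcement depends on the volume driver backing the claim."""
    modes = set(access_modes)
    if modes & {"ReadWriteMany", "ReadOnlyMany"}:
        return True                      # many nodes may mount the volume
    if "ReadWriteOncePod" in modes:
        return not nodes_in_use          # a single pod in the whole cluster
    if "ReadWriteOnce" in modes:         # a single node, any number of pods on it
        return not nodes_in_use or set(nodes_in_use) == {new_pod_node}
    return False
```

With `ReadWriteOnce`, two replicas that share a node say nothing about a third replica scheduled on a different node, which is why the node of each existing and new pod is an input here.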