Apache server runs with docker run but Kubernetes pod fails with CrashLoopBackOff

My application uses the apache2 web server. Due to restrictions in the Kubernetes cluster, I do not have root privileges inside the pod, so I have changed the default port of apache2 from 80 to 8080 to be able to run it as a non-root user.
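
For context, the port change was made roughly along these lines inside the image (a sketch only, assuming the stock Debian/Ubuntu apache2 config layout; the exact commands and file names in my Dockerfile may differ):

# Dockerfile excerpt: let apache2 listen on 8080 so a non-root user can bind the port
RUN sed -ri 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf \
 && sed -ri 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' /etc/apache2/sites-available/000-default.conf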

My problem is that once I build the Docker image and run it locally, it runs fine, but when I deploy it to the cluster with Kubernetes, it keeps failing with:

Action '-D FOREGROUND' failed.

resulting in CrashLoopBackOff.

So, basically, the apache2 server is not able to run in the pod as a non-root user, even though it runs fine locally with docker run.
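
Note that a plain docker run does not apply the pod's securityContext, so locally the container most likely ran as root. A closer local reproduction (a sketch, using the image tag and the UID/GID from the manifests below) would be something like:

docker run --rm --user 1000:3000 -p 8080:8080 image:id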

Any help is appreciated.

I am attaching my deployment, autoscaler, and service manifests for reference:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: &DeploymentName app
spec:
  replicas: 1
  selector:
    matchLabels: &appName
      app: *DeploymentName
  template:
    metadata:
      name: main
      labels:
        <<: *appName
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsGroup: 3000
      volumes:
        - name: var-lock
          emptyDir: {}
      containers:
        - name: *DeploymentName
          image: image:id
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /etc/apache2/conf-available
              name: var-lock
            - mountPath: /var/lock/apache2
              name: var-lock
            - mountPath: /var/log/apache2
              name: var-lock
            - mountPath: /mnt/log/apache2
              name: var-lock
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 180
            periodSeconds: 60
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 180
          imagePullPolicy: Always
          tty: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          envFrom:
            - configMapRef:
                name: *DeploymentName
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 1
              memory: 2Gi

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: &hpaName app
spec:
  maxReplicas: 1
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: *hpaName
  targetCPUUtilizationPercentage: 60

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: app
spec:
  selector:
    app: app
  ports:
    - protocol: TCP
      name: http-web-port
      port: 80
      targetPort: 8080
    - protocol: TCP
      name: https-web-port
      port: 443
      targetPort: 443

1 Answer

Answered by Fariya Rahmat:

CrashLoopBackOff is a common error in Kubernetes, indicating a pod that keeps crashing and being restarted in an endless loop.

The CrashLoopBackOff error can be caused by a variety of issues, including the following (a few commands to help narrow down the cause are sketched after the list):

  1. Insufficient resources - a lack of resources prevents the container from loading

  2. Locked file - a file was already locked by another container

  3. Locked database - the database is being used and locked by other pods

  4. Failed reference - a reference to scripts or binaries that are not present in the container

  5. Setup error - an issue with the init-container setup in Kubernetes

  6. Config loading error - the server cannot load the configuration file

  7. Misconfigurations - a general file system misconfiguration

  8. Connection issues - DNS or kube-dns is not able to connect to a third-party service

  9. Deploying failed services - an attempt to deploy services/applications that have already failed (e.g. due to a lack of access to other services)
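
To narrow down which of these applies, it usually helps to inspect the crashed container's logs and the pod events (a generic sketch; replace <pod-name> with your actual pod name):

kubectl logs <pod-name> --previous    # logs from the last crashed container
kubectl describe pod <pod-name>       # events, exit codes, probe failures
kubectl get pod <pod-name> -o yaml    # full status, including lastState.terminated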

To fix the Kubernetes CrashLoopBackOff error, refer to this link and also check out this Stack Overflow post for more information.