Kubernetes pod marked as OOMKilled


My Kubernetes pod is getting terminated and marked as OOMKilled. Below is my CronJob YAML file:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-cron
spec:
  schedule: "30 2 1 * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          containers:
            - name: test-container
              image: <image>
              resources:
                limits:
                  memory: 10240Mi
                  cpu: 4000m
                  ephemeral-storage: 2Gi
                requests:
                  memory: 10240Mi
                  cpu: 4000m
                  ephemeral-storage: 2Gi
              args:
                - java
                - -cp
                - /jars/*
                - -Xmx9g
                - -Xms9g
                - -XX:+UnlockCommercialFeatures
                - -XX:+FlightRecorder
                - -Dcom.sun.management.jmxremote
                - -Dcom.sun.management.jmxremote.port=9002
                - -Dcom.sun.management.jmxremote.authenticate=false
                - -Dcom.sun.management.jmxremote.ssl=false
                - com.test.app.TestApplication
          restartPolicy: Never

I am not getting an OutOfMemoryError in my Java application. One possible reason is that the pod is using more memory than the limit specified in the YAML. But how is that possible? Xmx is set to 9 GB, and if heap usage tried to go above 9 GB my application should throw an OutOfMemoryError.
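For reference, one way I could check how much memory the JVM uses outside the heap is to enable Native Memory Tracking and query it with jcmd. This is only a sketch of the extra flag; the container name and PID 1 are assumptions on my side, and the other flags are omitted for brevity:

# Sketch: same args list with one extra flag to expose native (non-heap)
# memory usage; enabling NMT adds a small runtime overhead.
args:
  - java
  - -XX:NativeMemoryTracking=summary
  - -cp
  - /jars/*
  - -Xmx9g
  - -Xms9g
  - com.test.app.TestApplication
# While the job is running (assuming jcmd is available in the image and
# the JVM runs as PID 1 inside the container):
#   kubectl exec <pod-name> -c test-container -- jcmd 1 VM.native_memory summary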

One thing I tried was increasing the pod memory request and limit to 15 GB, so that there is now a large gap between Xmx and the pod memory request/limit (the changed resources block is shown below). This time the pod ran successfully. Why did that work?
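The only change compared with the manifest above was the resources section, roughly as follows (15 GB written as 15360Mi is my approximation):

resources:
  limits:
    memory: 15360Mi        # previously 10240Mi
    cpu: 4000m
    ephemeral-storage: 2Gi
  requests:
    memory: 15360Mi        # previously 10240Mi
    cpu: 4000m
    ephemeral-storage: 2Gi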
