Cron Jobs in Kubernetes - connect to existing Pod, execute script


I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule:

  1. Connect to an existing Pod
  2. Execute a script
  3. Disconnect

I have alternative methods of doing this, but they don't feel right.

  1. Schedule a cron task for: kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script

  2. Create one deployment that has a "Cron Pod" which also houses the application, and many "Non Cron Pods" which are just the application. The Cron Pod would use a different image (one with cron tasks scheduled).

I would prefer to use the Kubernetes ScheduledJobs if possible to prevent the same Job running multiple times at once and also because it strikes me as the more appropriate way of doing it.

Is there a way to do this by ScheduledJobs / CronJobs?

http://kubernetes.io/docs/user-guide/cron-jobs/


There are 6 answers

Answer by Sudharsan Punniyakotti (3 votes)

This one should help:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl exec <podname> -- sh script.sh
          restartPolicy: OnFailure
Answer by Simon I (1 vote)

As far as I'm aware there is no "official" way to do this the way you want, and I believe that is by design. Pods are supposed to be ephemeral and horizontally scalable, and Jobs are designed to exit. Having a cron job "attach" to an existing pod doesn't fit that model. The scheduler would have no idea whether the job completed.

Instead, a Job can bring up an instance of your application specifically for running the Job and then take it down once the Job is complete. To do this you can use the same image for the Job as for your Deployment, but use a different "entrypoint" by setting command:.

If the job needs access to data created by your application, that data will need to be persisted outside the application/Pod. You could do this a few ways, but the obvious ones are a database or a persistent volume. For example, using a database would look something like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP
spec:
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          env:
            - name: DB_HOST
              value: "127.0.0.1"
            - name: DB_DATABASE
              value: "app_db"

And a Job that connects to the same database, but with a different "entrypoint":

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        env:
          - name: DB_HOST
            value: "127.0.0.1"
          - name: DB_DATABASE
            value: "app_db"

Or the persistent volume approach would look something like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP
spec:
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          volumeMounts:
          - mountPath: "/var/www/html"
            name: APP-VOLUME
      volumes:
        - name:  APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: APP-VOLUME
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: NFS_SERVER # placeholder; an NFS volume requires the server address
    path: /app

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: APP-CLAIM
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      service: app

With a job like this, attaching to the same volume:

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        volumeMounts:
        - mountPath: "/var/www/html"
          name: APP-VOLUME
      volumes:
        - name: APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM
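
Either Job above can also be put on a schedule by wrapping the same pod template in a CronJob, which additionally gives the concurrency control asked about in the question via concurrencyPolicy: Forbid. A minimal sketch for the database variant, using the batch/v1beta1 API shown elsewhere in this thread and an hourly schedule as a placeholder:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: APP-CRON-JOB
spec:
  schedule: "0 * * * *"   # placeholder schedule
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: THAT
        spec:
          containers:
          - image: APP:IMAGE
            name: APP-JOB
            command:
            - app-job
            env:
              - name: DB_HOST
                value: "127.0.0.1"
              - name: DB_DATABASE
                value: "app_db"
          restartPolicy: OnFailure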
Answer by Juampy NR (0 votes)

I managed to do this by creating a custom image with doctl (DigitalOcean's command-line interface) and kubectl. The CronJob object uses these two tools to download the cluster configuration and then run a command against a container.

Here is a sample CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: drupal-cron
              image: juampynr/digital-ocean-cronjob:latest
              env:
                - name: DIGITALOCEAN_ACCESS_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: api
                      key: key
              command: ["/bin/bash","-c"]
              args:
                - doctl kubernetes cluster kubeconfig save drupster;
                  POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}');
                  kubectl exec $POD_NAME -c drupal -- vendor/bin/drush core:cron;
          restartPolicy: OnFailure

Here is the Docker image that the CronJob uses: https://hub.docker.com/repository/docker/juampynr/digital-ocean-cronjob
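
For reference, the DIGITALOCEAN_ACCESS_TOKEN above is read from a Secret named api with a key named key. A minimal sketch of such a Secret (the token value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: api
type: Opaque
stringData:
  key: YOUR_DIGITALOCEAN_API_TOKEN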

If you are not using DigitalOcean, figure out how to download the cluster configuration so kubectl can use it. For example, with Google Cloud you would install gcloud and use it to fetch the cluster credentials instead.

Here is the project repository where I implemented this: https://github.com/juampynr/drupal8-do.

Answer by Raman (0 votes)

Create a scheduled pod that uses the Kubernetes API to run the command you want on the target pods, via the exec function. The pod image should contain the client libraries to access the API; many of these are available, or you can build your own.

For example, here is a solution using the Python client that execs to each ZooKeeper pod and runs a database maintenance command:

#!/usr/bin/env python

from kubernetes import config
from kubernetes.client import Configuration
from kubernetes.client.apis import core_v1_api
from kubernetes.stream import stream
import urllib3

# Use the service account credentials mounted into the pod.
config.load_incluster_config()

configuration = Configuration()
configuration.verify_ssl = False
configuration.assert_hostname = False
urllib3.disable_warnings()
Configuration.set_default(configuration)

api = core_v1_api.CoreV1Api()
label_selector = 'app=zk,tier=backend'
namespace = 'default'

# Find all ZooKeeper pods matching the label selector.
resp = api.list_namespaced_pod(namespace=namespace,
                               label_selector=label_selector)

for x in resp.items:
  # For StatefulSet pods (like ZooKeeper's), spec.hostname matches the pod name.
  name = x.spec.hostname

  # Verify the pod exists before exec'ing into it.
  resp = api.read_namespaced_pod(name=name,
                                 namespace=namespace)

  exec_command = [
    '/bin/sh',
    '-c',
    'opt/zookeeper/bin/zkCleanup.sh -n 10'
  ]

  # Run the cleanup command inside the pod and capture its output.
  resp = stream(api.connect_get_namespaced_pod_exec, name, namespace,
                command=exec_command,
                stderr=True, stdin=False,
                stdout=True, tty=False)

  print("============================ Cleanup %s: ============================\n%s\n"
        % (name, resp if resp else "<no output>"))

and the associated Dockerfile:

FROM ubuntu:18.04

ADD ./cleanupZk.py /

RUN apt-get update \
  && apt-get install -y python-pip \
  && pip install kubernetes \
  && chmod +x /cleanupZk.py

CMD /cleanupZk.py

Note that if you have an RBAC-enabled cluster, you may need to create a service account and appropriate roles to make this API call possible. A role such as the following is sufficient to list pods and to run exec, which is what the example script above requires:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-list-exec
  namespace: default
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods/exec"]
    verbs: ["create", "get"]

An example of the associated service account, role binding, and CronJob:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: zk-maint
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zk-maint-pod-list-exec
  namespace: default
subjects:
- kind: ServiceAccount
  name: zk-maint
  namespace: default
roleRef:
  kind: Role
  name: pod-list-exec
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: zk-maint
  namespace: default
  labels:
    app: zk-maint
    tier: jobs
spec:
  schedule: "45 3 * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: zk-maint
            image: myorg/zkmaint:latest
          serviceAccountName: zk-maint
          restartPolicy: OnFailure
          imagePullSecrets:
          - name: azure-container-registry
Answer by Brett Wagner (1 vote)

This seems like an anti-pattern. Why can't you just run your worker pod as a job pod?

Regardless, you seem pretty convinced you need to do this. Here is what I would do.

Take your worker pod and wrap your shell execution in a simple webservice; it's 10 minutes of work with just about any language. Expose the port and put a Service in front of that worker (or workers). Then your job pods can simply curl <service-name>.<namespace>.svc.cluster.local:<port>/<endpoint> (unless you've futzed with DNS).
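
A rough sketch of that layout, with everything hypothetical: it assumes the worker pods carry the label app: worker and expose the wrapped script as an HTTP endpoint /run-script on port 8080, and it uses the public curlimages/curl image for the trigger:

apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  selector:
    app: worker
  ports:
  - port: 8080
    targetPort: 8080

---

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: trigger-worker
spec:
  schedule: "*/30 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: trigger
            image: curlimages/curl
            command:
            - curl
            - -fsS
            - http://worker.default.svc.cluster.local:8080/run-script
          restartPolicy: OnFailure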

Answer by macetw (0 votes)

It sounds as though you might want to run scheduled work within the pod itself rather than doing this at the Kubernetes level. I would approach this as a cron job within the container, using a traditional Linux crontab. Consider:

kind: Pod
apiVersion: v1
metadata:
  name: shell
spec:
  containers:
  - name: shell
    image: "nicolaka/netshoot"
    command:
    - /bin/sh
    - -c
    - |
      # install the crontab, start the cron daemon, then keep the container alive
      echo "0 */5 * * * /opt/whatever/bin/do-the-thing" | crontab -
      crond
      sleep infinity

If you want to track logs from those processes, that will require a fluentd-type mechanism to track those log files.
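
A lighter-weight alternative to a full fluentd setup is a sidecar container that tails the cron log to stdout, so kubectl logs can pick it up. A sketch, assuming the crontab line redirects its output to a file on a shared emptyDir volume (all paths and names here are placeholders):

kind: Pod
apiVersion: v1
metadata:
  name: shell
spec:
  volumes:
  - name: cron-logs
    emptyDir: {}
  containers:
  - name: shell
    image: "nicolaka/netshoot"
    command:
    - /bin/sh
    - -c
    - |
      # redirect the cron job's output to a file on the shared volume
      echo "0 */5 * * * /opt/whatever/bin/do-the-thing >> /var/log/cron/cron.log 2>&1" | crontab -
      crond
      sleep infinity
    volumeMounts:
    - name: cron-logs
      mountPath: /var/log/cron
  - name: log-tail
    image: busybox
    command:
    - /bin/sh
    - -c
    - touch /var/log/cron/cron.log; tail -F /var/log/cron/cron.log
    volumeMounts:
    - name: cron-logs
      mountPath: /var/log/cron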