How to run a Sidekiq process for a Rails application on a Kubernetes cluster?


I run my Rails application in production on a Kubernetes cluster: one node for the Rails process and one node for Sidekiq (for cron jobs). When I enqueue delayed jobs from the Rails application, they never run, because the Sidekiq process is not running on the Rails node. What should I do? In my Dockerfile:

ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]

In docker-entrypoint.sh:

#!/bin/sh

set -e

if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

bundle exec rails s -b 0.0.0.0 -e production

Can I run multiple processes on one cluster, or send jobs to another node?

1 Answer

Answered by David Maze:

You have your entrypoint script hard-wired to only run the Rails server. A better approach would be to separate this setup from the actual command to run. If a Dockerfile has both an ENTRYPOINT and a CMD then the CMD is passed as arguments to the ENTRYPOINT, and you can combine this with the shell exec "$@" construct to replace the entrypoint script with the main container process.

In a Ruby context, you probably need to run most things under Bundler, and I'd fold that into the final line.

#!/bin/sh
# docker-entrypoint.sh
set -e

# Clean up a stale pid file, as before
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

# Replace this script with the container's command, run under Bundler
exec bundle exec "$@"

In the Dockerfile, you'd keep the ENTRYPOINT as you have it now, but also specify the default CMD to run.

ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
CMD ["rails", "s", "-b", "0.0.0.0", "-e", "production"]
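To see how the two pieces fit together, here's a quick shell sketch. The `fake_entrypoint` function is a hypothetical stand-in for docker-entrypoint.sh; where the real script does `exec bundle exec "$@"`, this one just echoes what it would run:

```shell
#!/bin/sh
# Stand-in for docker-entrypoint.sh: in the real script the last line
# is `exec bundle exec "$@"`, so whatever the container's CMD was
# becomes the arguments here.
fake_entrypoint() {
  echo "would run: bundle exec $*"
}

# With the default CMD from the Dockerfile:
fake_entrypoint rails s -b 0.0.0.0 -e production
# prints: would run: bundle exec rails s -b 0.0.0.0 -e production

# With the CMD overridden (e.g. by `docker run` or Kubernetes args:):
fake_entrypoint sidekiq
# prints: would run: bundle exec sidekiq
```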

Now, the benefit of doing this is that you can replace the CMD when you run the container. In plain Docker, you'd pass the replacement command after the image name in docker run:

docker run ... image-name sidekiq

The entrypoint wrapper still runs: it cleans up the Rails pid file, then runs sidekiq under Bundler instead of the Rails server.

Bringing this up to Kubernetes, you would have two separate Deployments, one for the Rails server and one for the Sidekiq worker. Somewhat confusingly, Kubernetes uses different names for the two parts of the command; command: overrides Dockerfile ENTRYPOINT, and args: overrides CMD. So for this setup you need to specify sidekiq as the args:, and leave the command: alone.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: app
          image: image-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  template:
    spec:
      containers:
        - name: worker
          image: image-name
          args:
            - sidekiq

Only the main application needs a matching Service (if you're using the Istio service mesh, it has different requirements), and you need to make sure the spec: { template: { metadata: { labels: } } } values disambiguate the two sets of Pods. Either or both Deployments can independently have a non-default replicas: setting and a matching HorizontalPodAutoscaler.
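For example, the labels and Service might look like the sketch below. The `app.kubernetes.io/component` values and port 3000 are assumptions, not from the original setup:

```yaml
# Hypothetical sketch: labels disambiguate the Rails Pods from the
# Sidekiq Pods, and only the Rails side gets a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: rails
  template:
    metadata:
      labels:
        app.kubernetes.io/component: rails
    spec:
      containers:
        - name: app
          image: image-name
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app.kubernetes.io/component: rails   # matches only the Rails Pods
  ports:
    - port: 80
      targetPort: 3000                   # assumed Rails port
```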