In an OpenShift environment (Kubernetes v1.18.3+47c0e71) I am trying to run a very basic container which will contain:
- Alpine (latest version)
- JDK 1.8
- JMeter 5.3

I just want it to boot and run in a container, so that I can connect to it and run the JMeter CLI from a terminal.
I have gotten this to work perfectly in my local Docker distribution. This is the Dockerfile content:
```dockerfile
FROM alpine:latest

ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz

USER root

ARG TZ="Europe/Amsterdam"

RUN apk update \
    && apk upgrade \
    && apk add ca-certificates \
    && update-ca-certificates \
    && apk add --update openjdk8-jre tzdata curl unzip bash \
    && apk add --no-cache nss \
    && rm -rf /var/cache/apk/ \
    && mkdir -p /tmp/dependencies \
    && curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
    && mkdir -p /opt \
    && tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
    && rm -rf /tmp/dependencies

# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN

WORKDIR ${JMETER_HOME}
```
For some reason, when I push that exact image to a private Docker registry and configure a Pod with it, it does not work.
This is the Deployment configuration (YAML) file, which is very basic as well:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter
  namespace: myNamespace
  labels:
    app: jmeter
    group: myGroup
spec:
  selector:
    matchLabels:
      app: jmeter
  replicas: 1
  template:
    metadata:
      labels:
        app: jmeter
    spec:
      containers:
        - name: jmeter
          image: myprivateregistry.azurecr.io/jmeter:dev
          resources:
            limits:
              cpu: 100m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 500Mi
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrysecret
```
Unfortunately, I am not getting any logs from the Pod.
A screenshot of the Pod events (image omitted) shows the container repeatedly terminating and restarting.
I am also unable to access the terminal of the container.
Any idea on:
- how to get further logs?
- what is going on?
On your local machine, you are likely using

```
docker run -it <my_container_image>
```

or similar. The `-it` option runs an interactive shell in your container without you specifying a `CMD`, and it keeps that shell running as the primary process started in your container. So by using this command, you are basically already specifying a command.

Kubernetes expects that the container image contains a process that is run on start (`CMD`) and that will run as long as the container is alive (for example a webserver).

In your case, Kubernetes starts the container, but you are not specifying what should happen when the container image is started. This leads to the container terminating immediately, which is what you can see in the events above. Because you are using a `Deployment`, the failing Pod is then restarted again and again.
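To dig for further logs on a crash-looping Pod like this, two standard commands are useful (the pod name placeholder is illustrative):

```
# Show the Pod's events and the reason for the last termination
oc describe pod <name_of_the_pod>

# Show the logs of the previous, already-terminated container instance
oc logs <name_of_the_pod> --previous
```

In your case there is simply nothing to log, because no process ever produced output before the container exited.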
A possible workaround to this is to run the `sleep` command in your container on startup by specifying a `command` in your Pod, like so (see the Kubernetes documentation):
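A minimal sketch of such an override, reusing the container definition from your Deployment above; only the `command` line is new:

```yaml
spec:
  template:
    spec:
      containers:
        - name: jmeter
          image: myprivateregistry.azurecr.io/jmeter:dev
          # Keep the container alive so it can be used interactively
          command: ["/bin/sleep", "infinite"]
```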
This will start the Pod and immediately run the `/bin/sleep infinite` command, so the primary process is this `sleep` process, which will never terminate. Your container will now run indefinitely. Now you can use `oc rsh <name_of_the_pod>` to connect to the container and run anything you would like interactively (for example `jmeter`).
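For example, assuming a test plan `test.jmx` has already been copied into the pod (the file names here are illustrative):

```
# Open a remote shell in the running container
oc rsh <name_of_the_pod>

# Inside the container: run JMeter in non-GUI (CLI) mode
jmeter -n -t test.jmx -l results.jtl
```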