Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions


I installed one Kubernetes master and two Kubernetes workers on-premises.

Then I installed MetalLB as the LoadBalancer using the commands below:

$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
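
(Not shown above, but presumably required: kube-proxy has to reload the edited ConfigMap. On a kubeadm-based cluster it runs as a DaemonSet named kube-proxy, so a restart like this should apply the change:)

kubectl -n kube-system rollout restart daemonset kube-proxy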

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220

kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
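
(As a sanity check before creating any Service, confirming that the MetalLB controller and speaker pods are running should look roughly like this:)

kubectl get pods -n metallb-system -o wide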

I created my YAML files as below:

myapp-tst-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN

myapp-tst-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None

myapp-tst-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules: 
    - http:
        paths:
          - path: /
            backend:
              serviceName: myapp-tst-service
              servicePort: myapp-tst-port
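
(Note: on newer clusters the extensions/v1beta1 Ingress API is removed; I assume the equivalent manifest under networking.k8s.io/v1 would look roughly like this:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  # same nginx affinity annotations as above
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-tst-service
            port:
              name: myapp-tst-port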

I ran kubectl apply -f for all three files, and this is the result:

kubectl get all -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk   1/1     Running   0          4m53s   10.36.0.1   bcc-tst-docker02   <none>           <none>
pod/myapp-tst-deployment-54474cd74-pwlr8   1/1     Running   0          4m53s   10.44.0.2   bca-tst-docker01   <none>           <none>

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE     SELECTOR
service/myapp-tst-service   LoadBalancer   10.110.184.237   10.100.170.15   80:30080/TCP   4m48s   app=myapp-tst,tier=backend
service/kubernetes        ClusterIP      10.96.0.1        <none>          443/TCP        6d22h   <none>

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                  SELECTOR
deployment.apps/myapp-tst-deployment   2/2     2            2           4m53s   myapp-tst      mferraramiki/myapp-test   app=myapp-tst

NAME                                           DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                  SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74   2         2         2       4m53s   myapp-tst      myapp/myapp-test   app=myapp-tst,pod-template-hash=54474cd74

But when I connect using the LB external IP (10.100.170.15), the request is routed to one pod; if I refresh or open a new tab on the same URL, the request is routed to the other pod.

I need a user who opens the URL in the browser to stay connected to the same pod for the whole session, not be switched to another pod.

How can I solve this, if it is possible? On my VMs I resolved this with sticky sessions; how can I enable them on the LB or in the Kubernetes components?


There is 1 answer

Sándor (best answer):

In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".

You should try to set it to "ClientIP".

From page https://kubernetes.io/docs/concepts/services-networking/service/ :

"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."