Connecting to a GKE pod running Postgres with the Postico 2 client

I want to connect to a Postgres instance that is running in a pod in GKE.

I think one way to achieve this is with kubectl port forwarding.

Locally I have Docker Desktop, and when I apply the YAML files I am able to connect to the database. The YAML files I am using in GKE are almost identical:

secrets.yaml

    apiVersion: v1
    kind: Secret
    metadata:
      namespace: staging
      name: postgres-secrets
    type: Opaque
    data:
      MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
      MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
      MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
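
The values under data must be base64-encoded. For reference, this is one way to produce them (the literals below are illustrative placeholders, not the real credentials; only myappuserdb matches the role named in the error further down):

    echo -n 'myappdb' | base64        # MYAPPAPI_DATABASE_NAME
    echo -n 'myappuserdb' | base64    # MYAPPAPI_DATABASE_USERNAME
    echo -n 's3cr3t' | base64         # MYAPPAPI_DATABASE_PASSWORD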

pv.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      namespace: staging
      name: db-data-pv
      labels:
        type: local
    spec:
      storageClassName: generic
      capacity: 
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: "/var/lib/postgresql/data"

pvc.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      namespace: staging
      name: db-data-pvc
    spec:
      storageClassName: generic
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 500Mi
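
To confirm the claim binds to the volume before the deployment uses it, these checks should be enough:

    kubectl get pv db-data-pv
    kubectl get pvc db-data-pvc -n staging   # STATUS should be Bound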

deployment.yaml

    # Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: staging
      labels:
        app: postgres-db
      name: postgres-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: postgres-db
      template:
        metadata:
          labels:
            app: postgres-db
        spec:
          containers:
            - name: postgres-db
              image: postgres:12.4
              ports:
                - containerPort: 5432
              volumeMounts:
                - mountPath: /var/lib/postgresql/data
                  name: postgres-db
              env:
                - name: POSTGRES_USER
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secrets
                      key: MYAPPAPI_DATABASE_USERNAME

                - name: POSTGRES_DB
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secrets
                      key: MYAPPAPI_DATABASE_NAME

                - name: POSTGRES_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secrets
                      key: MYAPPAPI_DATABASE_PASSWORD
          volumes:
            - name: postgres-db
              persistentVolumeClaim:
                claimName: db-data-pvc
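
One way to check that the secret values actually reach the container (the pod name here is a placeholder; the real one comes from kubectl get pods -n staging):

    kubectl exec -n staging postgres-db-podname -- printenv POSTGRES_USER POSTGRES_DB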

svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      namespace: staging
      labels:
        app: postgres-db
      name: postgresdb-service
    spec:
      type: ClusterIP
      selector:
        app: postgres-db
      ports:
        - port: 5432

Everything seems to be working.

Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:

    FATAL: role "myappuserdb" does not exist
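
To see which roles actually exist in the running instance, one option is to run psql inside the pod as the user the container was started with (a sketch; the pod name is a placeholder):

    kubectl exec -it -n staging postgres-db-podname -- sh -c 'psql -U "$POSTGRES_USER" -c "\du"'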


UPDATE 1

This is the env section of the deployment YAML as shown in GKE:

    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_NAME
              name: postgres-secrets
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_USERNAME
              name: postgres-secrets
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_PASSWORD
              name: postgres-secrets

UPDATE 2

I will explain what happened and how I solved it.

The first time I applied the files (kubectl apply -f k8s/), the environment variable POSTGRES_USER in the deployment referenced the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.

After that first time, every time I ran kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step was still there. The reason is that the PersistentVolume is a hostPath volume, so the files survive on the node even after the PV object is deleted, and the Postgres image only initializes the database (creating the role from POSTGRES_USER) when its data directory is empty, so the corrected user was never created.

I deleted the cluster, created a new one, and everything worked. I still need to check if there is a way to clean the data in a Kubernetes volume; one possible approach is sketched below.
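
A destructive but cluster-preserving approach might be to wipe the data directory from inside the pod before recreating the resources (a sketch; the pod name is a placeholder):

    # WARNING: permanently deletes all database files on the hostPath volume
    kubectl exec -n staging postgres-db-podname -- sh -c 'rm -rf /var/lib/postgresql/data/*'
    kubectl delete -f k8s/
    kubectl apply -f k8s/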

1 Answer

Answered by Emon46 (BEST ANSWER)

In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you have assigned POSTGRES_USER = MYAPPAPI_DATABASE_NAME.

But I think it should be POSTGRES_USER = MYAPPAPI_DATABASE_USERNAME.

              env:
                - name: POSTGRES_USER
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secrets
                      key: MYAPPAPI_DATABASE_NAME # <<< this is the value that needs to change >>>

Please try this one:

              env:
                - name: POSTGRES_USER
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secrets
                      key: MYAPPAPI_DATABASE_USERNAME
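
After changing the key, re-applying the deployment rolls the pod, and the connection can be retried through the port-forward (a sketch; the database name is a placeholder since only its encoded form appears above). Note that, as UPDATE 2 explains, POSTGRES_USER is ignored if the volume already holds an initialized data directory, so the old data may need to be cleared first:

    kubectl apply -f k8s/deployment.yaml
    kubectl port-forward deployment/postgres-db 5433:5432 -n staging
    # in another terminal:
    psql -h localhost -p 5433 -U myappuserdb -d myappdb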