Need help with GKE Ingress via Terraform for an n8n deployment


I am trying to deploy n8n to a GKE cluster with its Helm chart (https://github.com/8gears/n8n-helm-chart). The problem I now face is that when I set up the ingress to point to the application, it constantly loses the session. I already found out that it must be related to the ingress, because everything works fine when I access the pod directly.

I am now trying to set up session affinity on the ingress, but I can't find any resources on how to do this with Terraform. My second option would be to set up an NGINX ingress, but I have no experience with how to do that. I hope someone can help me figure this out or point me to a better solution for the ingress. Thanks!

This is my Terraform config for n8n:


resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.${var.host}"]
  }
}
resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n, google_compute_managed_ssl_certificate.n8n_ssl]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    "${file("n8n_values.yaml")}"
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  wait_for_load_balancer = true
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
      "ingress.kubernetes.io/compress-enable"         = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert"     = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
  }
}

And this is my n8n_values.yaml:

config:
  port: 5678
  generic:
    timezone: Europe/London
  database:
    type: postgresdb
  security:
    basicAuth:
      active: true

secret:
  database:
    postgresdb:
      password: ""

extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.***/
  WEBHOOK_TUNNEL_URL: https://n8n.***/

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: latest

service:
  type: ClusterIP
  port: 80

1 Answer

Answered by Gari Singh (accepted answer):

To enable session affinity with GKE Ingress, you will need to create a BackendConfig resource. GKE Ingress supports client-IP-based or cookie-based affinity. For client IP affinity:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"

Or, for cookie-based affinity with a cookie TTL:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50

When using Terraform, I think you'd need to use the kubernetes_manifest resource to deploy the BackendConfig resource.
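
A minimal sketch of what that could look like is below. The resource and BackendConfig names are placeholders, and it assumes the hashicorp/kubernetes provider at v2.x (where kubernetes_manifest is available) configured against your cluster:

resource "kubernetes_manifest" "n8n_backendconfig" {
  # Placeholder names; adjust to your release and namespace.
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "my-backendconfig"
      namespace = var.namespace
    }
    spec = {
      sessionAffinity = {
        # Cookie-based affinity; use "CLIENT_IP" for client IP affinity instead.
        affinityType         = "GENERATED_COOKIE"
        affinityCookieTtlSec = 50
      }
    }
  }
}

On GKE the BackendConfig CRD is installed by the built-in ingress controller, so no extra CRD setup should be needed before applying this.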

You would then need to reference the BackendConfig via an annotation on the Service resource. Looking at the service.yaml provided by the Helm chart, it does not appear that you can add annotations via values.yaml, so you'd need to modify it to support adding annotations.
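
Once the chart's Service supports annotations, the rendered Service would need to look roughly like this. The annotation value references the placeholder BackendConfig name from above; the Service name, selector, and ports are illustrative, taken from the values shown in the question (service port 80, container port 5678):

apiVersion: v1
kind: Service
metadata:
  name: n8n
  annotations:
    # Tells the GKE ingress controller to apply the BackendConfig
    # to the load balancer backend created for this Service's port.
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: n8n

The GCE ingress controller then applies the session affinity settings from the BackendConfig to the backend service it creates behind the Ingress.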