I am trying to deploy SFTPGo with the nginx-ingress controller on GKE. The deployment works, but when I try to establish an SFTP connection through the CLI, it fails with a connection refused/timed out error.
I am using the drakkan/sftpgo Helm chart for the SFTPGo deployment. The deployment, service, and ingress YAMLs are below:
# Source: sftpgo/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-sftpgo
  labels:
    helm.sh/chart: sftpgo-0.19.0
    app.kubernetes.io/name: sftpgo
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "2.5.4"
    app.kubernetes.io/managed-by: Helm
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "hc-test"}'
    cloud.google.com/l4-rbs: enabled
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
    - name: sftp
      port: 22
      targetPort: sftp
      protocol: TCP
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: telemetry
      port: 10000
      targetPort: telemetry
      protocol: TCP
  selector:
    app.kubernetes.io/name: sftpgo
    app.kubernetes.io/instance: release-name
---
# Source: sftpgo/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-sftpgo
  labels:
    helm.sh/chart: sftpgo-0.19.0
    app.kubernetes.io/name: sftpgo
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "2.5.4"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sftpgo
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sftpgo
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: sftpgo
      hostNetwork: false
      securityContext:
        {}
      containers:
        - name: sftpgo
          securityContext:
            {}
          image: "ghcr.io/drakkan/sftpgo:v2.5.4"
          imagePullPolicy: IfNotPresent
          args:
            - sftpgo
            - serve
          env:
            - name: SFTPGO_SFTPD__BINDINGS__0__PORT
              value: "2022"
            - name: SFTPGO_SFTPD__BINDINGS__0__ADDRESS
              value: "0.0.0.0"
            - name: SFTPGO_HTTPD__BINDINGS__0__PORT
              value: "8080"
            - name: SFTPGO_HTTPD__BINDINGS__0__ADDRESS
              value: "0.0.0.0"
            - name: SFTPGO_TELEMETRY__BIND_PORT
              value: "10000"
            - name: SFTPGO_TELEMETRY__BIND_ADDRESS
              value: "0.0.0.0"
          ports:
            - name: sftp
              containerPort: 2022
              protocol: TCP
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: telemetry
              containerPort: 10000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: telemetry
          readinessProbe:
            httpGet:
              path: /healthz
              port: telemetry
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
          volumeMounts:
            - name: config
              mountPath: /etc/sftpgo/sftpgo.json
              subPath: sftpgo.json
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: release-name-sftpgo
---
# Source: sftpgo/templates/ingress-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sftpgo-ingress
  labels:
    helm.sh/chart: sftpgo-0.19.0
    app.kubernetes.io/name: sftpgo
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "2.5.4"
    app.kubernetes.io/managed-by: Helm
  namespace: sftpgo
  annotations:
    kubernetes.io/ingress.class: "nginx"
    networking.gke.io/managed-certificates: "google-managed-cert"
    kubernetes.io/ingress.global-static-ip-name: "sftpgo-external"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: "sample.domain"
      http:
        paths:
          - backend:
              service:
                name: release-name-sftpgo
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: release-name-sftpgo
                port:
                  number: 22
            path: /
            pathType: Prefix
---
# Source: sftpgo/templates/backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: hc-test
spec:
  healthCheck:
    timeoutSec: 1
    type: HTTP
    requestPath: /healthz
    port: 8080
  healthCheck:
    timeoutSec: 1
    type: TCP
    requestPath: /
    port: 2022
Also, I'm new to nginx-ingress and can't figure out how to use managed certificates with nginx-ingress on GKE, but my main concern is that the SFTP connection cannot be established. The SFTPGo web page is exposed and displayed properly; the problem lies only with the SFTP connection (the ports are added to the service and are being exposed too).
Ingress only supports HTTP(S) protocols, which is why you are able to access the web page. To expose other TCP protocols such as SFTP, you would typically use a Service of type LoadBalancer:
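A minimal sketch of such a Service follows, assuming the selector labels and the named container port (sftp, 2022) from your Deployment above; the Service name release-name-sftpgo-sftp is just a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: release-name-sftpgo-sftp   # placeholder name, adjust to your release
spec:
  type: LoadBalancer
  ports:
    - name: sftp
      port: 22           # external port clients connect to
      targetPort: sftp   # resolves to containerPort 2022 in the pod
      protocol: TCP
  selector:
    app.kubernetes.io/name: sftpgo
    app.kubernetes.io/instance: release-name

Once the Service is assigned an external IP, you should be able to test the connection with something like sftp -P 22 <user>@<external-ip>.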