How do you disable mTLS in linkerd?

I'm sitting behind a corporate firewall that only allows HTTPS traffic it can intercept with a MITM certificate. Unfortunately, Linkerd, like most other service meshes, enables mTLS by default for all proxy-to-proxy communication between pods. How can I disable mTLS so the Linkerd pods aren't blocked on my network?

What I tried

$ linkerd install > linkerd.yaml
$ linkerd inject --ignore-cluster --manual --disable-identity --disable-tap linkerd.yaml > afterInject.yaml
$ kubectl apply -f afterInject.yaml
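
One way to double-check that the inject flags took effect is to grep the generated manifest for the corresponding disable annotations (the exact annotation names can vary by Linkerd version, so treat this as a rough check):

$ grep -n "disable" afterInject.yaml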

Interestingly, gathering Linkerd metrics with the following command works:

linkerd metrics -n linkerd $(
  kubectl --namespace linkerd get pod \
    --selector linkerd.io/control-plane-component=controller \
    --output name
)
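
For a data-plane view, the standard proxy checks can be run as well (output not shown here):

$ linkerd check --proxy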

Logs

Linkerd is still deploying tap in the linkerd namespace, with these logs:

$ kubectl logs -n linkerd linkerd-tap-6c845f67cd-wzzp4 tap
time="2020-09-18T17:14:28Z" level=info msg="running version stable-2.8.1"
time="2020-09-18T17:14:28Z" level=info msg="Using trust domain: cluster.local"
time="2020-09-18T17:14:28Z" level=info msg="waiting for caches to sync"
time="2020-09-18T17:14:28Z" level=info msg="caches synced"
time="2020-09-18T17:14:28Z" level=info msg="starting admin server on :9998"
time="2020-09-18T17:14:28Z" level=info msg="starting APIServer on :8089"
2020/09/18 17:14:51 http: TLS handshake error from 127.0.0.1:58856: EOF
2020/09/18 17:14:51 http: TLS handshake error from 127.0.0.1:58860: EOF
2020/09/18 17:14:51 http: TLS handshake error from 127.0.0.1:58858: EOF
2020/09/18 17:14:51 http: TLS handshake error from 127.0.0.1:58864: EOF
2020/09/18 17:14:51 http: TLS handshake error from 127.0.0.1:58862: EOF

Furthermore, I'm getting the following errors from the kube-apiserver:

controller.go:114] loading OpenAPI spec for "v1alpha1.tap.linkerd.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error trying to reach service: 'dial tcp 10.108.135.128:443: i/o timeout', Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
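
Since the 503 points at the tap APIService, one possible check, assuming tap is meant to stay disabled, is to inspect (and, if appropriate, remove) the v1alpha1.tap.linkerd.io registration that the kube-apiserver keeps trying to reach:

$ kubectl get apiservice v1alpha1.tap.linkerd.io
$ kubectl delete apiservice v1alpha1.tap.linkerd.io   # only if tap is intentionally disabled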

Making this even weirder, linkerd check says everything is OK:

# linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ tap api service is running

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-addons
--------------
√ 'linkerd-config-addons' config map exists

linkerd-grafana
---------------
√ grafana add-on service account exists
√ grafana add-on config map exists
√ grafana pod is running

Status check results are √

I also added a Traefik ingress rule to an otherwise working cluster, and I can't access the dashboard:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: web-ingress-auth
  namespace: linkerd
data:
  auth: private value...
  # generated with htpasswd
  # then base64 encoded
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: linkerd
  annotations:
    kubernetes.io/ingress.class: 'traefik'
    ingress.kubernetes.io/custom-request-headers: l5d-dst-override:linkerd-web.linkerd.svc.cluster.local:8084
    traefik.ingress.kubernetes.io/auth-type: basic
    traefik.ingress.kubernetes.io/auth-secret: web-ingress-auth
spec:
  rules:
    - host: linkerd-dashboard.private.com
      http:
        paths:
          - backend:
              serviceName: linkerd-web
              servicePort: 8084

Although it seems like it should be working:

$ kubectl get ing -n linkerd
NAME          CLASS    HOSTS                                ADDRESS         PORTS   AGE
web-ingress   <none>   linkerd-dashboard.private.com        x.x.x.x   80      42m
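
To rule out Traefik itself, a quick test is to port-forward straight to the dashboard service (same name and port as in the ingress above) and see whether it answers on http://localhost:8084:

$ kubectl port-forward -n linkerd svc/linkerd-web 8084:8084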

1 Answer

Accepted answer from cpretzer:

Glad to hear that you got it sorted out, @mikeLundquist.

To answer the original question: you did everything right to disable mTLS by specifying these flags when injecting the proxy:

--disable-identity

--disable-tap

For Helm users, you can set disableIdentity: true and disableTap: true under the proxy section of the Linkerd Helm chart values.
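
For example, a minimal values fragment might look like this (a sketch based on the key names above for the stable-2.8 chart), passed to the chart with -f values.yaml at install time:

# values.yaml
proxy:
  disableIdentity: true
  disableTap: true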