Azure Kubernetes - Istio Egress not working


I have used the following configuration to set up Istio:

cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

I can see the Istio services:

kubectl get svc -n istio-system
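If the install went through, this should list the control plane plus the enabled addons, roughly like the following (names, ports and addresses below are illustrative, not my actual output):

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                    AGE
grafana                ClusterIP      10.0.x.x     <none>        3000/TCP                   5m
istio-egressgateway    ClusterIP      10.0.x.x     <none>        80/TCP,443/TCP,15443/TCP   5m
istio-ingressgateway   LoadBalancer   10.0.x.x     20.x.x.x      15021/TCP,80/TCP,443/TCP   5m
istiod                 ClusterIP      10.0.x.x     <none>        15010/TCP,15012/TCP        5m
kiali                  ClusterIP      10.0.x.x     <none>        20001/TCP                  5m
prometheus             ClusterIP      10.0.x.x     <none>        9090/TCP                   5m
tracing                ClusterIP      10.0.x.x     <none>        80/TCP                     5m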

I have deployed the sleep app:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -n akv2k8s-test
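One thing worth verifying at this point (my own sanity check, not from the Istio docs): the sleep pod should show 2/2 containers, i.e. the sleep container plus the injected istio-proxy sidecar. If it shows 1/1, the namespace is missing the istio-injection=enabled label and none of the egress behaviour below will apply.

kubectl get pod -n akv2k8s-test -l app=sleep
# Illustrative output; the pod hash will differ:
# NAME                     READY   STATUS    RESTARTS   AGE
# sleep-856d589c9b-abcde   2/2     Running   0          1m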

and have deployed the ServiceEntry

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: akv2k8s-test
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
EOF
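As another quick sanity check (my addition, not part of the original steps), the ServiceEntry should now show up in the namespace; the column layout below is approximate:

kubectl get serviceentry -n akv2k8s-test
# NAME          HOSTS             LOCATION        RESOLUTION   AGE
# httpbin-ext   ["httpbin.org"]   MESH_EXTERNAL   DNS          10s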

and tried accessing the external URL

export SOURCE_POD=$(kubectl get -n akv2k8s-test pod -l app=sleep -o jsonpath='{.items..metadata.name}')
kubectl exec "$SOURCE_POD" -n akv2k8s-test -c sleep -- curl -sI http://httpbin.org/headers | grep "HTTP/"
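According to the documentation, the request itself should succeed and this command should print:

HTTP/1.1 200 OK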

However, I could not see any log entry for the request in the sidecar proxy:

kubectl logs "$SOURCE_POD" -n akv2k8s-test -c istio-proxy | tail


As per the documentation, I should see an access-log entry for the request to httpbin.org in the istio-proxy log.

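For reference, such an entry looks roughly like this (the values are illustrative, not the exact line from the docs; the recognizable part is the outbound|80||httpbin.org cluster name):

[2020-11-25T06:12:45.678Z] "HEAD /headers HTTP/1.1" 200 - ... "httpbin.org" "34.198.44.79:80" outbound|80||httpbin.org ...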

However, I also don't see the extra headers that the sidecar is supposed to add to the request.

am I missing something here?

There are 2 answers

One Developer (best answer):

I got it working as shown below: I added meshConfig (an access log and a REGISTRY_ONLY outboundTrafficPolicy) to the IstioOperator, and created a ServiceEntry, egress Gateway, DestinationRule, and VirtualService for edition.cnn.com.

cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
  meshConfig:
    accessLogFile: /dev/stdout
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
EOF
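As a quick check of my own (not part of the original steps), the new mesh settings should be visible in the generated mesh configuration; on a default install they end up in the istio configmap in istio-system (the configmap name can differ if you use revisions):

kubectl -n istio-system get configmap istio -o yaml | grep -E "accessLogFile|outboundTrafficPolicy" -A 1
# Expect to see accessLogFile: /dev/stdout and mode: REGISTRY_ONLY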


cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: akv2k8s-test
  labels:
    istio-injection: enabled
    azure-key-vault-env-injection: enabled
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
  namespace: akv2k8s-test
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
  namespace: akv2k8s-test
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - edition.cnn.com
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
  namespace: akv2k8s-test
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: cnn
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
  namespace: akv2k8s-test
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: cnn
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
EOF

Then deploy the sleep app and send a test request through the egress gateway; the gateway's access log should show the request:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -n akv2k8s-test
export SOURCE_POD=$(kubectl get pod -l app=sleep -n akv2k8s-test -o jsonpath='{.items..metadata.name}')
kubectl exec "$SOURCE_POD" -n akv2k8s-test -c sleep -- curl -sL -o /dev/null -D - http://edition.cnn.com/politics
kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail
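If routing through the gateway works, the last command should print an entry for edition.cnn.com in the egress gateway's access log, roughly of this shape (values illustrative):

[2020-11-25T06:30:01.123Z] "GET /politics HTTP/2" 301 - ... "edition.cnn.com" "151.101.1.67:80" outbound|80||edition.cnn.com ...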


To clean up:

kubectl delete -n akv2k8s-test gateway istio-egressgateway
kubectl delete -n akv2k8s-test serviceentry cnn
kubectl delete -n akv2k8s-test virtualservice direct-cnn-through-egress-gateway
kubectl delete -n akv2k8s-test destinationrule egressgateway-for-cnn
Jakub:

Just to clarify what Karthikeyan Vijayakumar did to make this work.

The Istio egress gateway won't work here because you haven't actually used it; you only used an Istio ServiceEntry to access the publicly accessible service edition.cnn.com from within your Istio cluster.

Take a look at the documentation here; you need a few more components to actually use it.

You're missing the egress Gateway, DestinationRule, and VirtualService, as per points 3 and 4 of the documentation above.

A ServiceEntry enables adding additional entries into Istio's internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. It's not the case that enabling the egress gateway by itself makes all outbound traffic go through it.

When you add the resources below, the traffic flows as follows (I assume you are using a pod inside the cluster to communicate with the external service):

[sleep pod - envoy sidecar] -> mesh gateway -> egress gateway -> external service
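If you want to see this routing from the sidecar's point of view, istioctl can dump the route configuration for port 80; the exact flags and output vary a bit between versions, so treat this as a sketch:

istioctl proxy-config routes "$SOURCE_POD.akv2k8s-test" --name 80 -o json
# The route for edition.cnn.com should point at the egress gateway subset
# (outbound|80|cnn|istio-egressgateway.istio-system.svc.cluster.local)
# rather than directly at the external host.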