Communicating with a Redis server from a container behind Envoy


I have deployed Envoy containers as part of an Istio deployment on Kubernetes. Each Envoy proxy container is installed as a "sidecar" next to the app container within each pod.

I'm able to initiate HTTP traffic from within the application, but when trying to contact the Redis server (another container, also fronted by an Envoy proxy), I cannot connect and instead receive an HTTP/1.1 400 Bad Request response from Envoy.

When examining Envoy's logs, I can see the following message whenever this connection passes through the proxy: HTTP/1.1" 0 - 0 0 0 "_"."_"."_"."_""

As far as I understand, Redis commands are sent over plain TCP, without HTTP. Is it possible that Envoy expects to see only HTTP traffic and rejects raw TCP traffic? Assuming my understanding is correct, is there a way to change this behavior in Istio so that generic TCP traffic is accepted and proxied as well?
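For reference, Redis speaks its own wire protocol (RESP), not HTTP. A quick raw-TCP probe (assuming the Service is reachable as redis:6379 from inside the mesh; adjust host and port to your setup) gets a reply from Redis itself with no HTTP involved:

# Send an inline PING to Redis over plain TCP; the reply is RESP, not HTTP
printf 'PING\r\n' | nc redis 6379
# expected reply: +PONG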

The following are my related deployment YAML files:

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
  type: ClusterIP

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-db
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379

Thanks

1 Answer

Effi Bar-She'an (accepted answer):

Getting into Envoy (the Istio proxy sidecar):

kubectl exec -it my-pod -c proxy -- bash

Looking at the Envoy configuration:

cat /etc/envoy/envoy-rev2.json

You will see that Istio generates a tcp_proxy filter, which handles TCP-only traffic. Redis example:

"address": "tcp://10.35.251.188:6379",
  "filters": [
    {
      "type": "read",
      "name": "tcp_proxy",
      "config": {
        "stat_prefix": "tcp",
        "route_config": {
          "routes": [
            {
              "cluster": "out.cd7acf6fcf8d36f0f3bbf6d5cccfdb5da1d1820c",
              "destination_ip_list": [
                "10.35.251.188/32"
              ]
            }
          ]
        }
      }

In your case, putting http into the Redis Service port name (in the Kubernetes Service definition) makes Istio generate an http_connection_manager filter instead, which does not handle raw TCP.
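For comparison, the listener generated for an http-named port carries an http_connection_manager filter. The snippet below is an illustrative sketch of its rough shape in the v1 config format, not copied from a live envoy-rev2.json (the real route_config holds the generated virtual hosts):

"filters": [
  {
    "type": "read",
    "name": "http_connection_manager",
    "config": {
      "codec_type": "auto",
      "stat_prefix": "http",
      "route_config": {
        "virtual_hosts": []
      },
      "filters": [
        {
          "type": "decoder",
          "name": "router",
          "config": {}
        }
      ]
    }
  }
]

This filter parses incoming bytes as HTTP, which is why raw RESP commands are answered with 400 Bad Request.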

See the Istio docs:

Kubernetes Services are required for properly functioning Istio service. Service ports must be named and these names must begin with http or grpc prefix to take advantage of Istio’s L7 routing features, e.g. name: http-foo or name: http is good. Services with non-named ports or with ports that do not have a http or grpc prefix will be routed as L4 traffic.

Bottom line: just remove the port name from the Redis Service (or rename it so it doesn't begin with http or grpc) and it should solve the issue :)
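A corrected Service might look like this (a sketch based on the manifest above; for a single-port Service the name can simply be dropped, or given any name that does not start with http or grpc):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  # no "http" name here, so Istio routes this port as plain TCP (L4)
  # and Envoy generates a tcp_proxy filter instead of http_connection_manager
  - port: 6379
    targetPort: 6379
    protocol: TCP
  type: ClusterIP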