Using an AuthorizationPolicy causes a 503 error

When using an AuthorizationPolicy, I came across a 503 error when trying to access the application. The error appears to occur between the gateway and the service, but traffic never reaches the service. For instance, I hit it with this policy, which is intended to allow all traffic:

---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authorize-allow-all
  namespace: dagster
spec:
  selector:
    matchLabels:
      component: dagit
  action: ALLOW
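
An aside on the policy itself, based on the Istio AuthorizationPolicy documentation rather than this specific issue: an ALLOW policy with no rules never matches a request, so on its own it denies traffic to the selected workload instead of allowing it. The documented allow-all form adds a single empty rule:

---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authorize-allow-all
  namespace: dagster
spec:
  selector:
    matchLabels:
      component: dagit
  action: ALLOW
  rules:
    - {}   # an empty rule matches every request

A pure policy denial would normally surface as a 403 "RBAC: access denied" rather than a 503, which lines up with the accepted answer below pointing at a different cause.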

Envoy debug logs showing the 503 error:

2022-03-23T15:25:47.112544Z  debug   envoy filter    tls inspector: new connection accepted
2022-03-23T15:25:47.117785Z debug   envoy filter    tls:onServerName(), requestedServerName: radacted
2022-03-23T15:25:47.117863Z debug   envoy conn_handler  [C8274] new connection
2022-03-23T15:25:47.202884Z debug   envoy http  [C8274] new stream
2022-03-23T15:25:47.202965Z debug   envoy http  [C8274][S2813519150144299458] request headers complete (end_stream=false):
‘:authority’, ‘radacted’
‘:path’, ‘/graphql’
‘:method’, ‘POST’
‘dagster-testing’, ‘3’
‘authorization’, ‘Bearer eyJ0...’
‘content-type’, ‘application/json’
‘user-agent’, ‘PostmanRuntime/7.29.0’
‘accept’, ‘*/*’
‘cache-control’, ‘no-cache’
‘postman-token’, ‘d625526b-3f8a-4f9f-9b8d-352a1683b9d5’
‘accept-encoding’, ‘gzip, deflate, br’
‘connection’, ‘keep-alive’
‘content-length’, ‘110’
‘cookie’, ‘_oauth2_proxy_csrf=radacted’
2022-03-23T15:25:47.203065Z debug   envoy router    [C8274][S2813519150144299458] cluster ’outbound|3000||dagster-dagit.dagster.svc.cluster.local’ match for URL ‘/graphql’
2022-03-23T15:25:47.203121Z debug   envoy router    [C8274][S2813519150144299458] router decoding headers:
‘:authority’, ‘radacted’
‘:path’, ‘/graphql’
‘:method’, ‘POST’
‘:scheme’, ‘https’
‘dagster-testing’, ‘3’
‘authorization’, ‘Bearer eyJ0...’
‘content-type’, ‘application/json’
‘user-agent’, ‘PostmanRuntime/7.29.0’
‘accept’, ‘*/*’
‘cache-control’, ‘no-cache’
‘postman-token’, ‘d625526b-3f8a-4f9f-9b8d-352a1683b9d5’
‘accept-encoding’, ‘gzip, deflate, br’
‘content-length’, ‘110’
‘cookie’, ‘_oauth2_proxy_csrf=radacted’
‘x-forwarded-for’, ‘10.11.225.213’
‘x-forwarded-proto’, ‘https’
‘x-envoy-internal’, ‘true’
‘x-request-id’, ‘8aee0b05-65b3-420b-9ea4-2c3c11f3f93b’
‘x-envoy-decorator-operation’, ‘dagster-dagit.dagster.svc.cluster.local:3000/*’
‘x-envoy-peer-metadata’, ‘ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjEwLjAK0AUKBkxBQkVMUxLFBSrCBQoXCgNhcHASEBoOaXN0aW8tb3BlcmF0b3IKKAobYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50EgkaB2luZ3Jlc3MKJQobYXBwLmt1YmVybmV0ZXMuaW8vbWFuYWdlZEJ5EgYaBEhlbG0KMgoWYXBwLmt1YmVybmV0ZXMuaW8vbmFtZRIYGhZpc3Rpby1vcGVyYXRvci1pbmdyZXNzCi0KGWFwcC5rdWJlcm5ldGVzLmlvL3BhcnQtb2YSEBoOaXN0aW8tb3BlcmF0b3IKJQoZYXBwLmt1YmVybmV0ZXMuaW8vdmVyc2lvbhIIGgZ2MC4wLjIKEwoFY2hhcnQSChoIZ2F0ZXdheXMKHQoNaGVsbS5zaC9jaGFydBIMGgp1ZHAtYWRkb25zChQKCGhlcml0YWdlEggaBlRpbGxlcgo2CilpbnN0YWxsLm9wZXJhdG9yLmlzdGlvLmlvL293bmluZy1yZXNvdXJjZRIJGgd1bmtub3duCiEKBWlzdGlvEhgaFmRhZ3N0ZXItaW5ncmVzc2dhdGV3YXkKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKMAobb3BlcmF0b3IuaXN0aW8uaW8vY29tcG9uZW50EhEaD0luZ3Jlc3NHYXRld2F5cwohChFwb2QtdGVtcGxhdGUtaGFzaBIMGgo1OThiYzU5OGQ5ChIKB3JlbGVhc2USBxoFaXN0aW8KOwofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIYGhZkYWdzdGVyLWluZ3Jlc3NnYXRld2F5Ci8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoRCgNzaGESChoINjE0ZWE5MmMKIgoXc2lkZWNhci5pc3Rpby5pby9pbmplY3QSBxoFZmFsc2UKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsCjEKBE5BTUUSKRonZGFnc3Rlci1pbmdyZXNzZ2F0ZXdheS01OThiYzU5OGQ5LXA3enc0ChsKCU5BTUVTUEFDRRIOGgxpc3Rpby1zeXN0ZW0KXwoFT1dORVISVhpUa3ViZXJuZXRlczovL2FwaXMvYXBwcy92MS9uYW1lc3BhY2VzL2lzdGlvLXN5c3RlbS9kZXBsb3ltZW50cy9kYWdzdGVyLWluZ3Jlc3NnYXRld2F5ChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAopCg1XT1JLTE9BRF9OQU1FEhgaFmRhZ3N0ZXItaW5ncmVzc2dhdGV3YXk=’
‘x-envoy-peer-metadata-id’, ‘router~100.112.75.76~dagster-ingressgateway-598bc598d9-p7zw4.istio-system~istio-system.svc.cluster.local’
‘x-envoy-attempt-count’, ‘1’
‘x-b3-traceid’, ‘6884eec06668989ccb5ed70c16ef6f58’
‘x-b3-spanid’, ‘cb5ed70c16ef6f58’
‘x-b3-sampled’, ‘0’
‘x-envoy-original-path’, ‘/graphql’
2022-03-23T15:25:47.203134Z debug   envoy pool  queueing stream due to no available connections
2022-03-23T15:25:47.203139Z debug   envoy pool  trying to create new connection
2022-03-23T15:25:47.203144Z debug   envoy pool  creating a new connection
2022-03-23T15:25:47.203205Z debug   envoy client    [C8275] connecting
2022-03-23T15:25:47.203214Z debug   envoy connection    [C8275] connecting to 100.112.29.212:3000
2022-03-23T15:25:47.203322Z debug   envoy connection    [C8275] connection in progress
2022-03-23T15:25:47.203366Z debug   envoy http  [C8274][S2813519150144299458] request end stream
2022-03-23T15:25:47.204011Z debug   envoy connection    [C8275] connected
2022-03-23T15:25:47.205067Z debug   envoy client    [C8275] connected
2022-03-23T15:25:47.205080Z debug   envoy pool  [C8275] attaching to next stream
2022-03-23T15:25:47.205085Z debug   envoy pool  [C8275] creating stream
2022-03-23T15:25:47.205093Z debug   envoy router    [C8274][S2813519150144299458] pool ready
2022-03-23T15:25:47.205927Z debug   envoy connection    [C8275] remote close
2022-03-23T15:25:47.205937Z debug   envoy connection    [C8275] closing socket: 0
2022-03-23T15:25:47.205959Z debug   envoy connection    [C8275] SSL shutdown: rc=1
2022-03-23T15:25:47.205999Z debug   envoy client    [C8275] disconnect. resetting 1 pending requests
2022-03-23T15:25:47.206006Z debug   envoy client    [C8275] request reset
2022-03-23T15:25:47.206013Z debug   envoy router    [C8274][S2813519150144299458] upstream reset: reset reason: connection termination, transport failure reason:
2022-03-23T15:25:47.206043Z debug   envoy http  [C8274][S2813519150144299458] Sending local reply with details upstream_reset_before_response_started{connection termination}
2022-03-23T15:25:47.206079Z debug   envoy http  [C8274][S2813519150144299458] encoding headers via codec (end_stream=false):
‘:status’, ‘503’
‘content-length’, ‘95’
‘content-type’, ‘text/plain’
‘date’, ‘Wed, 23 Mar 2022 15:25:47 GMT’
‘server’, ‘istio-envoy’
2022-03-23T15:25:47.206202Z debug   envoy wasm  wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=12
2022-03-23T15:25:47.206213Z debug   envoy wasm  wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=6
2022-03-23T15:25:47.206217Z debug   envoy wasm  wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=10
2022-03-23T15:25:47.206221Z debug   envoy wasm  wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=14
2022-03-23T15:25:47.206232Z debug   envoy pool  [C8275] client disconnected, failure reason:
2022-03-23T15:25:47.206253Z debug   envoy pool  [C8275] destroying stream: 0 remaining

Where can I look for more information about this error (which logs, which pods/locations, etc.)?
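
A few places to dig deeper, as a minimal sketch (assuming istioctl is available and the control plane runs as the istiod deployment in istio-system; pod names are placeholders):

# Raise the Envoy log level on the ingress gateway (or a sidecar) for more detail:
istioctl proxy-config log <ingress-gateway-pod>.istio-system --level debug

# Tail the gateway and the workload sidecar while reproducing the request:
kubectl logs -n istio-system <ingress-gateway-pod> -f
kubectl logs -n dagster <dagit-pod> -c istio-proxy -f

# Check that the proxies are in sync with istiod and scan for config problems:
istioctl proxy-status
istioctl analyze -n dagster

# Look at istiod itself for push errors and conflicts:
kubectl logs -n istio-system deploy/istiod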

Accepted answer (ross):

The error above was caused by an alternate (unsecured) service claiming the same inbound port. The istiod ads logs contained the following:

    "pilot_conflict_inbound_listener": {
        "{{pod}}.{{namespace}}": {
            "proxy": "{{pod}}.{{namespace}}",
            "message": "Conflicting inbound listener:{{port}}. existing: {{insecured service}}.{{namespace}}.svc.cluster.local, incoming: {{actual service}}.{{namespace}}.svc.cluster.local"
        }
    }
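
These entries can be pulled from the running control plane, assuming the default istiod deployment in istio-system (the grep pattern is just illustrative):

kubectl logs -n istio-system deploy/istiod | grep -i 'conflicting inbound listener'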

Removing the conflicting service resolved the 503 responses.
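
For illustration only (the Service names here are hypothetical, not taken from the post), the conflict typically looks like two Services selecting the same pods on the same target port, which makes istiod keep one inbound listener and report the other as conflicting:

# Hypothetical example of the conflict: two Services claiming the same pod port.
apiVersion: v1
kind: Service
metadata:
  name: dagster-dagit            # the intended (secured) service
  namespace: dagster
spec:
  selector:
    component: dagit
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dagit-unsecured          # leftover service pointing at the same pods
  namespace: dagster
spec:
  selector:
    component: dagit
  ports:
    - port: 3000
      targetPort: 3000

Comparing the output of kubectl get endpoints -n dagster across Services is a quick way to spot two Services resolving to the same pod IP and port.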