I have some RESTful APIs deployed on AWS, mostly on Elastic Beanstalk.
My company is gradually adopting a Microservices architecture, and, therefore, I want to start managing these APIs in a more professional and automated way. Hence, I want to adopt some kind of API Manager to provide standard functionalities such as routing and discovery.
In addition, I wish to use such API Manager to expose some of my APIs to the Internet. The manager would be exposed to the Internet through SSL only and should require some sort of authentication from external consumers before routing their requests to the internal APIs. For my use case, a simple API Key in the Authorization header of every request would suffice.
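As a sketch of the kind of gate I have in mind (the `ApiKey` scheme and the in-memory key store are illustrative assumptions, not the actual mechanism of any particular API manager):

```python
VALID_KEYS = {"example-key-123"}  # hypothetical key store, for illustration only

def authorize(headers):
    """Accept the request only if the Authorization header carries a known API key."""
    scheme, _, key = headers.get("Authorization", "").partition(" ")
    return scheme == "ApiKey" and key in VALID_KEYS
```

In practice the manager would look keys up in a real credential store and reject the request with 401 before any routing happens.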
I'm currently considering two products as API Managers: WSO2 and Kong. The latter is a somewhat new open source project hosted on GitHub.
In all the deployment scenarios that I am considering, the API Managers would have to be deployed on AWS EC2 instances. Moreover, they would have to be deployed on at least two different availability zones and behind an Elastic Load Balancer (ELB) to provide high availability to the managed APIs.
Most of my APIs adhere to the HATEOAS constraints. Therefore, many of their JSON responses contain links to other resources, which must be built dynamically based on the original request.
For instance:
If a user sent a request from the Internet through the exposed API Manager, the URL would look like: https://apimanager.mycompany.com/accounts/123
As a result, the user should receive a JSON response containing an Account resource with a link to, let's say, a Subscription resource. The link URL should be based on the protocol, host and port of the original request, and, therefore, would look like: https://apimanager.mycompany.com/subscriptions/789.
In order to meet the dynamic link generation requirements mentioned above, my APIs rely on the X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-Port HTTP headers. These should contain the protocol (http or https), the host name and the port used by the consumer in the original request, regardless of how many proxies the request passed through.
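To illustrate, here is a minimal sketch of the link-building logic such an API might use; the function name and the fallback defaults are my own assumptions, not any specific framework's API:

```python
def build_link(headers, path):
    """Build an absolute resource URL from X-Forwarded-* headers.

    The fallback defaults are assumptions for illustration only.
    """
    proto = headers.get("X-Forwarded-Proto", "http")
    host = headers.get("X-Forwarded-Host", "localhost")
    default_port = "443" if proto == "https" else "80"
    port = headers.get("X-Forwarded-Port", default_port)
    # Omit the port when it is the default one for the scheme.
    authority = host if port == default_port else f"{host}:{port}"
    return f"{proto}://{authority}{path}"
```

If an intermediate proxy has already rewritten X-Forwarded-Proto to "http", the same call produces an http:// URL instead of the https:// one the consumer expects, which is exactly the problem I describe next.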
However, I noticed that when requests pass through ELBs, the X-Forwarded-Proto and X-Forwarded-Port headers are changed to values that refer to the last ELB the request passed through, instead of the values that were in the original request.
For instance: If the original request hits the API Manager through HTTPS, the Manager forwards the request to the internal API through HTTP; thus, when the request hits the second ELB, the ELB changes the X-Forwarded-Proto header to "http". As a result, the original "https" value of the X-Forwarded-Proto header is lost. Hence, the API is unable to build proper links with the "https" protocol in the URLs.
Apparently, ELBs can't be configured to behave in any other way; I couldn't find any setting in AWS's documentation that affects this behavior.
Moreover, there doesn't seem to be any better alternative to AWS's ELBs. If I chose another product like HAProxy, or did the load balancing through the API Manager itself, I would have to install it on a regular EC2 instance, thereby creating a single point of failure.
I'm including an informal diagram to better convey my point of view.
Furthermore, I couldn't find any relevant discussion about deployment scenarios for WSO2 or Kong that would address these matters in any way. It's not clear to me how these products should relate to AWS's ELBs.
Comments from others with similar environments will be very welcome.
Thank you.
Interesting question/challenge - I'm not aware of a way to configure an Elastic Load Balancer's X-Forwarded-* header behavior. However, you might be able to work around this by leveraging ELB's different listener types for the two supported network layers of the OSI model:

TCP/SSL Listener without Proxy Protocol

Rather than using an HTTP listener (OSI layer 7), which makes sense for terminating SSL etc., you could just use the non-intrusive TCP/SSL listener (OSI layer 4) for your internal load balancers; see Protocols in the ELB documentation.
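For example, an internal Classic ELB with a plain TCP listener could be created roughly like this (load balancer name, ports and subnet IDs are placeholders):

```shell
# Internal Classic ELB that forwards raw TCP, leaving HTTP headers untouched.
aws elb create-load-balancer \
  --load-balancer-name internal-api-lb \
  --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=8080" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --scheme internal
```

Because the listener operates at layer 4, the ELB never parses or rewrites the HTTP request.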
I haven't tried this, but would expect the X-Forwarded-* headers added by the external HTTP/HTTPS load balancer to be passed through unmodified by the internal TCP/SSL load balancer in this scenario.

TCP/SSL Listener with Proxy Protocol
Alternatively, you could leverage the more advanced/recent Proxy Protocol Support for Your Load Balancer; see the introductory blog post Elastic Load Balancing adds Support for Proxy Protocol for more on this.
Unlike the X-Forwarded-* headers, proxy protocol handling can be enabled and disabled. On the flip side, your backend layers might not yet handle the proxy protocol automatically and would need to be adapted accordingly.
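Enabling the proxy protocol on a Classic ELB is done through a policy attached to the backend port; sketched here with placeholder names:

```shell
# Create a proxy protocol policy on the internal load balancer.
aws elb create-load-balancer-policy \
  --load-balancer-name internal-api-lb \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

# Attach the policy to the backend instance port.
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name internal-api-lb \
  --instance-port 8080 \
  --policy-names EnableProxyProtocol
```

With this in place, the original client connection details travel in the proxy protocol header rather than in X-Forwarded-*, so the backend (or the API manager in front of it) must be configured to parse that header.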