I am working on an ASP.NET Core 2.0 API and my API needs to make calls to another, 3rd party, REST API to upload and retrieve files and get file lists and status information. I will be hosting the API in Azure and plan to do Blue-Green deployments between my staging and production slots.
It seems that the general consensus for best practice is to set up a singleton instance of `HttpClient` via DI registration in the Startup.cs → ConfigureServices method, in order to improve performance and avoid the socket exhaustion errors that can occur if I new up and dispose the `HttpClient` with each use via a using statement. This is noted in the following links:
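For reference, a minimal sketch of that registration (the `Startup` shape and base address are illustrative assumptions, not from my actual code):

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register one long-lived HttpClient for the whole app so every
        // request reuses the same underlying connection pool.
        services.AddSingleton(new HttpClient
        {
            // Hypothetical 3rd-party API endpoint.
            BaseAddress = new Uri("https://thirdparty.example.com/")
        });

        services.AddMvc();
    }
}
```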
https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/
https://msdn.microsoft.com/en-us/library/system.net.http.httpclient(v=vs.110).aspx#Anchor_5
http://www.nimaara.com/2016/11/01/beware-of-the-net-httpclient/
But if I do that, then I can face an issue where the singleton instance will not see any DNS change when I do a Blue-Green deployment in Azure. This is noted in the following links:
http://byterot.blogspot.co.uk/2016/07/singleton-httpclient-dns.html
https://github.com/dotnet/corefx/issues/11224
http://www.nimaara.com/2016/11/01/beware-of-the-net-httpclient/
So... the general consensus now is to use a static `HttpClient` instance but control the `ConnectionLeaseTimeout` value of the `ServicePoint` class, setting it to a shorter value that force-closes the connection and triggers a refresh of the DNS lookup. This blog post even describes a nice RestClient component in a NuGet package (Easy.Common by Nima) that properly addresses `ConnectionLeaseTimeout` as well as the cached DNS values.
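On the full .NET Framework, that approach looks roughly like this (the endpoint and timeout value are illustrative):

```csharp
using System;
using System.Net;
using System.Net.Http;

public static class ApiClient
{
    // Hypothetical 3rd-party endpoint.
    private static readonly Uri BaseUri = new Uri("https://thirdparty.example.com/");

    // One shared, long-lived client for the whole app.
    public static readonly HttpClient Client = new HttpClient { BaseAddress = BaseUri };

    static ApiClient()
    {
        // Force connections to this endpoint to be closed (and DNS re-resolved)
        // after 60 seconds instead of being kept alive indefinitely.
        ServicePointManager.FindServicePoint(BaseUri).ConnectionLeaseTimeout = 60 * 1000;
    }
}
```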
However, it seems that ASP.NET Core 2.0 does not fully implement `ServicePoint`, so this approach is not really supported there at present.
Can anyone advise me on the correct approach for using `HttpClient` in my ASP.NET Core 2.0 API running on Azure? I want to be able to do Blue-Green deployments. Should I just resort to the using statement, new up the client with each use, and accept the performance hit?
It just seems like there has to be a reliable and performant solution to this common need.
The problem with some of the articles you've linked to is that they've resulted in a widespread belief that setting `ConnectionLeaseTimeout` does some sort of black magic way down in the sockets layer, and that if you're on a platform that doesn't support it, you're screwed. Those articles do a disservice by not touching on what that setting actually does, which is send a `Connection: close` header to the server being called at regular intervals. That's it. I've verified this from the source, and it's very easy to replicate. In fact, I've done it myself in my Flurl library, implementation details here and here.

That said, I personally find the DNS problem to be a bit overblown. Note, for example, that connections are automatically closed after sitting idle for a period of time (100 seconds by default). The benefits of using `HttpClient` as a singleton far outweigh the risks.

My advice would be to use an instance per 3rd-party service being called. The thinking here is that you get maximum reuse while still taking advantage of things like `DefaultRequestHeaders`, which tend to be specific to one service. If you're only calling one service, that's just a singleton. (Conversely, if you're calling 1000 different services, you can't avoid 1000 open sockets any way you slice it.)

If you don't expect the connection to ever go idle for long and want to be defensive about possible DNS switches with the 3rd-party service, send a `Connection: close` header, or simply dispose and recreate the `HttpClient`, at regular intervals. Note that this is a trade-off, not a perfect solution, but it should help mitigate the problem.
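To make that last suggestion concrete, here's one way to sketch the periodic `Connection: close` idea; the class, interval, and endpoint are my own illustrations, not a library API:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical wrapper around one shared HttpClient. Once per interval, the
// next request carries a Connection: close header, so the socket is torn down
// and the following request re-establishes it (re-resolving DNS).
public class RefreshingApiCaller
{
    private readonly HttpClient _client = new HttpClient();
    private readonly TimeSpan _leaseInterval = TimeSpan.FromMinutes(1); // illustrative
    private DateTime _lastRefresh = DateTime.UtcNow;

    public async Task<string> GetStringAsync(string url)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, url);

        if (DateTime.UtcNow - _lastRefresh > _leaseInterval)
        {
            // Adds the Connection: close header to this request only.
            request.Headers.ConnectionClose = true;
            _lastRefresh = DateTime.UtcNow;
        }

        var response = await _client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```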