I am working on a link checker. In general I can perform HEAD requests, but some sites seem to disable this verb, so on failure I also need to perform a GET request (to double-check that the link is really dead).
I use the following code as my link tester:
public class ValidateResult
{
    public HttpStatusCode? StatusCode { get; set; }
    public Uri RedirectResult { get; set; }
    public WebExceptionStatus? WebExceptionStatus { get; set; }
}

public ValidateResult Validate(Uri uri, bool useHeadMethod = true,
    bool enableKeepAlive = false, int timeoutSeconds = 30)
{
    ValidateResult result = new ValidateResult();

    HttpWebRequest request = WebRequest.Create(uri) as HttpWebRequest;
    request.Method = useHeadMethod ? "HEAD" : "GET";

    // Always compress; if you get back a 404 from a HEAD it can be quite big.
    request.AutomaticDecompression = DecompressionMethods.GZip;
    request.AllowAutoRedirect = false;
    request.UserAgent = UserAgentString;
    request.Timeout = timeoutSeconds * 1000;
    request.KeepAlive = enableKeepAlive;

    HttpWebResponse response = null;
    try
    {
        response = request.GetResponse() as HttpWebResponse;
        result.StatusCode = response.StatusCode;

        if (response.StatusCode == HttpStatusCode.Redirect ||
            response.StatusCode == HttpStatusCode.MovedPermanently ||
            response.StatusCode == HttpStatusCode.SeeOther)
        {
            try
            {
                Uri targetUri = new Uri(uri, response.Headers["Location"]);
                var scheme = targetUri.Scheme.ToLower();
                if (scheme == "http" || scheme == "https")
                {
                    result.RedirectResult = targetUri;
                }
                else
                {
                    // This little gem was born out of http://tinyurl.com/18r
                    // redirecting to about:blank.
                    result.StatusCode = HttpStatusCode.SwitchingProtocols;
                    result.WebExceptionStatus = null;
                }
            }
            catch (UriFormatException)
            {
                // Another gem... people sometimes redirect to http://nonsense:port/yay
                result.StatusCode = HttpStatusCode.SwitchingProtocols;
                result.WebExceptionStatus = WebExceptionStatus.NameResolutionFailure;
            }
        }
    }
    catch (WebException ex)
    {
        result.WebExceptionStatus = ex.Status;
        response = ex.Response as HttpWebResponse;
        if (response != null)
        {
            result.StatusCode = response.StatusCode;
        }
    }
    finally
    {
        if (response != null)
        {
            response.Close();
        }
    }
    return result;
}
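For reference, a typical call site looks like this (the URL is just an example; the retry-with-GET mirrors the fallback I described at the top):

```csharp
// Try HEAD first; retry with GET only when the server rejected the verb.
var result = Validate(new Uri("http://example.com/"));
if (result.StatusCode == HttpStatusCode.MethodNotAllowed ||
    result.StatusCode == HttpStatusCode.NotImplemented)
{
    result = Validate(new Uri("http://example.com/"), useHeadMethod: false);
}
Console.WriteLine(result.StatusCode);
```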
This all works fine and dandy, except that when I perform a GET request, the entire payload gets downloaded (I watched this in Wireshark).
Is there any way to configure the underlying ServicePoint or the HttpWebRequest not to buffer or eagerly load the response body at all?
(If I were hand-coding this I would set the TCP receive window really low, grab only enough packets to get the headers, and stop ACKing TCP packets as soon as I had enough info.)
For those wondering what this is meant to achieve: I do not want to download a 40 KB body just to learn I got a 404; doing this a few hundred thousand times is expensive on the network.
When you do a GET, the server will start sending data from the start of the file to the end, unless you interrupt it. Granted, at 10 Mb/sec that's roughly a megabyte per second, so if the file is small you'll get the whole thing anyway. You can minimize the amount you actually download in a couple of ways.
First, you can call request.Abort after getting the response and before calling response.Close. That will ensure that the underlying code doesn't try to download the whole thing before closing the response. Whether this helps on small files, I don't know. I do know that it will prevent your application from hanging when it's trying to download a multi-gigabyte file.
The other thing you can do is request a range rather than the entire file. See the AddRange method and its overloads. You could, for example, write request.AddRange(0, 511), which asks the server for only the first 512 bytes of the file. (Note that AddRange(512) requests everything from byte 512 to the end, not the first 512 bytes.) This depends, of course, on the server supporting range requests. Most do. But then, most support HEAD requests, too.
You'll probably end up having to write a method that tries things in sequence: a HEAD request first, then a ranged GET, then a plain GET, calling request.Abort after GetResponse returns in each case.
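Putting those pieces together, a sketch of such a fallback checker might look like this. The exact fallback order, the 512-byte range, and the set of status codes that trigger a retry are my assumptions; adapt them to your Validate method:

```csharp
// Sketch: check a link while downloading as little of the body as possible.
// Assumes .NET Framework's HttpWebRequest, matching the question's code.
public HttpStatusCode? CheckLink(Uri uri)
{
    // 1. HEAD; 2. GET with a byte range; 3. plain GET.
    foreach (var attempt in new[] { "HEAD", "GET+range", "GET" })
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = attempt == "HEAD" ? "HEAD" : "GET";
        request.AllowAutoRedirect = false;
        if (attempt == "GET+range")
        {
            // Ask for the first 512 bytes only; on success the server
            // answers 206 PartialContent rather than 200 OK.
            request.AddRange(0, 511);
        }

        HttpWebResponse response = null;
        try
        {
            response = (HttpWebResponse)request.GetResponse();
            return response.StatusCode;
        }
        catch (WebException ex)
        {
            response = ex.Response as HttpWebResponse;
            // A real HTTP status (e.g. 404) answers the question -- only
            // fall through when the server rejected the verb itself.
            if (response != null &&
                response.StatusCode != HttpStatusCode.MethodNotAllowed &&
                response.StatusCode != HttpStatusCode.NotImplemented)
            {
                return response.StatusCode;
            }
        }
        finally
        {
            // Abort before Close so the runtime does not drain the
            // remaining response body off the wire.
            request.Abort();
            if (response != null) response.Close();
        }
    }
    return null; // network-level failure on every attempt
}
```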