Why is ns_t_ns faster than ns_t_a when querying the root server?


I want to measure the latency between my client and the local DNS server, so I send a query for the root DNS server (.) like this:

res_nquery(&res, ".", ns_c_in, ns_t_a, answer, sizeof(answer));

But if I change ns_t_a to ns_t_ns, the query becomes much faster. Why does this happen?

Response when using ns_t_a: [screenshot]

Response when using ns_t_ns: [screenshot]
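
For reference, a minimal, self-contained sketch of this measurement, assuming the glibc resolver API (link with -lresolv); the timing code is only illustrative:

/*
 * Time a single query for the root (".") against the locally configured
 * resolver.  The query type is ns_t_a here; swap in ns_t_ns to compare.
 */
#include <stdio.h>
#include <time.h>

#include <netinet/in.h>
#include <resolv.h>
#include <arpa/nameser.h>

int main(void)
{
    struct __res_state res;
    unsigned char answer[NS_PACKETSZ];
    struct timespec start, end;

    if (res_ninit(&res) != 0) {
        perror("res_ninit");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &start);
    int len = res_nquery(&res, ".", ns_c_in, ns_t_a, answer, sizeof(answer));
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                (end.tv_nsec - start.tv_nsec) / 1e6;

    /* res_nquery returns the answer length, or -1 on error or NODATA. */
    printf("res_nquery returned %d after %.2f ms\n", len, ms);

    res_nclose(&res);
    return 0;
}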


1 Answer

Answer by Florian Weimer:

A recursive resolver needs to cache the ./IN/NS record set, and usually does so when it starts up. This is called priming and is covered in its own RFC.

The set of root name servers also never expires from the cache (in a typical implementation).

A query for ./IN/A does not happen during regular operation, so the cache needs to be populated first. This resource record set will also expire eventually.

If both resource record sets are in the cache, typical resolver response times will be identical.
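
To observe this, one can time both query types twice in a row against the same resolver. Here is a rough sketch under the same assumptions as the code in the question (glibc resolver API, linked with -lresolv); the second round of each type should be answered from the cache:

/*
 * Time ./IN/NS and ./IN/A queries against the local recursive resolver.
 * The NS set is primed and kept in the cache, so it should answer quickly
 * from the start; the first A query may need upstream recursion, while a
 * repeat should be served from the cache.
 */
#include <stdio.h>
#include <time.h>

#include <netinet/in.h>
#include <resolv.h>
#include <arpa/nameser.h>

/* Round-trip time in milliseconds.  The A query may report NODATA
   (res_nquery returns -1) even though a response was received. */
static double query_ms(res_state statep, int type)
{
    unsigned char answer[NS_PACKETSZ];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    res_nquery(statep, ".", ns_c_in, type, answer, sizeof(answer));
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) * 1e3 +
           (end.tv_nsec - start.tv_nsec) / 1e6;
}

int main(void)
{
    struct __res_state res;

    if (res_ninit(&res) != 0)
        return 1;

    double ns1 = query_ms(&res, ns_t_ns);
    double ns2 = query_ms(&res, ns_t_ns);
    double a1  = query_ms(&res, ns_t_a);
    double a2  = query_ms(&res, ns_t_a);

    printf("./IN/NS: %.2f ms, repeat %.2f ms\n", ns1, ns2);
    printf("./IN/A:  %.2f ms, repeat %.2f ms\n", a1, a2);

    res_nclose(&res);
    return 0;
}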