I didn't see anything that answered what I'm looking for here - if there is something I apologize.
I have several secondary nameservers and four master nameservers - two per datacenter. I want the following query behavior:
Client => nameserver list (e.g. /etc/resolv.conf), populated with secondaries
- If secondary knows the answer, respond
- If secondary doesn't know the answer, I want it to forward the request to the master nameservers
- Master nameservers would then recurse to the root nameservers, using the root hints file, if they don't already know the answer.
I was thinking forwarders, but I believe that would make the secondaries forward everything unless it already has it cached, and I don't want that behavior. They are authoritative for zones and should respond as such.
Any ideas?
Unfortunately I don't think this is how DNS works!
If you have two nameservers defined in your resolv.conf, the resolver will query the first and wait for either an answer or a timeout. If there is a timeout, it will then move on to the next. If the DNS server responds, even with a negative answer, that is the end of the resolution process.
DNS makes the presumption that every nameserver is pulling from the same dataset. If server A gets a response from a server that is authoritative for a domain, then that is as far as that query gets. If an authoritative nameserver doesn't have a record for a name it is authoritative over, then it is presumed that the record doesn't exist.
The client isn't going to make the assumption that any other record in resolv.conf will get any other answer. There are multiple records there to protect against server failures, not to get alternative answers.
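For reference, this stub-resolver behavior can be tuned (but not fundamentally changed) on glibc systems via resolv.conf options; the addresses below are placeholders:

```
nameserver 192.0.2.10      # first secondary (placeholder address)
nameserver 192.0.2.11      # queried only after the first times out
options timeout:2 attempts:2
# options rotate           # round-robin across the servers instead of strict order
```

Note there is no option meaning "ask the next server if this one returns a negative answer": an NXDOMAIN or NODATA response ends the lookup.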
What is the problem you are trying to solve here?
Related
As the question above asks: is it possible to store all top-level domains locally? Although there are many TLDs, there are not that many. If we could skip the DNS root servers, DNS lookups would definitely speed up. Can anyone explain this?
On the false premise of speeding things up
If we can skip DNS root servers, DNS lookup would definitely speed up.
That premise is false, or at least far more complicated than it seems at first.
Every recursive nameserver ships with a list of root nameservers, both names and IP addresses. By a process called root priming, they will contact any of those to get an updated list (IP addresses do change, from time to time). This is cached, as any DNS reply is cached.
Hence, you contact root servers far less often than you think, and so you do not really have anything to "speed up" here.
This is also true for the level below, the TLDs. By querying the root nameservers any recursive nameserver will get the list of authoritative nameservers for a given TLD, and this will be cached.
The current TTL of the root zone's NS records is 2 days, so your recursive nameserver will never contact root nameservers more than once per 2 days for a given TLD, and you will certainly need to resolve multiple names in the same TLD, so again you have almost nothing to gain here.
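To illustrate why caching makes the root servers nearly irrelevant for speed, here is a minimal sketch (a toy, not a real resolver) of TTL-based caching of TLD delegations; the nameserver names returned are made up:

```python
import time

ROOT_NS_TTL = 2 * 24 * 3600  # 172800 seconds: current TTL of root zone NS records


def query_root_for(tld):
    # Stand-in for a real DNS query to a root server (names are made up).
    return [f"a.nic.{tld}", f"b.nic.{tld}"]


class DelegationCache:
    """Toy cache of TLD -> nameserver delegations with TTL expiry."""

    def __init__(self):
        self.cache = {}          # tld -> (nameservers, expires_at)
        self.root_queries = 0    # how often we actually contact a root server

    def tld_nameservers(self, tld, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(tld)
        if entry and entry[1] > now:
            return entry[0]                        # cache hit: root not contacted
        self.root_queries += 1                     # cache miss: ask a root server
        nameservers = query_root_for(tld)
        self.cache[tld] = (nameservers, now + ROOT_NS_TTL)
        return nameservers
```

Resolving thousands of names under the same TLD within two days triggers exactly one root query for that TLD; only after the TTL expires is a root server contacted again.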
But yes, you can store all TLDs locally
You can store the list of TLDs locally; this is absolutely not a problem.
ICANN lets you download the root zone, the exact same content published by the root nameservers; you just need to fetch https://www.internic.net/domain/root.zone for example. Or you can configure your own nameserver to do AXFR queries towards the root nameservers that allow them, so that you become a private secondary and have the list of all TLDs locally. See the bottom of https://root-servers.org/faq.html
The problem is now mainly: how do you make sure this stays updated? TLDs come and go, not dozens per day, but changes happen: new TLDs are introduced (a big wave after 2012, a new wave expected for gTLDs, small changes for ccTLDs too), or removed (expired or bankrupt gTLDs, countries disappearing, etc.), or their nameserver sets change (names or IP addresses).
Do you now want to monitor things so that you always have the correct, up-to-date list (whereas querying the root nameservers, by definition, always gives you the up-to-date content)?
There is also a smaller side question: how do you ensure the authenticity of the content? Do you properly check the certificate attached to the HTTPS resource above? Do you authenticate the AXFR reply? Etc.
In fact, this is exactly where DNSSEC shines, as it can authenticate all records in the root zone; hence it is not important where you get the content from... as long as you can validate it. But to validate it you need a copy of the root zone DNSSEC key, which has already changed over time.
Anyway, that kind of setup is described at length in RFC 7706 - Decreasing Access Time to Root Servers by Running One on Loopback which has the following abstract:
Some DNS recursive resolvers have longer-than-desired round-trip times to the closest DNS root server. Some DNS recursive resolver operators want to prevent snooping of requests sent to DNS root servers by third parties. Such resolvers can greatly decrease the round-trip time and prevent observation of requests by running a copy of the full root zone on a loopback address (such as 127.0.0.1). This document shows how to start and maintain such a copy of the root zone that does not pose a threat to other users of the DNS, at the cost of adding some operational fragility for the operator.
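A minimal sketch of such a setup in BIND (9.14 or later), relying on the built-in defaults for mirroring the root zone; this is an illustration, so check your version's documentation before using it:

```
// named.conf fragment: serve a locally transferred, DNSSEC-validated
// copy of the root zone instead of querying root servers over the network.
zone "." {
    type mirror;   // BIND supplies default primaries for the root zone
};
```

A mirror zone is discarded and re-fetched if validation fails, which addresses the update and authenticity concerns discussed above.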
You can also read this paper: On Eliminating Root Nameservers from the DNS by Mark Allman. While the paper is unambiguously in favor of removing the root nameservers, it gives many insightful points on the benefits and drawbacks of doing that.
For more context on what happens when a recursive nameserver starts, you may also wish to have a look at BCP 209 - RFC 8109 - Initializing a DNS Resolver with Priming Queries, whose abstract is:
This document describes the queries that a DNS resolver should emit to initialize its cache. The result is that the resolver gets both a current NS Resource Record Set (RRset) for the root zone and the necessary address information for reaching the root servers.
You can also have a look at https://localroot.isi.edu/, which gives you a way to synchronize the root zone and authenticate it with a TSIG key.
Probably for the same reason that you don't have local copies of all six-level DNS names, like www.paxdiablo.is.good.looking.com.
Yes, there would be a lot more of those (over and above the 13-or-so root-level IP addresses), but the main problem is not so much the size as the update strategy.
At the moment, the 13 root server identities have specific IP addresses (in reality there are far more than 13 physical servers, but they all get accessed via one of those 13 addresses).
Those IP addresses are baked in to the various DNS resolver programs, and they rarely change.
If one does have to change, the other 12 will continue to provide service until all DNS resolvers around the planet have their lists updated. In fact, one recent (2017, I think) change to the L root server was done by running both the old and new IP addresses in parallel for a period of six months before shutting down the old one. This would presumably give all DNS resolvers a chance to switch to the new address and therefore never be without the full complement of 13.
So you probably wouldn't want to change all 13 IP addresses in a single step, without some form of parallel serving of old and new :-) This arrangement provides a great deal of resilience to the lookup system.
However, that resilience would not necessarily be the case for the TLD addresses since they may move from provider to provider at will, and may even change IP addresses within a provider. ICANN has much more control over the root domain than it does over the various TLD providers (in terms of their IP addresses), and there are some 1500 TLDs currently in existence.
In any case, the improvement may not be as great as you expect, since various DNS resolvers already cache multiple levels of the hierarchy, the same way ARP tables on your machines cache IP-to-MAC-address mappings. You should read Patrick Mevzek's excellent answer to this question (and even accept it), since it delves deeper into the technical side of things.
I'm trying to get all the domains linked to an IP address, like http://viewdns.info/reverseip/?host=23.227.38.68&t=1 does, but I'm having no luck with dig 23.227.38.68 or nslookup 23.227.38.68. Any idea what I'm doing wrong?
The design of DNS does not support discovering every domain associated with a given IP address. You may be able to retrieve one or more DNS names associated with the IP address through a reverse lookup (PTR records), but that does not necessarily give you all domains. In fact, it rarely will.
This is because the information you seek is scattered throughout the global DNS network and there is no single authoritative node in the network that has this information. If you think about it, you can point the DNS A record of your own domain to the IP of stackoverflow.com and that's perfectly valid, but anyone seeking to know this would have to find your DNS servers to figure this out. DNS does not provide any pointers for this, though.
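For context, the reverse (PTR) lookup that dig -x performs queries a special name built from the reversed IP address under in-addr.arpa; a minimal sketch of that name construction:

```python
def reverse_pointer(ipv4: str) -> str:
    """Build the in-addr.arpa name that a PTR lookup for this IPv4 address queries."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```

For example, reverse_pointer("23.227.38.68") gives "68.38.227.23.in-addr.arpa", which is the name dig -x 23.227.38.68 asks for. (Python's standard library offers the same via ipaddress.ip_address(...).reverse_pointer.) The PTR record at that name is a single answer chosen by whoever controls the reverse zone, which is why it cannot enumerate all domains pointing at the address.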
Yet, certain "passive DNS" services (probably including viewdns.info) seem to overcome this limitation. These services all work by aggregating DNS data seen in the wild one way or another. At least one of these services works by monitoring DNS traffic passing through major DNS resolvers, building a database from DNS queries. For instance, if someone looks up yourdomain.com that points to 1.2.3.4 and the DNS query happens to pass through the monitored resolver, they take note of that. If a query for anotherdomain.com is seen later and it also resolves to 1.2.3.4, now they have two domains associated with 1.2.3.4, and so on. Note that due to the above, none of the passive DNS services are complete or real-time (they can get pretty close to either, though).
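The aggregation idea can be sketched as a simple inverted index; this illustrates the concept only, not how any particular service is actually implemented:

```python
from collections import defaultdict


class PassiveDNS:
    """Toy passive-DNS store: records (name, ip) observations, answers reverse queries."""

    def __init__(self):
        self.ip_to_names = defaultdict(set)

    def observe(self, name: str, ip: str):
        # Called for each A-record answer seen passing through a monitored resolver.
        self.ip_to_names[ip].add(name)

    def domains_for(self, ip: str):
        # Returns only domains we happened to observe: never guaranteed complete.
        return sorted(self.ip_to_names.get(ip, set()))
```

The completeness caveat falls out of the design: a domain that was never queried through a monitored resolver simply never enters the index.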
I'm creating my own DNS server using Windows Server, and other users will use it. I thought that, in case it goes down, I should have a backup. What if users set my DNS server as primary and something like Google Public DNS as secondary? How will it work? If a name can't be resolved using my server, will clients try Google's? Will they try it on every request?
Since you mention Google's public DNS server, I assume you're talking about a nameserver to be used as a recursive resolver (not as an authoritative server containing zones).
DNS doesn't distinguish between "primary" and "secondary" nameservers. What actually happens is up to the client.
Some clients may query nameservers in order, so they will query yours first, and then query Google's only if they don't get a response from yours. Other clients may choose a random server from the list for each query, so they will sometimes query yours and sometimes query Google's. Still others might track statistics on each nameserver and prefer the one that usually gives a faster response. This last option requires a stateful client, and it's something another nameserver acting as a forwarder might do.
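The first strategy (strict order with timeout fallback) can be sketched like this; the server list and the query function are illustrative stand-ins, not any specific stub resolver's code:

```python
def resolve_with_fallback(name, servers, query_fn, attempts=2):
    """Try each server in order; move on only on timeout, never on a negative answer."""
    for _ in range(attempts):
        for server in servers:
            answer = query_fn(server, name)  # returns None to simulate a timeout
            if answer is not None:
                return answer                # any response, even NXDOMAIN, ends the search
    raise TimeoutError(f"no server answered for {name}")
```

Note the key property discussed elsewhere on this page: a negative answer from the first server is returned as-is; the second server is consulted only when the first fails to respond at all.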
In practice it will not matter because your recursive resolver and Google's public recursive resolver should give the same response for every query.
The specific query that led me to try and unpick this process was:
Will a DNS lookup for a subdomain, such as assets.example.com, be faster if the parent domain, example.com, has already been resolved?
By my (naive) understanding, the basic process for translating a domain name into an IP address is quite simple. The addresses of the thirteen root servers, which know how to resolve top-level domains like com and net, are hard-coded in network hardware. In the case of a lookup for example.com, our local DNS server, probably our router, asks one of these root servers where to find a top-level nameserver for com. It then asks the resultant nameserver if it knows how to resolve example. If it does, we're done; if not, we're passed to another server. Each nameserver in this process may well be caching, so that for a time our local router will now know offhand where to look for com and example, and the com server will know where to look for example.
Still, I don't really get it.
I know there are other intermediate DNS servers, such as those provided by ISPs. At what point are they queried?
If the com TLD nameserver does not know how to resolve example, how does it work out what other nameservers to check? Or would this simply mean that example.com cannot be resolved?
When I register a domain and configure nameservers, am I in effect editing a group of NS records for my subdomain of a particular TLD in the database used by the nameservers for that TLD?
Wikipedia explains that some DNS servers combine caching with a recursive query implementation which allows them to serve cache hits and reliably resolve cache misses. I don't understand how these servers come to be queried, or how (even broadly) the resolving algorithm works.
Looking back at my initial question, I might take a stab at "no", assuming the A records are both on the same nameserver. Is this accurate?
First, the misconceptions:
The root hints (names and IP addresses of the 13 root servers) are hardly ever "hard coded in network hardware". Network hardware, such as a router, may sometimes have a built-in DNS resolver if it also happens to have a DHCP server, but if it does, it's usually just a forwarding resolver that passes the query along to an upstream nameserver (obtained from an ISP) if it doesn't know the answer.
Nameservers provided by ISPs don't usually act as "intermediate DNS servers". Either you use your own nameservers (e.g. corporate nameservers, or you installed BIND on your computer) or you use the ones provided by your ISP. In either case, whichever nameserver you choose will take care of the recursive resolution process from beginning to end. The exception is the aforementioned forwarding nameservers.
If the com TLD nameserver does not know how to resolve example, it does not work out what other nameservers to check. It is itself the nameserver to check. It either knows about example, or example doesn't exist.
The answer to your question is yes. If a nameserver has already resolved example.com (and that result is still valid in its cache), then it will be able to resolve assets.example.com more quickly.
The recursive resolution process is much as you described it: first find out the nameservers for . (the root), then find out the nameservers for com, etc... Only the recursive resolver does not actually ask for the nameservers for . and com and example.com. It actually asks for assets.example.com each time. The root servers won't give it the answer to that question (they don't know anything about assets.example.com) but they can at least offer a referral to the nameservers for com. Similarly, the nameservers for com won't answer the question (they don't know either) but they can offer a referral to the nameservers for example.com. The nameservers for example.com may or may not know the answer to the question depending on whether assets.example.com is delegated further to other nameservers or provisioned in the same zone as example.com. Accordingly, the recursive resolver will receive either a final answer or another referral.
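The referral walk described above can be sketched with a toy delegation tree; all zone data, server names, and addresses below are made up:

```python
# Toy authoritative data: each "server" either answers the full name or refers downward.
ZONES = {
    "root":           {"referral": {"com": "com-server"}},
    "com-server":     {"referral": {"example.com": "example-server"}},
    "example-server": {"answers": {"assets.example.com": "192.0.2.7"}},
}


def resolve(name, server="root", max_hops=10):
    """Ask for the full name at each step; follow referrals until an answer or failure."""
    for _ in range(max_hops):
        zone = ZONES[server]
        if name in zone.get("answers", {}):
            return zone["answers"][name]          # final answer
        for suffix, next_server in zone.get("referral", {}).items():
            if name == suffix or name.endswith("." + suffix):
                server = next_server              # follow the referral downward
                break
        else:
            return None                           # no matching delegation: NXDOMAIN
    return None
```

Each hop asks the same full question and gets back either the answer or a pointer one level down the tree, which mirrors the referral chain from root to com to example.com.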
Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages:
In Parent section:
DNS Parent sent Glue: The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameserver listed, meaning it is sending out your nameservers' host names without sending the A records of those nameservers. It's OK, but you have to know that this will require an extra A lookup that can slightly delay connections to your site. This happens a lot if you have nameservers on a different TLD (domain.com, for example, with nameserver ns.domain.org).
and in NS section:
Glue for NS records INFO: GLUE was not sent when I asked your nameservers for your NS records. This is OK, but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are: 109.230.225.96 and 84.201.40.52. You can fix this, for example, by adding A records for your nameservers in the zones listed above.
I do perfectly understand that the primary objective of glue records is to resolve circular dependencies.
The classic use case:
My domain is example.com and I want to have the nameserver ns1.example.com. This will never work, because I cannot know the IP of ns1.example.com without fetching example.com, and in order to do that I need to fetch it from ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so this can work out.
So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. However, to fetch the zone information from the nameservers I need to know their IP addresses, right? And in order to know that, I need to fetch the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP needs to do that in the background.) So that is an extra lookup that takes time?
If I have glue records, I know the IP addresses right away without the need to look them up, so this should speed up the resolution of my domain, shouldn't it?
However, my DNS zone provider (tecserver.at) replied that this would make no sense because:

we are not running ns1.ourdomain.com and ns2.ourdomain.com as authoritative NS for ourdomain.com. That would be the only case where glue records make sense. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution.
Simple answer for those interested: glue records do speed up DNS resolution, regardless of whether they are used for internal nameserver names or not.
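To make the mechanics concrete, here is a sketch of what a delegation with glue looks like in the parent (com) zone; all names and addresses below are made up:

```
; Delegation in the com zone, with glue:
example.com.      IN NS  ns1.example.com.
example.com.      IN NS  ns2.example.com.
ns1.example.com.  IN A   192.0.2.1   ; glue: breaks the circular dependency
ns2.example.com.  IN A   192.0.2.2

; Out-of-bailiwick delegation: no glue is possible (or needed) in com,
; so resolvers must do an extra lookup for ns.example.org first.
other.com.        IN NS  ns.example.org.
```

The glue A records ride along in the additional section of the referral, which is exactly the extra lookup the intodns.com messages above are warning about when it is missing.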