I am writing to you because I have a design problem with my DNS infrastructure. My infrastructure is composed of one DNS machine (recursive or forwarding) and another, authoritative, machine that serves different views according to the source of the client (you can think of it as BIND even if it is not actually BIND). This authoritative machine should not be queried directly but must be reached through the other one. To summarize, here is the infrastructure:
> Client Location 1    Client Location 2    Client Location 3
>           \                  |                  /
>              DNS Recursive or Forwarding
>                              |
>            DNS Authoritative with 3 "views"
I thought of different solutions to solve this problem:
Create different ports on the recursive (or forwarding) DNS, each port carrying a DNS instance corresponding to one view, which would query the authoritative DNS (which could thus recognize the origin). But I find this solution rather ugly, and it will quickly grow as the number of views increases.
Use the EDNS extension to forward the client network to the authoritative server (but that seems pretty complicated).
I wanted to know if you have other solutions and if not what would be the best.
Thank you in advance !
The first solution does not seem really workable, as there is almost no way to change the default DNS port in the various end-client OSes. You would instead need separate recursive nameservers on separate IP addresses, with each client configured to use the specific nameserver it needs.
The second solution can work: it is ECS, the "EDNS Client Subnet" feature, described in RFC 7871 and supported in various nameservers. See for example in BIND: https://www.isc.org/wp-content/uploads/2017/04/ecs.pages.pdf
Now, are you really sure you need this setup, or that this is the only way to achieve your goals? It is difficult to propose other ideas, as you describe your solution from the get-go but not really your underlying problem or your constraints.
For example, it may be solved in some cases by just configuring each client with a different domain search list: client1 would have client1.example.com as suffix, client2 would have client2.example.com, and so on. Then, with only one standard recursive nameserver and one authoritative one for example.com, without any kind of extension or complicated setup, when client1 attempts to resolve www it will (may) get a different reply than client2 also attempting to resolve www, as the final fully qualified domain names would indeed be different (www.client1.example.com vs www.client2.example.com) because of the different search lists. This of course depends a lot on what kind of applications are running on each client.
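The search-list mechanism above can be sketched as follows. This is a simplified model of what a stub resolver does with an unqualified name, not any specific resolver's implementation; the client suffixes are the hypothetical ones from the example.

```python
# Sketch of how a stub resolver's search list qualifies a short name.
# The suffixes are the hypothetical per-client search domains above.

def qualify(name: str, search_list: list[str]) -> list[str]:
    """Return the fully qualified names tried, in order."""
    if name.endswith(".") or "." in name:
        # Already (partially) qualified: typically tried as-is.
        return [name.rstrip(".")]
    return [f"{name}.{suffix}" for suffix in search_list]

# The same short name "www" resolves differently per client:
print(qualify("www", ["client1.example.com"]))  # ['www.client1.example.com']
print(qualify("www", ["client2.example.com"]))  # ['www.client2.example.com']
```

Real resolvers add wrinkles (an `ndots` threshold, trying the literal name first), but the principle is the same: two clients asking for the same short name end up asking for two different fully qualified names.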
The use of simpler nameservers such as dnsmasq may also help, but again your problem space is not defined well enough to be sure what to suggest.
Related
As the question above asks: is it possible to store all top-level domains locally? Although there are many TLDs, there are not that many. If we can skip the DNS root servers, DNS lookup would definitely speed up. Can anyone explain this?
On the false premise of speeding things up
If we can skip DNS root servers, DNS lookup would definitely speed up.
That premise is false, or at least far more complicated than it seems at first.
Every recursive nameserver ships with a list of root nameservers, both names and IP addresses. By a process called root priming, they will contact any of those to get an updated list (IP addresses do change, from time to time). This is cached, as any DNS reply is cached.
Hence you contact root servers far less often than you think, and so there is not really anything to "speed up" here.
This is also true for the level below, the TLDs. By querying the root nameservers any recursive nameserver will get the list of authoritative nameservers for a given TLD, and this will be cached.
The current TTL on the root zone's delegation NS records is 2 days, so your recursive nameserver will never contact the root nameservers more than once per 2 days for a given TLD; and you will certainly need resolution of multiple names in the same TLD, so again you have almost nothing to gain here.
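The caching behaviour described above can be illustrated with a minimal TTL cache. This is a toy sketch, not how any real resolver is implemented; the nameserver names are real .com servers but the cache structure is invented for illustration.

```python
import time

# Toy sketch of a resolver cache with TTLs: once the .com NS set is
# cached (TTL of 2 days in the root zone), the root servers are not
# asked for it again until the entry expires.

class TtlCache:
    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def put(self, key, value, ttl_seconds):
        self._store[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: would trigger a new query
            return None
        return value

cache = TtlCache()
cache.put(("com.", "NS"),
          ["a.gtld-servers.net.", "b.gtld-servers.net."],
          ttl_seconds=172800)  # 2 days, as in the root zone
# Every .com lookup in the next 2 days hits the cache, not the root:
print(cache.get(("com.", "NS")))
```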
But yes, you can store all TLDs locally
You can store the list of TLDs locally, this is absolutely not a problem.
ICANN lets you download the root zone, the exact same content published by the root nameservers; you just need to go to https://www.internic.net/domain/root.zone for example. Or you can configure your own nameserver to do AXFR queries towards some root nameservers that allow them, so that you become a private slave and have the list of all TLDs locally. See the bottom of https://root-servers.org/faq.html
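Once downloaded, the root zone is plain zone-file text, and extracting the TLD delegations from it is straightforward. A quick sketch, assuming the standard tab-separated presentation format of root.zone (the two sample lines below mimic real entries; the parser is illustrative, not a full zone-file parser):

```python
# Extract TLD delegations (NS records) from root-zone text.
# Sample lines mimicking the format of root.zone:
sample = """\
com.\t172800\tIN\tNS\ta.gtld-servers.net.
org.\t172800\tIN\tNS\ta0.org.afilias-nst.info.
"""

def tld_nameservers(zone_text: str) -> dict[str, list[str]]:
    """Map each TLD to the nameservers it is delegated to."""
    delegations: dict[str, list[str]] = {}
    for line in zone_text.splitlines():
        fields = line.split()
        # Keep NS records, skipping the root's own apex records (".").
        if len(fields) == 5 and fields[3] == "NS" and fields[0] != ".":
            delegations.setdefault(fields[0], []).append(fields[4])
    return delegations

print(tld_nameservers(sample))
# {'com.': ['a.gtld-servers.net.'], 'org.': ['a0.org.afilias-nst.info.']}
```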
The main problem now is: how do you make sure this stays updated? TLDs come and go. Not dozens per day, but changes happen: new TLDs are introduced (a big wave after 2012, a new wave expected for gTLDs, small changes for ccTLDs too), TLDs are removed (expired or bankrupt gTLDs, countries disappearing, etc.), or their nameserver sets change (names or IP addresses).
Do you really want to monitor all this so that you always have the correct, up-to-date list (whereas querying the root nameservers, by definition, always gives you the up-to-date content)?
You also have a smaller side question: how do you ensure the authenticity of the content? Do you properly check the certificate attached to the HTTPS resource above? Do you authenticate the AXFR reply? Etc.
In fact, for some, this is exactly where DNSSEC shines: it can authenticate all records in the root zone, so it does not matter where you get the content from... as long as you can validate it. But to validate it you need a copy of the root zone's DNSSEC key, which has already changed over time.
Anyway, that kind of setup is described at length in RFC 7706 - Decreasing Access Time to Root Servers by Running One on Loopback which has the following abstract:
Some DNS recursive resolvers have longer-than-desired round-trip
times to the closest DNS root server. Some DNS recursive resolver
operators want to prevent snooping of requests sent to DNS root
servers by third parties. Such resolvers can greatly decrease the
round-trip time and prevent observation of requests by running a copy
of the full root zone on a loopback address (such as 127.0.0.1).
This document shows how to start and maintain such a copy of the root
zone that does not pose a threat to other users of the DNS, at the
cost of adding some operational fragility for the operator.
You can also read this paper: On Eliminating Root Nameservers from the DNS by Mark Allman. While the paper is unambiguously in favor of removing the root nameservers, it gives many insightful points on the benefits and drawbacks of doing that.
For more context and about what happens when a recursive nameserver starts, you may wish to have a look also at
BCP 209 - RFC 8109 - Initializing a DNS Resolver with Priming Queries whose abstract is:
This document describes the queries that a DNS resolver should emit
to initialize its cache. The result is that the resolver gets both a
current NS Resource Record Set (RRset) for the root zone and the
necessary address information for reaching the root servers.
You can also have a look at this website: https://localroot.isi.edu/ ; it gives you a way to synchronize the root zone and authenticate it with a TSIG key.
Probably for the same reason that you don't have local copies of all six-level DNS names, like www.paxdiablo.is.good.looking.com.
Yes, there would be a lot more of those (over and above the 13-or-so root level IP addresses) but the main problem is not so much the size as the update strategy.
At the moment, the 13 root servers have specific IP addresses (in reality, there are more than 13 servers but they all get accessed by one of those IP addresses).
Those IP addresses are baked into the various DNS resolver programs, and they rarely change.
If one does have to change, the other 12 will continue to provide service until all DNS resolvers around the planet have their lists updated. In fact, one recent (2017, I think) change to the L server was done by running both the old and new IP addresses for a period of six months before shutting down the old one. This would presumably give all DNS resolvers a chance to switch to the new address and therefore never be without the full complement of 13.
So you probably wouldn't want to change all 13 IP addresses in a single change, without some form of parallel serving of old and new :-) This scheme provides a great deal of resilience in the lookup system.
However, that resilience would not necessarily be the case for the TLD addresses since they may move from provider to provider at will, and may even change IP addresses within a provider. ICANN has much more control over the root domain than it does over the various TLD providers (in terms of their IP addresses), and there are some 1500 TLDs currently in existence.
In any case, improvements may not be as large as you expect, since various DNS resolvers already cache multiple levels of the hierarchy, the same way the ARP tables on your machines cache IP-to-MAC-address lookups. You should read Patrick Mevzek's excellent answer to this question (and even accept it), since it delves deeper into the technical side of things.
I'm trying to get all the domains linked to a record, like here:
http://viewdns.info/reverseip/?host=23.227.38.68&t=1 but I'm having no luck with dig 23.227.38.68 or nslookup 23.227.38.68. Any idea what I'm doing wrong?
The design of DNS does not support discovering every domain associated with a certain IP address. You may be able to retrieve one or more DNS names associated with the IP address through a reverse lookup (PTR records), but that does not necessarily give you all domains. In fact, it rarely will.
This is because the information you seek is scattered throughout the global DNS network and there is no single authoritative node in the network that has this information. If you think about it, you can point the DNS A record of your own domain to the IP of stackoverflow.com and that's perfectly valid, but anyone seeking to know this would have to find your DNS servers to figure this out. DNS does not provide any pointers for this, though.
Yet, certain "passive DNS" services (probably including viewdns.info) seem to overcome this limitation. These services all work by aggregating DNS data seen in the wild one way or another. At least one of these services works by monitoring DNS traffic passing through major DNS resolvers, building a database from DNS queries. For instance, if someone looks up yourdomain.com that points to 1.2.3.4 and the DNS query happens to pass through the monitored resolver, they take note of that. If a query for anotherdomain.com is seen later and it also resolves to 1.2.3.4, now they have two domains associated with 1.2.3.4, and so on. Note that due to the above, none of the passive DNS services are complete or real-time (they can get pretty close to either, though).
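The aggregation idea behind such services can be sketched in a few lines. This is a toy model of the general technique described above, not how viewdns.info or any particular passive DNS provider actually works; the domains and address are the placeholder ones from the example.

```python
from collections import defaultdict

# Toy sketch of passive DNS aggregation: each time a (name, address)
# pair is observed in resolver traffic, record it; then answer
# "which domains pointed at this IP?" from the accumulated data.

class PassiveDns:
    def __init__(self):
        self._by_ip = defaultdict(set)

    def observe(self, name: str, address: str):
        self._by_ip[address].add(name)

    def domains_for(self, address: str) -> set[str]:
        # Only ever as complete as the traffic that was observed.
        return self._by_ip[address]

pdns = PassiveDns()
pdns.observe("yourdomain.com", "1.2.3.4")
pdns.observe("anotherdomain.com", "1.2.3.4")
print(sorted(pdns.domains_for("1.2.3.4")))
# ['anotherdomain.com', 'yourdomain.com']
```

The key limitation is visible in `domains_for`: a domain that never crosses a monitored resolver is simply absent, which is why no passive DNS database is complete.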
Can two or more domains be hosted on the same server? If yes, what IP address are we going to get for the two domains?
As a user, can I know how a server resolves a host name and assigns a unique ID to different host names?
After looking at my comment above, and as you are a user, not an administrator, just look at the documentation of nslookup(1). It is a tool for making DNS queries to servers. It allows you to perform DNS resolution and to investigate the way you are getting the answer (there are many ways to answer a query, believe me).
First you need to know how the asking is done. Normally, clients make recursive queries (they want a definitive answer and want the server to do the heavy work) and servers make iterative ones (they approximate the answer by asking the servers along the chain to the final domain you are looking for). Servers and clients normally cache results for future questions and provide several forms of fault tolerance, so you cannot normally control how a query is resolved. As this is probably the most requested service on the Internet, the protocol has been optimized to get quick answers even in the worst case.
Once you get an answer, it can be a partial one, it can be cached, and it can be non-authoritative (meaning the server is serving a cached entry, not a locally administered one).
When you get several responses to a query (yes, this can happen), you normally receive them in an order that depends on where you are querying from. The server makes a best effort to order them by proximity to the client (the nearest address is served first) and/or randomly, so you can do round robin across the addresses you receive. It depends on the client software, the server implementation, the administrator's policy, etc.
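The round-robin behaviour mentioned above can be sketched like this. This is a simplified model of one common strategy (rotating the record set between answers); real servers may instead shuffle randomly or sort by proximity, and the addresses are documentation placeholders.

```python
# Toy sketch of DNS round robin: a server holding several A records
# for one name rotates the list between answers, so clients that pick
# the first address spread their load across all of them.

def round_robin_answers(addresses: list[str]):
    """Yield the record set rotated one position further each time."""
    n = len(addresses)
    i = 0
    while True:
        yield addresses[i % n:] + addresses[:i % n]
        i += 1

answers = round_robin_answers(["192.0.2.1", "192.0.2.2", "192.0.2.3"])
print(next(answers))  # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(next(answers))  # ['192.0.2.2', '192.0.2.3', '192.0.2.1']
```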
You can even receive a different response depending on who you are. Several corporate servers serve different views of the database depending on where the clients come from. If they come from inside the company, they serve addresses for servers not visible from the outside. For example, if you try to access the corporate web server, you may receive the private address to reach it, not the public address of the server accessible from the Internet. This concept is called a view, and many servers implement it, so the answer to your question is: it depends :)
I've been reading a lot about nameservers these last days. For our websites we want to optimize the waiting time for visitors that is caused by our nameserver. I have some questions about IP anycast and the general function of the DNS. Let me start by explaining how I understood the DNS works, from the user side:
User X wants to visit www.example.com, the following steps happen to get the IP address:
1. User X sends a request to the nameserver of his ISP, or a nameserver of his choice (the recursive nameserver).
2. If the address is not found, the recursive nameserver sends a request to one of the 13 root nameservers to get the nameservers for the .com TLD.
3. Query the .com nameservers to get the authoritative nameserver.
4. Query the authoritative nameserver to get the IP address for www.example.com.
First, I realized that as the owner of a website you can only optimize step 4; all the other steps are not in our hands.
I came across IP anycast nameservers (which are also used for the 13 root nameservers) and I understand the concept of distributed machines. But what I don't understand is where the decision logic is implemented that determines which of the distributed machines the user will be sent to, according to his "position". I mean, when I buy an anycast nameserver, the logic should be implemented on the .com nameserver (step 3), so that this nameserver decides to which machine of my anycast nameserver the user will be sent.
That is really hard for me to understand, and I am asking myself if it really works that way. I hope someone can help me with these questions.
Besides that, I found out that another small method to gain some speed for the user is to only use A records and no CNAME records anymore.
Are there some more ways to optimize a nameserver?
Thanks in advance!
Your question is not really related to programming, but more to operations, and is also a little too vague ("Are there some more ways to optimize a nameserver?").
But let us try to give you pointers.
User X wants to visit www.example.com, the following steps happen to get the IP address:
Your description above is then mostly correct. Note that, by default until very recently, a recursive nameserver would send the whole queried name to each nameserver at each step. Recently, QNAME minimization appeared as a standard, and now recursive nameservers can send each authoritative nameserver only the labels it needs to reply. This enhances privacy without changes to the protocol, but it is not widespread today because some authoritative nameservers do not work correctly when queried that way.
As a domain name owner, you can indeed only have an impact at the last step. But remember that recursive nameservers have caches, so the list of root nameservers, as well as the list of .COM nameservers for example, are so "hot" (so often needed) that they almost always sit in resolvers' caches; basically, steps 1 and 2 do not happen often (typically only at start, when the cache is empty).
I came across IP Anycast nameserver (what is also used for the 13root nameservers) and totally understand the concept of distributed machines. But what I dont understand is where the decision logic, to which of the distributed machines the user will be send, according to his "position",is implemented?
First things first, IP anycasting is not specific to the DNS, it is just hugely popular here because
it solves the load balancing/fail over problem that all big TLDs have
it works especially well with DNS over UDP, which is a simple one-query, one-reply protocol.
So any service can theoretically be anycasted. It means that a given IP address just appears at different locations in the world.
To summarize very broadly, Internet traffic between providers (AS numbers) is exchanged at peering points, where they interconnect and each provider says "I know about IP block 192.0.2.0/24, please send me all traffic for it", and so on for each block
(again, this is a summary. Blocks are allocated by RIRs, and by default this is not very much authenticated, so BGP hijacks happen when another provider also says "give me this traffic" when it shouldn't; this happens because of malicious goals or just simple human error).
For a normal (technical term: "unicast") IP address, only a single provider (AS) will announce it somewhere (technically: announce its block not just a single IP) and everything will be configured in such a way that wherever the start of the exchange is, for this single IP as destination, it will arrive at the exact same box.
On the contrary, for an anycast IP address, either a single provider or multiple ones (that is, multiple Autonomous Systems) will announce this IP at various locations (peering points) around the globe. At each peering point, traffic for this IP will be taken by the provider announcing it there, which will then route this traffic to a specific server "nearby". Announcements of the same IP at peering point A and at peering point B will drive the corresponding traffic to datacentre X on one side and to datacentre Y on the other.
For the client, when everything works, nothing changes, as long as all the replying servers react the same way to the same query. The client does not (and sometimes cannot) even know that the IP is anycasted, or that it hit location X while another client doing the same thing instead hit location Y.
So, in short, nameservers "decide" nothing in this regard. At each point of the DNS resolution, when they need to contact nameserver NS1, they know its IP address is IP1 and they just open a UDP (or sometimes TCP) connection to this IP, absolutely normally. It is the underlying IP routing and BGP that will, if anycast is in action, make the response come from the appropriate "close" server.
Note that anycasting in this way, for DNS, achieves both:
fail-over: if one server dies, with appropriate monitoring its provider withdraws the IP announcement, so that local copy kind of disappears and the traffic will automatically (within seconds) shift to any other instance where the same IP is announced
load balancing: roughly speaking, if you anycast one IP at 2 locations, each should receive 50% of the traffic. This is not true in practice, and it is very complicated (read: impossible) to predict or even monitor, because it all depends on the peering points, the agreements between the providers, and various other policies (simple example: if you peer at two points, where at the first there is only one provider sending you traffic and at the other you exchange traffic with 100 providers, then you may get more connections going to the second instance... except of course if the single provider at the first peering point is an ISP with millions of clients, while the other 100 providers are small single organizations...)
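A toy model can make the "routing decides, not the nameserver" point concrete. This sketch assumes BGP's simplest tie-breaker (shortest AS path wins); real BGP selection involves many more attributes and local policies, and the datacentre names and path lengths are invented for illustration.

```python
# Toy model of anycast instance selection: among all locations
# announcing the same IP, a client's provider forwards traffic along
# the shortest AS path it knows.

ANNOUNCEMENTS = {
    # location -> AS-path length as seen from this client's provider
    "datacentre-X": 2,
    "datacentre-Y": 5,
}

def chosen_instance(announcements: dict[str, int]) -> str:
    """The instance whose announcement has the shortest AS path wins."""
    return min(announcements, key=announcements.get)

print(chosen_instance(ANNOUNCEMENTS))  # datacentre-X

# Fail-over: withdrawing an announcement shifts traffic automatically,
# with no change on the client or in the DNS data.
remaining = {k: v for k, v in ANNOUNCEMENTS.items() if k != "datacentre-X"}
print(chosen_instance(remaining))  # datacentre-Y
```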
So, some nameservers are anycasted. Nowadays all the root ones are (but this was not true 16 months ago; see https://b.root-servers.org/news/2017/04/17/anycast.html, as b.root-servers.org was the last one to board the anycast wagon), as well as all big TLDs, sometimes even with more than one "anycast DNS provider".
For any domain name, you can get some providers that will give you a DNS service for it, based on a "cloud" of anycasted nameservers.
See for example:
https://www.pch.net/services/dns_anycast
https://www.netnod.se/dns/netnod-dns-services
https://dyn.com/dns/network-map/
http://www.cdns.net/anycast.html
https://www.rcodezero.at/en/home/
https://aws.amazon.com/route53/
https://cloud.google.com/dns/
and many others.
Now following on a totally different topic:
Beside of that i found out, that another small method to gain some speed for the user, is to only use A Records and no CName Records anymore.
This is not really something you gain much from, and CNAME records are useful in many other cases.
Again, you need to remember that there are caches.
So even if your configuration is:
www.mywebsite.example CNAME www.mywebsite.example.somegreatCDN.example
www.mywebsite.example.somegreatCDN.example A 192.0.2.128
it is true that this means, on paper, two DNS requests before an HTTP query can finally be made. But in practice things will be cached (even more so today with big public open resolvers such as 1.1.1.1, 8.8.8.8 or 9.9.9.9, which are in fact anycasted too), so the difference will be negligible, and it only impacts the first lookup, never again while it is in cache... especially in the case of HTTP and everything that happens afterwards, namely opening, frequently, dozens of connections to download JavaScript source files, CSS files, fonts, etc., which may be hosted elsewhere.
A lot of websites use CNAME records without negative impact. See www.amazon.com for example, right now:
;; ANSWER SECTION:
www.amazon.com. 730 IN CNAME www.cdn.amazon.com.
www.cdn.amazon.com. 11 IN CNAME d3ag4hukkh62yn.cloudfront.net.
d3ag4hukkh62yn.cloudfront.net. 11 IN A 54.239.172.122
You may however argue that some names will be more popular than others and hence stay longer in cache, which is certainly the case.
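The mechanics of following such a chain can be sketched as follows. This is a toy model of CNAME resolution against a static record set (the hypothetical records from the example above), not a real resolver; real ones also cache each hop with its own TTL.

```python
# Toy sketch of a resolver following a CNAME chain. After the first
# lookup, each hop sits in cache until its TTL expires, so the "extra"
# queries of a CNAME setup are paid once, not per request.

records = {  # (name, type) -> value, mimicking the zone data above
    ("www.mywebsite.example", "CNAME"):
        "www.mywebsite.example.somegreatCDN.example",
    ("www.mywebsite.example.somegreatCDN.example", "A"): "192.0.2.128",
}

def resolve_a(name: str, max_chain: int = 8) -> str:
    """Follow CNAMEs until an A record is found."""
    for _ in range(max_chain):
        if (name, "A") in records:
            return records[(name, "A")]
        if (name, "CNAME") in records:
            name = records[(name, "CNAME")]  # follow the alias
        else:
            raise LookupError(f"no A or CNAME record for {name}")
    raise LookupError("CNAME chain too long")

print(resolve_a("www.mywebsite.example"))  # 192.0.2.128
```

The `max_chain` guard mirrors what real resolvers do: they refuse to follow arbitrarily long (or looping) CNAME chains.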
And finally:
Are there some more ways to optimize a nameserver?
Based on what? We touched on various subjects above; all are compromises: you sacrifice something (it may just be money) to gain something else (redundancy, etc.). There is no generic rule for when a given compromise makes sense; it will depend a lot on your situation and what you are trying to do.
You are right, and should be congratulated for it, to invest some time in your DNS setup, both for security and performance reasons. A lot of money is often invested in huge HTTP setups to sustain various problems or spikes of activity (though even the best fail sometimes; see the recent Amazon Prime Day opening that was a gigantic failure), yet people often forget about the DNS because it sits at the infrastructure level and is not well known or understood (using UDP already makes it stand out from all other well-known protocols, as this is rare).
For example, there is another, completely different thing (it is orthogonal to anycasting, so it can work with or without it; the goals are different) that is related: "geo-DNS", where a nameserver replies differently based on where the client asks from. This is meant to give, for example, a different IP for a webserver, one that is closer to the client (so in that case the webserver itself is probably not anycasted). This is done by just looking at the source IP of the DNS packet, but that is often not good enough, because the authoritative nameservers see only the recursive nameserver's IP as the source, not the real end client's, and nowadays, with big open public recursive nameservers, the inferred location can be far off. So there is also a specific DNS option called EDNS Client Subnet that can be passed between recursive and authoritative nameservers, so that the latter get the end client's real IP address (in fact a block, not a single IP, for privacy reasons) and can act on it.
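The answer-selection side of geo-DNS can be sketched like this. The subnets, the region labels, and the returned addresses are all invented for illustration; a real geo-DNS server would use a full geolocation database, and the client subnet would arrive either as the packet's source IP or via the EDNS Client Subnet option described above.

```python
import ipaddress

# Toy sketch of geo-DNS answer selection: pick the webserver address
# whose region matches the client's network, falling back to a default.

REGIONAL_ANSWERS = [
    # (client network, webserver address to hand out) - illustrative only
    (ipaddress.ip_network("192.0.2.0/24"), "198.51.100.10"),
    (ipaddress.ip_network("203.0.113.0/24"), "198.51.100.20"),
]
DEFAULT_ANSWER = "198.51.100.30"

def answer_for(client_subnet: str) -> str:
    """Choose the A-record answer based on the client's subnet."""
    client = ipaddress.ip_network(client_subnet)
    for network, address in REGIONAL_ANSWERS:
        if client.subnet_of(network):
            return address
    return DEFAULT_ANSWER

print(answer_for("192.0.2.128/32"))  # 198.51.100.10
print(answer_for("198.18.0.1/32"))   # 198.51.100.30 (default)
```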
Short answer: you are right. The nameservers are where you can optimize, and all the "IP anycast" products I have seen are just nameserver setups with a lot of locations.
They use the same system as the "root servers of the Internet", but this does not mean they have the same function. IP anycast is simply a method for multiple servers in different locations to serve the same IP address.
From Wikipedia (http://en.wikipedia.org/wiki/Anycast):
On the Internet, anycast is usually implemented by using Border Gateway Protocol to simultaneously announce the same destination IP address range from many different places on the Internet. This results in packets addressed to destination addresses in this range being routed to the "nearest" point on the net announcing the given destination IP address.
If you are using a big ISP like ASCIO, or someone using ULTRADNS, you probably do not have to worry about this step too much, but if the NS is a local ISP it is worth considering. Make sure you have nameservers where your visitors are.
I assume this is where you came into contact with "IP anycast" products. None that I have seen offer anything to attack steps 1-2-3; rather, they offer a large set of nameservers, allowing them to reduce resolving time thanks to network closeness.
Let me know if you are of the understanding that the offer is for a root NameServer setup, because I would like to see this.
If I just want to know whether a domain name is reserved, is it sufficient to use this command and see if any domain name servers turn up, in which case it's reserved?
host -t NS example.com
It's a lot faster than visiting http://www.internic.net/whois.html and typing example.com to get much more detailed results, which I'm not interested in anyway.
Absolutely not.
A past employer registered theirname.biz solely for use on the internal network: it had DNS entries on the inward-facing network DNS server, but nowhere on the internet.
I'm not sure the trick was particularly essential, but "imap.theirname.biz" has the advantage over just "imap" that it's unambiguous if you're connected simultaneously to multiple networks (in the absence of deliberate foul play, of course), so you can just use all the networks' internal DNS resolvers. It also has the advantage over "imap.theirname.com" that, once you know the convention, it's immediately obvious that it's a private server, and hence that the reason you can't connect to it is that you forgot to connect to the VPN. There may have been other benefits to which I was not privy: I'm a coder, not an IT tech...
Various TLDs have differing requirements for whether name servers are provisioned or not. For example ".de" does require that name servers are up and running and correctly configured before they'll allow the domain registration to proceed.
The technical standards for DNS don't require it though, in fact there's nothing in the core DNS specifications to link together the registration of a name with its subsequent operation in the DNS.
Therefore, using whois is probably the most reliable method, with the caveat that you'll need a whois client that's clever enough to figure out which server to talk to for the domain in question.
That said, checking for the appropriate NS record is a very good shortcut to check that a domain is registered, you just can't use the absence of such a record to prove that it isn't!
NS records are not necessarily required for registered domains. The whois service is your most reliable option.
Note that most Unix systems and Mac OS X have a "whois" command line program that is really quick to use:
whois stackoverflow.com
I don't believe that you have to have DNS records pointing to your domain. And even if you did have to have DNS set up, there is no assurance that the box acting as the DNS server isn't down.