Must all registered domains have domain name servers assigned to them? - dns

If I just want to know whether a domain name is reserved, is it sufficient to use this command and see if any name servers turn up, in which case it's reserved?
host -t NS example.com
It's a lot faster than visiting http://www.internic.net/whois.html and typing example.com to get much more detailed results, which I'm not interested in anyway.

Absolutely not.
A past employer registered theirname.biz solely for use on the internal network: it had DNS entries on the inward-facing network DNS server, but nowhere on the internet.
I'm not sure the trick was particularly essential, but "imap.theirname.biz" has an advantage over plain "imap": it's unambiguous if you're connected to multiple networks simultaneously (in the absence of deliberate foul play, of course), so you can just use all their internal DNS resolvers. It also has an advantage over "imap.theirname.com": once you know the convention, it's immediately obvious that it's a private server, and hence that the reason you can't connect is that you forgot to connect to the VPN. There may have been other benefits to which I was not privy: I'm a coder, not an IT tech...

Various TLDs have differing requirements for whether name servers must be provisioned. For example, ".de" does require that name servers be up, running and correctly configured before they'll allow the domain registration to proceed.
The technical standards for DNS don't require it, though; in fact, there's nothing in the core DNS specifications linking the registration of a name to its subsequent operation in the DNS.
Therefore, using whois is probably the most reliable method, with the caveat that you'll need a whois client that's clever enough to figure out which server to talk to for the domain in question.
That said, checking for the appropriate NS record is a very good shortcut to check that a domain is registered, you just can't use the absence of such a record to prove that it isn't!
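A small sketch of that combined check (assuming standard `host` and `whois` clients are installed; the "Domain Name" match is a heuristic, since whois output formats vary by registry):

```shell
# Heuristic check: an NS answer proves registration; the absence of one
# proves nothing, so fall back to whois before concluding the name is free.
is_registered() {
  domain="$1"
  if host -t NS "$domain" > /dev/null 2>&1; then
    echo "registered (NS records found)"
  elif whois "$domain" 2>/dev/null | grep -qi 'domain name'; then
    echo "registered (whois record, but no NS delegation)"
  else
    echo "no evidence of registration - check whois output manually"
  fi
}

is_registered example.com
```

The middle branch is exactly the case discussed above: a name registered for internal-only use will have a whois entry but no public NS delegation.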

NS records are not necessarily required for registered domains. The whois service is your most reliable option.
Note that most Unix systems and Mac OS X have a "whois" command line program that is really quick to use:
whois stackoverflow.com

I don't believe you have to have name servers assigned to your domain. Even if you did, there's no assurance that the box acting as the DNS server isn't down.


DNS infrastructure design issues

I am writing because I have a design problem with my DNS infrastructure. My infrastructure is composed of one DNS machine (recursive or forwarding) and another, authoritative, machine that has what you might call views depending on the source of the client (you can think of it as BIND even though it isn't). The authoritative machine should not be queried directly; queries must go through the other one. To summarize, here is the infrastructure:
> Client Location 1    Client Location 2    Client Location 3
>           \                   |                   /
>                  DNS Recursive or Forwarding
>                              |
>               DNS Authoritative with 3 "views"
I thought of different solutions to this problem:
Create different ports on the recursive (or forwarding) DNS, each port backed by a DNS instance corresponding to one view, which would query the authoritative DNS (which could thereby recognize the origin). But I find this solution rather ugly, and it grows quickly as the number of views increases.
Use the EDNS extension to forward the client's network information (but that seems pretty complicated).
I wanted to know whether you have other solutions, and if not, which of these would be best.
Thank you in advance!
The first solution does not seem really workable, as there is nearly no way to change the default DNS port in the various end-client OSes. You would instead need separate recursive nameservers on separate IP addresses, with each client configured to use its specific nameserver.
The second solution can work: it is ECS, the "EDNS Client Subnet" feature, described in RFC 7871 and supported in various nameservers. See for example in BIND: https://www.isc.org/wp-content/uploads/2017/04/ecs.pages.pdf
Now, are you really sure you need this setup, or that this is the only way to achieve your goals? It is difficult to propose other ideas, as you describe your solution from the get-go but not really your underlying problem or your constraints.
For example, the problem may in some cases be solved by just configuring each client with a different domain search list: client1 would have client1.example.com as its suffix, client2 would have client2.example.com, and so on. Then, with only one standard recursive nameserver and one authoritative nameserver for example.com, and without any extension or complicated setup, when client1 attempts to resolve www it may get a different reply than client2 attempting to resolve the same www, because the final fully qualified domain names are different (www.client1.example.com vs www.client2.example.com) thanks to the different search lists. This of course depends a lot on what kind of applications run on each client.
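The search-list idea can be sketched in each client's resolver configuration (a sketch using the names from the example above; the nameserver address is a placeholder):

```
# /etc/resolv.conf on client1
nameserver 192.0.2.53
search client1.example.com

# /etc/resolv.conf on client2
nameserver 192.0.2.53
search client2.example.com
```

With this in place, a lookup of the bare name www expands to www.client1.example.com on client1 and www.client2.example.com on client2, so a single authoritative zone can answer both differently without any views.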
The use of simpler nameservers such as dnsmasq may also help, but again your problem space is not defined well enough to be sure what to suggest.

How to configure custom nameservers with dedicated server and different domain Provider

I would like to know how to point mydomain.it to my dedicated server.
Let me explain my situation:
I have a Dedicated server on SingleHop.
I have the domain "mydomain.it" on Siteground.
I created on my Dedicated server the nameserver:
ns1.mydomain.it with IP 1.2.3.4 and
ns2.mydomain.it. with IP 1.2.3.5
Now, I wish to control all DNS settings from my dedicated server because of the SPF record, DKIM record and SSL certificate, but I can't tell Siteground to point the IPs 1.2.3.4 and 1.2.3.5 at my nameservers: there is only space for the nameserver names (ns1.mydomain.it and ns2.mydomain.it) and no field for the IPs.
Without the IPs, the domain's nameservers can't point to my dedicated server, and I can't manage my DNS settings. So I'm asking myself what I can do to make sense of this situation.
Is there a way to do this?
Please help me,
Thank you.
Michele
When the DNS system was first conceived, there were two addressing mechanisms in use: the 32-bit IP address and the 16-bit (traditionally octal) Chaosnet address. To make these systems easier to administer, the NS record is specified as a name rather than an address (otherwise you would need different names for each protocol).
As it turned out, that wasn't needed as Chaos quickly died out (at least as an addressing scheme) but the original idea of having a name that then needs to be resolved to an address remains.
For this reason you can only specify a fully qualified domain name in the NS record. There are mechanisms that you can use if the domain name is on the domain you wish to create the record for (glue records) but that is quite a complicated aspect of DNS.
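The glue mechanism mentioned above looks roughly like this in the parent zone (a sketch using the question's names and addresses, not literal .it registry data):

```
; Delegation of mydomain.it in the parent (.it) zone.
; The NS names sit inside the zone they serve, so the parent must
; also publish their addresses -- the "glue" A records.
mydomain.it.     IN NS ns1.mydomain.it.
mydomain.it.     IN NS ns2.mydomain.it.
ns1.mydomain.it. IN A  1.2.3.4
ns2.mydomain.it. IN A  1.2.3.5
```

Without the glue A records, a resolver told to ask ns1.mydomain.it about mydomain.it would have no way to bootstrap the lookup.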
Aside from that though, I would say that it is very rarely a good idea to run your own name server. It is an extremely complicated - and expensive - thing to do correctly.
Weigh that against the simplicity and negligible cost of using a service to host your domain name, which will invariably provide a global DNS infrastructure to ensure that your domain is constantly available.
Finally, the majority of DNS services offer easy configuration of DKIM and SPF. (SSL isn't something provided at the DNS level; DNS is merely part of the lookup used to validate it.)

I can't seem to get the entire DNS reverse IP lookup.

I'm trying to get all the domains linked to a record like here
http://viewdns.info/reverseip/?host=23.227.38.68&t=1, but I'm having no luck with dig 23.227.38.68 or nslookup 23.227.38.68. Any idea what I'm doing wrong?
The design of DNS does not support discovering every domain associated with a given IP address. You may be able to retrieve one or more DNS names associated with the IP address through a reverse lookup (PTR records), but that does not necessarily give you all domains. In fact, it rarely will.
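As an aside on the commands in the question: a plain `dig 23.227.38.68` asks for an A record of the literal name "23.227.38.68"; a reverse (PTR) lookup needs `dig -x`, which maps the address into the in-addr.arpa tree. A sketch of that mapping (the helper function is illustrative, not part of dig):

```shell
# Build the in-addr.arpa name that dig -x queries under the hood:
# the octets reversed, with the in-addr.arpa suffix appended.
reverse_name() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo "$d.$c.$b.$a.in-addr.arpa"
}

reverse_name 23.227.38.68   # prints 68.38.227.23.in-addr.arpa
# These two queries are then equivalent (network access required):
#   dig -x 23.227.38.68 +short
#   dig PTR 68.38.227.23.in-addr.arpa +short
```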
This is because the information you seek is scattered throughout the global DNS network and there is no single authoritative node in the network that has this information. If you think about it, you can point the DNS A record of your own domain to the IP of stackoverflow.com and that's perfectly valid, but anyone seeking to know this would have to find your DNS servers to figure this out. DNS does not provide any pointers for this, though.
Yet, certain "passive DNS" services (probably including viewdns.info) seem to overcome this limitation. These services all work by aggregating DNS data seen in the wild one way or another. At least one of these services works by monitoring DNS traffic passing through major DNS resolvers, building a database from DNS queries. For instance, if someone looks up yourdomain.com that points to 1.2.3.4 and the DNS query happens to pass through the monitored resolver, they take note of that. If a query for anotherdomain.com is seen later and it also resolves to 1.2.3.4, now they have two domains associated with 1.2.3.4, and so on. Note that due to the above, none of the passive DNS services are complete or real-time (they can get pretty close to either, though).

Where is an IP Anycast Nameserver system implemented?

I've been reading a lot about nameservers these last few days. For our websites we want to optimize the waiting time for visitors that is caused by our nameserver. I have some questions about IP anycast and the general workings of DNS. Let me start by explaining how I understand DNS works from the user's side:
User X wants to visit www.example.com, the following steps happen to get the IP address:
Step 1: User X sends a request to the nameserver of his ISP, or a nameserver of his choice (the recursive nameserver).
Step 2: If the address is not found in its cache, the recursive nameserver sends a request to one of the 13 root nameservers to get the nameservers for the .com TLD.
Step 3: Query the .com nameservers to get the authoritative nameserver.
Step 4: Query the authoritative nameserver to get the IP address for www.example.com.
First, I realized that as the owner of a website you can only optimize step 4; all the other steps are not in our hands.
I came across IP anycast nameservers (which are also used for the 13 root nameservers) and totally understand the concept of distributed machines. But what I don't understand is where the decision logic is implemented that sends the user to one of the distributed machines according to his "position". I mean, when I buy an anycast nameserver, the logic would have to be implemented on the .com nameservers (step 3), so that they decide which machine of my anycast nameserver the user is sent to.
That's really hard for me to understand, and I'm asking myself whether it really works that way. I hope someone can help me with these questions.
Besides that, I found out that another small way to gain some speed for the user is to use only A records and no CNAME records anymore.
Are there more ways to optimize a nameserver?
Thanks in advance!
Your question is not really related to programming, but more to operations, and is also a little too vague ("Are there some more ways to optimize a nameserver?").
But let us try to give you pointers.
User X wants to visit www.example.com, the following steps happen to get the IP address:
Your following description is then mostly correct. Note that at each step, by default until very recently, a recursive nameserver would send the whole queried name to each nameserver. Recently, QNAME minimization appeared as a standard, and recursive nameservers can now send each authoritative nameserver only the labels it needs to reply. This enhances privacy without changes to the protocol, but it is not widespread today because some authoritative nameservers do not work correctly when queried that way.
As a domain name owner, you can indeed only have an impact at the last step. But remember that recursive nameservers have caches, so the list of root nameservers, as well as the list of .COM nameservers for example, is so "hot" (so often needed) that it almost certainly sits permanently in resolvers' caches; steps 1 and 2 therefore do not happen often (typically only at start, when the cache is empty).
I came across IP anycast nameservers (which are also used for the 13 root nameservers) and totally understand the concept of distributed machines. But what I don't understand is where the decision logic is implemented that sends the user to one of the distributed machines according to his "position".
First things first: IP anycasting is not specific to DNS; it is just hugely popular there because
it solves the load-balancing/fail-over problem that all big TLDs have
it works especially well with DNS over UDP, which is a simple one-query, one-reply protocol.
So any service can theoretically be anycasted. It means that a given IP address just appears at different locations in the world.
To summarize very broadly, Internet traffic between providers (AS numbers) is exchanged at peering points, where they interconnect and each provider says "I know about IP block 192.0.2.0/24, please send me all traffic for it", and so on for each block.
(Again, this is a summary. Blocks are allocated by RIRs, and by default this is not strongly authenticated, so BGP hijacks happen when another provider also says "give me this traffic" when it shouldn't - whether out of malice or simple human error.)
For a normal (technical term: "unicast") IP address, only a single provider (AS) will announce it somewhere (technically: announce its block not just a single IP) and everything will be configured in such a way that wherever the start of the exchange is, for this single IP as destination, it will arrive at the exact same box.
On the contrary, for an anycast IP address, either a single provider or multiple ones (that is, multiple Autonomous Systems) will announce this IP at various locations (peering points) around the globe. At each peering point, traffic for this IP will get taken by the provider announcing it there, which will then route the traffic to a specific server "nearby". Announcements of the same IP at peering point A and peering point B will thus drive the corresponding traffic to datacentre X on one side and datacentre Y on the other.
For the client, when everything works, nothing changes, as long as all the replying servers react the same way to the same query. The client does not (and sometimes cannot) even know that the IP is anycasted, or that it went to location X when another client doing the same thing hit location Y instead.
So, in short, nameservers "decide" nothing in this regard. At each point of the DNS resolution, when they need to contact nameserver NS1, they know its IP address is IP1 and they just open a UDP (or sometimes TCP) connection to that IP, absolutely normally. It is the underlying IP and BGP protocols that will, if anycast is in action, make the response come from the appropriate "close" server.
Note that anycasting in this way, for DNS, achieves both:
fail-over: if one server dies, with appropriate monitoring its provider withdraws the IP announcement; that local copy effectively disappears and the traffic automatically shifts (within seconds) to any other instance where the same IP is announced
load-balancing: roughly speaking, if you anycast one IP at 2 locations, each should receive 50% of the traffic. This is not true in practice, and it is very complicated (read: impossible) to predict or even monitor, because it all depends on the peering points, the agreements between providers and various other policies. (Simple example: if you peer at two points, where at the first only one provider sends you traffic while at the other you exchange traffic with 100 providers, you may see more connections going to the second instance... except of course if the single provider at the first peering point is an ISP with millions of clients while the other 100 providers are small single organizations...)
So, some nameservers are anycasted. Nowadays all the root ones are (but this was not true 16 months ago; see https://b.root-servers.org/news/2017/04/17/anycast.html, as b.root-servers.org was the last one to board the anycast wagon), as well as all big TLDs, sometimes even with more than one anycast DNS provider.
For any domain name, you can get some providers that will give you a DNS service for it, based on a "cloud" of anycasted nameservers.
See for example:
https://www.pch.net/services/dns_anycast
https://www.netnod.se/dns/netnod-dns-services
https://dyn.com/dns/network-map/
http://www.cdns.net/anycast.html
https://www.rcodezero.at/en/home/
https://aws.amazon.com/route53/
https://cloud.google.com/dns/
and many others.
Now following on a totally different topic:
Beside of that i found out, that another small method to gain some speed for the user, is to only use A Records and no CName Records anymore.
This is not really something you gain much from, and CNAME records are useful in many other cases.
Again, you need to remember that there are caches.
So even if your configuration is:
www.mywebsite.example CNAME www.mywebsite.example.somegreatCDN.example
www.mywebsite.example.somegreatCDN.example A 192.0.2.128
it is true that on paper this means two DNS requests before the HTTP query can be made, but in practice things will be cached (even more so today with big public open resolvers such as 1.1.1.1, 8.8.8.8 or 9.9.9.9, which are in fact anycasted too), so the difference is negligible (and only impacts the first lookup, never again while the records are in cache)... especially in the case of HTTP, where everything that happens afterwards involves frequently opening dozens of connections to download JavaScript source, CSS files, fonts, etc. that may be hosted elsewhere.
A lot of websites use CNAME records without negative impact. See www.amazon.com for example, right now:
;; ANSWER SECTION:
www.amazon.com. 730 IN CNAME www.cdn.amazon.com.
www.cdn.amazon.com. 11 IN CNAME d3ag4hukkh62yn.cloudfront.net.
d3ag4hukkh62yn.cloudfront.net. 11 IN A 54.239.172.122
You may however argue that some names will be more popular than others and hence stay longer in cache, which is certainly the case.
And finally:
Are there some more ways to optimize a nameserver?
Based on what? We touched various subjects above, all are compromises, you sacrifice something (it may be just "money") to gain something else (redundancy, etc.). There is no generic rule to declare when this compromise makes sense or not, it will depend a lot on your situation and what you are trying to do.
You are right, and should be congratulated for it, to invest some time in your DNS setup, both for security and performance reasons. A lot of money is often invested in huge HTTP setups to withstand various problems or spikes of activity (though even the best fail sometimes; see the recent Amazon Prime Day opening, which was a gigantic failure), but people often forget about DNS because it sits at the infrastructure level and is neither well known nor well understood (using UDP already makes it stand out from most other familiar protocols).
For example, there is another, completely different mechanism (orthogonal to anycasting, so it can work with or without it; the goals are different) that is related: "geo-DNS", where a nameserver replies differently depending on where the client asks from. This is meant to hand out, for example, a different IP for a webserver, one closer to the client (so in that case the webserver itself is probably not anycasted). It is done by just looking at the source IP of the DNS packet, but that is often not good enough, because the authoritative nameservers only see the recursive nameserver's IP as the source, not the real end client's, and nowadays, with big open public recursive nameservers, that location can be far off. So there is also a specific DNS option, EDNS Client Subnet, that can be passed between recursive and authoritative nameservers so that the latter get the end client's real IP address (in fact a block, not a single IP, for privacy reasons) and can act on it.
The short answer is: you are right. The nameservers are where you can optimize, and all "IP anycast" products I have seen are just nameserver setups with a lot of locations.
They use the same system as the root servers of the internet, but that does not mean they have the same function. IP anycast is simply a method for multiple servers in different locations to serve the same IP address.
From WIKIPEDIA (http://en.wikipedia.org/wiki/Anycast)
On the Internet, anycast is usually implemented by using Border Gateway Protocol to simultaneously announce the same destination IP address range from many different places on the Internet. This results in packets addressed to destination addresses in this range being routed to the "nearest" point on the net announcing the given destination IP address.
If you are using a big ISP like ASCIO or someone using ULTRADNS you probably do not have to worry about this step too much, but if the NS is a local ISP it is worth considering. Make sure you have NS where your visitors are.
I assume this is where you came into contact with "IP anycast" products. None that I have seen offers anything to attack steps 1-3; rather, they offer a large set of nameservers, reducing resolution time thanks to network proximity.
Let me know if you understand one of these offers to be for a root nameserver setup, because I would like to see that.

How secure is IP address filtering?

I'm planning to deploy an internal app that has sensitive data. I suggested that we put it on a machine that isn't exposed to the general internet, just our internal network. The I.T. department rejected this suggestion, saying it's not worth it to set aside a whole machine for one application. (The app has its own domain in case that's relevant, but I was told they can't block requests based on the URL.)
Inside the app I programmed it to only respect requests if they come from an internal I.P. address, otherwise it just shows a page saying "you can't look at this." Our internal addresses all have a distinct pattern, so I'm checking the request I.P. against a regex.
But I'm nervous about this strategy. It feels kind of janky to me. Is this reasonably secure?
IP filtering is better than nothing, but it's got two problems:
IP addresses can be spoofed.
If an internal machine is compromised (that includes a client workstation, e.g. via installation of a Trojan), then the attacker can use that as a jump host or proxy to attack your system.
If this is really sensitive data, it doesn't necessarily need a dedicated machine (though that is best practice), but you should at least authenticate your users somehow, and don't run less sensitive (and more easily attacked) apps on the same machine.
And if it is truly sensitive, get a security professional in to review what you're doing.
edit: incidentally, if you can, ditch the regex and use something like TCP Wrappers or the firewalling features of the OS, if it has any. Or, if you can give your app a different IP address, use the firewall to block external access. (And if you have no firewall, then you may as well give up and email your data to the attackers :-)
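A sketch of the OS-firewall approach on Linux (the address range and port here are assumptions for illustration, not values from the question):

```
# Allow the internal range to reach the app's port, drop everyone else.
iptables -A INPUT -p tcp --dport 443 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```

This moves the check below the application, so a bug in the app's own IP parsing can no longer expose it.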
I would rather go with SSL and some certificates, or a simple username / password protection instead of IP filtering.
It depends exactly HOW secure you really need it to be.
I am assuming your server is externally hosted and not connected via a VPN. Therefore, you are checking that the requesting addresses for your HTTPS (you are using HTTPS, aren't you??) site are within your own organisation's networks.
Using a regex to match IP addresses sounds iffy; can't you just use a network/netmask check like everyone else?
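For contrast with the regex, a network/netmask membership test can be done with plain integer arithmetic (a pure-shell sketch; 10.0.0.0/8 is a stand-in for your internal range):

```shell
# Succeed if IP falls inside CIDR, comparing the masked integer forms
# instead of pattern-matching the dotted notation.
in_subnet() {  # usage: in_subnet IP CIDR
  ip=$1; net=${2%/*}; bits=${2#*/}
  to_int() { IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d )); }
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(to_int "$ip") & mask )) -eq $(( $(to_int "$net") & mask )) ]
}

in_subnet 10.1.2.3    10.0.0.0/8 && echo internal   # prints internal
in_subnet 203.0.113.9 10.0.0.0/8 || echo external   # prints external
```

Unlike a regex, this cannot accidentally accept an external address that merely looks similar in its dotted form.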
How secure does it really need to be? IP address spoofing is not easy: spoofed packets cannot be used to establish an HTTPS connection unless the attacker also manipulates upstream routers so that the return packets are redirected to them.
If you need it to be really secure, just get your IT department to install a VPN and route over private IP address space. Set up your IP address restrictions for those private addresses. IP address restrictions where the routing is via a host-based VPN are still secure even if someone compromises an upstream default gateway.
If your application is checking the IP address, then it is extremely vulnerable. At that point you don't have any protection at the router, which is where IP filtering really needs to be. Your application is probably checking HTTP header information for the sending IP address, and this is extremely easy to spoof. If you lock the IP address down at the router, that is a different story and will buy you some real security about who can access the site from where.
If all you are doing is accessing the application internally, then SSL won't buy you much unless you are trying to secure the information from parties internal to the organization, or you require client certificates. This assumes you won't ever access the site from an external connection (VPNs don't count, because you are tunneling into the internal network and are technically part of it at that point). It won't hurt either, and isn't that hard to set up; just don't think that it is going to be the solution to all your problems.
IP whitelisting is, as others have mentioned, vulnerable to IP spoofing and man-in-the-middle attacks. For MITM, consider that some switch or router has been compromised and can see the "replies"; it can monitor or even alter them.
Consider also vulnerabilities in SSL encryption. Given enough effort, it can be foiled in an MITM as well; see also the well-known gaffes with the reuse of primes, etc.
Depending on the sensitivity of your data, I would not settle for SSL, but would go with StrongSWAN or OpenVPN for more security. If handled properly, these will be a lot less vulnerable to a MITM.
Reliance on whitelisting alone (even with SSL) I would consider "low grade", but may be sufficient for your needs. Just be keenly aware of the implications and do not fall into the trap of a "false sense of security".
If it's limited by IP address, then although they can spoof the IP address, they won't be able to get the reply. Of course, if it's exposed to the internet, it can still get hit by attacks other than against the app.
Just because all your internal IPs match a given regex, that doesn't mean that all IPs that match a given regex are internal. So, your regex is a point of possible security failure.
I don't know what technology you used to build your site, but if it's Windows/ASP.net, you can check the permissions of the requesting machine based on its Windows credentials when the request is made.
Like all security measures, it's useless on its own. If you do have to put it on a public-facing webserver, use IP whitelisting, with basic username/password auth, with SSL, with a decent monitoring setup, with an up-to-date server application.
That said, what is the point of having the server be publicly accessible and then restricting it to internal IP addresses only? It's basically reinventing what NAT gives you for free with an internal-only server, plus you have to worry about web-server exploits and the like.
You don't seem to gain anything by having it externally accessible, and there are many benefits to having it be internal-only.
My first thought on the resource issue would be to ask whether it might be possible to work some magic with a virtual machine.
Other than that, if the IP addresses you check against are either IPs you KNOW belong to computers that are supposed to access the application, or are in the local IP range, then I cannot see how it could fail to be secure enough (I am actually using a similar approach at the moment on a project, although it is not terribly important that the site stays "hidden").
A proper firewall can protect against IP spoofing, and it's not as easy as say spoofing your caller ID, so the argument /not/ to use IP filtering because of the spoofing danger is a bit antiquated. Security is best applied in layers, so you're not relying on only one mechanism. That's why we have WAF systems, username+password, layer 3 firewalls, layer 7 firewalls, encryption, MFA, SIEM and a host of other security measures, each which adds protection (with increasing cost).
If this is a web application you're talking about (which was not clear from your question), the solution is fairly simple without the cost of advanced security systems. Whether using IIS, Apache, etc. you have the ability to restrict connections to your app to a specific target URL as well as source IP address - no changes to your app necessary - on a per-application basis. Preventing IP-based web browsing of your app, coupled with IP source restrictions, should give you significant protection against casual browsing/attacks. If this is not a web app, you'll have to be more specific so people know whether OS-based security (as has been proposed by others) is your only option or not.
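With Apache 2.4, for example, the per-application restriction can be sketched like this (the hostname and address range are placeholders, not values from the question):

```
<VirtualHost *:443>
    ServerName app.internal.example.com
    <Location "/">
        # Only clients from the internal range may reach the app.
        Require ip 10.0.0.0/8
    </Location>
</VirtualHost>
```

Because the restriction lives in the vhost for the app's own hostname, requests browsed by raw IP never match this vhost at all, covering both restrictions at once.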
Your security is only as strong as your weakest link. In the grand scheme of things, spoofing an IP is child's play. Use SSL and require client certs.
It became useful first to distinguish among different kinds of IP vpn based on the administrative relationships, not the technology, interconnecting the nodes. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service.
Maybe this will help? I've been looking for the same answer, and found this Stack Overflow question as well as this idea from Red Hat Enterprise Linux. I'll try it soon; I hope it helps.
iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP
Where 192.168.1.0/24 is the LAN range you want to protect. The idea is to block internet-facing (FORWARD) devices from being able to spoof the local IP network.
Ref: http://www.centos.org/docs/4/html/rhel-sg-en-4/s1-firewall-ipt-rule.html
