Quite surprisingly, I can't seem to find an answer to this question via Google search.
How many records are too many in an SRV record set?
10, 50, 100?
If I want to provide a single hostname that balances across 50 servers in the same physical location, how best to do that?
How do I scale that to hundreds of servers?
How many records are too many in an SRV record set?
This all depends on the application consuming them.
That said, more SRV records mean bigger DNS responses, which can cause problems for resolvers with broken EDNS support or without TCP fallback.
This is in fact spelled out by RFC 2782 on SRV:
Currently there's a practical limit of 512 bytes for DNS replies. Until all resolvers can handle larger responses, domain administrators are strongly advised to keep their SRV replies below 512 bytes.
(But it was written in February 2000, so a lot has changed since then.)
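If you want to see where you stand, here is a rough sketch using Python with dnspython (the service name `_sip._tcp.example.com` and the resolver 8.8.8.8 are placeholders) that measures how large an SRV reply actually is on the wire for a legacy, non-EDNS client:

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Hypothetical service name; substitute your own _service._proto.domain.
qname = "_sip._tcp.example.com"

# Plain UDP query without EDNS, mimicking a legacy 512-byte resolver.
query = dns.message.make_query(qname, dns.rdatatype.SRV)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

srv_count = sum(len(rrset) for rrset in response.answer
                if rrset.rdtype == dns.rdatatype.SRV)
print(f"{srv_count} SRV records, {len(response.to_wire())} bytes on the wire")
if response.flags & dns.flags.TC:
    print("Truncated: a legacy resolver would have to retry over TCP")
```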
If I want to provide a single hostname that balances across 50 servers in the same physical location
DNS might not be the perfect solution for your problem. You can instead publish a single record pointing to a load balancer, which then handles the split across the 50 physical servers.
Note that you might want to look at the newer HTTPS/SVCB records (the IETF RFC is not yet published, but the codepoints are already allocated, and the records are used in the wild and consumed/published by software from at least Google, Apple, and Cloudflare for now).
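For instance, assuming dnspython 2.1 or later (which already understands the new type), you can inspect such records yourself; cloudflare.com is simply a domain known to publish them at the time of writing:

```python
import dns.resolver

# HTTPS (type 65) queries need dnspython >= 2.1.
for rdata in dns.resolver.resolve("cloudflare.com", "HTTPS"):
    print(rdata)
```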
Related
I had a disastrous downtime of my website domain after replacing the name servers of my registrar Host Europe with those of a service provider.
Host Europe technical support told me that they immediately delete DNS entries on their name servers if you do so.
Is it possible that the downtime of my website happened because machines still asked the old name servers and they said “don’t know”? (I don’t know much about DNS.)
And is it normal for registrars to act this way?
How does Google Domains handle this? How about Cloudflare Registrar?
And how can I avoid the problem? Is a big TTL better, or a small one? I think I had set it to 10 minutes before switching.
Your question is off-topic here as it is not related to programming, so it might get deleted, but the following was too long to put in a comment:
Host Europe technical support told me that they immediately delete DNS entries on their name servers if you do so.
This is very bad behaviour. Their nameservers will still get queries for roughly the TTL of the NS records at the parent.
Is it possible that the downtime of my website happened because machines still asked the old name servers and they said “don’t know”?
Yes, this is exactly what happened.
An old provider should never pull the plug immediately. There are a lot of caches in the DNS.
If you can control the TTL values on your records, you can try raising them at the old provider before the nameserver change. It may help a little or not at all, and not all DNS providers let you choose TTLs freely. Somewhere around one week would be a good ballpark here.
And is it normal for registrars to act this way? How does Google Domains handle this? How about Cloudflare Registrar?
Normal as in "unfortunately widespread", probably yes, but I can't comment on any specific company. Note also that the problem here is not with the registrar role but with the DNS provider role; both can be the same company, but they are different roles. There is no worldwide DNS organization, whereas for registrars, many are ICANN accredited (but ICANN says nothing about this case, IIRC) and all are accredited by registries. I can say for sure that at least one registry (AFNIC for .FR) does mandate/require/recommend (not sure of the exact wording) that registrars/DNS providers keep the old DNS configuration in place during a change. I don't think it is checked or enforced, though, unfortunately.
And how can I avoid the problem? Is a big TTL better, or a small one? I think I had set it to 10 minutes before switching.
It does not matter, because what comes into play is the TTL (Time To Live) of the NS records at the parent (the registry handling the TLD under which your domain is registered), which you have no control over.
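You can at least observe that parent-side TTL. A sketch with Python/dnspython, assuming a domain registered directly under a TLD (the one-label split below is deliberately naive):

```python
import dns.flags
import dns.message
import dns.query
import dns.resolver

domain = "example.com"
parent = domain.split(".", 1)[1]  # naive: assumes registration directly under the TLD

# Find one of the parent zone's nameservers.
parent_ns = str(dns.resolver.resolve(parent, "NS")[0].target)
parent_ip = dns.resolver.resolve(parent_ns, "A")[0].address

# Non-recursive query: the parent replies with the delegation NS records
# (in the AUTHORITY section), carrying the TTL you have no control over.
query = dns.message.make_query(domain, "NS")
query.flags &= ~dns.flags.RD
response = dns.query.udp(query, parent_ip, timeout=5)
for rrset in response.authority:
    print(rrset.name, "TTL:", rrset.ttl)
    for rdata in rrset:
        print("  ", rdata)
```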
Unfortunately there is no real proper countermeasure here; your DNS provider needs to do its job properly and not cut off resolution immediately.
A partial solution could be something akin to:
add the new nameservers without removing the current ones: note that they need to be listed in the zone, AND you need to change the domain's delegation at the registry, otherwise you will have a lame delegation (which you can also decide to sustain, but it is bad in general)
after some time (typically, again, the TTL at the parent), you can remove the old servers (again, both in the zone and at the registry).
That way, even if the old nameservers stop working for your domain immediately, all resolvers will have had time to learn about the new ones; even if they try to contact the old nameservers and get an error, they may switch to the new ones (this is not guaranteed to always work and at least introduces some delay). Once the same TTL passes after the second step, all resolvers will know only about the new nameservers.
Another trick that could work, but puts you in a lame delegation situation, is the following. It works because a lot of resolvers, including big ones like Google Public DNS, are child-centric instead of parent-centric: you change the zone content to list the new nameservers as NS records, removing the old ones, and you do NOT make any change at the registry side. This lets some resolvers (but not all) learn about the new nameservers, and after some time you can do the switch at the registry.
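A sketch of how you could compare the two views yourself, with placeholder nameserver names standing in for whatever is currently delegated at the registry:

```python
import dns.message
import dns.query
import dns.resolver

domain = "example.com"
# Placeholder: the nameservers currently delegated at the registry.
delegated_ns = ["ns1.oldprovider.example", "ns2.oldprovider.example"]

# Ask one delegated server directly for the NS records inside the zone.
ns_ip = dns.resolver.resolve(delegated_ns[0], "A")[0].address
query = dns.message.make_query(domain, "NS")
response = dns.query.udp(query, ns_ip, timeout=5)

zone_ns = sorted(str(rdata) for rrset in response.answer for rdata in rrset)
print("NS set in the zone (child):", zone_ns)
print("NS set at the registry (parent):", sorted(delegated_ns))
# A mismatch is exactly the transitional (lame-ish) state described above;
# child-centric resolvers will follow the zone's NS set.
```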
I am curious how to set up multiple load balancers (with different IP addresses) for a specific domain.
I understand that it is possible to set up multiple A records in DNS pointing to all of my load balancers, but I can also see that this is not ideal.
DNS doesn't do any kind of is-alive checks, so if a load balancer dies, DNS will still send users to its address, right?
So how do you connect a domain/DNS with multiple load balancers while preventing a dead load balancer from getting requests?
I read something about anycast, but is this the only solution?
I am just curious about how this issue is normally handled.
Thanks.
You have multiple solutions.
On a pure DNS level, you can publish your records with a low TTL (say 5 minutes) and have your monitoring system change the content of the zone, removing a dead record when it is detected. This does not provide immediate failover but is often good enough.
It does not require overly complicated systems.
Also, some DNS servers allow a "programmable" part, with a dynamic backend that can compute records based on external parameters, such as doing live checks and replying only with the records for live servers.
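A minimal sketch of the monitoring side of the first approach, assuming a hypothetical pool of web servers checked over TCP; how you push the resulting record set to your DNS provider (API call, zone file regeneration, etc.) is entirely specific to your setup:

```python
import socket

# Example pool using RFC 5737 documentation addresses; substitute your own.
POOL = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

def alive(ip, port=80, timeout=2):
    """Consider a server alive if it accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

live_records = [ip for ip in POOL if alive(ip)]
print("A records to publish (low TTL, e.g. 300s):", live_records)
```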
Anycast is another solution indeed, and it then has no relationship with the DNS anymore (although the DNS itself can be "anycasted", that addresses the DNS's own failover needs, not your application's).
Basically, your multiple systems, in various places in the world, are advertised with the same IP address, so the DNS has only one record.
With the "magic" of BGP, each instance announcing a given IP address will collect all the nearby traffic, so you get load-balancing for free in fact. And you need some specific tooling so that, as soon as some local instance is dead (or in maintenance mode for example), you stop announcing its IP address there, so that all other networks in the world, again because of BGP, learn that to reach "something" behing that IP they need to go somewhere else, to another instance of yours announcing this IP.
This is far more complicated to set up, as you need a proven BGP setup (and making errors in BGP can have even greater consequences than in DNS), multiple instances located in different datacentres, and possibly multiple AS numbers, depending on how you want your anycast done. This clearly needs skilled professionals in BGP routing, whereas the first, DNS-only solution (in the simple case of just changing a static zone file) is within reach of any enthusiastic amateur.
So the answer also depends slightly on the network locations of your load balancers.
This is something I get confused about.
Say I acquired the domain name blabla.ge (.ge is for Georgia) and am hosting my files with a US-based hosting company. What are the downsides, if any, and is there an option to change the DNS server?
Cheers!
Agreed, there is no real downside. The TLD is really not that important to basic usage. Yes, root servers factor in here, but nothing that will impact your daily activities; you don't really need to worry.
For the nameservers, you can change these to any servers you wish and have access to manage the records on. Location isn't important beyond basic routing and response time. Nameservers should generally be on diverse networks and in diverse locations per best practices. I have nameservers available in multiple countries and there's nothing wrong with that. If you are using the nameservers provided by your registrar, you likely have the diversity I mentioned, although they may be located in a single country (which is fine).
I have multiple domains registered with TLDs such as .nl, .im, .com.de, etc. Some of these point to US-only nameservers, some use nameservers in multiple countries, and a couple use the nameservers provided by my registrar (whom I purchased the domain from).
From there, my A records point to servers in diverse locations, primarily the US and the Netherlands. This setup works great, performance is adequate, and there are no major downsides to doing it this way. You can change the nameservers for the .ge domain to use US servers, or you can leave them overseas and use A records to point to your server(s) in the US. You can debate which method would be "best" in a given situation, but neither method is "wrong."
So in short, no major downside to doing this at all. And yes, changing your DNS server (nameserver) is always an option. Hope this helps.
Let's say there is a page with 100 different user photos shown on it; that is at least 100 DNS lookups right there. Would this be reduced if I were to link using an IP instead of a domain URL?
http://217.345.33.444/images/photo.jpg instead of http://domain.com/images/photo.jpg
It lowers DNS lookup overhead but will force painful, monotonous, error-prone changes if that IP ever changes down the road.
Also, once a single name is resolved, it shouldn't be looked up again ...
It's a bit late at night in my timezone, but I thought that DNS lookups are cached in various spots (even on the local machine?), so it is not as bad as you think.
Thus the first call to look up the domain will travel a fair way, but the result should be cached on in-between machines so that there is less of a performance hit for later calls.
I am sure that this sort of thing was thought long and hard about by the designers of the DNS protocols.
DNS lookups are cached by your computer, so there will only be a single lookup per unique domain.
Additionally, most people use their internet provider's DNS server, and it will typically cache DNS lookups as well, so a lot of the time, the DNS lookup will just be a single network hop away.
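You can see client-side caching in action, for example with dnspython's in-process cache (just a sketch; browsers and operating systems maintain their own caches with their own rules):

```python
import time
import dns.resolver

# Resolver with an in-process cache: the second lookup is answered
# locally, so no packet leaves the machine.
resolver = dns.resolver.Resolver()
resolver.cache = dns.resolver.Cache()

t0 = time.monotonic()
resolver.resolve("example.com", "A")
print("first lookup: ", time.monotonic() - t0)

t0 = time.monotonic()
resolver.resolve("example.com", "A")  # served from cache
print("second lookup:", time.monotonic() - t0)
```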
You have no way of knowing when the IP address of a domain will change, so I do not recommend this approach.
Is there a reason you don't store the images on your own domain? If you did that:
The DNS issue would go away.
A lot of web servers don't allow hot linking of images, so this problem would be solved as well.
It would also create the possibility of spriting images together, if the set of images shown together doesn't change often.
Why is that 100 DNS lookups? Are all the images on different domains? You should typically only incur one lookup per unique domain (and that's assuming the domain has never been resolved before).
How confident are you that your IP address will never change? Also, if you had those 100 images spread across 4 different domains, performance would increase (browsers open more parallel connections when assets are spread across hostnames).
Every browser I know of looks up a DNS name only once and then caches it. Even if it doesn't, the operating system does. There are not 100 lookups as you suspected.
You can get proof of that with any simple traffic sniffer, as I did.
Is the mapping of which IPs are assigned to which ISPs public information? How do geo-IP services obtain and maintain this information?
How can I personally figure out where a certain IP belongs without using one of these services?
For what it's worth, I worked at a senior level in the ISP industry for more than a decade so I have quite some experience with this.
Large IP ranges are allocated as needed by IANA to each of the Regional Internet Registries.
The regions are generally continental in size - IP addresses are not assigned on a per-country basis.
The RIRs in turn then allocate IP addresses to ISPs, who in turn assign them to end-users.
Each of the RIRs maintains a whois server which can be queried to find out not only which ISP has been assigned any netblock, but to a certain extent which end-user, and that end-user's address.
Note that many ISPs do not fill out this information for every single customer. Hence if you're a residential subscriber of a DSL service, it's likely that the Geo records will give the address of your ISP, and not your own address.
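To answer the "how can I personally figure out" part: the WHOIS protocol itself is trivial, just one query line over TCP port 43. A minimal sketch in Python:

```python
import socket

def whois(query, server, port=43):
    """Minimal WHOIS client: the protocol is one query line over TCP 43."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(query.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# ARIN covers North America; the other RIRs run whois.ripe.net,
# whois.apnic.net, whois.lacnic.net, and whois.afrinic.net.
print(whois("8.8.8.8", "whois.arin.net"))
```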
The various GeoLocation providers mostly work by mining these whois records. Note that the legality of doing so is something of a gray area - RIPE's database copyright statement is here.
IANA also maintains the root zone for the DNS, but that is completely separate from any IP allocation functions. It is very important to maintain the distinction between domain name operations and IP addresses.
To answer the specific question about "how it works": there's a lot of manual labor involved, and the databases are to a large extent maintained manually. As other answers point out, there's no real correlation between IP ranges and countries, much less specific regions. Recently the system of IP address space distribution has become even more decentralized, which means small private vendors can acquire IPv4 address ranges regardless of geographic region. This is why Google acquired Urchin, so they could use its services for Google Analytics, which provides very accurate IP-to-geographic-region information.
If you don't want to use a service like MaxMind (free for personal use, and the database is open to some extent) or Google Analytics (free for personal use), there are free (and hence always slightly outdated) databases floating around, sometimes as flat files.
There are a variety of libraries that have mapping tables as well as services you can incorporate into your code.
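For instance, with MaxMind's free GeoLite2 database and their geoip2 Python library (this assumes you have downloaded GeoLite2-City.mmdb; the IP below is just an example):

```python
import geoip2.database  # pip install geoip2

# Reader over a locally downloaded GeoLite2 City database.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    record = reader.city("128.101.101.101")  # example IP
    print(record.country.iso_code, record.city.name,
          record.location.latitude, record.location.longitude)
```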
The most important thing to understand is that there is no direct relationship between an IP address and any part of the world. The addresses are allocated in large blocks to organizations that are roughly geographical, which in turn allocate smaller blocks, this may happen at several levels for any given IP address (Alnitak explains the process well).
The fact is: WHOIS data does not have to be accurate. If I have an address block, I can say it is on Mars. And even if you narrow down the location of the final organization (say a very small ISP in Alaska), the user might be using dialup from Hawaii, or the server might be hosting a company from Guam.
So, there is always an element of risk/estimation in mapping an IP address (or a domain name) to a physical location. This is not to say you should never do it, there are many applications where rough or imperfect information is very useful.
Beware: the data is often slow to be updated, and even slower to replicate. My workplace changed ISPs a number of years ago, and we were assigned a block of formerly Canadian IP addresses (we're based in the US); for months, Google continued to give us google.ca as our default search engine. About half the time my home IP address comes up as being from my town, the other half from a town in another state.
Jason is right that the process is the same, but the updates are even slower and the data less accurate.
Alnitak's answer is pretty much on the mark.
As a side note, if you want to use a .dll to determine the user's location, you can try the IPAddressExtension found on CodePlex. It has an internal database mapping ISPs to locations. As mentioned above by Alnitak, each ISP has IP blocks, so this information is all buried inside the .dll :)
It's really easy to use. Just reference the .dll and then create an instance of a System.Net.IPAddress object! The extension methods are listed on it.
I also need to declare that I'm the author of that CodePlex project/product.
Please check it out :)