Currently I'm configuring a server pool on AWS. It is a simple setup: two database servers, a scalable server array, and two load balancers in front of it all. Every machine has a failover standing by, so it should all be pretty robust.
The load balancers should be able to fail over through Round Robin DNS. In the happy-day scenario both machines get hit and distribute the traffic over the array. When one of these machines is down, Round Robin DNS in combination with the client browser's retry behavior should make browsers shift their target host to the machine that is still up once they hit a timeout. This is not something I came up with, but it seems like a very good solution.
The problem I'm experiencing is as follows. The shift does actually happen, but not just once for the failed request: it happens for each and every subsequent request from the same browser. So a simple page request takes 21 seconds to load, after which all images also take 21 seconds to load. All the following page requests take this long as well. So the failover works but is at the same time completely useless.
Output from a dig:
; <<>> DiG 9.6.1-P2 <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45224
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
www.example.com. 86400 IN A 1.2.3.4
www.example.com. 86400 IN A 1.2.3.4
;; Query time: 31 msec
;; SERVER: 172.16.0.23#53(172.16.0.23)
;; WHEN: Mon Dec 20 12:21:25 2010
;; MSG SIZE rcvd: 67
Thanks in advance!
Maarten Hoekstra
Kingsquare Information Services
When the DNS server gives a list of IP addresses to the client, this list will be ordered (possibly in a rotating manner, i.e. subsequent DNS queries might return them in a different order). It is likely that the browser caches the DNS response, i.e. the list it originally received. It then does not assume that a failed connection means that the server is down, but will retry the list in the same order every time.
So round-robin DNS is for load balancing at best; it is not very well suited to support fault tolerance.
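The effect the asker describes can be sketched as a toy Python simulation (illustrative only: the second address 5.6.7.8 and the `is_up` map are made up, and real browser retry logic is more complicated than this):

```python
# Toy model: a client that cached an ordered DNS answer and retries
# the list in the same order on every request.
CONNECT_TIMEOUT = 21  # seconds before the client gives up on one host

def fetch(cached_addrs, is_up):
    """Try each cached address in order; return (host used, seconds wasted)."""
    wasted = 0
    for addr in cached_addrs:
        if is_up.get(addr, False):
            return addr, wasted
        wasted += CONNECT_TIMEOUT  # dead host tried first -> full timeout paid
    raise ConnectionError("all hosts down")

# The browser cached ["1.2.3.4", "5.6.7.8"]; 1.2.3.4 then went down:
status = {"1.2.3.4": False, "5.6.7.8": True}
for request in range(3):
    host, wasted = fetch(["1.2.3.4", "5.6.7.8"], status)
    # every single request wastes 21 seconds before reaching 5.6.7.8
```

Because the cached list never gets reordered after a failure, every request pays the same 21-second penalty, which matches the symptom above.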
There is a reason we call this "poor man's load balancing." It does work, but you are at the mercy of the resolver and of the timeouts, depending on which IP is returned first by your DNS servers. You can look at something like dnsmadeeasy.com and their DNS failover (there are others that do this, but dnsmadeeasy is the one I know of). Basically they monitor application availability and can rapidly update DNS in response to application state.
I want to create an instance in Google Compute Engine with a custom (private) hostname. For that reason, when creating the instance from the Console (or from an SDK) I supply the hostname, for example instance0.custom.hostname.
The instance is created and the search domain is set correctly in /etc/resolv.conf. For Ubuntu in particular I have to set the hostname with hostnamectl, but that is irrelevant to the question.
Forward DNS lookups work as normal for instance0.custom.hostname. The problem comes when I do a reverse lookup for the private IP address of the instance. In that case the answer I get is the GCE "long" name instead of my custom hostname.
How can I make the reverse lookup reply with my custom name instead of the GCE one?
I know in Azure you can use a Private DNS Zone with VM auto-registration to handle the "custom hostnames". I tried using a private zone with Google Cloud DNS (PTR records) but with no luck.
After some serious digging I found a solution and tested it.
Reverse DNS works even without "regular" DNS records for your custom.hostname domain.
To get reverse DNS working, let's assume your VMs are in the 10.128.0.0/24 network.
Their IPs end in .24, .27, .54 and .55, as in my example.
I created a private DNS zone and named it "my-reverse-dns-zone"; the name is just informational and can be anything.
The "DNS name" field, however, is very important. Since my network address starts with 10, I want all the instances created in that network segment to be subject to reverse DNS, so the DNS name has to be 10.in-addr.arpa in this case. If you're using 192.168.x.x or 172.16.x.x then adjust everything accordingly.
If you want to cover just 10.128.0, you can put 0.128.10.in-addr.arpa. Then you select the VPC networks the zone has to be visible in, and voilà.
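As a sanity check on those zone names, the reverse owner name for any address can be derived mechanically; Python's standard ipaddress module exposes exactly this mapping:

```python
import ipaddress

# Owner name of the PTR record for one of the example VMs:
print(ipaddress.ip_address("10.128.0.24").reverse_pointer)
# 24.0.128.10.in-addr.arpa
```

Any name ending in 10.in-addr.arpa (or 0.128.10.in-addr.arpa for the narrower zone) falls inside the private zone described above.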
Then you add the PTR records that will allow this to work. I'm setting all TTLs to 1 minute to shorten the wait :)
After accepting, wait a minute (literally) and test it:
dig -x 10.128.0.24
; <<>> DiG 9.11.5-P4-5.1+deb10u6-Debian <<>> -x 10.128.0.24
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35229
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;24.0.128.10.in-addr.arpa. IN PTR
;; ANSWER SECTION:
24.0.128.10.in-addr.arpa. 60 IN PTR instance0.custom.hostname.
;; Query time: 6 msec
;; SERVER: 169.254.169.254#53(169.254.169.254)
;; WHEN: Mon Jan 31 13:35:57 UTC 2022
;; MSG SIZE rcvd: 92
Done!
You can even put a completely different domain for one of the IPs. Have a look at my zone configuration:
dig -x 10.128.0.55 | grep PTR
;55.0.128.10.in-addr.arpa. IN PTR
55.0.128.10.in-addr.arpa. 60 IN PTR b2.example.com.
There's a similar question & answer here.
For a better (technical) understanding of how this works, have a look at the PTR records in private zones documentation, and at how PTR records work in GCP's internal DNS.
I know you can test if a DNS server is valid by running:
dig +short test_hostname @nameserver
But what if we don't have a test_hostname to test queries with?
For example if the system we want to run this command on is within a restricted network and we don't know what hostnames they have access to or are available on their network.
Would using localhost as the test_hostname be a reliable way of checking if this is a valid DNS server?
Alternatively, I noticed that dig, host, and nslookup all return:
;; connection timed out; no servers could be reached
if you type in an invalid DNS server, regardless of what test_hostname you use. So would just running:
dig +short @nameserver
be a reliable way of checking if the DNS server is valid?
There is no need to check if the DNS server is fake/malicious or not, just if it is valid or invalid.
Try:
dig . ns @nameserver +short
Even if the server has no root nameservers configured, if it's alive it will respond to this. If there are root servers, you'll get a valid list of NS records; if not, you'll get an empty response with rcode=NOERROR.
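The same probe can be approximated in pure Python for environments without dig (a hand-rolled sketch: it builds a minimal query for `. NS` over UDP and reads the RCODE straight out of the header, with no EDNS, TCP fallback, or retries):

```python
import socket
import struct

def make_query(qid=0x1234):
    """Minimal DNS query for '. NS': 12-byte header + root name + type/class."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    return header + b"\x00" + struct.pack(">HH", 2, 1)        # ".", NS, IN

def parse_rcode(response):
    """RCODE is the low 4 bits of byte 3 of the DNS header."""
    return response[3] & 0x0F

def check_server(ip, timeout=2.0):
    """Return the server's RCODE (0 = NOERROR), or None if it never answers."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(make_query(), (ip, 53))
            response, _ = s.recvfrom(512)
            return parse_rcode(response)
        except socket.timeout:
            return None
```

Any integer coming back, even REFUSED (5), means something is speaking DNS on that address; None corresponds to the "no servers could be reached" timeout dig reports.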
You don't say whether you are testing a recursive or an authoritative nameserver.
You also need to define valid. Do you expect a DNS reply for your query?
You can test with known invalid/not-existing names such as whatever.test or whatever.example. You should always get a DNS reply back, even if it is NXDOMAIN, or possibly NOERROR in a case of upward referral.
Note that if you do that towards a server you do not control, then depending on the rate, people may start to notice and rate-limit you, or worse.
You can also try to query the CHAOS class; however, this is often disabled.
It is one way to "identify" a given nameserver software.
Example:
$ dig @a.root-servers.net version.bind TXT chaos +short
"NSD"
Even if the feature is disabled you should get back a DNS reply with REFUSED or NOTIMP or some kind of return code like that:
$ dig @ns1.google.com version.bind TXT chaos
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTIMP, id: 57909
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
PS: note that dig has a +header-only flag that sends a query with just a header and no question section (that is, no need to specify a name).
Not all nameservers react to that properly though; some may just time out and not reply at all.
I saw that the Cassandra client needs an array of hosts.
For example, Python uses this:
from cassandra.cluster import Cluster
cluster = Cluster(['192.168.0.1', '192.168.0.2'])
source: http://datastax.github.io/python-driver/getting_started.html
Question 1: Why do I need to pass these nodes?
Question 2: Do I need to pass all nodes? Or is one sufficient? (All nodes have the information about all other nodes, right?)
Question 3: Does the client choose the best node to connect knowing all nodes? Does the client know what data is stored in each node?
Question 4: I'm starting to use Cassandra for the first time, and I'm using Kubernetes for the first time. I deployed a Cassandra cluster with 3 Cassandra nodes. I deployed one more machine, and on this machine I want to connect to Cassandra via a Python Cassandra client. Do I need to pass all the Cassandra IPs to the Python Cassandra client? Or is it sufficient to put the Cassandra DNS name given by Kubernetes?
For example, when I run a dig command, I know all the Cassandra IPs, but I don't know if it's sufficient to pass this DNS name to the client:
# dig cassandra.default.svc.cluster.local
The IPs are 10.32.1.19, 10.32.1.24, 10.32.2.24
; <<>> DiG 9.10.3-P4-Debian <<>> cassandra.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18340
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;cassandra.default.svc.cluster.local. IN A
;; ANSWER SECTION:
cassandra.default.svc.cluster.local. 30 IN A 10.32.1.19
cassandra.default.svc.cluster.local. 30 IN A 10.32.1.24
cassandra.default.svc.cluster.local. 30 IN A 10.32.2.24
;; Query time: 2 msec
;; SERVER: 10.35.240.10#53(10.35.240.10)
;; WHEN: Thu Apr 04 16:08:06 UTC 2019
;; MSG SIZE rcvd: 125
What are the disadvantages of using for example:
from cassandra.cluster import Cluster
cluster = Cluster(['cassandra.default.svc.cluster.local'])
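One middle ground (a sketch; the service name comes from the question above, and resolving once at startup means later DNS changes are not seen by the driver) is to expand the DNS name yourself and hand the driver every A record as a contact point:

```python
import socket

def resolve_all(hostname, port=9042):
    """Return every IPv4 address a hostname resolves to, deduplicated."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Hypothetical usage with the driver:
# contact_points = resolve_all("cassandra.default.svc.cluster.local")
# cluster = Cluster(contact_points)
```

This way a single down pod does not leave the driver with only one (possibly dead) contact point, while you still only configure one name.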
Question 1: Why do I need to pass these nodes?
To make initial contact with the cluster. Once the connection is made, the contact points are no longer needed.
Question 2: Do I need to pass all nodes? Or is one sufficient? (All nodes have the information about all other nodes, right?)
You can pass only one node as a contact point, but the problem is that if that node is down when the driver tries to connect, it won't be able to reach the cluster. If you provide another contact point, the driver will try it even if the first one failed. It is a good idea to use your Cassandra seed list as the contact points.
Question 3: Does the client choose the best node to connect knowing all nodes? Does the client know what data is stored in each node?
Once the initial connection is made, the client driver will have the metadata about the cluster. The client will know what data is stored in each node and also which node can be queried with the least latency. You can configure all of this using load balancing policies.
Refer: https://docs.datastax.com/en/developer/python-driver/3.10/api/cassandra/policies/
Question 4: I'm starting to use Cassandra for the first time, and I'm using Kubernetes for the first time. I deployed a Cassandra cluster with 3 Cassandra nodes. I deployed one more machine, and on this machine I want to connect to Cassandra via a Python Cassandra client. Do I need to pass all the Cassandra IPs to the Python Cassandra client? Or is it sufficient to put the Cassandra DNS name given by Kubernetes?
If the hostname can be resolved, then it is always better to use DNS instead of IPs. I don't see any disadvantage.
I am using CentOS 7 and I would like to modify zone records in PHP so that I can add and remove domain names programmatically.
Unfortunately, when I type find / named.conf into a terminal I get 0 results, and I do not have the directory /var/named.
How can I find the files I need to modify so that I can write a script to add and remove domain names?
After running the dig command @Leo suggested against mydomain.com, I received:
dig @8.8.8.8 mydomain.com -t NS
; <<>> DiG 9.13.4 <<>> @8.8.8.8 mydomain.com -t NS
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33602
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;mydomain.com. IN NS
;; ANSWER SECTION:
mydomain.com. 21599 IN NS ns2.contabo.net.
mydomain.com. 21599 IN NS ns1.contabo.net.
mydomain.com. 21599 IN NS ns3.contabo.net.
;; Query time: 55 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Dec 25 02:35:26 GMT 2018
;; MSG SIZE rcvd: 110
Where are the DNS Zone files located?
Somewhere internal to your hosting provider. You aren't operating the DNS server, so its zone files won't be on your server.
Your provider may offer an API to update your DNS records. If they do, use it. (I can't say for sure if they do, because you haven't mentioned who they are.)
If that isn't an option, there are plenty of third-party DNS providers available who do have an API. A couple of the bigger ones to consider are Cloudflare (free), AWS Route 53 (~$0.50/zone/month), and Google Cloud DNS (~$0.20/zone/month).
To get a hint of where to start searching, look at reality; try:
dig @8.8.8.8 your.host.name -t NS
or drill, or kdig or etc.
dig @ns1.contabo.net. version.bind CH TXT tells us that ns1 (which, speculatively, might truly be the primary) is running PowerDNS.
The most common deployment of PowerDNS (among many other options) is in combination with MySQL.
So my best guess is that your ISP offers a web interface to their database that holds the zone data.
To answer your question "Where are the DNS Zone files located?":
the zone data resides in several rows of a database table.
It's unlikely (but not impossible) that there are static zone files.
And even if there were, I highly doubt that they would read them from your filesystem.
For real answers, better ask your supplier's support.
I received the following from my domain registry in Iceland:
The setup of zone gsap.is on its nameservers appears not to
be according to ISNIC's requirements for .IS delegations.
Ask your hosting provider to add a PTR record for the nameserver:
4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.1.0.0.1.8.8.c.f.7.0.6.2.ip6.arpa. IN PTR A.NS.ZERIGO.NET.
Can you please tell me if you can help me with this?
They want your DNS server to have reverse mappings in the DNS pointing from its IP addresses back to its name. That is an unusual requirement, yet that's what they want to see.
Your name server has two IP addresses: 64.27.57.11 and 2607:fc88:1001:1::4.
The reverse mapping for 64.27.57.11 exists and points back to the correct name:
dig -x 64.27.57.11
(...)
;; ANSWER SECTION:
11.57.27.64.in-addr.arpa. 740 IN PTR a.ns.zerigo.net.
But the reverse mapping for 2607:fc88:1001:1::4 does not exist:
dig -x 2607:fc88:1001:1::4
; <<>> DiG 9.4.3-P3 <<>> -x 2607:fc88:1001:1::4
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 19793
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.1.0.0.1.8.8.c.f.7.0.6.2.ip6.arpa. IN PTR
(note NXDOMAIN.)
I see that the reverse zone for the 2607:fc88::/32 prefix is hosted on nameservers ns1.wehostwebsites.com and ns2.wehostwebsites.com. You need to either:
Insert the reverse mapping into the zone file for this IP address block on the above nameservers. You would probably do this if 2607:fc88::/32 is your IP address block.
Get a smaller block like 2607:fc88:1001::/48 delegated to your nameservers, set them up to serve the reverse zone (1.0.0.1.8.8.c.f.7.0.6.2.ip6.arpa), and insert the reverse mapping for your nameserver into that zone. You would probably choose this option if you aren't responsible for the whole 2607:fc88::/32 block.
Either way, the reverse entry should be this one:
4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.1.0.0.1.8.8.c.f.7.0.6.2.ip6.arpa. IN PTR a.ns.zerigo.net.
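If you want to double-check the owner name the registry quoted, Python's standard ipaddress module derives the same nibble-reversed form from the address:

```python
import ipaddress

print(ipaddress.ip_address("2607:fc88:1001:1::4").reverse_pointer)
# 4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.1.0.0.1.8.8.c.f.7.0.6.2.ip6.arpa
```

This confirms the record name in the registry's message is the correct ip6.arpa owner for 2607:fc88:1001:1::4.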
Some ccTLD registries are pushing the limit on zone structure regulations. I'm a big fan of this; however, in many cases the local registries are merely trying to clean up their own street front in a big city where all the other neighbors are freely emptying their trash cans onto the same street.
It's highly unusual, however, for local registries to also enforce RFCs on IPv6. If you don't need IPv6 resolution, you can probably fix this by just removing your AAAA records.
Obviously, if you need IPv6 you shouldn't follow my advice, but submit to your local registry's demands.