When a customer signs up for my service, I would like to create a DNS A record for them:
username.mydomain.tld pointing to the IPv4 address of the server that hosts their page
This DNS system would ideally:
Be fairly light-weight
Be distributed. A master/slave model would be fine, potentially with master failover or going read-only when the master is offline.
Support changes being made via a nice API (mainly, create/remove A entries)
Apply changes instantly (understanding that DNS takes time to propagate)
Run on Linux
Is there something awesome fitting that description?
Thanks :-)
You can just use dynamic DNS updates. Here's a very rudimentary application:
Generate a shared symmetric key which will be used by the DNS server and update client:
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST key.name.
The key name is a domain name, but you can use anything you want: it's more or less just a name for the key.
Configure bind to allow this key to make changes to the zone mydomain.tld:
key "key.name." {
algorithm hmac-md5;
secret "copy-the-base64-string-from-the-key-generated-above==" ;
}
zone "mydomain.tld" {
...
allow-update { key key.name. ; };
...
}
Make changes using nsupdate:
nsupdate -k <pathname-to-file-generated-by-dnssec-keygen>
As input to the nsupdate command:
server dns.master.server.name
update delete username.mydomain.tld
update add username.mydomain.tld a 1.2.3.4
update add username.mydomain.tld aaaa 2002:1234:5678::1
Don't forget the blank line after the update command. nsupdate doesn't send anything to the server until it sees a blank line.
As is normal with bind and other DNS servers, there is no high availability of the master server, but you can have as many slaves as you want, and if they get incremental updates (as they should by default) then changes will be propagated quickly. You might also choose to use a stealth master server whose only job is to receive and process these DDNS updates and feed the results to the slaves.
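To get the "nice API" part, a thin wrapper around nsupdate is usually enough. Here is a minimal sketch, assuming the key and master server from above; the script name, key file path and TTL are my own placeholders:
#!/bin/sh
# Hypothetical wrapper: dns-record.sh add|del <username> [<ipv4>]
KEYFILE=/etc/bind/Kkey.name.+157+NNNNN.private   # adjust to the file dnssec-keygen produced
SERVER=dns.master.server.name
ACTION=$1
NAME=$2
ADDR=$3

{
  echo "server $SERVER"
  echo "update delete $NAME.mydomain.tld A"
  [ "$ACTION" = add ] && echo "update add $NAME.mydomain.tld 300 A $ADDR"
  echo "send"
} | nsupdate -k "$KEYFILE"
A signup hook can then call something like dns-record.sh add alice 1.2.3.4 and the master starts answering for alice.mydomain.tld immediately.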
Related
I have two Azure-hosted virtual machines, one containing a TIBCO installation and the other containing an Oracle database. In the TNSNAMES.ORA file and the LISTENER.ORA the value is as follows:
HOST = servername.servername.f10.internal.cloudapp.net
This was obtained from ipconfig /all at the time of creation, but this now seems to have changed: the connection-specific DNS value is now set to reddog.microsoft.com, the hostname is set to servername, and the primary DNS suffix is set to company.local.
We have not made any changes to this environment; is it possible that this has been changed due to some sort of redeployment by Microsoft?
Could you change it back manually?
Then please try to run a tool named Process Monitor to check if any process changes these settings.
Just specify the path of these settings; any operation involving these paths will be logged. (Please remember to check the option "Drop Filtered Events"; otherwise there will be too many events logged.)
In case you need it, here is the registry path for the connection-specific DNS suffix:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}\Domain
I want to set up a 3-node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented, so if we lose a server it can be replaced by another new server automagically. We can set the cluster up manually easily using the default hostname (ip-xx-xx-xx-xx) so that the broker id is rabbit@ip-xx-xx-xx-xx. This is because the hostname is resolvable over the network.
The problem is: this hostname will change if we lose/reboot a server, invalidating the cluster. We haven't had luck in setting a custom static hostname, because such hostnames are not resolvable by other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config. E.g., rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even be the EC2 private IP addresses. If all of the boxes are in the same region it'll be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign the elastic IP to its replacement.
Of course, if you have a small number of clients, you could just add entries into the /etc/hosts file on each box and update as needed.
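If you go the external-DNS route, one way to wire it in is to pin each node's name to its stable DNS record instead of the changing EC2 hostname. A minimal sketch, using the hypothetical hostnames above and assuming a RabbitMQ version that supports USE_LONGNAME:
# /etc/rabbitmq/rabbitmq-env.conf on the first box (assumed path and names)
NODENAME=rabbit@rabbit1.alph486.com
USE_LONGNAME=true   # fully-qualified node names require Erlang long names
Repeat on each box with its own record; when a box dies, point the A record at the replacement and start it under the same node name.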
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
@Chrskly gave good answers that are the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of other servers are mainly what I hear.
We could not get elastic IPs to work without the aid of DNS or hostname aliases, because the internal IP/DNS on Amazon still rotate and the public IP/DNS names that stay static cannot be used as the hostname for Rabbit unless aliased properly.
Hosts-file manipulation via a script is also an option. It needs to be accompanied by a script that can identify the DNS names of the other servers at launch, so it doesn't save much work in terms of making the configuration more "solid state".
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster with any other available machines, using the default internal DNS assigned at launch. If we lose a machine, a new one will come up, prepare Rabbit and look up the DNS names of machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python. However, this could easily be done with something like Chef/Puppet.
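A rough shell equivalent of that bootstrap flow, as a sketch only: the node names are made up, the peer-discovery step is whatever your launch script provides, and the rabbitmqctl subcommands assume RabbitMQ 3.x.
#!/bin/sh
# Join this freshly launched node to an existing cluster member,
# then drop the node it replaced.
PEER=rabbit@ip-10-0-0-12   # hypothetical: discovered via internal DNS/EC2 API at boot
DEAD=rabbit@ip-10-0-0-99   # hypothetical: the node this instance replaces

rabbitmqctl stop_app
rabbitmqctl join_cluster "$PEER"
rabbitmqctl start_app
rabbitmqctl forget_cluster_node "$DEAD"   # housekeeping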
Our application (RHEL 5/C++) uses the hostid as returned by gethostid for logging purposes. For some reason, the primary DNS server of the local network environment went offline. This resulted in massive problems in gethostid: the function call hangs for more than 60s, which led to internal timeouts in our application. A call to hostid on the command line also didn't return after several minutes. Once the DNS server was up again, the timeouts/problems both in the application and in the hostid command-line tool disappeared.
My question is: how do I prevent gethostid from making DNS lookups? There are some boundary conditions on the answer:
The file /etc/hostid must not exist.
Calling sethostid is not allowed.
Changing /etc/hosts is not possible.
I'm astonished this happens at all. As I understand it, gethostid works like this:
Return the value of the last sethostid if it has been set manually.
Return the hostid from /etc/hostid if the file exists.
Return the primary IP of the host if set.
Fail for other cases.
I don't see the need for a DNS query.
To verify that gethostid actually depends on a working DNS server, try this:
As root, create/change your /etc/resolv.conf so it contains only invalid nameserver entries.
Call hostid on the commandline.
On my Debian squeeze installation this results in a hostid of 00000000 without any hangs. I assume the Red Hat version of hostid is different/older and hangs as a result.
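If you want to watch what hostid does on the wire (assuming strace is installed; the nameserver address below is just a TEST-NET placeholder), something like this makes the lookup traffic visible:
# As root: point the resolver at an address that will never answer,
# then trace hostid's network calls and look for traffic to port 53.
echo "nameserver 192.0.2.1" > /etc/resolv.conf
strace -f -e trace=network hostid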
I think preventing DNS lookups from gethostid is not really possible without breaking the system or violating one of the boundary conditions. On gnu.org I've found this comment on the sethostid function:
The proper way to establish the primary IP address of a system is to configure the IP address resolver to associate that IP address with the system's host name as returned by gethostname. For example, put a record for the system in /etc/hosts.
From this I conclude that gethostid determines the IP like this:
Get the hostname from gethostname.
Determine the IP via gethostbyname (or a similar method).
Under the conditions that the hostname is not associated with an IP address in /etc/hosts and /etc/nsswitch.conf allows DNS lookups, gethostid will make a DNS lookup.
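A quick way to check which path your box takes (not from the original answer, just a sanity check) is to resolve your own hostname through the normal nsswitch machinery:
hostname                      # e.g. myhost
getent hosts "$(hostname)"    # served from /etc/hosts or DNS, as configured in /etc/nsswitch.conf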
Is there a way to programmatically add hosts to the local name resolver under Linux?
I would rather avoid fiddling with /etc/hosts dynamically...
Example: add the name foo and bind it to the local address 127.1.2.3
Use Case: I have an application installed locally accessible through a web browser. I'd like the application to be accessible through a local URI.
add the name foo and bind it to the local port 127.0.0.1:9999
What is it that you want? You can add foo 127.0.0.1 to hosts or do the equivalent in your nameserver, but a connection to foo on port 1234 will always go to 127.0.0.1:1234 -- it's not possible to redirect that to port 9999 based on name, which is lost by the time connect is called.
On Linux you can add IPs to the loopback device (i.e. ip addr add 127.1.2.3 dev lo), and then use iptables to change all connections destined for 127.1.2.3:1234 to instead go to 127.0.0.1:9999, but I can't tell from your question if that's the observable behavior you want.
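For completeness, a sketch of that approach (the addresses and ports are the hypothetical ones from the example, and it needs root):
# Give lo an extra loopback address, then rewrite locally generated
# connections to 127.1.2.3:1234 so they land on 127.0.0.1:9999.
ip addr add 127.1.2.3/32 dev lo
iptables -t nat -A OUTPUT -p tcp -d 127.1.2.3 --dport 1234 -j DNAT --to-destination 127.0.0.1:9999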
If you only need to add hosts, a pretty safe way to do it is:
echo -e "ip.add.re.ss\thostname" >> /etc/hosts
Now, if you want to remove them it starts getting hairy. I suspect you also want to remove them.
If this is the case, you can use Dynamic DNS; for example, BIND has the nsupdate tool to update zone files:
$ nsupdate
> update delete oldhost.example.com A
> update add newhost.example.com 86400 A 172.16.1.1
> send
This does the following: any A records for oldhost.example.com are deleted, and an A record for newhost.example.com with IP address 172.16.1.1 is added. The newly added record has a one-day TTL (86400 seconds).
The Google search term you want is "DDNS" for "Dynamic DNS". That's a technology for dynamically adding records to DNS servers, which sounds like exactly what you want. I'm pretty sure the BIND in most Linux distros supports it, but you may need to read up on how to configure it.
I'll be going with a recent discovery: multicast-dns using the Avahi package. An example can be found here.
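For reference, publishing a name with Avahi can be as simple as the following sketch (the name and address are made up; avahi-daemon must be running and clients need mdns in the hosts line of their nsswitch.conf):
# Publishes foo.local -> 127.1.2.3 for as long as this command keeps running.
avahi-publish -a foo.local 127.1.2.3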
I work on a network where the systems at an IP address will change frequently. They are moved on and off the workbench and DHCP determines the IP they get.
It doesn't seem straightforward how to disable host key caching/checking so that I don't have to edit ~/.ssh/known_hosts every time I need to connect to a system.
I don't care about the host authenticity, they are all on the 10.x.x.x network segment and I'm relatively certain that nobody is MITM'ing me.
Is there a "proper" way to do this? I don't care if it warns me, but halting and making me flush my known_hosts entry for that IP every time is annoying, and in this scenario it does not really provide any security because I rarely connect to the systems more than once or twice before the IP is given to another system.
I looked in the ssh_config file and saw that I can set up groups so that the security of connecting to external machines could be preserved and I could just ignore checking for local addresses. This would be optimal.
From searching I have found some very strong opinions on the matter, ranging from "Don't mess with it, it is for security, just deal with it" to "This is the stupidest thing I have ever had to deal with, I just want to turn it off" ... I'm somewhere in the middle. I just want to be able to do my job without having to purge an address from the file every few minutes.
Thanks.
This is the configuration I use for our ever-changing EC2 hosts:
maxim@maxim-desktop:~$ cat ~/.ssh/config
Host *amazonaws.com
IdentityFile ~/.ssh/keypair1-openssh
IdentityFile ~/.ssh/keypair2-openssh
User ubuntu
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
This disables host confirmation (StrictHostKeyChecking no) and also uses a nice hack to prevent ssh from saving the host identity to a persistent file (UserKnownHostsFile /dev/null). Note that as an added value I've set the default user with which to connect to the host and the option to try several different identity private keys.
Assuming you're using OpenSSH, I believe you can set the
CheckHostIP no
option to prevent host IPs from being checked in known_hosts. From the man page:
CheckHostIP
If this flag is set to 'yes', ssh(1) will additionally check the host IP address in the known_hosts file. This allows ssh to detect if a host key changed due to DNS spoofing. If the option is set to 'no', the check will not be executed. The default is 'yes'.
This took me a while to find. The most common use-case I've seen is when you've got SSH tunnels to remote networks. All the solutions here produced warnings which broke my Nagios scripts.
The option I needed was:
NoHostAuthenticationForLocalhost yes
Which, as the name suggests, only applies to localhost.
Edit your ~/.ssh/config
nano ~/.ssh/config (if there isn't one already, don't worry; nano will create a new file)
Add the following config:
Host 192.168.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
If you want to disable this temporarily or without needing to change your SSH configuration files, you can use:
ssh -o UserKnownHostsFile=/dev/null username@hostname
Since every other answer explains how to disable the key checking, here are a few ideas that preserve the key checking but avoid the problem:
Use hostnames. This is easy if you control the DHCP server and can assign proper names. After that, you can just use the known hostnames; the changing IPs don't matter.
Use hostnames. Even if you don't control the DHCP server, you can use a service like Avahi, which will broadcast the name of the server on your local network. It takes care of solving collisions and other issues.
Use host key signing. After you build a machine, sign its host key with a local CA (you don't need a globally trusted CA for that). After that, you don't need to trust each host separately on your machine; it's enough that you trust the signing CA in the known_hosts file. More information is in the ssh-keygen man page or in many blog posts (https://www.digitalocean.com/community/tutorials/how-to-create-an-ssh-ca-to-validate-hosts-and-clients-with-ubuntu).
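A condensed sketch of that flow (the file names, identity and domain below are placeholders):
# One-time: create the CA key pair.
ssh-keygen -f host_ca
# Per host: sign its public host key and point sshd's HostCertificate
# at the resulting ssh_host_ed25519_key-cert.pub.
ssh-keygen -s host_ca -I myhost -h -n myhost.example.com /etc/ssh/ssh_host_ed25519_key.pub
# On each client: trust anything this CA signed for the domain.
echo "@cert-authority *.example.com $(cat host_ca.pub)" >> ~/.ssh/known_hosts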