Connection-specific DNS suffix has been changed - Azure

I have two Azure-hosted virtual machines, one containing a TIBCO installation and the other containing an Oracle database. In the TNSNAMES.ORA file and the LISTENER.ORA the value is as follows:
HOST = servername.servername.f10.internal.cloudapp.net
This was obtained from ipconfig /all at the time of creation, but it now seems to have changed: the connection-specific DNS suffix is now set to reddog.microsoft.com, the hostname is set to servername, and the primary DNS suffix is set to company.local.
We have not made any changes to this environment. Is it possible that this was changed by some sort of redeployment by Microsoft?

Could you change it back manually?
Then please try running a tool named Process Monitor to check whether any process changes these settings.
Just add a filter for the path of these settings; any operation involving that path will be logged. (Please remember to check the option "Drop Filtered Events"; otherwise, far too many events will be logged.)
In case you need it, here is the registry path of the connection-specific DNS suffix (Domain is a value under the interface's key):
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}\Domain
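To inspect or restore the value from an elevated command prompt, something like this should work (a sketch only: substitute your real interface {GUID} from ipconfig /all, and the suffix below is an assumed example based on your original value):

reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v Domain

:: restore the old suffix (the value shown is an assumption; use your own)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v Domain /t REG_SZ /d f10.internal.cloudapp.net /f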

How do I use Puppet to configure network settings?

I have written a module that will configure network settings on my system, but I can't apply the manifest, because before a manifest is applied Puppet runs "facter ipaddress" to find and present global facts.
Does that mean that in order to apply a Puppet manifest we must have an IP configured?
So I have a system that has no IP address configured, and I want to use Puppet to configure that IP address for me. For that, I ask the user to input an IP address, which I save in a .csv file. Then I use a template to configure the if-eth0 file: the template does an extlookup to fill in its fields, and the template is finally called inside a manifest. The problem is that before anything is applied by Puppet, it fails to run with the following error:
facter ip address unable to resolve IP , reason anonymous
I am not sure about the actual question (whether an IP is indeed needed). However, if I understand correctly, you can try one of these two workarounds:
Enable DHCP
This way, booting your system will take a little more time (assuming there is no DHCP service on the network). A timeout will occur which, in most cases, results in a default link-local IP (169.254.Y.Z, if I remember correctly). In this case you may need to stop/kill the DHCP client process before applying the static IP, or restart the interface to pick up the new configuration.
Assign a default static IP
If you know that the Puppet configuration will be applied no matter what (maybe a call in rc.local?), you can configure your interface with a static IP (e.g. 10.1.1.10) to avoid the error message. This is temporary, since once Puppet runs, the correct configuration will be applied; see the sketch below.
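If you go the static-IP route, a minimal bootstrap sketch could look like this (assuming eth0, and that 10.1.1.10/24 is unused on your network; the manifest path is illustrative), called from rc.local before the first Puppet run:

# temporary address so facter can resolve an IP; Puppet overwrites it later
ip addr add 10.1.1.10/24 dev eth0
ip link set eth0 up
puppet apply /etc/puppet/manifests/network.pp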
Hope it helps,
Andreas

RabbitMQ Cluster on EC2: Hostname Issues

I want to set up a 3-node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented, so if we lose a server it can be replaced by another new server automagically. We can easily set the cluster up manually using the default hostname (ip-xx-xx-xx-xx), so that the broker id is rabbit@ip-xx-xx-xx-xx. This works because the hostname is resolvable over the network.
The problem is that this hostname will change if we lose or reboot a server, invalidating the cluster. We haven't had luck setting a custom static hostname, because such hostnames are not resolvable by the other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config, e.g. rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even point to the EC2 private IP addresses; if all of the boxes are in the same region, that will be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign the elastic IP to its replacement.
Of course, if you have a small number of clients, you could just add entries to the /etc/hosts file on each box and update them as needed.
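For example, with the hosts-file approach, something like this on each box gives every node a stable, resolvable name to cluster against (addresses and names are illustrative, and join_cluster assumes a reasonably modern Rabbit):

# /etc/hosts on every box
10.0.0.11 rabbit1
10.0.0.12 rabbit2
10.0.0.13 rabbit3

# then, on rabbit2 and rabbit3:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app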
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
@Chrskly gave good answers that reflect the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of the other servers are mainly what I hear about.
We could not get elastic IPs to work without the aid of DNS or hostname aliases, because the internal IP/DNS on Amazon still rotates, and the public IP/DNS names that stay static cannot be used as the hostname for Rabbit unless aliased properly.
Hosts-file manipulation via a script is also an option. It needs to be accompanied by a script that can identify the DNS names of the other servers at launch, so it doesn't save much work in terms of making the configuration more "solid state".
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster it with any other available machines, using the default internal DNS name assigned at launch. If we lose a machine, a new one will come up, prepare Rabbit, and look up the DNS names of the machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python, but this could easily be done with something like Chef or Puppet.
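The housekeeping step boils down to running something like this from any live node once the dead one is confirmed gone (the node name is illustrative):

rabbitmqctl forget_cluster_node rabbit@ip-10-0-0-12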

Distributed DNS system with API

When a customer signs up for my service, I would like to create an A DNS entry for them:
username.mydomain.tld pointing to the IPv4 address of the server that hosts their page
This DNS system would ideally:
Be fairly light-weight
Be distributed. A master/slaves model would be fine, potentially with master failover or going read-only when the master is offline.
Support changes being made via a nice API (mainly, create/remove A entries)
Apply changes instantly (understanding that DNS takes time to propagate)
Run on Linux
Is there something awesome fitting that description?
Thanks :-)
You can just use dynamic DNS updates. Here's a very rudimentary example:
Generate a shared symmetric key which will be used by the DNS server and update client:
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST key.name.
The key name is a domain name, but you can use anything you want: it's more or less just a name for the key.
Configure bind to allow this key to make changes to the zone mydomain.tld:
key "key.name." {
algorithm hmac-md5;
secret "copy-the-base64-string-from-the-key-generated-above==" ;
}
zone "mydomain.tld" {
...
allow-update { key key.name. ; };
...
}
Make changes using nsupdate:
nsupdate -k <pathname-to-file-generated-by-dnssec-keygen>
As input to the nsupdate command:
server dns.master.server.name
update delete username.mydomain.tld
update add username.mydomain.tld 300 a 1.2.3.4
update add username.mydomain.tld 300 aaaa 2002:1234:5678::1
Don't forget the blank line after the update commands: nsupdate doesn't send anything to the server until it sees a blank line (or an explicit send command).
As is normal with bind and other DNS servers, there is no high availability of the master server, but you can have as many slaves as you want, and if they get incremental updates (as they should by default) then changes will be propagated quickly. You might also choose to use a stealth master server whose only job is to receive and process these DDNS updates and feed the results to the slaves.
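To turn this into the "nice API" the question asks for, you can wrap nsupdate in a small script that your signup code shells out to. A minimal sketch, assuming the key file path, server name, and TTL shown here (all illustrative); it uses the explicit send command instead of a trailing blank line:

#!/bin/sh
# usage: add_customer.sh USERNAME IPV4
nsupdate -k /path/to/Kkey.name.+157+12345.private <<EOF
server dns.master.server.name
update delete $1.mydomain.tld A
update add $1.mydomain.tld 300 A $2
send
EOF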

What is the order in which a DNS name is resolved for any web application?

I have a Java-based application hosted on my local Tomcat server. As per my understanding, whenever I type http://us.states.com/myApplication, there is an order in which the browser will try to resolve the DNS name us.states.com, i.e.:
First, it will look for us.states.com in the hosts file
Second, it will look for us.states.com on the local DNS server (if there is one)
Last, it will look for us.states.com on the web (appending www in front of us.states.com)
Is that correct?
The first two are correct; the third is not. Whether www gets appended or not is normally a redirect issue. Hence, DNS name resolution will only be done against either the local hosts file or one or more DNS servers.
One and two are correct. First the hosts file is checked, then your DNS server. There is no step 3.
Also, step 2 is not necessarily a DNS server local to your network. It can be specified on your machine (separately from DHCP), or it can be specified by the network. Usually the DNS server is on a machine owned by your ISP, unless you explicitly use a different one. For example, I sometimes use Google's public DNS servers (8.8.8.8 / 8.8.4.4) or Level3's (4.2.2.1 through 4.2.2.7 or so).
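On a Linux box you can watch step 1 win over step 2 (Windows keeps its hosts file at C:\Windows\System32\drivers\etc\hosts):

# add a hosts entry, then resolve through the normal lookup order
echo "127.0.0.1 us.states.com" | sudo tee -a /etc/hosts
getent hosts us.states.com   # answered from the hosts file, not DNS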

Query DNS in Ubuntu

I use two DNS servers: a public one (8.8.8.8)
and a local one (192.168.1.20).
In Ubuntu, if I list both DNS servers as 192.168.1.20, 8.8.8.8,
it will always query the first, and only when the first is down will it start querying the second.
And of course I have to make the local one point to 8.8.8.8 in turn.
Like this I have almost no problems: I can resolve local addresses and also public ones.
But when I'm out of the office is where all the problems start.
Having the local DNS first makes Ubuntu check for it every single time it needs to resolve a name.
So I end up switching the priority of the DNS servers (8.8.8.8, 192.168.1.20) every time I change my location.
This was not the case when I was using Windows; it somehow sends queries to both DNS servers at once, or something of that sort.
Is there a way to avoid changing the DNS settings for every location?
Ubuntu must also query each server in /etc/resolv.conf if there is no answer from the first server.
Please give the output of 'dig google.com'.
You wrote 'until the first is down'...
The system will of course contact the other servers ONLY if there is no response from the first one!
The servers are listed in order of preference.
Not an answer, but a possible workaround:
Are you able to use a different network interface for each network?
If so, you can specify different "dns-nameservers" in the "/etc/network/interfaces" file, as sketched below.
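A minimal sketch of such an /etc/network/interfaces, assuming eth0 is the office NIC and wlan0 is what you use elsewhere (addresses are illustrative, and dns-nameservers requires the resolvconf package):

auto eth0
iface eth0 inet dhcp
    dns-nameservers 192.168.1.20 8.8.8.8

auto wlan0
iface wlan0 inet dhcp
    dns-nameservers 8.8.8.8 8.8.4.4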
