I want all my Puppet-managed hosts to have a list of those same hosts in a configuration file.
My first idea (which might not be a good one) is to use a template file to insert the list of hosts into the relevant configuration file.
When a new host is configured, Puppet will evaluate the template and the new host will get a proper configuration file.
But what about the other hosts? The template file itself does not change, so Puppet will not want to re-propagate it, and I guess all the other hosts won't know about the new list of hosts.
The precise use case is to whitelist my hosts in /etc/ssh/sshd_config:
AllowUsers root@host1 root@host2 ... root@newhost
The template queries the SQL-backed ENC to get the list of nodes.
Any hints?
Puppet re-evaluates the template every time a server performs a Puppet run, because the agent requests a new catalog each time. If the ENC changes its data to provide the new list of hosts, the template output will change, the clients will get a new catalog, and they will apply the new contents of the file.
The Puppet agent typically runs as a daemon, performing a run every 30 minutes by default. When it runs, the file will get updated.
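As a rough sketch, assuming the node list reaches the template as an array variable (here called @allowed_hosts, a name I am inventing), the relevant line of the template could look like this:
# fragment of a hypothetical sshd_config.erb; @allowed_hosts is assumed to be filled from the ENC
AllowUsers <%= @allowed_hosts.map { |h| "root@#{h}" }.join(' ') %>
Because that line is rendered from the current ENC data at every catalog compilation, each agent picks up the new host list on its next run.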
I had a bad situation where my Puppet master, running on an EC2 instance, got terminated. I managed to set up a new master server, but now my clients are not able to connect to the new master. I use the same VIP, which is configured in Route 53. Is there a way to direct my clients to the new master and force them to create a new client certificate?
You can delete the clients' current certificates (location depends on Puppet version, configuration, and user; check the docs). Having done so, they should issue certificate requests to the master on the next catalog run. It sounds like the new master is reachable at the same name / location as the old, so you should not need to modify client configurations. You will need to either turn on certificate autosigning at the new master or manually sign the new certs.
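For example, a rough sketch of the cleanup (the paths, certnames, and commands vary by Puppet version; check the docs and puppet agent --configprint ssldir on your systems):
# On each agent: find where this Puppet version keeps its SSL data
puppet agent --configprint ssldir
# Remove the stale certificates (the path below is a common default; use what --configprint reports)
rm -rf /var/lib/puppet/ssl
# Trigger a run; the agent generates a new key and submits a CSR to the master
puppet agent --test
# On the master, sign the request unless autosigning is enabled (agent.example.com is a placeholder)
puppet cert sign agent.example.com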
I have two Azure-hosted virtual machines, one containing a TIBCO installation and the other containing an Oracle database. In the TNSNAMES.ORA and LISTENER.ORA files the value is as follows:
HOST = servername.servername.f10.internal.cloudapp.net
This was obtained from ipconfig /all at the time of creation, but it now seems to have changed: the connection-specific DNS suffix is now set to reddog.microsoft.com, the hostname is set to servername, and the primary DNS suffix is set to company.local.
We have not made any changes to this environment; is it possible that this was changed by some sort of redeployment by Microsoft?
Could you change it back manually?
Then please try running a tool named Process Monitor to check whether any process changes these settings.
Just add a filter for the path of these settings; any operation involving that path will be logged. (Remember to check the option "Drop Filtered Events", otherwise far too many events will be logged.)
In case you need it, here is the path of the connection-specific DNS suffix:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}\Domain
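To check the current value from a command prompt (a sketch; {GUID} stands for the adapter's GUID, as in the path above):
:: replace {GUID} with the network adapter's GUID
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v Domain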
I have set up a cloud test bed using OpenStack, with the 3-node architecture.
The IP assigned to each node is given below:
Compute Node : 192.168.9.19/24
Network Node : 192.168.9.10/24
Controller Node : 192.168.9.2/24
The link to the created instance looks like this:
http://controller:6080/vnc_auto.html?token=2af0b9d8-0f83-42b9-ba64-e784227c119b&title=hadoop14%28f53c0d89-9f08-4900-8f95-abfbcfae8165%29
At first this instance was accessible only when I substituted controller:6080 with 192.168.9.2:6080. I solved this by setting up a local DNS server that resolves controller.local to 192.168.9.2; now, instead of substituting the IP, it works when I substitute controller.local.
Is there any other way to do it? Also, how can I access this instance from a subnet other than 192.168.9.0/24 without specifying the IP?
If I understood your question correctly, yes, there is another way; you don't need to set up a DNS server!
On the machine from which you would like to access the link, perform the steps below:
Open /etc/hosts file with a text editor.
Add this entry: 192.168.9.2 controller
Save the file, and that's it.
I suggest you do this on all your nodes so that you can use these hostnames in your OpenStack configuration files instead of the IPs. This will also save you a ton of modifications if you ever have to change the subnet IPs.
So, for example, the /etc/hosts file on each of your nodes should look like this:
#controller
192.168.9.2 controller
#network
192.168.9.10 network
#compute
192.168.9.19 compute
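As a quick sanity check that the entries are picked up (these hostnames are just the ones defined above):
getent hosts controller network compute
ping -c 1 controller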
I have written a module that will configure the network settings on my system, but I can't apply the manifest because, before a manifest is applied, Puppet runs facter ipaddress to gather and present global facts.
Does this mean that in order to apply a Puppet manifest we must have an IP configured?
So I have a system that has no IP address configured, and I want to use Puppet to configure that IP address for me. For that I ask the user to input an IP address, which I save in a .csv file. I then use a template to configure the if-eth0 file; the template does an extlookup to fill in its fields, and the template is finally used inside a manifest. The problem is that before anything is applied, Puppet fails to run with the following error:
facter ip address unable to resolve IP , reason anonymous
I am not sure about the actual question (whether an IP is indeed needed). However, if I understand correctly, you can try one of these two workarounds:
Enable DHCP
This way your system will take a little longer to boot (assuming there is no DHCP service on the network). A timeout will occur which, in most cases, results in a default link-local IP (169.254.y.z, if I remember correctly). In this case you may need to stop/kill the DHCP client process before applying the static IP, or restart the interface to pick up the new configuration.
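Roughly, a sketch of that workaround (the interface name eth0 and the dhclient client are assumptions about your distribution):
dhclient eth0    # start the DHCP client on the interface
# later, before applying the static configuration from Puppet:
pkill dhclient   # stop the client so it cannot overwrite the static settings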
Assign default static IP
If you know that the Puppet configuration will be applied no matter what (maybe via a call in rc.local?), you can configure your interface with a static IP (e.g. 10.1.1.10) to avoid the error message. This is only temporary: once Puppet runs, the correct configuration will be applied.
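A minimal sketch of that idea (10.1.1.10/24, eth0, and the manifest path are placeholders; use puppet apply if you run masterless):
ip addr add 10.1.1.10/24 dev eth0   # throwaway address so facter can resolve an IP
ip link set eth0 up
puppet agent --test                 # or: puppet apply /path/to/site.pp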
Hope it helps,
Andreas
When a customer signs up for my service, I would like to create a DNS A record for them:
username.mydomain.tld pointing to the IPv4 address of the server that hosts their page
This DNS system would ideally:
Be fairly light-weight
Be distributed. A master/slaves model would be fine, potentially with master failover or going read-only when the master is offline.
Support changes being made via a nice API (mainly, create/remove A entries)
Apply changes instantly (understanding that DNS takes time to propagate)
Run on Linux
Is there something awesome fitting that description?
Thanks :-)
You can just use dynamic DNS updates. Here's a very rudimentary application:
Generate a shared symmetric key which will be used by the DNS server and update client:
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST key.name.
The key name is a domain name, but you can use anything you want: it's more or less just a name for the key.
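dnssec-keygen drops a key pair into the current directory; the base64 secret you will paste into named.conf in the next step is on the Key: line of the .private file (the exact file name, including the algorithm and key id numbers, will differ):
ls Kkey.name.*
# -> Kkey.name.+157+NNNNN.key  Kkey.name.+157+NNNNN.private   (NNNNN is the key id)
grep '^Key:' Kkey.name.+157+NNNNN.private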
Configure bind to allow this key to make changes to the zone mydomain.tld:
key "key.name." {
algorithm hmac-md5;
secret "copy-the-base64-string-from-the-key-generated-above==" ;
}
zone "mydomain.tld" {
...
allow-update { key key.name. ; };
...
}
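After editing named.conf, reload the server so the key and the allow-update policy take effect, for example:
rndc reconfig    # or restart named, depending on how you manage the service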
Make changes using nsupdate:
nsupdate -k <pathname-to-file-generated-by-dnssec-keygen>
As input to the nsupdate command:
server dns.master.server.name
update delete username.mydomain.tld
update add username.mydomain.tld 3600 a 1.2.3.4
update add username.mydomain.tld 3600 aaaa 2002:1234:5678::1
Don't forget the blank line after the update command. nsupdate doesn't send anything to the server until it sees a blank line.
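To drive this from your signup flow, the same commands can be fed to nsupdate non-interactively. A sketch only; the script, key file path, server name, and addresses are all placeholders:
#!/bin/sh
# provision-dns.sh USERNAME IPV4 -- hypothetical helper
USERNAME="$1"
IPV4="$2"
nsupdate -k /path/to/Kkey.name.+157+NNNNN.private <<EOF
server dns.master.server.name
update delete ${USERNAME}.mydomain.tld
update add ${USERNAME}.mydomain.tld 3600 a ${IPV4}
send
EOF
Here an explicit send plays the role of the trailing blank line.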
As is normal with bind and other DNS servers, there is no high availability of the master server, but you can have as many slaves as you want, and if they get incremental updates (as they should by default) then changes will be propagated quickly. You might also choose to use a stealth master server whose only job is to receive and process these DDNS updates and feed the results to the slaves.