Puppet auto add new nodes - puppet

Ran into some problems with Puppet when doing an automatic expansion of our virtual cluster. The puppetmaster only has the existing nodes in /etc/hosts, and the new nodes we create are not in DNS, so we get this error in the puppet master log when a new node tries to connect to the master:
node-4 has a waiting certificate request
Signed certificate request for node-4
Removing file Puppet::SSL::CertificateRequest node-4 at '/var/lib/puppet/ssl/ca/requests/node-4.pem'
Error: Could not resolve 192.168.0.37: no name for 192.168.0.37
Is there any way to get around this (other than adding the nodes to DNS or /etc/hosts)?

Found the issue: the error is actually misleading. The root cause was a misspelled module name in site.pp, in the node definition for the expanded node type.
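For reference, a minimal sketch of the kind of node block in site.pp where such a typo bites (node and class names below are made up for illustration); a misspelled class name makes catalog compilation fail for that node even though its certificate was signed fine:
# site.pp (hypothetical example)
node 'node-4' {
  # a typo in the class name here (e.g. my_clutser::worker) breaks the catalog
  include my_cluster::worker
}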

Related

puppetserver ca list --all: default host name and output format

While migrating from Puppet 5 to 6 (6.20), we noticed that puppet cert list --all had been deprecated and replaced by puppetserver ca list --all. When we execute this command we get an error:
Fatal error when running action 'list'
Error: Failed connecting to https://puppet:8140/puppet-ca/v1/certificate_statuses/any_key
Root cause: Failed to open TCP connection to puppet:8140 (getaddrinfo: Name or service not known)
This error indicates that Puppet is looking for a host named puppet, whereas the puppetserver host's alias in /etc/hosts is puppetserver.
When we add another alias, puppet, to /etc/hosts, the command executes successfully. I have two questions about this:
Is there a mechanism to execute this command without modifying /etc/hosts?
What is the output structure of this command? I need to parse it with a script. In version 5 each entry had a prefix of +, -, etc., which no longer exists.
Concerning the first question:
These days this comes down to a default host name that is hard-coded or configured under the hood.
The name can be defined in puppet.conf, so there is no need to rely on /etc/hosts:
[server]
server = fqdn.fqdn
or
[server]
ca_server = fqdn.fqdn
As described in the configuration manual at https://puppet.com/docs/puppet/7/configuration.html#ca-server:
ca_server
The server to use for certificate authority requests. It's a separate server because it cannot and does not need to horizontally scale.
Default: $server
However, there is also a quick hack:
$ cat /etc/hosts | grep puppet
127.0.0.1 puppetmaster puppet
With that entry in place, puppetserver ca will succeed too in certain cases.
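To double-check which host name the CA command will actually contact, without touching /etc/hosts, you can print the resolved settings (a quick sanity check; the section and values below are only an example):
# show the values Puppet has resolved for these settings
puppet config print server ca_server --section main
# prints something like:
# ca_server = fqdn.fqdn
# server = fqdn.fqdn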

Agent not reading /etc/sysconfig/puppet server=

We have several servers working with Puppet as agents today, but I'm having a problem with a new server running CentOS 7. Normally I would update the /etc/sysconfig/puppet file with the puppet master's name, then start the daemon and move on to signing the certificate on the master. However, the puppet agent doesn't appear to be reading the server = myhost.domain line from my config file.
I get the following error in /var/log/messages:
puppet-agent[11133]: Could not request certificate: getaddrinfo: Name or service not known
I tried:
myserver:root$ puppet agent --configprint server
puppet
myserver:root$
but the /etc/sysconfig/puppet file has:
PUPPET_SERVER=myserver.domain.com
Can you please help me understand why puppet agent doesn't get the server from the config file?
The /etc/sysconfig/puppet file is not read by the Puppet agent itself. (I'm not very familiar with CentOS operations, but I suppose this location holds settings that are external to the process, such as environment variables or command-line switches used by the init script.)
You will want to use the proper Puppet configuration file:
/etc/puppet/puppet.conf for Puppet 3.x and earlier
/etc/puppetlabs/puppet/puppet.conf for Puppet 4.x and later
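For example, a minimal agent-side puppet.conf might look like this (the host name is a placeholder):
# /etc/puppet/puppet.conf (or /etc/puppetlabs/puppet/puppet.conf on Puppet 4+)
[agent]
server = puppetmaster.example.com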
So I ran the following:
"puppet agent --no-daemonize --verbose --onetime --server puppetmaster.xxx.com"
This started Puppet properly and requested the certificate, and I was able to sign it on the master. Then I added:
server = puppetmaster.xxx.com
to /etc/puppet/puppet.conf and ran systemctl restart puppet,
and it worked. Thanks for the posts here and elsewhere.
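On reasonably recent Puppet versions the same change can also be made without editing the file by hand (the host name is a placeholder):
# writes server = ... into the [agent] section of puppet.conf
puppet config set server puppetmaster.xxx.com --section agent
systemctl restart puppet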

Hadoop 2.2.0 multi-node cluster setup on ec2 - 4 ubuntu 12.04 t2.micro identical instances

I have followed this tutorial to set up a Hadoop 2.2.0 multi-node cluster on Amazon EC2. I have had a number of issues with ssh and scp which I was able to resolve or work around with the help of articles on Stack Overflow, but unfortunately I could not resolve the latest problem.
I am attaching the core configuration files (core-site.xml, hdfs-site.xml, etc.), as well as a log file containing the output from running the start-dfs.sh command. This is the final step for starting the cluster, and it produces a mix of errors that I don't know what to do with.
So I have 4 nodes, all created from exactly the same AMI: Ubuntu 12.04 64-bit t2.micro instances with 8 GB storage.
Namenode
SecondaryNode (SNN)
Slave1
Slave2
The configuration is almost the same as suggested in the tutorial mentioned above.
I have been able to connect with WinSCP and ssh from one instance to the other. I have copied all the configuration files, masters, slaves and .pem files for security purposes, and the instances seem to be accessible from one another.
If someone could please look at the log, config files and .bashrc file and let me know what I am doing wrong.
The same security group, HadoopEC2SecurityGroup, is used for all the instances. All TCP traffic is allowed and the ssh port is open (screenshot in the attached zipped folder). I am able to ssh from the Namenode to the secondary namenode (SNN), and the same goes for the slaves, which means ssh is working, but when I start HDFS everything goes down. The error log is not throwing any useful exceptions either. All the files and screenshots can be found in the zipped folder here.
An excerpt from the error output on the console looks like:
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
ec2-54-72-106-167.eu-west-1.compute.amazonaws.com]
You: ssh: Could not resolve hostname you: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
.....
Add the following entries to .bashrc, where HADOOP_HOME is your Hadoop folder. The native-library warning text is being word-split by the start scripts and treated as a list of host names to ssh to, which is why you see ssh trying to resolve words like "you", "have" and "loaded"; pointing the JVM at the native libraries suppresses the warning:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Hadoop 2.2.0 : "name or service not known" Warning
hadoop 2.2.0 64-bit installing but cannot start

Puppet agent can't find server

I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following:
$ puppet agent --no-daemonize --verbose --onetime
err: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
It would appear the agent doesn't know which server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead I specify the server name in /etc/puppet/puppet.conf like so:
[main]
server = puppet.<my domain>
I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly.
All the Puppet documentation I have read states that the agent tries to connect to a puppet master at puppet by default, and that your options are host-file trickery or doing the right thing: create a CNAME in DNS and edit puppet.conf accordingly, which I have done.
So what am I missing? Any help is greatly appreciated!
D'oh! I needed to sudo to do this! Then everything works. Run as a non-root user, Puppet reads the per-user configuration (~/.puppet/puppet.conf) instead of /etc/puppet/puppet.conf, so the server setting was never picked up.
I had to use the --server flag:
sudo puppet agent --server=puppet.example.org
I actually had the same error, but I was using the two Learning Puppet VMs and trying to run the puppet agent --test command.
I solved the problem by opening /etc/hosts on both the master and the agent VM and finding the line
***.***.***.*** learn.localdomain learn puppet.localdomain puppet
The IP address (the asterisks) was originally some random number. I had to change this number on both VMs so that it was the IP address of the master node.
So I guess for experienced users my advice is to check /etc/hosts and make sure that the IP addresses listed there for the master and agent not only match, but are actually the IP address of the master node.
For other noobs like me, my advice is to read the documentation more carefully. This was a step in the 'setting up an agent vm' process that I totally missed xD
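A quick way to compare the two (the output shown is illustrative):
# on each VM: what /etc/hosts says about the puppet master
grep puppet /etc/hosts
# on the master: the address the agents should be pointing at
hostname -I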
In my case I was getting the same error, but it was due to the certificate, which should have been signed for the node on the puppetmaster server.
To check pending certs, run the following on the master:
puppet cert list
"node.domain.com" (SHA256) 8D:E5:8A:2*******"
Sign the cert for the node:
puppet cert sign node.domain.com
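After the certificate is signed on the master, re-run the agent on the node (the usual test invocation, as used elsewhere in this thread):
puppet agent --test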
Had the same issue today on puppet 2.6 on CentOS 6.4
All I did to resolve the issue was to check the usual stuff, such as hosts and resolv.conf, to ensure they were as expected (compared with a working server), and then take the following steps (shown as plain commands below):
Removed the /var/lib/puppet directory: rm -rf /var/lib/puppet
Cleared the certificate on the puppet master: puppetca --clean servername
Restarted the network service: service network restart
Re-ran puppet
Even though resolv.conf was identical to the working server's, Puppet updated resolv.conf, immediately re-signed the certificate and replaced all the Puppet lib files.
Everything was fine after that.
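For reference, the same sequence as plain commands (servername is a placeholder; puppetca is the pre-Puppet-3 name of the CA tool, and the final agent run is shown here as puppet agent --test):
# on the agent
rm -rf /var/lib/puppet
# on the puppet master
puppetca --clean servername
# back on the agent
service network restart
puppet agent --test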

Set node name in puppet to noop option

I'm using Puppet and want to test it with noop, but some of the configuration depends on the hostname, such as the node types.
How can I set the node name and run Puppet with noop to check the node configuration that matches that node name? Currently I get this error message (my laptop is solaria):
Could not find default node or by name with 'solaria, solaria.lan' on node solaria.lan
Thanks.
puppetd --test --noop --fqdn="hostname.example.com"
Or with 2.6, this may be preferable:
puppet agent --test --noop --fqdn="hostname.example.com"
This will tend to create new certificates on the puppet master, so you'll probably need to run puppetca --clean hostname.example.com on the puppet master afterwards; otherwise, when you finally bring up hosts with those names, they'll be unable to set up an SSL relationship with the master.
I just figured out one possible solution: adding this to my config file:
node_name = cert
certname = hostname
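For comparison, a similar noop run can also be done by overriding certname on the command line, since any puppet.conf setting can be passed as a flag (the host name is a placeholder):
puppet agent --test --noop --certname=hostname.example.com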
