puppetserver ca list --all default host name and output format - puppet

While migrating from Puppet 5 to 6 (6.20), we noticed that puppet cert list --all was deprecated in favor of puppetserver ca list --all. When we execute this command we get an error:
Fatal error when running action 'list' Error: Failed connecting to
https://puppet:8140/puppet-ca/v1/certificate_statuses/any_key Root
cause: Failed to open TCP connection to puppet:8140 (getaddrinfo: Name
or service not known)
This error indicates that puppet is looking for a host named puppet, whereas the puppetserver host alias in /etc/hosts is puppetserver.
When we add puppet as an additional alias in /etc/hosts, the command executes successfully. I have two questions about this:
Is there a mechanism to execute this command without modifying /etc/hosts?
What is the output structure of this command? I need to parse it with a script. In version 5 each entry had a prefix of +, -, etc., which no longer exists.

Concerning the first question:
These days this is due to the default host name (puppet) that is hard-coded or configured under the hood.
The name can be defined in puppet.conf, so there is no need to touch /etc/hosts:
[server]
server = fqdn.fqdn
or
[server]
ca_server = fqdn.fqdn
As described in the configuration manual at https://puppet.com/docs/puppet/7/configuration.html#ca-server:
ca_server
The server to use for certificate authority requests. It's a separate server because it cannot and does not need to horizontally scale.
Default: $server
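The same thing can be done from the command line with puppet config set instead of editing the file by hand (a minimal sketch; the hostname is an example, and the section mirrors the [server] blocks above):
puppet config set server puppetserver.example.com --section server
puppet config set ca_server puppetserver.example.com --section server
# puppetserver ca should then resolve that name instead of the default "puppet"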
However, there is also a life hack:
$ cat /etc/hosts | grep puppet
127.0.0.1 puppetmaster puppet
And puppetserver ca will succeed too in certain cases.
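Concerning the second question: I can't vouch for an exact spec of the new output, but assuming it groups entries under plain-text headers such as "Requested Certificates:", "Signed Certificates:" and "Revoked Certificates:" (check this against your puppetserver version), a minimal parsing sketch looks like:
puppetserver ca list --all | awk '
  /Certificates:$/ { section = $1; next }    # remember which block we are in
  NF && section == "Requested" { print $1 }  # certnames still waiting to be signed
'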

Related

Facing issues in puppetserver - puppet-agent configuration

I am trying to set up Puppet for DevOps. I have Puppet Server on Ubuntu 14.04 and puppet-agent on Windows 10. When I generate the certificate for the first time from puppet-agent (Windows 10), the SSL certificate is generated without any issues, and I can even sign that certificate from puppetserver (Ubuntu 14.04). However, after signing, when I try to update the status on puppet-agent (Windows 10) with "puppet agent -t", I get the error:
Error: Could not request certificate: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=username-virtualbox.domain.com]
The puppet configuration file details:
puppet.config (puppetserver [Ubuntu]):
Troubleshooting steps already tried:
Time zones in both environments are in sync
Deleted the "ssl" folders containing the SSL certificates on both environments several times and retried
Port 8140 is open on both Windows and Ubuntu
I faced the same problem.
Try running cmd as administrator.
I am having the same issue -- I have been working on it for a few weeks now. I cannot yet guarantee that mine is working correctly all the time. Here are some steps I have taken; I hope they are helpful to others.
I am running Puppet Enterprise 2018.1.4. Puppet Agent 5.5.6 on RHEL 7.4.
1) The SSL routine uses a time stamp. Ensure the time is the same between Master & Client.
2) Clean/remove the agent cert from the Master AND the Client. On my RHEL, the client cert is in /etc/puppetlabs/puppet/ssl/* -- remove any files with the agent name in here (a sketch of these commands follows this list).
3) Make sure puppet is enabled on your agent: puppet agent --enable
4) If a client does not contact the puppet master "for a while" the master will drop the client from its node list, but NOT remove the cert. In theory, the master SHOULD return the node to an active status.
5) Can you run the puppet agent on the master & get the expected results? If not -> problem with puppet code, otherwise, problem with agent.
6) Is puppet.conf configured correctly? Under the [main] section, do you have the server entry correct? Under [agent] are you set to the correct environment? Is noop set to true?
7) It is possible that you have an error in a puppet module that is causing the agent to exit quietly. Run puppet parser validate on all of your .pp files.
8) Can the master resolve the IP address of the master and the client? Can the client resolve the IP address of the master and the client? Is resolv.conf set correctly on both hosts?
9) The hostnames of the client & master should be correct. Each server should know its short name, FQDN and IP. On RHEL, I run: hostname; hostname -f; and hostname -i, respectively.
10) File permissions on all the directories & modules should be correct. Check out a working module, see its owner, group & permissions. Ensure your module is the same.
11) Only root/admin can correctly run puppet agent.
12) On RHEL, the logs are under /var/log/puppet. Do you see any errors there?
13) Run puppet agent with the --debug or the --trace option in addition to -t. Pipe this output to a file and see if you can spot any errors (see the sketch after this list).
14) Can you force the master to run the puppet agent on the client successfully?
Many of these things have been narrowing down my issue. I don't know yet if it is fixed, as it takes a while for a node to drop out. Hopefully these will fix your issue.
Hope it helps. There are LOTS of things that could be going wrong.
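A minimal sketch of steps 2 and 13 above, assuming a Puppet 5 agent layout like the one described; the certname agent01.example.com and the log path are examples:
# on the master: remove the old agent certificate
puppet cert clean agent01.example.com
# on the agent: wipe the local SSL state, then re-run with extra logging
rm -rf /etc/puppetlabs/puppet/ssl
puppet agent -t --debug 2>&1 | tee /tmp/puppet-agent-debug.log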

Agent not reading /etc/sysconfig/puppet server=

We have several servers working as puppet agents today, but I'm having a problem with a new server running CentOS 7. Normally I would update the /etc/sysconfig/puppet file with the puppet master name, then start the daemon and move on to signing the certificate on the master. However, puppet agent doesn't appear to be reading the server = myhost.domain setting in my config file.
I get the following error in /var/log/messages:
puppet-agent[11133]: Could not request certificate: getaddrinfo: Name or service not known
I tried:
myserver:root$ puppet agent --configprint server
puppet
myserver:root$
but the /etc/sysconfig/puppet file has:
PUPPET_SERVER=myserver.domain.com
Can you please help me understand why puppet agent doesn't get the server from the config file?
The /etc/sysconfig/puppet file is not typically read by the Puppet agent. (I'm not very familiar with CentOS operations, but I suppose that this location might hold some settings that are external to the process, such as environment, command line switches etc.)
You will want to use the proper puppet configuration file:
/etc/puppet/puppet.conf for Puppet 3.x and earlier
/etc/puppetlabs/puppet/puppet.conf for Puppet 4.x and later
So I ran the following:
"puppet agent --no-daemonize --verbose --onetime --server puppetmaster.xxx.com"
This started puppet properly and requested the certificate, and I was able to sign it on the master. Then I added:
server = puppetmaster.xxx.com
to /etc/puppet/puppet.conf, ran "systemctl restart puppet",
and it worked. Thanks for the posts here and other places.
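To confirm the agent is now picking up the setting from puppet.conf rather than falling back to the default puppet, the effective value can be printed; a quick sketch:
puppet agent --configprint server
# should now print puppetmaster.xxx.com instead of the default "puppet"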

Hadoop 2.2.0 multi-node cluster setup on ec2 - 4 ubuntu 12.04 t2.micro identical instances

I have followed this tutorial to set up a Hadoop 2.2.0 multi-node cluster on Amazon EC2. I have had a number of issues with ssh and scp which I was either able to resolve or work around with the help of articles on Stack Overflow, but unfortunately I could not resolve the latest problem.
I am attaching the core configuration files core-site.xml, hdfs-site.xml, etc. Also attaching a log file which is a dump of the output when I run the start-dfs.sh command. It is the final step for starting the cluster, and it gives a mix of errors that I don't have a clue what to do with.
So I have 4 nodes, all created from exactly the same AMI: Ubuntu 12.04 64-bit t2.micro instances with 8 GB of storage.
Namenode
SecondaryNode (SNN)
Slave1
Slave2
The configuration is almost the same as suggested in the tutorial mentioned above.
I have been able to connect with WinSCP and ssh from one instance to the other. I have copied all the configuration files, masters, slaves and .pem files for security purposes, and the instances seem to be accessible from one another.
If someone could please look at the log, config files and .bashrc file and let me know what I am doing wrong.
The same security group HadoopEC2SecurityGroup is used for all the instances. All TCP traffic is allowed and the ssh port is open. A screenshot is in the attached zipped folder. I am able to ssh from the Namenode to the secondary namenode (SNN). The same goes for the slaves, which means that ssh is working, but when I start HDFS everything goes down. The error log is not throwing any useful exceptions either. All the files and screenshots can be found as a zipped folder here.
An excerpt from the error output on the console looks like:
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
ec2-54-72-106-167.eu-west-1.compute.amazonaws.com]
You: ssh: Could not resolve hostname you: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
.....
Add the following entries to .bashrc, where HADOOP_HOME is your hadoop folder:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
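After adding those entries, reload the shell environment and restart HDFS; a short sketch (assumes the default $HADOOP_HOME/sbin layout of Hadoop 2.2.0):
source ~/.bashrc
$HADOOP_HOME/sbin/stop-dfs.sh    # stop anything that was left half-started
$HADOOP_HOME/sbin/start-dfs.sh   # the spurious "Could not resolve hostname" errors should be gone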

Puppet agent can't find server

I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following:
$ puppet agent --no-daemonize --verbose --onetime
err: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
It would appear the agent doesn't know what server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead, I specify the server name in /etc/puppet/puppet.conf like so:
[main]
server = puppet.<my domain>
I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly.
All puppet documentation I have read states that the agent tries to connect to a puppet master at puppet by default and your options are host file trickery or do the right thing, create a CNAME in DNS, and edit the puppet.conf accordingly, which I have done.
So what am I missing? Any help is greatly appreciated!
D'oh! Need to sudo to do this! Then everything works.
I had to use the --server flag:
sudo puppet agent --server=puppet.example.org
I actually had the same error, but I was using the two Learning Puppet VMs and trying to run the 'puppet agent --test' command.
I solved the problem by opening the file /etc/hosts on both the master and the agent VM and editing the line
***.***.***.*** learn.localdomain learn puppet.localdomain puppet
The IP address (the asterisks) was originally some random number. I had to change this number on both VMs so that it was the IP address of the master node.
So I guess for experienced users my advice is to check the /etc/hosts file to make sure that the IP addresses in there for the master and agent not only match but are the same as the IP address of the master.
For other noobs like me, my advice is to read the documentation more carefully. This was a step in the 'setting up an agent vm' process that I totally missed xD
In my case I was getting the same error, but it was due to the cert which should have been signed for the node on the puppetmaster server.
To check pending certs, run the following:
puppet cert list
"node.domain.com" (SHA256) 8D:E5:8A:2*******"
Sign the cert for the node:
puppet cert sign node.domain.com
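On Puppet 6 and later the puppet cert command was removed; the equivalent workflow goes through the puppetserver ca subcommand on the CA server, roughly:
puppetserver ca list                              # pending certificate requests
puppetserver ca sign --certname node.domain.com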
Had the same issue today on puppet 2.6 on CentOS 6.4
All I did to resolve the issue was to check the usual stuff such as hosts and resolv.conf to ensure they were as expected (compared with a working server), and then (the same sequence is consolidated as commands after this list):
Removed the /var/lib/puppet directory: rm -rf /var/lib/puppet
Cleared the certificate on the puppet master: puppetca --clean servername
Restarted the network: service network restart
Re-ran puppet
Even though the resolv.conf was identical to the working server, puppet updated resolv.conf and immediately re-signed the certificate and replaced all the puppet lib files.
Everything was fine after that.
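The same sequence, consolidated as commands (servername is a placeholder, and the final re-run command is my assumption of what "re-ran puppet" means):
# on the agent:
rm -rf /var/lib/puppet
# on the puppet master:
puppetca --clean servername
# back on the agent:
service network restart
puppet agent --test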

Set node name in puppet to noop option

I'm using puppet and want to test it with noop, but some of the configuration depends on the hostname, like the node types.
How can I set the node name and run puppet with noop to check the node configuration that matches the node name? Currently I get this error message (my laptop is solaria):
Could not find default node or by name with 'solaria, solaria.lan' on node solaria.lan
Thanks.
puppetd --test --noop --fqdn="hostname.example.com"
Or with 2.6, this may be preferable:
puppet agent --test --noop --fqdn="hostname.example.com"
This will tend to create new certificates on the puppet master, so you'll probably need to run puppetca --clean hostname.example.com on the puppet master afterwards; otherwise, when you finally get hosts with those names, they'll be unable to set up an SSL relationship with the master.
I just figured out one possible solution: adding this to my config file
nodename = cert
certname = hostname
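For context, these presumably live in puppet.conf; a sketch of what I assume is meant (the section and hostname are examples, and as far as I know the first setting is actually spelled node_name):
[agent]
node_name = cert
certname = hostname.example.com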
