Puppet ignores my node.pp entry

My Puppet master and agent are on the same machine. The master node.pp file contains this:
node 'pear.myserver.com' {
  include ntp
}
The ntp.pp file contains this:
class ntp {
  package { "ntp":
    ensure => installed,
  }
  service { "ntp":
    ensure => running,
  }
}
The /etc/hosts file contains the line:
96.124.119.41 pear.myserver.com pear
I was able to launch puppetmaster successfully, but when I execute the command below, ntp doesn't get installed (it is not already installed; I checked).
puppet agent --test --server='pear.myserver.com'
It just reports this:
info: Caching catalog for pear.myserver.com
info: Applying configuration version '1387782253'
notice: Finished catalog run in 0.01 seconds
I don't know what else I could have missed. Can you please help? Note that I replaced the actual server name with 'myserver' for security reasons.
I was following this tutorial: http://bitfieldconsulting.com/puppet-tutorial

$ puppet agent --test
This fetches the compiled catalog from the Puppet master, which compiles it from /etc/puppetlabs/puppet/manifests/site.pp, and applies it locally.
$ puppet apply /etc/puppet/modules/ntp/manifests/ntp.pp
This applies the manifest locally, without contacting the master.
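For reference, a minimal sketch of what the master's main manifest could look like for this setup (file names and paths assumed from the question and the linked tutorial; the exact location of site.pp varies by version, so check your master's "manifest" setting). In the Puppet 2/3 era this question appears to date from, the master only reads site.pp by default, so a separate node.pp either has to be imported from it or the node block has to live in site.pp directly:

# site.pp -- sketch only
# Alternatively: import 'node.pp' here instead of defining the node block inline
# (import worked in Puppet 2/3 but was removed in Puppet 4).
node 'pear.myserver.com' {
  include ntp
}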

Related

Puppet 7 agent can't find catalog from server

I'm learning Puppet now; everything is new to me. After installing a Puppet 7 server and agent on my two learning VMs --
192.168.160.131 puppet-mst.eisen #The puppet server
192.168.160.140 sles12.eisen #The puppet agent
And I've successfully signed the node "sles12.eisen" to the server "puppet-mst.eisen" --
[root@puppet-mst manifests]# puppetserver --version
puppetserver version: 7.4.1
[root@puppet-mst manifests]# puppetserver ca list --all
Signed Certificates:
puppet-mst.eisen (SHA256) 0B:3F:DA:60:2F:2D:D3:91:94:58:E2:B6:32:28:50:8E:D4:1C:A0:8F:A0:CF:94:99:6E:EE:99:46:B4:1D:30:58 alt names: ["DNS:puppet-mst.eisen"] authorization extensions: [pp_cli_auth: true]
puppet-mst (SHA256) C8:89:47:D2:15:74:6E:49:E7:9A:27:B5:EA:10:9B:81:C4:DC:68:E8:B4:01:07:5D:63:34:5A:AF:B6:66:C9:EE alt names: ["DNS:puppet-mst"]
sles12.eisen (SHA256) C5:40:D7:8A:C6:64:BD:E8:BF:D3:BB:5D:01:24:66:03:57:96:84:31:84:42:DF:36:AA:D1:25:14:76:4D:A5:99 alt names: ["DNS:sles12.eisen"]
Then I wrote a test module -- filetest1 -- hoping it can put a file on the agent node in /tmp/puppettest --
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/modules/filetest1/manifests/init.pp
class filetest1 {
  file { '/tmp/puppettest/filetest1':
    ensure  => file,
    content => 'Hello World!',
  }
}
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/manifests/site.pp
node 'sles12.eisen' {
  include filetest1
}
But the "puppet agent --test" can't work, it's said it either server can't find agent node, or the test module's catalog is missing --
sles12:/tmp/puppettest # puppet --version
7.12.0
sles12:/tmp/puppettest # hostname -f
sles12.eisen
sles12:/tmp/puppettest # puppet agent --test --verbose
Info: Using environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node sles12.eisen: Failed to find sles12.eisen via exec: Execution of '/etc/puppetlabs/puppet/node.rb sles12.eisen' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I don't know what's wrong here. Please kindly help. Thanks
Regards
Eisen
The error message suggests that you have configured Puppet to use an external node classifier (/etc/puppetlabs/puppet/node.rb), and either the attempt to execute it is failing altogether, or it is terminating with a failure status, or it is not outputting anything.
You may want to explore ENCs later, but now is probably not the time for that. To disable use of an ENC, edit /etc/puppetlabs/puppet/puppet.conf and either remove the node_terminus setting or change its value to plain.
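For illustration, a minimal sketch of the relevant puppet.conf fragment (section name assumed; merge this into your existing file rather than replacing it):

# /etc/puppetlabs/puppet/puppet.conf (server side) -- sketch only
[server]                 # this section was called [master] on older releases
node_terminus = plain    # or simply delete the node_terminus line entirely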

Puppet module can't be deployed from master to agent

I'm just starting to learn Puppet; I'm really new to this world. I'm using Puppet 2.7.26 on my two learning VMs --
puppet-master 192.168.160.131
eisen-suse11 192.168.160.129
Following the tutorial, I've signed the node "eisen-suse11" to puppet-master successfully --
puppet-master:/etc/puppet/modules/motd/manifests # puppet cert --list --all
+ "eisen-suse11" (A0:7F:E2:77:30:9A:96:E3:79:FD:F7:1E:59:35:5B:1E)
+ "puppet-master" (38:90:B5:8A:68:8A:A7:44:8A:2F:07:D3:F3:AC:E8:80) (alt names: "DNS:puppet", "DNS:puppet-master", "DNS:puppet-master.suse11", "DNS:puppet.suse11")
+ "puppet-master.suse11" (5D:9E:A4:D9:0C:5F:69:07:FA:55:13:C3:38:6D:9B:26)
Then, following the book, I wrote a module -- motd -- which should put a file on the client node --
puppet-master:/etc/puppet/modules/motd/manifests # cat init.pp
class motd {
  package { 'setup':
    ensure => present,
  }
  file { '/etc/motd':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    source  => "puppet://$puppetserver/modules/motd/etc/motd",
    require => Package['setup'],
  }
}
puppet-master:/etc/puppet/modules/motd/manifests # cat site.pp
$puppetserver = 'puppet-master.suse11'
node 'eisen-suse11' {
  include motd
}
But when I ran "puppet agent --test --trace" on the client node -- eisen-suse11 -- it's all quiet --
eisen-suse11:~ # puppet agent --test --trace
info: Caching catalog for eisen-suse11
info: Applying configuration version '1633779962'
notice: Finished catalog run in 0.01 seconds
eisen-suse11:~ # ls /etc/motd
ls: cannot access /etc/motd: No such file or directory
That "/etc/motd" is not copied from puppet-master --
Does anyone can help? Any idea would be appreciated.
RGS
Eisen
The problem is that your node is receiving an empty catalog, which is happening because you put your site.pp file in the wrong place. Puppet will not find it inside the module. It has been a very long time since I wrote code for Puppet 2 (and I hung on to that version much longer than was healthy), but as I recall, the correct directory for that file would be /etc/puppet/manifests.
But again, as I wrote in comments, Puppet 2 is utterly obsolete and well past the end of its life. Ditch it, and also ditch any books you have that teach it. The only reason I can think of to learn this version of Puppet is that you have an existing legacy infrastructure that you are obligated to maintain, but if you are faced with such a Puppet code base in 2021 then it would be best to rewrite from scratch for Puppet 7.
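For illustration, a sketch of where the node definition would live (Puppet 2-era path recalled from memory, so verify it against the "manifest" setting in your puppet.conf); the content is the site.pp from the question, just relocated out of the module:

# /etc/puppet/manifests/site.pp -- not inside the motd module
$puppetserver = 'puppet-master.suse11'
node 'eisen-suse11' {
  include motd
}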

How do I run a Puppet Manifest on a Windows server with Puppet Agent?

I have done this in the past, and I don't know why I cannot do it the way shown below. I am using CentOS 7 for the Puppet master server and Windows Server 2012 with the Puppet agent.
All the content below was taken from the Puppet Master server. Here is site.pp (which is in /etc/puppet/manifests):
node 'fqdnOfWindowsServer' { import 'good.pp'}
node 'fqdnOfLinuxServer' {}
Here is good.pp (which is in /etc/puppet/manifests):
file { 'c:/fun.ps1':
  ensure             => 'present',
  source             => '/tmp/special.ps1',
  source_permissions => 'ignore',
}
Here is what happens when I run puppet agent -t:
...Caching catalog for fqdnOfLinuxServer... Error: Failed to apply
catalog: Parameter path failed on File[c:/fun.ps1]: File paths must be
fully qualified, not 'c:/fun.ps1' at /etc/puppet/manifests/good.pp:5
How do I input a fully qualified path? It seems to be having a problem with a Windows server as the Puppet agent; paths are different from those on Linux Puppet agents.
From what I can make of the error message, you're trying to create a Windows file resource on a Linux server (the error mentions caching catalog for fqdnOfLinuxServer). If that's the case, the error message makes sense because on Linux, the agent expects file paths to start with a forward slash.
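If the intent is for the file to exist only on the Windows box, one way to keep that resource out of the Linux node's catalog is to wrap it in a class and include it only in the Windows node block. A sketch only, reusing the resource from good.pp and a hypothetical class name:

# 'windows_fun_file' is a made-up class name; the file resource is the one from good.pp.
class windows_fun_file {
  file { 'c:/fun.ps1':
    ensure             => 'present',
    source             => '/tmp/special.ps1',
    source_permissions => 'ignore',
  }
}

node 'fqdnOfWindowsServer' { include windows_fun_file }
node 'fqdnOfLinuxServer' { }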

Trouble Using Puppet Forge Module example42/splunk

I want to use https://forge.puppetlabs.com/example42/splunk to set up Splunk on some of my systems.
So on my puppet master I did puppet module install example42-splunk.
I use the PE console so I added the class splunk and associated splunk with a group that has one of my nodes, my-mongo-1.
I log on to my-mongo-1 and execute ...
[root@my-mongo-1 ~]# puppet agent -t
...
Info: Caching catalog for my-mongo-1
Info: Applying configuration version '1417030622'
Notice: /Stage[main]/Splunk/Package[splunk]/ensure: created
Notice: /Stage[main]/Splunk/Exec[splunk_create_service]/returns: executed successfully
Notice: /Stage[main]/Splunk/File[splunk_change_admin_password]/ensure: created
Info: /Stage[main]/Splunk/File[splunk_change_admin_password]: Scheduling refresh of Exec[splunk_change_admin_password]
Notice: /Stage[main]/Splunk/Service[splunk]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Splunk/Service[splunk]: Unscheduling refresh on Service[splunk]
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: Could not look up HOME variable. Auth tokens cannot be cached.
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns:
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: In handler 'users': The password cannot be set to the default password.
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: Failed to call refresh: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Notice: Finished catalog run in 11.03 seconds
So what am I doing wrong here?
Why do I get the Could not look up HOME variable. Auth tokens cannot be cached. error?
I saw you asked this on Ask Puppet as well, so I gave it a quick test in Vagrant; there are two solutions:
1) Give Splunk a different password in Puppet (as it's complaining about using the default password):
class { "splunk":
install => "server",
admin_password => 'n3wP4assw0rd',
}
2) Upgrade the module to a newer version that doesn't have this issue:
puppet module upgrade example42-splunk --force

What's the best approach to create a repo of installers to be used for installing and upgrading on Puppet-managed nodes?

Let's take an example: I have the jboss-4.2.3 installer as a .tar file. In general, to install JBoss, I'll:
1. Untar jboss-4.2.3 into a predefined folder (/opt/server/jbossas/) on multiple servers
2. Untar OpenJDK into a predefined path (/opt/software/java) and set the path in .bash_profile
3. Create a server profile in the place where JBoss is installed
4. Start the server.
Let's say that I have to do this on 16 nodes (servers).
Now, I should store the JBoss and OpenJDK installers at a central location, and they should be transferred to the nodes before step 1 can begin.
I wrote the manifest to perform requirements 1 to 4, but I'm not sure how to automate the transfer of the installers from a central repo. I am not worried about the type of central repo; it can be FTP, Puppet, or anything else.
Please help. I was looking into filebucket. Will this help, or should I write a manifest to get the file from an FTP server?
How do I create a file repo that can be referenced in Puppet manifests?
I am not sure about your exact problem, but you can have a look at this and get an idea...
In most usage, files are transferred from the puppetmaster to the clients. If your policies for untarring and installing the packages are defined in a module, e.g. a module named jboss, you can keep the tarball in this kind of structure on the puppet master and run puppet agent from the puppet client:
/etc/puppet/modules/jboss/files/jboss_pkg.tar
Your policy for your clients should then say something like the following, e.g. in
/etc/puppet/modules/jboss/manifests/init.pp:
class jboss {
  file { '/tmp/installation/jboss_pkg.tar':
    source => "puppet:///modules/jboss/jboss_pkg.tar",
  }
  # You can then write a small script that executes the whole installation process,
  # and use 'exec' in Puppet to run it.
  exec { 'install_jboss':
    command => "/path/to/install_jboss.sh",
    require => File['/tmp/installation/jboss_pkg.tar'],
    onlyif  => "/check/that/it/is/not/installed/already",
  }
  ## ...and write other execs to start the server or enable services etc.
}
# In site.pp
node 'client.mytest.org' {
  include jboss
}
The general solution to provide installers to Puppet is to set up your own package repository (rather than just a file repo).
http://www.techrepublic.com/blog/opensource/create-your-own-yum-repository/609
Then you can use Puppet's built-in package resource for easy install/upgrade/uninstall:
http://docs.puppetlabs.com/references/latest/type.html#package
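Once the package is published to your repository, the manifest side can be as small as this (a sketch; 'jboss' is a placeholder for whatever the packaged name turns out to be):

package { 'jboss':
  ensure => installed,
}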
The following projects seem to provide an rpm/deb version of JBoss that you can publish to your repository:
https://github.com/floreal/jboss-deb-package
http://code.google.com/p/jboss-rpm/
