Facts not updated on puppetboard - puppet

Facts are not updated after applying the new timezone Europe/Berlin; Puppetboard still shows the old value.
It always shows UTC instead of CEST.
The role class profile::timezone was applied correctly and facter -y shows CEST, but Puppetboard still shows UTC.
After running puppet agent -t manually, the facts were updated correctly.
Role:
class profile::timezone {
  class { 'timezone':
    timezone => 'Europe/Berlin',
  }
}
puppet master version 6.6.0
puppet agent version 6.2.0
The expected value is CEST as soon as the new role is applied, so that it isn't necessary to trigger the update manually with puppet agent -t.
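For context, Puppetboard displays facts from PuppetDB, and PuppetDB only receives fresh facts when an agent submits them, which is why the manual run fixed it. A minimal sketch of refreshing the facts without a full catalog run (puppet facts upload is available on Puppet agents 6.1 and later, so it should work with the 6.2.0 agent here):

# Submit the node's current facts to the master/PuppetDB without applying a catalog
puppet facts upload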

Related

Puppet Passing Consul Token as Fact Into Hiera

I am converting one of my masterless modules to use Consul. How do I use external facts to pass in the Consul host and Consul token? These change in every environment and are not managed by Puppet. I am using the Puppet module 'lynxman-hiera_consul', '0.1.2'. Note that I had to downgrade my hiera.yaml to version 3 to use it with this module.
Before my Puppet masterless run I export some facts
export FACTER_CONSULHOST=consul-randomid..us-west-2.elb.amazonaws.com
export FACTER_MYTOKEN=some-token
I can test this works with
facter mytoken; puppet facts --debug | grep mytoken
facter consulhost; puppet facts --debug | grep consulhost
My hiera.yaml looks like this: Hiera.yaml Gist. It works fine if I replace the fact interpolation with strings.
The basic issue is with the fact interpolation on line 15:
:token: "%{facts.mytoken}"
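The gist itself isn't reproduced here, but a minimal sketch of what a version-3 hiera.yaml for hiera_consul typically looks like (the hierarchy and :paths values are illustrative assumptions; only the :token line is taken from the question):

:backends:
  - consul
:hierarchy:
  - common
:consul:
  :host: "%{facts.consulhost}"
  :port: 8500
  :paths:
    - /v1/kv/configuration
  :token: "%{facts.mytoken}"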
This is my example manifest for testing this: Consul.pp Gist

puppetdb stores hash facts as string for some of my nodes

Some facts that are supposed to be hashes are stored as strings in PuppetDB.
For example:
curl -X GET http://localhost:8080/v3/nodes/tcentos/facts/partitions
[ {
"value" : "sda1uuid1b97126a-beb2-4843-8c7b-4e6e12cbfbb7mount/bootsize1024000filesystemext4sda2size40916992filesystemLVM2_member",
"name" : "partitions",
"certname" : "tcentos"
} ]
while it should be like this:
curl -X GET http://localhost:8080/v3/nodes/tfedora20/facts/partitions
[ {
"value" : "{\"sda1\"=>{\"uuid\"=>\"8e6cda9b-54b7-4daa-a25c-1864a163f7a8\", \"size\"=>\"1024000\", \"mount\"=>\"/boot\", \"filesystem\"=>\"ext4\"}, \"sda2\"=>{\"size\"=>\"15751168\", \"filesystem\"=>\"LVM2_member\"}}",
"name" : "partitions",
"certname" : "tfedora20"
} ]
When I run facter partitions on tcentos node, the return value is OK.
[root@tcentos ~]# facter partitions
{"sda1"=>{"mount"=>"/boot", "filesystem"=>"ext4", "size"=>"1024000", "uuid"=>"1b97126a-beb2-4843-8c7b-4e6e12cbfbb7"}, "sda2"=>{"filesystem"=>"LVM2_member", "size"=>"40916992"}}
My Puppet environment:
Puppet Master: 3.7.5
Puppet Agent: 3.7.5
PuppetDB: 2.3.0
The Puppet agents and Facter on all my nodes are exactly the same version. Does anybody have the same issue?
-----------------------Update---------------------
I found the cause with the help of @FelixFrank in the comments: it's the stringify_facts setting.
Setting stringify_facts=false in puppet.conf (in the main section on all agents and masters) disables flattening fact values to strings. According to the official documentation, structured-facts support is enabled by default from 3.7 onwards, but I guess the default behaviour is different on open source Puppet, so we have to add this setting explicitly.
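A minimal sketch of the change (the same stanza goes on every agent and on the master):

# /etc/puppet/puppet.conf
[main]
stringify_facts = false

After the agents check in again, the fact comes back as a JSON string: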
curl -X GET http://localhost:8080/v3/nodes/tcentos/facts/partitions
[ {
"value" : "{\"sda1\":{\"filesystem\":\"ext4\",\"mount\":\"/boot\",\"size\":\"1024000\",\"uuid\":\"1b97126a-beb2-4843-8c7b-4e6e12cbfbb7\"},\"sda2\":{\"filesystem\":\"LVM2_member\",\"size\":\"40916992\"}}",
"name" : "partitions",
"certname" : "tcentos"
} ]
Although the result is still a string rather than actual JSON data, it's way better than a key-value concatenation, and I can parse it easily.
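For example (an illustrative one-liner, assuming Ruby with its bundled json library is available, as on any Puppet host): the API response is a JSON array, and the value field now holds a JSON document itself, so it just needs a second parse:

curl -s http://localhost:8080/v3/nodes/tcentos/facts/partitions |
  ruby -rjson -e 'fact = JSON.parse(STDIN.read).first; puts JSON.parse(fact["value"])["sda1"]["uuid"]'
# => 1b97126a-beb2-4843-8c7b-4e6e12cbfbb7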

Trouble Using Puppet Forge Module example42/splunk

I want to use https://forge.puppetlabs.com/example42/splunk to set up Splunk on some of my systems.
So on my puppet master I did puppet module install example42-splunk.
I use the PE console, so I added the class splunk and associated it with a group containing one of my nodes, my-mongo-1.
I log on to my-mongo-1 and execute ...
[root@my-mongo-1 ~]# puppet agent -t
...
Info: Caching catalog for my-mongo-1
Info: Applying configuration version '1417030622'
Notice: /Stage[main]/Splunk/Package[splunk]/ensure: created
Notice: /Stage[main]/Splunk/Exec[splunk_create_service]/returns: executed successfully
Notice: /Stage[main]/Splunk/File[splunk_change_admin_password]/ensure: created
Info: /Stage[main]/Splunk/File[splunk_change_admin_password]: Scheduling refresh of Exec[splunk_change_admin_password]
Notice: /Stage[main]/Splunk/Service[splunk]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Splunk/Service[splunk]: Unscheduling refresh on Service[splunk]
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: Could not look up HOME variable. Auth tokens cannot be cached.
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns:
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: In handler 'users': The password cannot be set to the default password.
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: Failed to call refresh: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Notice: Finished catalog run in 11.03 seconds
So what am I doing wrong here?
Why do I get the "Could not look up HOME variable. Auth tokens cannot be cached." error?
I saw you asked this on Ask Puppet and gave it a quick test in Vagrant; there are two solutions:
1) Give a different password for Splunk in Puppet (as it's complaining about using the default password):
class { "splunk":
  install        => "server",
  admin_password => 'n3wP4assw0rd',
}
2) Upgrade the module to a newer version that doesn't have this issue:
puppet module upgrade example42-splunk --force
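Either way, you can confirm which version ended up installed (shown here just as a sanity check):

puppet module list | grep splunk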

puppet: "applying configuration version ", what does it refer to?

When I run
sudo puppet agent -t
after a long phase of catalog loading, I get a message:
info: Applying configuration version '1403590182'
What does the number 1403590182 refer to?
In fact, I have noticed that if I run sudo puppet agent -t twice in a row, I get different configuration version numbers even if the modules have not changed!
How can I determine which version of each module is being applied to the node?
From the documentation for config_version:
How to determine the configuration version. By default, it will be the time that the configuration is parsed, but you can provide a shell script to override how the version is determined. The output of this script will be added to every log message in the reports, allowing you to correlate changes on your hosts to the source version on the server.
Setting a global value for config_version in puppet.conf is not allowed (but it can be overridden from the command line). Please set a per-environment value in environment.conf instead. For more info, see https://puppet.com/docs/puppet/latest/environments_about.html
The time is a Unix timestamp, so yours corresponds to 06/24/2014 @ 6:09 am (and I just realised how old this question is).
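You can check the conversion on any system with GNU date:

date -u -d @1403590182
Tue Jun 24 06:09:42 UTC 2014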
If the manifests are under git control, the administrator can tell the Puppet master how to describe the version with a statement like the one below in /etc/puppet/puppet.conf (on the Puppet master). One such statement goes in each environment section, with the path adjusted to wherever that environment looks for its manifests.
config_version = git --git-dir $confdir/modules/production/.git describe --all --long
If you use some other version control system, I'm sure there's an equivalent command to get an indication of the revision.
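As the quoted documentation points out, on newer Puppet versions config_version belongs in each environment's environment.conf rather than in puppet.conf; a minimal sketch, assuming the default codedir layout:

# /etc/puppetlabs/code/environments/production/environment.conf
config_version = git --git-dir /etc/puppetlabs/code/environments/production/.git rev-parse HEAD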

Puppet ignores my node.pp entry

My Puppet master and agent are on the same machine. The master node.pp file contains this:
node 'pear.myserver.com' {
  include ntp
}
The ntp.pp file contains this:
class ntp {
  package { "ntp":
    ensure => installed,
  }
  service { "ntp":
    ensure => running,
  }
}
The /etc/hosts file contains the line:
96.124.119.41 pear.myserver.com pear
I was able to launch the puppetmaster successfully, but when I execute the following, ntp doesn't get installed (it is not already installed, I checked).
puppet agent --test --server='pear.myserver.com'
It just reports this:
info: Caching catalog for pear.myserver.com
info: Applying configuration version '1387782253'
notice: Finished catalog run in 0.01 seconds
I don't know what else I could have missed. Can you please help? Note that I replaced the actual server name with 'myserver' for security reasons.
I was following this tutorial: http://bitfieldconsulting.com/puppet-tutorial
$ puppet agent --test
This fetches the compiled catalog from the Puppet master (compiled from /etc/puppetlabs/puppet/manifests/site.pp) and applies it locally.
$ puppet apply /etc/puppet/modules/ntp/manifests/ntp.pp
This applies the given manifest locally, without contacting the master.
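One more thing worth checking (a guess based on the layout used by the bitfield tutorial, not something confirmed in the question): in Puppet 3-era setups, node definitions kept in a separate file are only honoured if site.pp imports that file, so an empty catalog like the one above can simply mean node.pp is never loaded:

# /etc/puppet/manifests/site.pp
import 'node.pp'   # without this line, the node block for pear.myserver.com is ignored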
