Puppet configuration version is different for the noop and apply - puppet

I have a few Puppet modules and I have already applied them. The problem is that Puppet keeps reporting the same pending changes on every run: it still shows the differences it is about to apply, even though they have already been applied.
I did a noop run first:
puppet agent -vt --noop
And it gives the following output:
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1632762925'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Would have triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Apim_common]: Would have triggered 'refresh' from 3 events
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Monitoring]: Would have triggered 'refresh' from 1 event
Notice: Stage[main]: Would have triggered 'refresh' from 2 events
Notice: Applied catalog in 5.70 seconds
And then I did an actual run (without --noop):
puppet agent -vt
Info: Using environment 'test'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for amway-320-test-api-analytics-worker-1-i-00d684727d24cc360.intranet.local
Info: Applying configuration version '1632762946'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: executed successfully (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: executed successfully (corrective)
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: executed successfully (corrective)
Notice: /Stage[main]/Apim_analytics_worker/File[/mnt/apim_analytics_worker/test-analytics-3.2.0/conf/worker/deployment.yaml]/content:
--- /mnt/apim_analytics_worker/testam-analytics-3.2.0/conf/worker/deployment.yaml 2021-05-18 02:13:05.000000000 -0400
+++ /tmp/puppet-file20210927-468-19w731k 2021-09-27 13:15:56.250247257 -0400
@@ -14,16 +14,16 @@
# limitations under the License.
################################################################################
- # Carbon Configuration Parameters
+# Carbon Configuration Parameters
test.carbon:
type: test-apim-analytics
- # value to uniquely identify a server
+ # value to uniquely identify a server
id: test-am-analytics
.
.
.
And every time I do a puppet agent -vt, it produces this output over and over, which it shouldn't, since the changes have already been applied. I tried removing the cache directory under /opt/puppet/... but still no luck.
Can someone please help me with this?

You're using a lot of Exec resources. That's not wrong per se, but it is a code smell.
It looks like you are managing some things via Execs that might be better modeled as Service resources (and perhaps better set up as bona fide system services, too, though that's a separate question). There may be other things that would be better managed as File or Package resources.
Where you do use an Exec, you should either make it refreshonly or give it an appropriate unless, onlyif, or creates parameter (or both) to establish criteria for whether its command will run. Puppet does not track whether an Exec was synced on some earlier run, and it wouldn't matter if it did, because having been synced on an earlier run does not necessarily mean that it should not be synced again. A sketch of these guard parameters follows below.
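For example, here is a minimal sketch of those guards. The resource titles echo the ones in your output, but the commands, paths, and process names are made up for illustration only:

# Runs only when notified by another resource (e.g. a managed config File).
exec { 'Restart awslogsd service':
  command     => '/usr/bin/systemctl restart awslogsd',
  refreshonly => true,
}

# Runs only while the target directory does not exist yet.
exec { 'unzip-update':
  command => '/usr/bin/unzip /tmp/update.zip -d /opt/update',
  creates => '/opt/update',
}

# Runs only while the server process is actually up.
exec { 'stop-server':
  command => '/opt/server/bin/stop.sh',
  onlyif  => '/usr/bin/pgrep -f server-main',
}

With guards like these in place, a second agent run after convergence reports no corrective changes.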

Related

packer option to avoid warning while shell provisioning

Is there a way to avoid a warning while doing Packer shell provisioning? My Packer build exits with this warning:
googlecompute:
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:160:
InsecurePlatformWarning: A true SSLContext object is not available.
This prevents urllib3 from configuring SSL appropriately and may cause
certain SSL connections to fail. You can upgrade to a newer version of
Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
googlecompute: InsecurePlatformWarning
==> googlecompute: Deleting instance...
googlecompute: Instance has been deleted!
==> googlecompute: Deleting disk...
googlecompute: Disk has been deleted!
Build 'googlecompute' errored: Script exited with non-zero exit status: 1
That's not a warning, it's an error.
You could suppress it by forcing your script to exit with 0. But you probably want to fix the error instead.
If you provide your script, I can give more detailed guidance.

puppet-acl module on Windows throws transactionstore.yaml corrupt error

Trying out the puppet-acl module on Windows Server 2016, Preview 5. I'm getting the weirdest error on the second puppet run. If I remove the transactionstore.yaml file and re-run the puppet agent, the behavior is repeatable. I'm running Puppet 4 with the latest agent version.
This is my code block:
acl { "c:/temp":
permissions => [
{ identity => 'Administrator', rights => ['full'] },
{ identity => 'Users', rights => ['read','execute'] }
],
}
This is the output from the puppet run:
PS C:\ProgramData\PuppetLabs\puppet\cache\state> puppet agent -t
Info: Using configured environment 'local'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for opslowebf02n02.local
Error: Transaction store file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is corrupt (wrong number of arguments (0 for 1..2)); replacing
Error: Transaction state file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is valid YAML but not returning a hash. Check the file for corruption, or remove it before continuing.
Info: Applying configuration version '1471436916'
Notice: /Stage[main]/platform_base_system::Role::Windows/Exec[check-powershell-exection-policy]/returns: executed successfully
Notice: /Stage[main]/configs_iis::Profile::Default/Exec[check-iis-global-anonymous-authentication]/returns: executed successfully
Notice: Applied catalog in 7.42 seconds
In the transactionstore.yaml file, this is the error section:
Acl[c:/temp]:
  parameters:
    permissions:
      system_value:
        - !ruby/hash:Puppet::Type::Acl::Ace {}
        - !ruby/hash:Puppet::Type::Acl::Ace {}
    inherit_parent_permissions:
      system_value: :true
This has been resolved by downgrading the puppet agent to 4.5.3.
The behavior of the 4.6.0 version must have changed.
With 4.5.3 I still see the error in the log file, but the puppet run does not fail.
I'll try to talk to the people at Puppet about this.
This is being tracked as https://tickets.puppetlabs.com/browse/PUP-6629. It's almost coincidental that you created https://tickets.puppetlabs.com/browse/PUP-6630 right afterwards.

Freeswitch pauses on check_ip at boot on centos 7.1

During an investigation into a different problem (Inconsistent systemd startup of freeswitch) I discovered that both the latest FreeSWITCH 1.6 and 1.7 paused for several minutes at a time (between 4 and 14) during boot on CentOS 7.1. While it was intermittent, it happened as often as one boot in 3 or 4.
Running this from the command line:
/usr/bin/freeswitch -nonat -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
caused the following output (note the time difference between the "Added task 2" line and the line after it):
2015-10-23 15:40:14.160101 [INFO] switch_event.c:685 Activate Eventing Engine.
2015-10-23 15:40:14.170805 [WARNING] switch_event.c:656 Create additional event dispatch thread 0
2015-10-23 15:40:14.272850 [INFO] switch_core_sqldb.c:3381 Opening DB
2015-10-23 15:40:14.282317 [INFO] switch_core_sqldb.c:1693 CORE Starting SQL thread.
2015-10-23 15:40:14.285266 [NOTICE] switch_scheduler.c:183 Starting task thread
2015-10-23 15:40:14.293743 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445611214
2015-10-23 15:40:14.293837 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445611214
2015-10-23 15:49:47.883158 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
When I ran 1.6 on CentOS 6.7 using the same command line as above, I got this (note the delay is a more reasonable 14 seconds):
2015-10-23 10:31:00.274533 [INFO] switch_event.c:685 Activate Eventing Engine.
2015-10-23 10:31:00.285807 [WARNING] switch_event.c:656 Create additional event dispatch thread 0
2015-10-23 10:31:00.434780 [INFO] switch_core_sqldb.c:3381 Opening DB
2015-10-23 10:31:00.465158 [INFO] switch_core_sqldb.c:1693 CORE Starting SQL thread.
2015-10-23 10:31:00.481306 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445610660
2015-10-23 10:31:00.481446 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445610660
2015-10-23 10:31:00.481723 [NOTICE] switch_scheduler.c:183 Starting task thread
2015-10-23 10:31:14.286702 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
It's the same on FS 1.7 as well.
This strongly suggests that CentOS 7.1 and FS have an issue together. Can anyone help me diagnose further or shine some more light on this, please?
This all came to light as I tried to understand why FS would not accept a CLI connection for several minutes after I thought it had booted up (using -nc from the systemd service).
Thanks to the FS user list and ultimately Anthony Minessale, the issue turned out to be RNG entropy.
This is a good explanation -
https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged
Here are some extracts:
There are two general random devices on Linux: /dev/random and
/dev/urandom. The best randomness comes from /dev/random, since it's a
blocking device, and will wait until sufficient entropy is available
to continue providing output.
The key here is that it's a blocking device, so any program waiting for a random number from /dev/random will pause until sufficient entropy is available for a "safe" random number.
This is a headless server, so the usual sources of entropy such as mouse/keyboard activity (and many others) do not apply. Hence the delays.
The fix is this:
Based on the HAVEGE principle, and previously based on its associated
library, haveged allows generating randomness based on variations in
code execution time on a processor......(google the rest!)
Install like this:
yum install haveged
and start it up like this:
haveged -w 1024
making sure it restarts on reboot:
chkconfig haveged on
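Since this is a Puppet-centric thread, the same fix can also be expressed declaratively. A minimal sketch, assuming the EL7 package and service are both named haveged (any extra options such as -w 1024 would go in the service's own configuration):

# Sketch only: assumes the distro package and service are both called 'haveged'.
package { 'haveged':
  ensure => installed,
}

service { 'haveged':
  ensure  => running,
  enable  => true,    # the declarative equivalent of 'chkconfig haveged on'
  require => Package['haveged'],
}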
Hope this helps someone.

puppet: Not authorized to call find

I'm running Puppet 2.7.26 because that's what the Red Hat package provides.
I'm trying to serve files that are NOT stored within any puppet modules. The files are maintained in another location on the puppet server, and that is where I need to serve them from.
I have this in my /etc/puppet/fileserver.conf:
[files]
  path /var/www/cobbler/pub
  allow *
And then I have a class file like this:
class etchostfile
(
  $hostfile /* declare that this class has one parameter */
)
{
  File
  {
    owner => 'root',
    group => 'root',
    mode  => '0644',
  }

  file { $hostfile :
    ensure => file,
    source => "puppet:///files/hosts-${hostfile}.txt",
    path   => '/root/hosts',
  }
}
But when my node calls
class { 'etchostfile' :
  hostfile => foo,
}
I get this error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Error 400
on SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} Could not
retrieve file metadata for puppet:///files/hosts-foo.txt: Error 400 on
SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
This post
https://viewsby.wordpress.com/2013/04/05/puppet-error-400-on-server-not-authorized-to-call-find/
indicates that this is all I need to do. But I must be missing something.
UPDATE
When I run the master in debug mode, I get no error.
The master responds thusly:
info: access[^/catalog/([^/]+)$]: allowing 'method' find
info: access[^/catalog/([^/]+)$]: allowing $1 access
info: access[^/node/([^/]+)$]: allowing 'method' find
info: access[^/node/([^/]+)$]: allowing $1 access
info: access[/certificate_revocation_list/ca]: allowing 'method' find
info: access[/certificate_revocation_list/ca]: allowing * access
info: access[^/report/([^/]+)$]: allowing 'method' save
info: access[^/report/([^/]+)$]: allowing $1 access
info: access[/file]: allowing * access
info: access[/certificate/ca]: adding authentication any
info: access[/certificate/ca]: allowing 'method' find
info: access[/certificate/ca]: allowing * access
info: access[/certificate/]: adding authentication any
info: access[/certificate/]: allowing 'method' find
info: access[/certificate/]: allowing * access
info: access[/certificate_request]: adding authentication any
info: access[/certificate_request]: allowing 'method' find
info: access[/certificate_request]: allowing 'method' save
info: access[/certificate_request]: allowing * access
info: access[/]: adding authentication any
info: Inserting default '/status' (auth true) ACL because none were found in '/etc/puppet/auth.conf'
info: Expiring the node cache of agent.redacted.com
info: Not using expired node for agent.redacted.com from cache; expired at Thu Aug 13 14:18:48 +0000 2015
info: Caching node for agent.redacted.com
debug: importing '/etc/puppet/modules/etchostfile/manifests/init.pp' in environment production
debug: Automatically imported etchostfile from etchostfile into production
debug: File[foo]: Adding default for selrange
debug: File[foo]: Adding default for group
debug: File[foo]: Adding default for seluser
debug: File[foo]: Adding default for selrole
debug: File[foo]: Adding default for owner
debug: File[foo]: Adding default for mode
debug: File[foo]: Adding default for seltype
notice: Compiled catalog for agent.redacted.com in environment production in 0.11 seconds
info: mount[files]: allowing * access
debug: Received report to process from agent.redacted.com
debug: Processing report from agent.redacted.com with processor Puppet::Reports::Store
and the agent responds thusly:
info: Caching catalog for agent.redacted.com
info: Applying configuration version '1439475588'
notice: /Stage[main]/Etchostfile/File[foo]/ensure: defined content as '{md5}75125a96a68a0ff0d42f91f10dca8336'
notice: Finished catalog run in 0.42 seconds
and the file is properly installed/updated.
So it works when the master is in debug mode, but it errors when the master is in standard (?) mode. I can go back and forth, in and out of debug mode at will, and it works every time in debug mode, and it fails every time in standard mode.
UPDATE 2
Running puppetmasterd from the command line, and everything works.
Running service puppetmaster start or /etc/init.d/puppetmaster start from the command line, and it fails. So at least I'm getting closer.
/etc/sysconfig/puppetmaster is entirely commented out. So as of now, I do not see any difference between just starting puppetmasterd and using the service script.
UPDATE 3
I think it's an SELinux problem.
With SELinux "enforcing" on the master, service puppetmaster restart, and I get the error.
I change SELinux to "Permissive" on the master, and I still get the error.
But now that SELinux is set to Permissive, if I service puppetmaster restart, my files get served properly.
But now that it's working, I set SELinux to Enforcing, and I get a different error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Could not
retrieve information from environment production source(s)
puppet:///files/hosts-foo.txt at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
Then I do a service puppetmaster restart and I'm back to the original error.
So the situation changes depending on
how I started the service (puppetmasterd or service)
what SELinux was set to when I started the service
what SELinux is set to when the agent runs.
The closer I get, the more confused I get.
UPDATE 4
I think I found it. Once I started looking at SELinux, I found the policy changes I needed to make (allowing ruby/puppet to access cobbler files) and now it appears to be working...
This turned out to be an SELinux problem. I eventually found this error message
SELinux is preventing /usr/bin/ruby from read access
on the file /var/www/cobbler/pub/hosts-foo.txt .
which led me to the audit2allow rules I needed to apply to allow puppet to access my cobbler files.
I was getting this error with Puppet Server on Ubuntu 20.
Error: /Stage[main]/Dvod_tocr/File[/install/wine-data.tar.gz]: Could not evaluate: Could not retrieve file metadata for puppet:///extra_files/wine-data.tar.gz: Error 500 on SERVER: Server Error: Not authorized to call find on /file_metadata/extra_files/wine-data.tar.gz with {:rest=>"extra_files/wine-data.tar.gz", :links=>"manage", :checksum_type=>"sha256", :source_permissions=>"ignore"}
My fileserver.conf file was in the wrong location. The correct location for this Puppet version on Ubuntu 20 is /etc/puppetlabs/puppet/fileserver.conf.

Can't find rbenv after puppet install

I am using Puppet to set up a Ruby on Rails server (Ubuntu 14.04). The install seems to work fine, but then I can't find rbenv or bundler, and ruby -v reports the system Ruby 1.9.3.
Install the plugin module:
puppet module install jdowning-rbenv
The .pp file:
class rails-test_server {
  include ruby
  class { rbenv: }
  rbenv::plugin { 'sstephenson/ruby-build': }
  rbenv::build { '2.2.0': global => true }
}
In the module:
[$install_dir]
# This is where rbenv will be installed to.
# Default: '/usr/local/rbenv'
#
# [$owner]
# This defines who owns the rbenv install directory.
# Default: 'root'
Here is the output:
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for rails-test
Info: Applying configuration version '1424804476'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[build-essential]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libssl-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libffi-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libreadline6-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Git/Package[git]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv/Exec[git-clone-rbenv]/returns: executed successfully
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/group: group changed 'root' to 'adm'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/mode: mode changed '0755' to '0775'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/shims]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/plugins]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/versions]/ensure: created
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]: Scheduling refresh of Exec[rbenv-permissions-sstephenson/ruby-build]
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[rbenv-permissions-sstephenson/ruby-build]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rbenv/File[/etc/profile.d/rbenv.sh]/ensure: defined content as '{md5}1895fedb6a7fdc5feed9b2cbbb8bbb60'
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[own-plugins-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[git-pull-rubybuild-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]: Scheduling refresh of Exec[rbenv-ownit-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Scheduling refresh of Exec[rbenv-global-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-global-2.2.0]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-rehash-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-permissions-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-permissions-bundler-2.2.0]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 473.25 seconds
The rbenv module creates a file, /etc/profile.d/rbenv.sh, that needs to be sourced before rbenv will be available on the command line.
[root@ptierno-puppetmaster modules]# which rbenv
/usr/bin/which: no rbenv in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@ptierno-puppetmaster modules]# source /etc/profile.d/rbenv.sh
[root@ptierno-puppetmaster modules]# which rbenv
/usr/local/rbenv/bin/rbenv
You can either source the file as above, or log out and log back in to get a new login shell.
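Note that sourcing the profile script only helps interactive shells. If a later Exec in the same Puppet run needs rbenv or bundler, you can point it at the install directory explicitly instead. A hypothetical sketch, assuming the module's default install dir /usr/local/rbenv seen in the output above (the resource title and application path are made up):

exec { 'bundle-install-myapp':
  command     => 'bundle install',
  cwd         => '/var/www/myapp',    # made-up application path
  path        => ['/usr/local/rbenv/shims', '/usr/local/rbenv/bin', '/usr/bin', '/bin'],
  environment => ['RBENV_ROOT=/usr/local/rbenv'],
  unless      => 'bundle check',
}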
Hope this helps.
