I have an issue where I am trying to set external facts and then copy a template file that gets populated with values from a Hiera YAML file. The template file depends on certain facts (such as the owner and group of the template file) which are set by the external facts file. Below is the Puppet code.
file { ['/etc/facter/', '/etc/facter/facts.d']:
  ensure => directory,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
}

file { '/etc/facter/facts.d/domain_facts.sh':
  ensure => present,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => $::hostname ? {
    'hostname1' => 'puppet:///modules/vfecare/hostname1.sh',
  },
  require => File['/etc/facter/', '/etc/facter/facts.d'],
}

file { '/tmp/testing123':
  ensure  => present,
  owner   => "${::remoteuser}",
  group   => "${::remotegroup}",
  content => template('vfecare/testscript.erb'),
  require => File['/etc/facter/facts.d/domain_facts.sh'],
}
However, during execution I see that the template gets copied to the Puppet agent machine first, and since the template needs some values from the external facts file, it cannot find them and throws an error saying "Invalid owner and group value".
Below is the content of the external facts file
#!/bin/bash
echo "remoteuser=tempuser"
echo "remotegroup=tempuser"
Why does Puppet seem to ignore the dependency here?
Facts are collected by the agent at the very start of a Puppet run, before the catalog containing your file resources is applied. It isn't possible to deploy an external fact during the run and use it in that same run; the facts will simply be missing.
Instead, you need to rely on Puppet's "pluginsync" mechanism, which copies external facts from the master to the agent before facts are collected.
Move the fact file in the module from vfecare/files/hostname1.sh to vfecare/facts.d/hostname1.sh, remove the file resources you have for /etc/facter and for copying the external fact, then re-run the agent. It should first download the hostname1.sh fact file, then evaluate the /tmp/testing123 file resource correctly with the remoteuser/remotegroup values.
See the docs at Auto-download of agent-side plugins for more information.
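To illustrate the mechanism the answer relies on, here is a minimal shell sketch of what Facter does with an executable external fact: it runs the script and parses each key=value line of stdout into a fact. The temp-file path and the sed parsing are illustrative only; Facter performs this internally.

```shell
# Simulate Facter consuming an executable external fact
# (same content as the hostname1.sh script in the question).
fact_script=$(mktemp)
cat > "$fact_script" <<'EOF'
#!/bin/bash
echo "remoteuser=tempuser"
echo "remotegroup=tempuser"
EOF
chmod +x "$fact_script"

# Run the script and pull each key=value pair out of its stdout.
output=$("$fact_script")
remoteuser=$(printf '%s\n' "$output" | sed -n 's/^remoteuser=//p')
remotegroup=$(printf '%s\n' "$output" | sed -n 's/^remotegroup=//p')
echo "remoteuser=$remoteuser remotegroup=$remotegroup"
rm -f "$fact_script"
```

Because pluginsync delivers such a script before fact collection, the values are already available when the catalog is compiled.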
I have the following problem with my Puppet installation:
I would like to copy (overwrite) a file only, if a new version of RPM package was installed.
I have something like this:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
  notify => File[$my_file],
}

file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  replace => true, # Overwrite (default behaviour). Needed in case a new RPM was installed.
}
The problem is that the "file" resource gets applied even if no new version of the RPM was installed. This happens because I change the $my_file file afterwards using "file_line":
file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  line   => ' <deployment name="My.jar" runtime-name="My.jar" enabled="false">',
  match  => ' <deployment name="My.jar" runtime-name="My.jar">',
}
This change to the content of $my_file triggers copying a fresh version from the template on each and every Puppet run.
I could add "replace => false" to my file resource, but that would break any further updates...
The long story short: I have the following loop
Copy file -> change file -> copy file -> ...
How can I break this loop?
UPDATE:
Clarification:
The "file_line" define is executed optionally, controlled by a Puppet hiera-property and so the "enabled" part can't be included in the RPM.
The entire file can't be turned into a template (IMHO). The problem: Puppet module must be able to install different (future) versions of the file.
The problem remains unsolved for the time being.
I think the problem here is that you're trying to manage $my_file using both the file and file_line resource types, and this causes the file to change during catalog application.
Pick one or the other: manage it as a template, or with file_line.
I suspect what's happening during the Puppet run is that the file resource changes $my_file to look like this:
<deployment name="My.jar" runtime-name="My.jar">
Because that's what is in the template. Then the file_line resource changes it to:
<deployment name="My.jar" runtime-name="My.jar" enabled="false">
Then on the next run the exact same thing happens, file changes $my_file to match the template and then file_line changes it to modify that line.
I would remove the notify => File[$my_file]; it's not actually doing anything. You're defining the desired state in code, so if that file changes for any reason, manual change or RPM update, Puppet is going to bring it back into the desired state during the run. You may want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
}
This ensures the file's desired state is enforced after the package resource, so if the package changes the file, the file will be corrected in the same run.
https://puppet.com/docs/puppet/7.4/lang_relationships.html
You may also want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
  notify  => Service['my-service'],
}
So the service provided by the rpm restarts when the config file is changed.
Copy overwriting a file only on RPM update
The problem is, that the "file" get executed also, if no new version of RPM was installed. This happens, since I change the $my_file file afterwards using "file_line"
Yes, File resources in a node's catalog are applied on every run. In fact, it's best to take the view that every resource that makes it into a node's catalog is applied on every run. Resources' attributes affect what applying them means and / or what it means for them to be in sync, not whether they are applied at all. In the case of File, for example, setting replace => false says that as long as the file initially exists, its content is in sync (and therefore should not be modified), whereas replace => true says that the file's content is in sync only if it is an exact match to the specified source or content.
Generally speaking, it does not work well to manage the same or overlapping physical resources via multiple Puppet resources, and that's what you're running into here. The most idiomatic approach when you run into a problem with that is often to write a custom resource type with which to manage the target object in detail. But in this case, it looks like you could work around the issue by using an Exec to perform the one-time post-update copy:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
}
~> exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  # this is important:
  refreshonly => true,
}
-> file { $my_file:
  ensure  => 'file',
  replace => false,
  # no source or content
  owner   => 'root', # or whatever
  group   => 'root', # or whatever
  mode    => '0644',
  # ...
}
-> file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  # ...
}
You can, of course, use relationship metaparameters instead of the chaining arrows if you prefer or have need.
That approach gives you:
management of the package via the package manager;
copying the packaged default file to the target file only when triggered by the package being updated (by Puppet -- you won't get this if the package is updated manually);
managing properties of the file other than its contents via the File resource; and
managing a specific line of the file's contents via the File_line resource.
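Since the answer notes that relationship metaparameters work just as well as the chaining arrows, here is the same ordering expressed that way, a sketch using the same resource names as above:

```
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
}

exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  refreshonly => true,              # only runs when notified...
  subscribe   => Package['my-rpm'], # ...i.e. when the package changes
  before      => File[$my_file],
}

file { $my_file:
  ensure  => 'file',
  replace => false,
  before  => File_line['disable_my_service'],
}

file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
}
```

Here `subscribe` is the metaparameter equivalent of `~>`, and `before` of `->`.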
I have two classes in puppet.
Here is the first one:
class nagios::is_production {
  file { '/etc/puppetlabs/facter/':
    ensure => directory,
  }
  file { '/etc/puppetlabs/facter/facts.d/':
    ensure => directory,
  }
  file { '/etc/puppetlabs/facter/facts.d/production.txt':
    ensure  => file,
    content => epp('nagios/production.epp'),
  }
}
This creates a custom fact (production=yes/no based on the node name)
This class on its own assigns the fact correctly.
The second class:
class nagios::client {
  if $facts[production] =~ yes {
    @@nagios_host { "${::hostname}":
      ensure                => present,
      address               => $::ipaddress,
      hostgroups            => 'production, all-servers',
      notifications_enabled => $notifications_enabled,
      use                   => 'generic-server',
    }
  } else {
    @@nagios_host { "${::hostname}":
      ensure                => present,
      address               => $::ipaddress,
      hostgroups            => 'non-production, all-servers',
      notifications_enabled => $notifications_enabled,
      owner                 => root,
      use                   => 'generic-server',
    }
  }
}
This creates the exported resource for the host and adds it to either the production/non-production hostgroup.
If the custom fact exists, the host gets created with the hostgroup correctly.
I created a 3rd class to pull in these 2 just to keep track of it a little easier for myself:
class nagios::agent {
  Class['nagios::is_production'] -> Class['nagios::client']
  include nagios::is_production
  include nagios::client
}
This seems like it should make ::is_production run before ::client. When I include this class on the node for the puppet run, I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Left match operand must result in a String value. Got an Undef Value. at /etc/puppetlabs/code/environments/production/modules/nagios/manifests/client.pp:3:6 on node
So the fact seems like it's not getting set causing the export of the host to fail.
What am I missing?
Followup to answer
I am trying to do this:
if domain name contains 'something'
production=yes
else
production=no
Then in the nagios module,
if $facts[production] =~ yes
Assign to production host group.
Bash:
#!/bin/bash
if [[ $(hostname) =~ '512' ]] ; then
  echo production=yes
else
  echo production=no
fi
I'd like to be able to use $facts[something] in here to create other facts based on things like OS and IP.
I read here: Custom Facts Walkthrough
But I wasn't able to understand the custom facts load path as I didn't have that directory. I'm very new to puppet...
Also new to stack overflow... did I do this right?
Thanks
Facts are generated right after pluginsync. Since you are trying to place the external fact during catalog application, it is not available during catalog compilation, which occurs after pluginsync but before your resources are applied.
You need to remove your nagios::is_production class and place your external fact directly in the module to take advantage of pluginsync. It should be located in your module structure like this:
nagios
|__ facts.d
    |__ production.txt
The external fact will then be copied over during pluginsync and the fact will be generated, making it available later during catalog compilation. Note that Facter expects production.txt to contain key=value pairs.
Check here for more information on properly using external facts: https://docs.puppet.com/facter/3.5/custom_facts.html#external-facts
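Since the follow-up derives the fact from the hostname, the file placed in the module's facts.d can also be an executable script rather than a static key=value file. Here is a sketch adapted from the bash in the question; the function wrapper and sample hostnames are only so the logic can be exercised directly, and the real script would call hostname itself:

```shell
# Sketch of an executable external fact, e.g. nagios/facts.d/production.sh.
# The hostname is passed as an argument here purely for demonstration.
production_fact() {
  case "$1" in
    *512*) echo "production=yes" ;;
    *)     echo "production=no"  ;;
  esac
}

production_fact "web512-prod"   # -> production=yes
production_fact "db01-dev"      # -> production=no
```

Like production.txt, the script's stdout must be key=value pairs, one per line.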
How can I make the below logic work? My aim is to compare the value of the custom fact $environment with the content of the file /etc/facter/facts.d/oldvalue.
If the custom fact $environment is not equal to the content of file /etc/facter/facts.d/oldvalue, then execute the following code.
exec { 'catenvchange':
  command => '/bin/cat /root/oldvalue',
}

if $environment != exec['catenvchange'] { #code# }
Exec resources do not work that way. In fact, no resource works that way, or any way remotely like that. Moreover, the directory /etc/facter/facts.d/ serves a special purpose, and your expectation for how it might be appropriate to use a file within is not consistent with that purpose.
What you describe wanting to do looks vaguely like setting up an external fact and testing its value. If you drop an executable script named /etc/facter/facts.d/anything by some means (manually, plugin sync, File resource, ...) then that script will be executed before each Puppet run as part of the process of gathering node facts. The standard output generated by the script would be parsed for key=value pairs, each defining a fact name and its value. The facts so designated, such as one named "last_environment" will be available during catalog building. You could then use it like so:
if $::environment != $::last_environment {
# ...
}
Update:
One way to use this mechanism to memorialize the value that a given fact, say $::environment, has on one run so that it can be read back on the next run would be to declare a File resource managing an external fact script. For example,
file { '/etc/facter/facts.d/oldvalues':
  ensure  => 'file',
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  content => "#!/bin/bash\necho 'last_environment=${::environment}'\n",
}
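On the next agent run, Facter executes the script this File resource wrote, so the previous run's environment comes back as the last_environment fact. Here is a simulation of what the written file would contain and emit when $::environment interpolated to 'production'; the temp path stands in for /etc/facter/facts.d/oldvalues:

```shell
# Write the script body Puppet would produce, then run it as Facter would.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
echo 'last_environment=production'
EOF
chmod +x "$script"

result=$("$script")
echo "$result"   # last_environment=production
rm -f "$script"
```

This is why the comparison is always one run behind: the fact reflects the environment as of the previous run, which is exactly what the question needs.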
I may have misunderstood how "puppet agent --noop" works.
In the definition of a class I set the existence of a file along with its user and group ownership, and this is what I get when I run "puppet agent --noop":
If the file doesn't exist, "puppet agent --noop" works fine.
If the file exists but the user or group doesn't, then "puppet agent --noop" fails, complaining about the missing user or group.
If I simply run "puppet agent" (without "--noop") it works fine: it doesn't matter whether the user, group or file existed previously; it creates the group, the user and/or the file.
1st question: I suppose the "--noop" run doesn't check whether the catalog itself asks for the missing resources to be created. Is that right?
2nd question: Is there any way to do some kind of mocking to avoid the problem of missing resources when running with "--noop"?
Let's paste some code to show it:
# yes, it should better be virtual resources
group { $at_group:
  ensure => 'present',
}

user { $at_user:
  ensure  => present,
  gid     => "$at_group",
  require => Group[$at_group],
}

file { '/etc/afile':
  owner   => $at_user,
  group   => $at_group,
  mode    => '0440',
  content => template('......erb'),
  require => User[$at_user],
}
output:
# puppet agent --test --noop
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Caching catalog for pagent02
Info: Applying configuration version '1403055383'
Notice: /Stage[main]/Agalindotest::Install/Group[my_group]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Agalindotest::Install/User[my_user]/ensure: current_value absent, should be present (noop)
Error: Could not find user my_user
Error: /Stage[main]/Agalindotest::Install/File[/etc/afile]/owner: change from 1001 to my_user failed: Could not find user my_user
Error: Could not find group my_group
Error: /Stage[main]/Agalindotest::Install/File[/etc/afiles]/group: change from 1001 to my_group failed: Could not find group my_group
Let's show how it works if the file doesn't exist:
then "puppet agent --test --noop" works like a charm:
Notice: /Stage[main]/Agalindotest::Install/Group[my_group]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Agalindotest::Install/User[my_user]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Agalindotest::Install/File[/etc/afile]/ensure: current_value absent, should be file (noop)
Thanks a lot!!
/ Angel
Unfortunately, there is currently no way to overcome this limitation.
The ensure property doesn't fail just on account of a missing owner - I believe the file will just end up owned by root. That is why the output is more pleasant when the file doesn't exist.
As for the behavior with an existing file: Each resource is considered individually, and the file resource must admit failure if the group does not exist when the file is evaluated. The fact that the group would (likely) be created without noop cannot be easily accounted for.
As for your idea of ignoring the issue under noop conditions when there is a user resource: that has merit, I believe. Would you raise that as a feature request in Puppet's Jira?
Update
As of Puppet 3.3 you can rely on the $clientnoop value that is supplied by the agent along with the Facter facts. Please note that tailoring your manifest to avoid failures in noop mode has two consequences:
The manifest itself becomes much less maintainable and comprehensible.
The reporting from noop runs becomes inaccurate, because the "unsafe" property values are not part of the noop catalog.
You could build the manifest like this:
# this scenario does not actually call for virtual resources at all :-)
group { $at_group:
  ensure => 'present',
}

user { $at_user:
  ensure  => present,
  gid     => "$at_group",
  require => Group[$at_group],
}

file { '/etc/afile':
  mode    => '0440',
  content => template('......erb'),
  # require => User[$at_user] # <- not needed at all, Puppet autorequires the user and group
}

if ! $::clientnoop {
  File['/etc/afile'] {
    owner => $at_user,
    group => $at_group,
  }
}
The owner and group properties are ignored in noop mode, with the pros and cons as discussed above.
All things considered, I feel that this is not worth the hassle at all.
I am writing a Puppet manifest on the Puppet master to monitor a folder with a list of files on an agent.
I do not know how to specify a remote value for the "source" attribute of my file resource type, since the folder is located on the agent and I do not want to copy the folder and its contents to my master, as that would unnecessarily consume some space.
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure    => directory,
  owner     => wsuser,
  group     => webapp,
  source    => '??????',
  recurse   => true,
  show_diff => true,
}

What value should I specify for source?
If you specify a source, the file resource you have created will be synced with that source (which can live on the master, or locally on the agent node), and the diffs will appear in the Puppet report (that's the default; you don't need the show_diff attribute). If you don't specify a source attribute you won't get the diffs you are expecting, since there is nothing to compare against.
If you only want to be warned about changes in that directory you can use the audit attribute. However, you won't get the diffs that you are expecting, just a message saying that the contents have changed (again, there's nothing to compare):
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure    => directory,
  audit     => content,
  recurse   => true,
  show_diff => true,
}
You can audit all attributes, a single attribute, or an array of attributes: http://docs.puppetlabs.com/references/latest/metaparameter.html#audit
Also, bear in mind that with the manifest that you posted you are changing the owner and group of the directory /XYZ/ybc/WebSphere85dev/AppServer/properties and its contents.