I am writing a Puppet manifest on the Puppet master to monitor a folder containing a list of files on an agent.
I do not know how to specify a remote value for the "source" attribute of my file resource, since the folder is located on the agent and I do not want to copy the folder and its contents to my master, as that would unnecessarily use up space.
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure    => directory,
  owner     => wsuser,
  group     => webapp,
  source    => "??????",
  recurse   => true,
  show_diff => true,
}
What value should I specify for source?
If you specify a source, the file resource you have created will be synced with that source (which can live on the master or on the agent node), and the diffs will appear in the Puppet report (that is the default; you don't need the show_diff attribute). If you don't specify a source attribute you won't get the diffs you are expecting, since there is nothing to compare against.
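For reference, a source can be either a puppet:/// URL served by the master or an absolute path that already exists on the agent. A minimal sketch, with both the module name and the local path being hypothetical:
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure  => directory,
  recurse => true,
  # served from a module's files/ directory on the master:
  source  => 'puppet:///modules/websphere/properties',
  # or an absolute path that already exists on the agent:
  # source  => '/opt/websphere-defaults/properties',
}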
If you only want to be warned about changes in that directory, you can use the audit attribute. However, you won't get the diffs you are expecting, just a message saying that the contents have changed (again, there is nothing to compare against):
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure    => directory,
  audit     => content,
  recurse   => true,
  show_diff => true,
}
You can specify all, a single attribute, or an array of attributes to be audited: http://docs.puppetlabs.com/references/latest/metaparameter.html#audit
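For example, a minimal sketch auditing several attributes at once:
file { '/XYZ/ybc/WebSphere85dev/AppServer/properties':
  ensure  => directory,
  recurse => true,
  audit   => ['content', 'owner', 'group', 'mode'],
}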
Also, bear in mind that with the manifest that you posted you are changing the owner and group of the directory /XYZ/ybc/WebSphere85dev/AppServer/properties and its contents.
I have the following problem with my Puppet installation:
I would like to copy (overwrite) a file only if a new version of an RPM package was installed.
I have something like this:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
  notify => File[$my_file],
}

file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  replace => true, # Overwrite (default behaviour). Needed in case new RPM was installed.
}
The problem is that the file resource also gets applied when no new version of the RPM was installed. This happens because I change the $my_file file afterwards using file_line:
file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  line   => ' <deployment name="My.jar" runtime-name="My.jar" enabled="false">',
  match  => ' <deployment name="My.jar" runtime-name="My.jar">',
}
This change to the content of $my_file triggers copying a fresh version from the template on each and every Puppet run.
I could add "replace => false" to my file resource, but that would break any further updates...
Long story short: I have the following loop
Copy file -> change file -> copy file -> ...
How can I break this loop?
UPDATE:
Clarification:
The "file_line" define is executed optionally, controlled by a Puppet hiera-property and so the "enabled" part can't be included in the RPM.
The entire file can't be turned into a template (IMHO). The problem: Puppet module must be able to install different (future) versions of the file.
The problem remains unsolved for the time being.
I think the problem you're hitting here is that you're trying to manage $my_file with both the file and file_line resource types, and this causes the file to change during catalog application.
Pick one or the other: manage it as a template or with file_line.
I suspect what's happening during the Puppet run is that the file resource first changes $my_file to look like this:
<deployment name="My.jar" runtime-name="My.jar">
because that's what is in the template. Then the file_line resource changes it to:
<deployment name="My.jar" runtime-name="My.jar" enabled="false">
Then on the next run the exact same thing happens: file changes $my_file to match the template, and file_line modifies that line again.
I would remove the notify => File[$my_file]; it's not actually doing anything. You're defining the desired state in code, so if that file changes for any reason, whether by a manual edit or an RPM update, Puppet will bring it back into the desired state during the run. You may want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
}
This ensures the file's desired state is enforced after the package resource, so if the package changes the file, it will be corrected in the same run.
https://puppet.com/docs/puppet/7.4/lang_relationships.html
You may also want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
  notify  => Service['my-service'],
}
This way, the service provided by the RPM restarts whenever the config file changes.
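That assumes a matching service resource exists somewhere in your manifests; a minimal sketch (the service name is a placeholder):
service { 'my-service':
  ensure => running,
  enable => true,
}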
The problem is that the file resource also gets applied when no new version of the RPM was installed. This happens because I change the $my_file file afterwards using file_line.
Yes, File resources in a node's catalog are applied on every run. In fact, it's best to take the view that every resource that makes it into a node's catalog is applied on every run. A resource's attributes affect what applying it means and/or what it means for it to be in sync, not whether it is applied at all. In the case of File, for example, setting replace => false says that as long as the file initially exists, its content is in sync (and therefore should not be modified), whereas replace => true says that the file's content is in sync only if it is an exact match to the specified source or content.
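To illustrate the difference, a minimal sketch (the paths and module name are hypothetical):
file { '/etc/example/seeded.conf':
  ensure  => file,
  source  => 'puppet:///modules/example/seeded.conf',
  replace => false, # in sync as soon as the file exists; content is never overwritten
}

file { '/etc/example/enforced.conf':
  ensure  => file,
  source  => 'puppet:///modules/example/enforced.conf',
  replace => true,  # in sync only when the content matches the source exactly
}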
Generally speaking, it does not work well to manage the same or overlapping physical resources via multiple Puppet resources, and that's what you're running into here. The most idiomatic approach when you run into a problem with that is often to write a custom resource type with which to manage the target object in detail. But in this case, it looks like you could work around the issue by using an Exec to perform the one-time post-update copy:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
}
~> exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  # this is important:
  refreshonly => true,
}
-> file { $my_file:
  ensure  => 'file',
  replace => false,
  # no source or content
  owner   => 'root', # or whatever
  group   => 'root', # or whatever
  mode    => '0644',
  # ...
}
-> file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  # ...
}
You can, of course, use relationship metaparameters instead of the chaining arrows if you prefer or have need.
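For instance, a sketch of the same ordering expressed with metaparameters instead of chaining arrows (resource bodies abbreviated from the example above):
exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  refreshonly => true,
  subscribe   => Package['my-rpm'],
}

file { $my_file:
  ensure  => 'file',
  replace => false,
  require => Exec["Start with default ${my_file}"],
}

file_line { 'disable_my_service':
  ensure  => present,
  path    => $my_file,
  require => File[$my_file],
}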
That approach gives you:
management of the package via the package manager;
copying the packaged default file to the target file only when triggered by the package being updated (by Puppet -- you won't get this if the package is updated manually);
managing properties of the file other than its contents via the File resource; and
managing a specific line of the file's contents via the File_line resource.
I have an issue wherein I am trying to set external facts and then copy a template file that gets populated with values from a Hiera YAML file. The template depends on certain facts (such as the owner and group of the template file) that get set by the external facts file. Below is the Puppet code.
file { ['/etc/facter/', '/etc/facter/facts.d']:
  ensure => directory,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
}

file { "/etc/facter/facts.d/domain_facts.sh":
  ensure => present,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => $::hostname ? {
    'hostname1' => 'puppet:///modules/vfecare/hostname1.sh',
  },
  require => File['/etc/facter/', '/etc/facter/facts.d'],
}

file { "/tmp/testing123":
  ensure  => present,
  owner   => "${::remoteuser}",
  group   => "${::remotegroup}",
  content => template("vfecare/testscript.erb"),
  require => File["/etc/facter/facts.d/domain_facts.sh"],
}
However, during execution I see that the template gets copied to the Puppet agent machine first, and since it needs some values from the external facts file that it cannot find, it throws an error saying "Invalid owner and group value".
Below is the content of the external facts file:
#!/bin/bash
echo "remoteuser=tempuser"
echo "remotegroup=tempuser"
Why does Puppet seem to ignore the dependency chain here?
Facts are collected by the agent at the very start of a Puppet run, before the catalog containing your file resources is applied. It isn't possible to deploy an external fact during the run and use it like this, because the fact values will be missing.
Instead, you need to rely on Puppet's "pluginsync" mechanism, which copies external facts from the master to the agent before facts are collected.
Move the fact file in your module from vfecare/files/hostname1.sh to vfecare/facts.d/hostname1.sh, remove the file resources for /etc/facter and for copying the external fact, then re-run the agent. It should first download the hostname1.sh fact file, then evaluate the /tmp/testing123 file resource correctly with the remoteuser/remotegroup values.
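In other words, a module layout along these lines (a sketch; only the fact file moves, the rest stays where it is):
vfecare/
  facts.d/
    hostname1.sh      # synced to the agent by pluginsync before facts are collected
  templates/
    testscript.erb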
See the docs at Auto-download of agent-side plugins for more information.
I need to install a number of files into a directory which is not itself Puppet-managed. The source of each file is under the files/ subdirectory of my module.
I'd like to install them all in one go, because their ownership and permissions are all the same. But what do I put for source? I was hoping that simply specifying the directory would work:
file { [
  "${rcdir}/foo", "${rcdir}/bar",
]:
  source => "puppet:///${module_name}/",
  group  => 'wheel',
  owner  => 'root',
  mode   => '0644',
}
but, unfortunately, Puppet (3.7.5 here) is not smart enough to automatically append foo and bar appropriately.
Is there a nice way to do this, or do I have to painstakingly enumerate each file? Thank you!
There are multiple techniques for achieving what you are trying to do here, each with advantages and disadvantages.
The first, and most explicit, which lets you configure each file independently as well as see the complete list of files you are managing, is to define each file separately. To reduce code duplication, you could use resource defaults (although this is not always appropriate). That would look something like the following:
File {
  group => 'wheel',
  owner => 'root',
  mode  => '0644',
}

file { "${rcdir}/foo":
  source => "puppet:///modules/${module_name}/foo",
}

file { "${rcdir}/bar":
  source => "puppet:///modules/${module_name}/bar",
}
This obviously becomes very unwieldy quite quickly, though.
A second strategy would be to use a defined type. It's a bit of a heavy tool for something like this, but it will do the trick. It would look something like this:
define myclass::file_array (
  $dest_base,
  $source_base,
  $group = 'wheel',
  $owner = 'root',
  $mode  = '0644',
) {
  file { "${dest_base}/${name}":
    source => "${source_base}/${name}",
    group  => $group,
    owner  => $owner,
    mode   => $mode,
  }
}

class myclass () {
  $files_to_manage = ['foo', 'bar', 'baz']

  myclass::file_array { $files_to_manage:
    source_base => "puppet:///modules/${module_name}",
    dest_base   => $rcdir,
  }
}
This requires you to add a fairly arbitrary defined type, and you end up adding lots of extra parameters if you want to pass through all the properties available on the core file type; for your situation, however, it would suffice.
The simplest and cleanest way to do what you are attempting, though, is to use the file resource's recursive functionality and place all the necessary files into their own directory in your module (assuming you have other files that are unrelated to this destination directory). It does require that you let Puppet manage the existence of the directory, but it is hard to imagine that being a problem, since any of this code would fail if the destination directory didn't already exist anyway. That would look something like this:
file { $rcdir:
  ensure  => directory,
  recurse => true,
  source  => "puppet:///modules/${module_name}/rc_files",
  owner   => 'root',
  group   => 'wheel',
  mode    => '0644',
}
# The module directory 'files/rc_files' is where foo and bar would live.
I'm pretty sure that last one is your ideal solution, and you can use other aspects of the file resource (https://docs.puppet.com/puppet/latest/reference/type.html#file-attribute-recurse), such as purge, to ensure that no extra files end up in your destination directory.
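For instance, a sketch with purge enabled (force is only needed if unmanaged subdirectories should be removed as well):
file { $rcdir:
  ensure  => directory,
  recurse => true,
  purge   => true,
  force   => true,
  source  => "puppet:///modules/${module_name}/rc_files",
  owner   => 'root',
  group   => 'wheel',
  mode    => '0644',
}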
There are other techniques out there, but hopefully one of these will do the trick for you.
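For completeness, on Puppet 4+ (or 3.x with the future parser enabled) an iteration function is another option; a sketch under that assumption:
$rc_files = ['foo', 'bar']

$rc_files.each |$f| {
  file { "${rcdir}/${f}":
    source => "puppet:///modules/${module_name}/${f}",
    owner  => 'root',
    group  => 'wheel',
    mode   => '0644',
  }
}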
Part of my Puppet manifest checks for the existence of a custom sshd_config. If one is found, I use that; if not, I use my default. I'm just wondering if there is a more "Puppet" way of doing this.
if file("/etc/puppetlabs/puppet/files/${::fqdn}/etc/ssh/sshd_config", '/dev/null') != '' {
$sshd_config_source = "puppet:///private/etc/ssh/sshd_config"
} else {
$sshd_config_source = "puppet:///public/etc/ssh/sshd_config"
}
file { '/etc/ssh/sshd_config':
ensure => 'present',
mode => '600',
source => $sshd_config_source,
notify => Service['sshd'],
}
This code works, but it's a little odd: for file() I have to give the full path on the Puppet master, while when assigning $sshd_config_source I have to use the Puppet fileserver path (puppet:///private/etc...).
Is there a better way of doing this?
It's a little-known feature of the file type that you can supply multiple source values.
From the docs:
Multiple source values can be specified as an array, and Puppet will use the first source that exists. This can be used to serve different files to different system types:
file { "/etc/nfs.conf":
source => [
"puppet:///modules/nfs/conf.$host",
"puppet:///modules/nfs/conf.$operatingsystem",
"puppet:///modules/nfs/conf"
]
}
So you should just specify both the specific and the generic file URL, in that order, and Puppet will do the right thing for you.
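Applied to your sshd_config example, a sketch could look like this (reusing the two URLs from your question; adjust them to your fileserver layout):
file { '/etc/ssh/sshd_config':
  ensure => 'present',
  mode   => '600',
  source => [
    'puppet:///private/etc/ssh/sshd_config', # node-specific file, if your private mount serves one
    'puppet:///public/etc/ssh/sshd_config',  # otherwise fall back to the default
  ],
  notify => Service['sshd'],
}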
I know I can sync a directory in Puppet:
file { 'sqls-store':
  path    => '/some/dir/',
  ensure  => directory,
  source  => "puppet:///modules/m1/db-updates",
  recurse => true,
  purge   => true,
}
So when new files are added, they are copied to '/some/dir/'. However, what I need is to perform some action for every new file. If I "subscribe" to such a resource, I don't get an array of the new files.
Currently I have created an external shell script that finds new files in that directory and executes the action for each of them.
Naturally, I would prefer not to depend on an external script. Is there a way to do this with Puppet?
Thanks!
The use case for this is applying changes to a DB schema that are made from time to time and should be applied to all clients managed by Puppet. In the end it's mysql [args] < update.sql for every such file.
I'm not sure I would recommend having Puppet apply the DB changes for you.
For a small DB it may work, but for a real-world DB you want to be aware of when and how these kinds of changes get applied (ordering of the changes, occasional temporary disk space requirements, DB downtime, taking a backup before/after, reorgs, ...), and most of the time your app has to be adapted at the same time. You want more orchestration, and Puppet isn't good at orchestration.
Why not use a tool dedicated to this task, such as
Liquibase
Rails DB migrations and Capistrano
...
A poor man's solution would be to use the vcsrepo module and an exec to list the files modified since the last "apply".
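A rough sketch of that poor man's approach, assuming the puppetlabs/vcsrepo module; the repository URL and the script that applies new files are placeholders you would have to provide:
vcsrepo { '/opt/db-updates':
  ensure   => latest,
  provider => git,
  source   => 'git://example.com/db-updates.git', # hypothetical repository
}

exec { 'apply new db updates':
  command     => '/usr/local/bin/apply_new_updates.sh', # hypothetical script that finds and applies new files
  refreshonly => true,
  subscribe   => Vcsrepo['/opt/db-updates'],
}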
I agree with mestachs: having Puppet deal with DB updates is not a great idea.
You can try some kind of define:
define mydangerousdbupdate($filename) {
  # $name (the resource title) is available automatically inside a define,
  # so it must not be declared as a parameter.
  file { "/some/dir/${filename}":
    ensure => present,
    source => "puppet:///modules/m1/db-updates/${filename}",
  }

  exec { "apply ${name}":
    command => "/usr/bin/mysql [args] < /some/dir/${filename} > /some/dir/${filename}.log",
    creates => "/some/dir/${filename}.log",
    require => File["/some/dir/${filename}"],
  }
}
And then you can instantiate it with the different patches, in the preferred order:
mydangerousdbupdate{"first_change":
name => "first",
filename => "first.sql",
}->mydangerousdbupdate{"second_change":
name => "second",
filename => "second.sql",
}