I'm using RHEL Satellite 6.6 with puppet 5.5.12. I have a module which, among other things, copies a file from a network mapped folder to the local drive, then executes it. When I run the module against the satellite server, it succeeds without a hitch. When I apply that same module against another server (same hardware type, same OS freshly installed, non-satellite client) it dies while transferring the file with a rather useless error. The relevant parts of the module are as follows (identifying information obfuscated):
$installer_name = 'installer.bin'
$installer_src = "/mnt/svr/path/${installer_name}"
$installer_path = "/tmp/${installer_name}"
...
file { "$installer_path":
ensure => present,
owner => 'root',
group => 'root',
mode => '0755',
source => "${installer_src}",
require => [ File_line['modify prop1'], File_line['modify prop2'], ]
}
On the satellite server when this block executes, it logs ... defined content as '{md5}####' and proceeds, while on the target server I get the following error:
Error: /Stage[main]/MODULE/File[/tmp/installer.bin]: Could not evaluate: Could not retrieve information from environment KT_PROG_NAME_Development_RHEL7_Core_2 source(s) file:/mnt/svr/path/installer.bin
On the list of things I've already attempted:
Changed ensure => present, to ensure => file,
Moved ${installer_path} down to a path => ... property and gave the block a name.
Changed to source => "file://${installer_src}",
Replaced all variables with hard coded values
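For reference, combining those attempts leaves the resource looking roughly like this (a sketch reconstructed from the list above, with the variables hard-coded):
file { 'installer':
  ensure  => file,
  path    => '/tmp/installer.bin',
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  source  => 'file:///mnt/svr/path/installer.bin',
  require => [ File_line['modify prop1'], File_line['modify prop2'], ],
}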
None have significantly changed the outcome. I've repeatedly verified that the network mount point is present, the parent folder has 0775 perms and root:root ownership, while the file has 0755 perms and root:root ownership.
I've contemplated wrapping installer.bin up in the module's internal file path, but that is less desirable because of file sizes, program guidelines, etc.
Otherwise, I'm running out of ideas. The puppet docs seem to say I'm doing it right, so at this point I'm open to trying out any suggestions the community has to offer. Thank you!
I am downloading a list of files from FTP, and from the download path I am reading the list of filenames for processing.
In exec { "download from ftp ${value}" } I download directories and subdirectories of files from FTP to the local machine. From that path I get the list using a custom fact, $facts['listdirectory'].
My problem is that $facts['listdirectory'] is evaluated before the files are downloaded from FTP.
How can I add a dependency to $datadir = $facts['listdirectory'], or make this fact get evaluated after the download?
class classname {
  exec { "download from ftp ${value}":
    command => "wget -r --user=${ftp_username} --password=${ftp_password} ${value}/* -P ${patch_download_path}",
    path    => ['/usr/bin', '/usr/sbin',],
    timeout => 1800,
    user    => 'root',
  }
  $datadir = $facts['listdirectory']
}
My problem is that $facts['listdirectory'] is evaluated before the files are downloaded from FTP.
It looks like you mean that the fact's value is determined before the directory contents (not the fact implementation) are downloaded. That is certainly what will happen, in any case.
All facts that will inform a given catalog-building run are evaluated first, then delivered as a group to the catalog builder (which typically runs remotely on a puppet master). This gives the catalog builder a consistent snapshot of machine state to work from as it computes the desired target state by evaluating your manifests in light of the facts presented. The result is delivered in the form of a catalog of classes and resources, which the local Puppet then applies.
Only at the catalog-application stage will the command specified by your Exec resource run. This is after the whole catalog has been built, and long after fact evaluation. If you want to adapt dynamically to what has been downloaded then you must either do so on the next Puppet run, or script it and run the script via the same or another Exec resource, or write a custom type and provider that encompass the whole process (probably including the download, too).
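To illustrate the scripted option (a minimal sketch; the processing script and its name are assumptions, not something from your manifest):
# A second Exec, ordered after the download, does the directory listing and
# processing at apply time, once the files actually exist.
# process_downloads.sh is a hypothetical script written by you.
exec { 'process downloaded patches':
  command => "/usr/local/bin/process_downloads.sh ${patch_download_path}",
  path    => ['/usr/bin', '/usr/sbin'],
  require => Exec["download from ftp ${value}"],
}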
(Logstash 1.4.2 on Windows)
In our system, a "product" is a high level grouping of related web applications. Each web application is configured to write a dedicated log file, named after the application name (eg MyProduct.ApplicationA.log and MyProduct.ApplicationB.log). All web applications for a given product write their log files to the same folder (c:\Logs\MyProduct\; c:\Logs\MyOtherProduct).
I need to set up logstash to monitor all log files for all applications for all products. I had hoped to use:
input {
  file {
    path => "c:/Logs/**/*.log"
    exclude => ["Info.*", "Warn.*", "Error.*"]
    sincedb_path => "c:/logstash/.sincedb"
    sincedb_write_interval => 1
  }
}
On first run, I can see lots of output going to stdout output, which I presume is what the docs refer to as "first contact".
Once all the log files (from more than one application) have been initially parsed, if applications generate log entries, they appear to be picked up and output. All is well.
However, if I restart logstash, ALL the logfiles seem to be parsed again - as if sincedb is not honoured. I have looked at the other SO questions detailing similar experience of duplicates and reparsing (eg logstash + elasticsearch : reloads the same data), however I believe that I have extra information that may indicate that I am actually using the file input incorrectly.
If I instead setup multiple file inputs like so:
file {
  path => "c:/Logs/MyProduct/MyProduct.ApplicationA.log"
  exclude => ["Info.*", "Warn.*", "Error.*"]
  sincedb_path => "c:/logstash/.sincedb_A"
  sincedb_write_interval => 1
}
file {
  path => "c:/Logs/MyProduct/MyProduct.ApplicationB.log"
  exclude => ["Info.*", "Warn.*", "Error.*"]
  sincedb_path => "c:/logstash/.sincedb_B"
  sincedb_write_interval => 1
}
Then restarts of logstash do not reparse existing files and do honour the sincedb for the logical grouping. This leads me to believe that perhaps I have been thinking about the file input in the wrong way: will I have to configure an individual file input for each application?
(Looking at the content of sincedb, there is only ever a single line eg
0 0 2 661042
and it becomes obvious that multiple files cannot be tracked)
Am I missing something that would allow me to have a generic glob-style global declaration, without needing to do individual per-application configuration?
Looks like you're running into a known sincedb bug on Windows.
Your workaround of adding a file {} block with separate sincedb_path for each file is probably the best solution until the bug is fixed.
When referring to a file resource on the puppet master, does it have to reside under the modulepath? The docs here seem to indicate it.
The file I'm using was put under the profiles folder instead. I'm trying to refer to it like this:
source => puppet:///profiles/a_subfolder/myfile
(The physical path on the box is /profiles/files/a_subfolder/myfile)
I'm not having any luck so far and wanted to confirm that I can point a file resource somewhere besides the modulepath, and that my URI is correct.
Also, if my subfolder doesn't exist yet on the puppet agent, do I need to set some extra flags to both create the folder path and put the file in place? Here's what I have now:
ensure => 'present',
source => 'puppet:///profiles/a_subfolder/myfile',
mode => '0755',
owner => 'specialuser'
I found the following solution worked:
source => 'puppet:///modules/profiles/',
in your case -
source => 'puppet:///modules/profiles/a_subfolder/myfile',
Hope this helps
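Putting that together, the complete resource might look roughly like this (a sketch: the destination path is hypothetical, and it assumes the file sits at profiles/files/a_subfolder/myfile on the master, which is where the modules URL maps to):
# Sketch only: /opt/app/myfile is a made-up destination on the agent.
file { '/opt/app/myfile':
  ensure => 'present',
  source => 'puppet:///modules/profiles/a_subfolder/myfile',
  mode   => '0755',
  owner  => 'specialuser',
}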
I'm new to Puppet, but as far as I understand you need to set up a Puppet file server if you want to use puppet:// URIs.
https://docs.puppetlabs.com/guides/file_serving.html
If you want the file to come from your puppet master, do the following:
1) Create the folder on your puppet master. Let's take it as /opt/puppet_dev.
2) Edit /etc/puppet/fileserver.conf and add:
[puppet_dev]
    path /opt/puppet_dev
    allow *
3) In your manifest write:
file { '/opt/on_my_node/slave_path':
  source => "puppet:///puppet_dev/my_folder_I_want_to_move",
  ensure => present,
}
4) Restart the puppetmaster service (you changed fileserver.conf, so I recommend a restart) and run the agent.
Note: you can control recursion and the recursion limit with the file type's recurse and recurselimit attributes. Always keep the type reference handy when writing Puppet: https://docs.puppetlabs.com/references/latest/type.html
Hope this is what you were looking for :)
I know Puppet modules always have a files directory, I know where it's supposed to be, and I have used the source => syntax effectively from my own handwritten modules, but now I need to learn how to deploy files using Hiera.
I'm starting with the saz-sudo module and I've read the docs, but I can't see anything about where to put the sudoers file; the one I want to distribute.
I'm not sure whether I need to set up a site-wide files dir in /etc/puppetlabs/puppet and then make subdirs in there for every module, or what. And does Hiera know to look in /etc/puppetlabs/puppet/files/sudo if I say source => "puppet:///files/etc/sudoers"? Do I need to add a pathname in /etc/hiera.yaml? Add a line - files?
Thanks for any clues.
From a cursory view of the puppet module, here is their example of using Hiera:
sudo::configs:
  'web':
    'source'   : 'puppet:///files/etc/sudoers.d/web'
  'admins':
    'content'  : "%admins ALL=(ALL) NOPASSWD: ALL"
    'priority' : 10
  'joe':
    'priority' : 60
    'source'   : 'puppet:///files/etc/sudoers.d/users/joe'
This suggests it assumes you have a "files" puppet module. So under your puppet modules directory:
mkdir -p files/files/etc/sudoers.d/
Drop your files in there.
Explanation:
The url 'puppet:///files/etc/sudoers.d/users/joe' is broken down thus:
puppet: protocol
///: Three slashes indicate the source of the file is in a module.
files: name of the module
etc/sudoers.d/users/joe: full path to the file within the module's "files" directory.
You don't.
The idea of a module (Hiera backed or not) is to lift the need to manage the whole sudoers file from you. Instead, you can manage each single entry in the sudoers file.
I recommend reviewing the documentation carefully. You should definitely not have a file { "/etc/sudoers": } resource in your manifest.
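For instance (a minimal sketch, assuming the saz-sudo module's documented sudo::conf defined type; the group name and rule are invented for illustration), a single drop-in entry can be managed without touching the main sudoers file at all:
# Manage one entry under /etc/sudoers.d/ via the saz-sudo module.
sudo::conf { 'web':
  priority => 10,
  content  => '%web ALL=(ALL) NOPASSWD: ALL',
}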
Hiera doesn't have anything to do with files.
Hiera is like a variables database: it serves you values based on the hierarchy you have defined.
The files inside Puppet are usually accessed through methods like source =>, and they follow a basic directory structure, in most cases when you call a file or a template.
A template could serve your need here by building a sudoers file automatically from that data.
There are also modules that support modifying sudoers entries.
It is up to you what to do.
In this case, saz stores the location of the file in Hiera, but the real location can be a file inside your Puppet tree (a module file or something similar), which is a completely unrelated concern.
Read about the Puppet file server.
If you have questions, just ask.
V
I'm running Puppet 2.7.14 on RHEL 6.2 (both master and nodes have this configuration).
For the life of me, I can't figure out why I can't make custom mount points work.
If for example, I edit /etc/puppet/fileserver.conf to include the following:
[foo]
path /etc/puppet/files/foo
allow *
And put the file bar.txt in /etc/puppet/files/foo/bar.txt
Then I would expect resources like the following to resolve with no trouble:
file { "bar.txt":
ensure => present,
path => "/var/foo/bar.txt",
source => "puppet:///foo/bar.txt",
}
But this doesn't work! I consistently see error messages like the following:
... Could not evaluate: Could not retrieve information from environment production source(s) puppet:///foo/bar.txt ...
According to all documentation I have read, I have done this correctly, but I just can't get it to work.
Any thoughts?
Seems there's a "gotcha" at work here: a tab stop before the path or allow attribute is not allowed. Very surprising.
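In other words, indenting the mount definition with spaces (or not at all) works, along the lines of this sketch:
# /etc/puppet/fileserver.conf -- indent path/allow with spaces, not tabs
[foo]
    path /etc/puppet/files/foo
    allow *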