I'm hoping to use Puppet to manage my rc files (i.e. to share configuration files between work and home). I keep my rc files in a Subversion repository. I have sudo privileges on some machines but not on others, and none of the machines are on the same network.
I have a simple puppet file:
class bashResources ( $home, $svn ) {
file { "$home/.bash" :
ensure => 'directory',
}
file { "$home/.bash/bashrc.d" :
ensure => 'directory',
}
file { "$home/.bash/bashrc.d/bashrc" :
ensure => 'link',
target => "$home/$svn/rc/bashrc",
}
}
node 'ubuntuwgu290' {
class { 'bashResources':
home => '/home/dshaw',
svn => 'mysvn',
}
}
I have a simple config file that I'm using to squelch some errors:
[main]
report=false
When I run puppet, I get an annoying error about not being able to execute chown:
dshaw@ubuntuwgu290:~/mysvn/rc$ puppet apply rc.pp --config ./puppet.conf
Notice: Compiled catalog for ubuntuwgu290.maplesoft.com in environment production in 0.12 seconds
Error: Failed to apply catalog: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/state.yaml20170316-894-rzkggd
Error: Could not save last run local report: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/last_run_summary.yaml20170316-894-l9embs
I have attempted to squelch the error by adding reports=none to my config file, but it has not been effective.
How can I squelch these errors? Alternatively, is there a more lightweight tool for managing rc files?
Thanks,
Derek
The error is related to Puppet trying to manage its own metadata in /home/dshaw/.puppet, not any of the files enrolled in Puppet's catalog for management. This is not normally a problem, even when you run Puppet as an ordinary user. In fact, supporting this sort of thing is one of the reasons why per-user Puppet metadata exists.
The files that Puppet is trying to chown do not already belong to you (else Puppet would not be trying to chown them), but they should belong to you, where "you" means the puppet process's (e)UID and (e)GID. You might be able to solve the problem by simply removing Puppet's state directory and letting it be rebuilt on the next run. Alternatively, you might be able to perform (or arrange for) the chown that Puppet is attempting yourself.
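If you go the manual route, the commands might look something like this (a sketch, assuming the metadata lives in the default per-user location shown in the error messages):
# Option 1: remove the state directory and let Puppet rebuild it
rm -rf /home/dshaw/.puppet/var/state
# Option 2: take ownership of the metadata (adjust the group as appropriate)
sudo chown -R dshaw:dshaw /home/dshaw/.puppet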
On the other hand, it's unclear how this situation arose in the first place, and some of the mechanisms I can imagine would render those suggestions ineffective.
I am trying to get a backup of a subdirectory before deleting the parent directory by copying the subdirectory into a different location.
This is how I have done this:
exec { "install_path_exists":
command => "/bin/true",
onlyif => "/usr/bin/test -d ${install_path}",
path => ['/usr/bin','/usr/sbin','/bin','/sbin'],
}
file { "server_backup_dir" :
ensure => 'directory',
path => "${distribution_path}/backup/server",
recurse => true,
source => "file:///${install_path}/repository/deployment/server",
require => Exec["install_path_exists"],
}
Exec checks if the directory exists, and returns true if so. The "server_backup_dir" file resource requires the "install_path_exists" exec to return true if the directory exists.
When the directory does not exist, and "install_path_exists" returns false, "server_backup_dir" executes anyway, and produces the following error.
Error: /Stage[main]/Is/File[server_backup_dir]: Could not evaluate:
Could not retrieve information from environment production source(s)
file:////usr/local/{project_location}/repository/deployment/server
What is wrong with my approach, and how can I fix this? Thanks in advance.
I'll break this up into two parts, what is wrong, and how to fix it.
What is wrong with my approach ...
You are misunderstanding the 'require' line and the nature of relationships in Puppet, and also how Puppet uses the return code of the command executed in an Exec.
When you use any of the four so-called metaparameters for relationships in Puppet - those being require, before, subscribe and notify - you tell Puppet that you want the application of one resource to be ordered in time relative to another. (Additionally, subscribe and notify also respond to refresh events, but that's not relevant here.)
So, when Puppet applies a catalog built from your code, it will firstly apply the Exec resource, i.e. execute the /bin/true command, if and only if the install path exists; and then it will secondly manage the server_backup_dir File resource. Note also that it will apply the File resource irrespective of whether the Exec command actually was executed; the only guarantee being that /bin/true will never be run after the File resource.
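To make that concrete, here is a minimal sketch (with hypothetical resources): the File below is managed regardless of what the Exec's command does, or whether it runs at all; require only fixes the order.
exec { 'check':
  command => '/bin/true',
  onlyif  => '/usr/bin/test -d /some/path',
}
file { '/tmp/example':
  ensure  => file,
  require => Exec['check'],  # an ordering edge only, not a condition
}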
Furthermore, the return code of the command in the Exec functions differently from what you're expecting. An exit status of 0, which is what /bin/true returns, merely tells Puppet to allow the run to continue; compare that to an Exec command returning a non-zero exit status, which would cause Puppet to halt with an error.
Here's a simple demonstration of that:
▶ puppet apply -e "exec { '/usr/bin/false': }"
Notice: Compiled catalog for alexs-macbook-pro.local in environment production in 0.08 seconds
Error: '/usr/bin/false' returned 1 instead of one of [0]
Error: /Stage[main]/Main/Exec[/usr/bin/false]/returns: change from 'notrun' to ['0'] failed: '/usr/bin/false' returned 1 instead of one of [0]
Notice: Applied catalog in 0.02 seconds
For more info, read the Puppet documentation on relationships and ordering carefully. It generally takes a bit of time to get your head around relationships and ordering in Puppet.
how can I fix this?
You would normally use a custom fact like this:
# install_path.rb
Facter.add('install_path') do
setcode do
# return a boolean rather than raw command output
File.directory?('/my/install/path')
end
end
And then in your manifests:
if $facts['install_path'] {
file { "server_backup_dir" :
ensure => 'directory',
path => "${distribution_path}/backup/server",
recurse => true,
source => "file:///my/install/path/repository/deployment/server",
}
}
Consult the docs for more info on writing custom facts and including them in your code base.
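(A custom fact like the one above would normally live at <module>/lib/facter/install_path.rb inside your module, from where it is distributed to agents via pluginsync.)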
Note:
I notice at the end that you reuse $install_path in the source parameter. If your requirement is to have a map of install paths to distribution paths, you can also build a structured fact. Without knowing exactly what you're trying to do, however, I can't be sure how you would write that piece.
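Purely as an illustration (the paths here are hypothetical), a structured fact mapping paths to whether they exist might look like:
# install_paths.rb
Facter.add('install_paths') do
  setcode do
    # build a hash of path => exists?
    Hash[['/opt/app1', '/opt/app2'].map { |p| [p, File.directory?(p)] }]
  end
end
You could then query it in your manifests as $facts['install_paths']['/opt/app1'].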
I'm a puppet beginner - so bear with me :)
I'm trying to write a module that does the following :
Check if a package is installed with the latest version in the repos
If the package needs to be installed, config files will be copied from the Puppet source location to the client, and then the package will be installed
Once the files are copied and the package installed, run the script that will use the config files on the client to apply the necessary settings
Once all of this is done, remove the copied files from the client
I've come up with the following :
class somepackage(
$package_files_base = "/var/tmp",
$package_setup = "/var/tmp/package-setup.sh",
$ndc_file = "/var/tmp/somefile.ndc",
$osd_file = "/var/tmp/somefile.osd",
$nds_file = "/var/tmp/somefile.nds",
$configini_file = "/var/tmp/somefile.ini",
$required_files = ["$package_setup", "$ndc_file", "$osd_file", "$nds_file", "$configini_file"])
{
package { 'some package':
ensure => 'latest',
notify => Exec['Package Setup'],
}
file { 'Package Setup Files':
path => $package_files_base,
ensure => directory,
replace => false,
recurse => true,
source => "puppet:///modules/somepackage/${::domain}",
mode => '0755',
}
exec { 'Package Setup':
command => "$package_setup",
logoutput => true,
timeout => 1800,
require => [ File['Package Setup Files']],
refreshonly => true,
notify => Exec['Remove config files'],
}
exec { 'Remove config files':
path => ['/usr/bin','/usr/sbin','/bin','/sbin'],
command => "rm \"${package_setup}\" \"${ndc_file}\" \"${osd_file}\" \"${nds_file}\" \"${configini_file}\"",
refreshonly => true,
}
}
While this achieves most of what I want to do, I notice that upon rerunning puppet apply, the files, although they had been removed, were recopied.
I can understand why this happens, but I don't know how to code it so that the files get copied ONLY if the package gets updated or installed (e.g. the package wasn't installed, or was old). Otherwise the files will get copied over and over again, every time Puppet runs on the client (every 30 minutes with the default setup). I tried using replace => false to prevent this, but that just means the files won't ever get removed from /var/tmp after the first run of the class, because it only prevents subsequent runs of the class from re-copying the files (from my testing). That does prevent the redundant, repetitive copying - however, I just want the files to be gone the first time!
Is this possible? Head hurts :(
Thanks in advance! We're running Puppet version 3.8.6 on EL7.3.
EDIT: To be clear, this is the bit that I'm struggling with: the resource file { 'Package Setup Files': keeps copying files even though the package isn't updated/installed. How do I prevent this from happening?
Here are some suggestions.
1) Recommendation for a short term solution
Stop trying to clean up those files if you do not need to. Put them in /opt and forget about them. Better still, have Puppet place a README file in there with them that will explain to your future self and to your fellow admins what they are and why they are there.
While I completely understand the desire to clean up, you need to weigh the cost of having a few old files in a directory somewhere against the cost of having complicated logic in the Puppet code that will not make any sense to anyone in a few months.
This is what I would do and in my experience it is also what most Puppet module authors do with these sorts of set up files.
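For example, something like this (the path and wording are purely illustrative):
file { '/opt/somepackage/README':
  ensure  => file,
  mode    => '0644',
  content => "Setup files for somepackage, placed here by Puppet.\nKept for reference; safe to ignore.\n",
}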
2) Consider an orchestration framework
That said, it appears to me that you are trying to use Puppet to do operational tasks, and while it can kind of do operational tasks (via features like ensure => latest etc.), it is really intended to be a configuration management tool.
I recommend people use Puppet to ensure => installed for packages (make sure Puppet can install the app properly if you need to fully rebuild the node); then delegate the problem of applying version upgrades and hotfixes etc outside of Puppet.
There are a few reasons for this.
Puppet is a declarative configuration management system; your Puppet code should define an end-state. Puppet is not like a shell script, where instead of an end-state, you define steps that change the state of a server imperatively, "one step at a time".
The first problem with ensure => latest is philosophical.
latest does not define a single end-state. The behaviour of your code at time X is different from the behaviour at time Y. So your code is not idempotent.
The second problem is practical. You can never solve the problem of RPM updates in a general way using Puppet, because Puppet can never know about all of the RPMs and their dependencies in your system. So, one way or another, you still need a specialised tool for managing the version updates.
So, since you will need a specialised tool for managing the version updates anyway, it is cleaner to draw a clear boundary between the two tools' roles: always use Puppet to manage the configuration and the initial installation; and then always use the other tool to manage the updates.
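In code terms, the recommendation is simply:
package { 'some package':
  ensure => installed, # initial install only; version upgrades are handled outside Puppet
}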
Ok, great. I see in your comments that you already have a Red Hat Satellite server, and you have written:
...some hosts within the Satellite have got an older version of the software within yum. But we don't update this software very often.....maybe once every year.
So, it sounds like you are using Puppet here to work around a problem in the way you are using Satellite. Is it possible to address this by fixing the way you use Satellite? If so, I think that will be cleaner.
Of course, sometimes the right thing to do is use a work-around, and that's why I provided some other options.
3) If you really really want Puppet to clean up those files
Perhaps move the logic inside a shell script. Something like:
class somepackage {
$shell =
'#!/bin/bash
# maybe use wget instead of puppet to get the files
wget http://a.b/c.tgz
tar zxf c.tgz
# install stuff
# clean up stuff
'
file { '/usr/local/bin/installer.sh':
ensure => file,
mode => '0755',
content => $shell,
}
package { 'some package':
ensure => latest,
notify => Exec['installer'],
}
exec { 'installer':
command => '/usr/local/bin/installer.sh',
refreshonly => true,
require => File['/usr/local/bin/installer.sh'],
}
}
The following is a simplified manifest I am running:
package {'ruby2.4':
ensure => installed
}
exec { "gem2.4_install_bundler":
command => "/usr/bin/gem2.4 install bundler",
require => Package['ruby2.4']
}
Puppet apply runs this manifest correctly, i.e. it:
installs the ruby2.4 package (which includes gem2.4)
installs bundler using gem2.4
However, puppet apply --noop FAILS, because under --noop ruby2.4 is never actually installed, and so Puppet cannot find the executable /usr/bin/gem2.4.
My question is if there is a standard way to test a scenario like this with puppet apply --noop? To validate that my puppet manifest is executing correctly?
It occurs to me that I may have to parse the output and validate the order of the executions. If this is the case, is there a standard way/tool for this?
A last resort is a very basic check that the Puppet run at least completes, which can be determined with the --detailed-exitcodes option (a code different from 1).
Thank you in advance
rspec-puppet is the standard tool for that level of verification. It can build a catalog from the manifest (e.g. for a class, defined type, or host) and then you can write tests to verify the contents.
In your case you could verify that the package resource exists, that the exec resource exists, and verify the ordering between them. This would be just as effective as running the agent with --noop mode and parsing the output - but easier and cheaper to run.
rspec-puppet works best with modules, so assuming you follow the setup for your module from the website (adding rspec-puppet to your Gemfile, running rspec-puppet-init), and let's say this is in a class called ruby24, a simple spec in spec/classes/ruby24_spec.rb would be:
require 'spec_helper'
describe 'ruby24' do
it { is_expected.to compile.with_all_deps }
it { is_expected.to contain_package('ruby2.4').with_ensure('installed') }
it { is_expected.to contain_exec('gem2.4_install_bundler').with_command('/usr/bin/gem2.4 install bundler') }
it { is_expected.to contain_exec('gem2.4_install_bundler').that_requires('Package[ruby2.4]') }
end
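Assuming the usual rspec-puppet scaffolding (Gemfile plus spec_helper), you would then run the spec with bundle exec rspec from the module root.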
I wrote a simple module to install a package (BioPerl) on a Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are 3 simple steps in the module, and I checked that it didn't do any of them. Here are the first 2:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
require => Package['wget']
}
Step 2: Extract the archive
exec { 'bioperl-extract':
command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
require => Exec['bioperl-download']
}
Pretty simple. But I have no idea where the problem is, because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me anything about whether the steps did their job properly or not. I know that it didn't download the archive to /usr/local/lib, and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve the module (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
path => "${archive_path}",
source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
ensure => "present"
}
exec { 'bioperl-extract':
command => "sudo /bin/tar zxvf ${archive_name}",
cwd => "${bioperl_target_dir}",
require => File['bioperl-download']
}
A problem: it gives me an error telling me that the source cannot be http://. I see in the docs that they do indeed allow http:// files as the source for the file resource. Maybe I'm using an older version of Puppet?
I want to try out the puppet-archive module, but I'm not sure how I can set it as a required dependency. By that I mean, how can I make sure it's installed first? Do I need to get my module to download it from GitHub and save it to the modules directory? Or is there a way to let Puppet install it automatically? I added it as a dependency to the metadata.json file, but that doesn't install it. I know I can just get my module to download the package, but I was wondering what the best practice for this is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
it { expect(subject).to be_file }
end
describe file('/usr/profile.d/env_file') do
it { expect(subject).to be_file }
its(:content) { is_expected.to match(/env stuff/) }
end
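With a standard serverspec-init layout, these specs are typically run with rake spec against the target host or VM.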
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and an HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
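To sketch what that might look like with the archive module installed (e.g. via puppet module install puppet-archive; the creates path is a guess, and the variables are the ones from your question):
archive { "/tmp/${archive_name}":
  ensure       => present,
  source       => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  extract      => true,
  extract_path => $bioperl_target_dir,
  creates      => "${bioperl_target_dir}/bioperl-live", # hypothetical name of the extracted directory
  cleanup      => true,  # remove the downloaded tarball after extraction
}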
If you have questions on how to use these tools to provide unit and acceptance testing, or have questions on how to refactor your module, then don't hesitate to write followup questions on StackOverflow and we can help you.
I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class - New Relic, for example:
node 'mynode' {
class {'newrelic::server::linux':
newrelic_license_key => '***',
}
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you indeed did install the module such that such a file exists in your modulepath, and make sure that it is readable by the puppetmaster process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
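A quick way to check both things (assuming the default modulepath):
puppet module list
ls -l /etc/puppet/modules/newrelic/manifests/server/linux.pp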
Find out where you are actually installing the new Puppet Forge modules, perhaps using a Unix utility like locate.
Then look in /etc/puppet/puppet.conf at the basemodulepath setting and check that the place where they are installed is in that path.
Here is my basemodulepath
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules