I am trying to run a Docker container using the puppetlabs/docker module. However, when the Puppet agent attempts to run the container, I receive the error
Error: Failed to apply catalog: No such file or directory - docker
It would seem that neither the Docker daemon nor the Docker client is being installed before Puppet attempts to create the container.
An excerpt of the configuration is as follows:
Puppetfile
# frozen_string_literal: true
forge 'https://forge.puppet.com'
# Modules from the Puppet Forge
# ...
mod 'puppetlabs-docker', '4.4.0'
# ...
agent.pp
class profile::runner::agent (
  Enum[present, absent] $ensure = present,
  String $version = undef,
  String $image = undef,
  String $container_name = "${facts['group']}-agent",
  Array[String] $container_environment = [],
) {
  class { 'docker':
    version => $version,
  }

  # ...

  docker::run { $container_name:
    ensure  => $ensure,
    image   => $image,
    env     => $container_environment,
    net     => 'host',
    restart => 'unless-stopped',
  }

  # ...
}
My understanding is that this configuration of the puppetlabs/docker module is supposed to ensure Docker is installed before any container is started. I considered filing a bug report against the module itself, but an issue this common would surely have been reported already, so I concluded that I must be doing something incorrect.
I have tried the usual metaparameters, but none seem to have any effect; the result is the same error. For example, I have tried
docker::run { $container_name:
  ensure  => $ensure,
  image   => $image,
  env     => $container_environment,
  net     => 'host',
  restart => 'unless-stopped',
  require => Class['docker'], # and also Package['docker']
}
It may be worth mentioning that the Puppet agent is running on Rocky Linux, and therefore the RedHat OS family. If I remove the docker::run from the configuration and run the Puppet agent, the catalog is applied successfully, but of course the container does not run. Adding the docker::run back to the configuration and running the agent again then starts the container successfully. This is what has indicated to me that there is a dependency issue that I have not been able to resolve.
Since neither chaining arrows nor metaparameters seem to have any effect in this specific situation, I have had to work around the issue with a guard clause that checks whether Docker is installed, using the facts provided by the puppetlabs/docker module:
if $facts['docker_client_version'] != undef {
  docker::run { $container_name:
    ensure  => $ensure,
    image   => $image,
    env     => $container_environment,
    net     => 'host',
    restart => 'unless-stopped',
  }
}
This is not ideal, as it requires the catalog to be applied twice before the container will be run, but it does solve the problem.
Declared classes are not automatically ordered relative to each other or relative to resources declared in the same context. There are good reasons for that, but they are tangential to the question.
However, you can use the same techniques to impose relative ordering on classes that you can use for resources: the before, require, notify, and subscribe metaparameters, and the chaining arrows. For your particular case, you might also be able to use the require function (even if you also use a resource-like declaration of class docker, but in that case the resource-like declaration must precede the require).
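As a rough sketch of that require-function variant, reusing the class and parameters from the question (untested, so treat it as an illustration rather than verified code):
class profile::runner::agent (
  # ... parameters exactly as in the question ...
) {
  # Resource-like declaration first, so the version parameter can still be passed ...
  class { 'docker':
    version => $version,
  }

  # ... then require() makes class docker a dependency of everything contained
  # in this class, including the docker::run below.
  require docker

  docker::run { $container_name:
    ensure  => $ensure,
    image   => $image,
    env     => $container_environment,
    net     => 'host',
    restart => 'unless-stopped',
  }
}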
Alternatively, this is a pretty good way to make sure that class docker is applied before every docker::run instance, wherever declared:
class { 'docker':
  version => $version,
}
-> Docker::Run<| |>
That does, however, have the side effect of realizing any virtual docker::run instances you might have declared. If that's a problem then go with one of the other alternatives.
Related
I am trying to run a PowerShell script just before the package resource starts installing an MSI installer. The script cleans up certain unmanaged resources that would block the installer.
What I tried is,
file { 'c:\Test\cleanup.ps1.file':
  path               => 'c:\Test\cleanup.ps1',
  ensure             => present,
  source             => 'puppet:///modules/test/cleanup.ps1',
  source_permissions => ignore,
}

exec { 'c:\Test\cleanup.ps1.file':
  refreshonly => true,
  provider    => powershell,
}

package { 'ServiceInstaller':
  ensure  => $version,
  require => [Package['Service1Installer'], Exec['c:\Test\cleanup.ps1']],
}
But the require attribute doesn't fire the command. Could someone please help me achieve this behaviour?
There is the notify metaparameter, which would send the exec a notification, but that happens after the installation. What I need happens before the installation. Thanks in advance.
The trick to getting this working properly is that something has to write c:\Test\cleanup.ps1.file only when you need the script to be triggered to run, and the exec resource has to subscribe to it. Otherwise, if that file doesn't change and the exec isn't subscribed, the exec resource does not think it needs to run, so the Puppet run completes but the script never fires.
Based on the code you pasted here, it looks like you're specifying $version in the class, and I'm guessing you're updating it explicitly, either in the class or in Hiera, when you want to upgrade. If so, you could have the c:\Test\cleanup.ps1.file resource write an inline template containing just the version number. When you update the version in the class/Hiera/wherever you are setting it, the file would change and the exec would kick off.
This would look something like:
file { 'c:\Test\cleanup.ps1.file':
  path               => 'c:\Test\cleanup.ps1',
  ensure             => present,
  content            => inline_template("<%= @version %>"),
  source_permissions => ignore,
}

exec { 'c:\Test\cleanup.ps1.file':
  refreshonly => true,
  provider    => powershell,
  subscribe   => File['c:\Test\cleanup.ps1.file'],
}

package { 'ServiceInstaller':
  ensure  => $version,
  require => [Package['Service1Installer'], Exec['c:\Test\cleanup.ps1.file']],
}
This assumes you were just using cleanup.ps1.file as a trigger for the exec. If there is content in that file that you need for other purposes, then leave that declaration as you had it, make another file declaration as a trigger file with just the version in an inline template, and subscribe the exec to that one instead of cleanup.ps1.file.
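In that scenario, the separate trigger file might look something like the sketch below. The trigger path c:\Test\installer-version.txt is just an example name, and the explicit command assumes the powershell provider will run the script when given its path:
# The original cleanup.ps1 file resource from the question stays as it is.

# Hypothetical trigger file that only carries the version number:
file { 'c:\Test\installer-version.txt':
  ensure  => present,
  content => inline_template("<%= @version %>"),
}

exec { 'c:\Test\cleanup.ps1.file':
  command     => 'c:\Test\cleanup.ps1',
  provider    => powershell,
  refreshonly => true,
  subscribe   => File['c:\Test\installer-version.txt'],
}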
I'm a puppet beginner - so bear with me :)
I'm trying to write a module that does the following:
Check whether a package is installed with the latest version available in the repos
If the package needs to be installed, copy config files from the Puppet source location to the client, then install the package
Once the files are copied and the package is installed, run the script that uses the config files on the client to apply the necessary settings
Once all of this is done, remove the copied files from the client
I've come up with the following:
class somepackage (
  $package_files_base = "/var/tmp",
  $package_setup      = "/var/tmp/package-setup.sh",
  $ndc_file           = "/var/tmp/somefile.ndc",
  $osd_file           = "/var/tmp/somefile.osd",
  $nds_file           = "/var/tmp/somefile.nds",
  $configini_file     = "/var/tmp/somefile.ini",
  $required_files     = ["$package_setup", "$ndc_file", "$osd_file", "$nds_file", "$configini_file"],
) {
  package { 'some package':
    ensure => 'latest',
    notify => Exec['Package Setup'],
  }

  file { 'Package Setup Files':
    path    => $package_files_base,
    ensure  => directory,
    replace => false,
    recurse => true,
    source  => "puppet:///modules/somepackage/${::domain}",
    mode    => '0755',
  }

  exec { 'Package Setup':
    command     => "$package_setup",
    logoutput   => true,
    timeout     => 1800,
    require     => [File['Package Setup Files']],
    refreshonly => true,
    notify      => Exec['Remove config files'],
  }

  exec { 'Remove config files':
    path        => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
    command     => "rm \"${package_setup}\" \"${ndc_file}\" \"${osd_file}\" \"${nds_file}\" \"${configini_file}\"",
    refreshonly => true,
  }
}
While this achieves most of what I want to do, I notice that upon rerunning puppet apply the files, although they had been removed, were copied again.
I can understand why this happens, but I don't know how to write it so that the files get copied ONLY if the package gets updated or installed (e.g. the package wasn't installed, or was old). Otherwise the files will get copied over and over again every time Puppet runs on the client, which is every 30 minutes in the default setup. I tried using replace => false to prevent this, but that just means the files won't ever get removed from /var/tmp after the first run of the class, because it only prevents subsequent runs of the class from re-copying the files (from my testing). That does prevent the redundant, repetitive copying, but I just want the files to be gone the first time!
Is this possible? Head hurts :(
Thanks in advance! We're running Puppet version 3.8.6 on EL7.3.
EDIT: To be clear, this is the bit that I'm struggling with: the resource file { 'Package Setup Files': }. It keeps copying the files even though the package isn't being updated or installed. How do I prevent this from happening?
Here are some suggestions.
1) Recommendation for a short-term solution
Stop trying to clean up those files if you do not need to. Put them in /opt and forget about them. Better still, have Puppet place a README file in there with them that will explain to your future self and to your fellow admins what they are and why they are there.
While I completely understand the desire to clean up, you need to weigh the cost of having a few old files in a directory somewhere against the cost of having complicated logic in the Puppet code that will not make any sense to anyone in a few months.
This is what I would do and in my experience it is also what most Puppet module authors do with these sorts of set up files.
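For example, such a README could be managed with something as simple as the following sketch (the path and wording are only placeholders):
file { '/opt/somepackage/README.txt':
  ensure  => file,
  mode    => '0644',
  content => "These files were placed here by Puppet (somepackage module).\nThey are the one-off inputs for package-setup.sh and are kept for reference.\n",
}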
2) Consider an orchestration framework
That said, it appears to me that you are trying to use Puppet to do operational tasks, and while it can kind of do operational tasks (via features like ensure => latest, etc.), it is really intended to be a configuration management tool.
I recommend people use Puppet to ensure => installed for packages (make sure Puppet can install the app properly if you need to fully rebuild the node), and then delegate the problem of applying version upgrades, hotfixes etc. to tooling outside of Puppet.
There are a few reasons for this.
Puppet is a declarative configuration management system; your Puppet code should define an end-state. Puppet is not like a shell script, where instead of an end-state, you define steps that change the state of a server imperatively, "one step at a time".
The first problem with ensure => latest is philosophical.
latest does not define a single end-state. The behaviour of your code at time X is different from the behaviour at time Y. So your code is not idempotent.
The second problem is practical. You can never solve the problem of RPM updates in a general way using Puppet, because Puppet can never know about all of the RPMs and their dependencies in your system. So, one way or another, you still need a specialised tool for managing the version updates.
So, since you will need a specialised tool for managing the version updates anyway, it is cleaner to draw a clear boundary between the two tools' roles: always use Puppet to manage the configuration and the initial installation; and then always use the other tool to manage the updates.
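In concrete terms, that boundary might look like the following sketch (the package name is a placeholder):
# Puppet only guarantees the package is present, so a rebuilt node converges
# to a working state ...
package { 'somepackage':
  ensure => installed,
}
# ... while version upgrades and hotfixes are rolled out by the other tool
# (for example Satellite/yum), not by ensure => latest.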
Ok, great. I see in your comments that you already have a Red Hat Satellite server, and you have written:
...some hosts within the Satellite have got an older version of the
software within yum. But we don't update this software very
often.....maybe once every year.
So, it sounds like you are using Puppet here to work around a problem in the way you are using Satellite. Is it possible to address this by fixing the way you use Satellite? If so, I think that will be cleaner.
Of course, sometimes the right thing to do is use a work-around, and that's why I provided some other options.
3) If you really really want Puppet to clean up those files
Perhaps move the logic inside a shell script. Something like:
class somepackage {
  $shell = '#!/bin/bash
# maybe use wget instead of puppet to get the files
wget http://a.b/c.tgz
tar zxf c.tgz
# install stuff
# clean up stuff
'

  file { '/usr/local/bin/installer.sh':
    ensure  => file,
    mode    => '0755',
    content => $shell,
  }

  package { 'some package':
    ensure => latest,
    notify => Exec['installer'],
  }

  exec { 'installer':
    command     => '/usr/local/bin/installer.sh',
    refreshonly => true,
    require     => File['/usr/local/bin/installer.sh'],
  }
}
The following is a simplified manifest I am running:
package { 'ruby2.4':
  ensure => installed,
}

exec { "gem2.4_install_bundler":
  command => "/usr/bin/gem2.4 install bundler",
  require => Package['ruby2.4'],
}
Puppet apply runs this manifest correctly, i.e. it:
installs the ruby2.4 package (which includes gem2.4)
installs bundler using gem2.4
However, puppet apply --noop FAILS because Puppet cannot find the executable '/usr/bin/gem2.4', since ruby2.4 is not actually installed under --noop.
My question is whether there is a standard way to test a scenario like this with puppet apply --noop, to validate that my Puppet manifest executes correctly.
It occurs to me that I may have to parse the output and validate the order of the executions. If this is the case, is there a standard way or tool for this?
A last resort is a very basic check that the Puppet run at least completes, which can be determined with the --detailed-exitcodes option (an exit code other than 1).
Thank you in advance
rspec-puppet is the standard tool for that level of verification. It can build a catalog from the manifest (e.g. for a class, defined type, or host) and then you can write tests to verify the contents.
In your case you could verify that the package resource exists, that the exec resource exists, and verify the ordering between them. This would be just as effective as running the agent with --noop mode and parsing the output - but easier and cheaper to run.
rspec-puppet works best with modules, so assuming you follow the setup for your module from the website (adding rspec-puppet to your Gemfile, running rspec-puppet-init), and let's say this is in a class called ruby24, a simple spec in spec/classes/ruby24_spec.rb would be:
require 'spec_helper'

describe 'ruby24' do
  it { is_expected.to compile.with_all_deps }
  it { is_expected.to contain_package('ruby2.4').with_ensure('installed') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').with_command('/usr/bin/gem2.4 install bundler') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').that_requires('Package[ruby2.4]') }
end
I have an existing Puppet manifest which installs a bunch of php5 packages and, only after they are installed, restarts Apache. The simplified manifest is something like
package { 'apache-php':
  name   => $modules,
  ensure => installed,
}

exec { 'enable-mod-php':
  command     => $enable_cmd,
  refreshonly => true,
}

Package['apache-php'] ~> Exec['enable-mod-php'] ~> Service['apache']
After a system upgrade, catalog runs have started failing with the following error message:
Error: Failed to apply catalog: Parameter name failed on Package[apache-php]: Name must be a String not Array at /etc/puppet/modules/apache/manifests/php.pp:22
I found out that I was using an undocumented feature/bug: Puppet 3.4.0 name as an array in package.
However, I'm having a hard time finding out how to redo my setup after the upgrade. How can I rewrite this manifest so that it works with more recent puppet versions?
Instead of using an arbitrary title for the package resource in your example (e.g. apache-php) and passing the array via the name parameter, you can do the following:
$modules = ['foo', 'bar', 'baz']

package { $modules:
  ensure => present,
  notify => Exec['enable-mod-php'],
}

exec { 'enable-mod-php':
  command     => $enable_cmd,
  refreshonly => true,
  notify      => Service['apache'],
}

service { 'apache':
  # your apache params
}
I haven't looked at the code for the package provider, but can verify that the above works. You should also note that chaining arrows are all well and good, but according to the Puppet style guide, metaparameters are preferred.
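If you would rather keep the chaining-arrow style from your original manifest, a resource reference can also take an array of titles, so on recent Puppet versions something like this sketch should work as well:
$modules = ['foo', 'bar', 'baz']

package { $modules:
  ensure => present,
}

# Package[$modules] refers to every package declared above, so the original
# chaining style carries over unchanged:
Package[$modules] ~> Exec['enable-mod-php'] ~> Service['apache']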
Hope this helps.
I need to do a two-step installation of a CentOS 6 host with Puppet (currently using puppet apply) and got stuck. I'm not even sure it's currently possible.
Step 1: set up the base system, e.g. hosts, ntp, mail and some driver stuff.
(reboot required)
Step 2: set up a custom service.
Can this be done in a smooth way? I'm not very familiar with the Puppet environment yet.
First off, I very much doubt that any setup steps on a CentOS machine strictly require a reboot. It is usually sufficient to restart the right set of services to make all settings take effect.
Anyway, basic approach to this type of problem could be to
Define a custom fact that determines whether a machine is ready to receive the final configuration steps (Step 2 in your question)
Protect the pertinent parts of your manifest with an if condition that uses that fact value.
You may want to create a file first, then delete it when you are done installing the base system (ntp in the example below). For example:
exec { '/tmp/reboot':
  path    => "/usr/bin:/bin:/sbin",
  command => 'touch /tmp/reboot',
  onlyif  => 'test ! -f /tmp/rebooted',
}

service { 'ntp':
  require => Exec['/tmp/reboot'],
  # ...
}

exec { 'reboot':
  command => "mv /tmp/reboot /tmp/rebooted; reboot",
  path    => "/usr/bin:/bin:/sbin",
  onlyif  => "test -f /tmp/reboot",
  require => Service['ntp'],
  creates => '/tmp/rebooted',
}
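To illustrate the fact-based variant from the first two points above, here is a rough sketch. It assumes a custom/external fact named base_setup_done that you provide yourself (for example via a file in Facter's facts.d directory; the exact path depends on your Facter version), and profile::base / profile::custom_service are placeholder class names:
include profile::base   # step 1: hosts, ntp, mail, driver setup, reboot handling

# Written as part of the base setup; an external fact that the next run can see.
# The facts.d path is an assumption; adjust it for your Facter version.
file { '/etc/facter/facts.d/base_setup_done.txt':
  ensure  => file,
  content => "base_setup_done=true\n",
  require => Class['profile::base'],
}

# Step 2 is only applied once a previous run (and its reboot) has completed.
if $::base_setup_done == 'true' {
  include profile::custom_service
}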