Puppet: reference environment name from .pp file? - puppet

I have an external node classifier that manages the environment for each device in my Puppet fleet.
When a device checks in I'm updating its configuration file so it knows what environment it's in:
ini_setting { 'set local environment':
  ensure  => present,
  path    => '/etc/puppetlabs/puppet/puppet.conf',
  section => 'agent',
  setting => 'environment',
  value   => 'environment_name',
}
I currently have each r10k branch hard-coding the name.
Instead I'd like to be able to use the same code block on all environments, something like:
ini_setting { 'set local environment':
  ...
  value => $environment_name,
}

When a device checks in I'm updating its configuration file so it knows what environment it's in:
You do know that you don't need to do that for Puppet's sake, right? If you are (properly; see below) using an ENC to control nodes' environments then that overrides anything the nodes self-report, so you could do without nodes being locally configured to know their own environments at all.
Instead I'd like to be able to use the same code block on all environments, something like:
ini_setting { 'set local environment':
  ...
  value => $environment_name,
}
The correct way for an ENC to specify a node's environment to Puppet is by setting the environment key in its output for that node. This is how an ENC directly puts the node into the specified environment. Like any other top-level parameter emitted by the ENC, however, you can reference its value as a top-scope variable. Thus, if you want to update a node's Puppet configuration to explicitly specify (after the fact) the environment that the ENC assigns to the node, then you can use that, much as you propose:
ini_setting { 'set local environment':
  ...
  value => $::environment,
}

Related

Copy overwriting a file only on RPM update

I have the following problem with my Puppet installation:
I would like to copy (overwrite) a file only if a new version of the RPM package was installed.
I have something like this:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
  notify => File[$my_file],
}
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  replace => true, # Overwrite (default behaviour). Needed in case a new RPM was installed.
}
The problem is that the file resource also gets executed if no new version of the RPM was installed. This happens because I change $my_file afterwards using file_line:
file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  line   => ' <deployment name="My.jar" runtime-name="My.jar" enabled="false">',
  match  => ' <deployment name="My.jar" runtime-name="My.jar">',
}
This change to the content of $my_file triggers copying a fresh version from the template on each and every Puppet run.
I could add replace => false to my file resource, but this would break any further updates...
Long story short: I have the following loop
Copy file -> change file -> copy file -> ...
How can I break this loop?
UPDATE:
Clarification:
The "file_line" define is executed optionally, controlled by a Puppet hiera-property and so the "enabled" part can't be included in the RPM.
The entire file can't be turned into a template (IMHO). The problem: Puppet module must be able to install different (future) versions of the file.
The problem remains unsolved for the time being.
I think the problem here is that you're trying to manage $my_file using both the file and file_line resource types, and this is going to cause the file to change during catalog application.
Pick one or the other, manage it as a template or by file line.
I suspect what's happening during the Puppet run is that the file resource changes $my_file to look like this:
<deployment name="My.jar" runtime-name="My.jar">
because that's what is in the template. Then the file_line resource changes it to:
<deployment name="My.jar" runtime-name="My.jar" enabled="false">
Then on the next run the exact same thing happens: file changes $my_file to match the template, and then file_line changes it to modify that line.
I would remove the notify => File[$my_file]; it's not actually doing anything. You're defining the desired state in code, so if that file changes for any reason, manual change or RPM update, Puppet is going to bring that file back into the desired state during the run. You may want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
}
This ensures the file's desired state is enforced after the package resource, so if the package changes the file, it will be corrected in the same run.
https://puppet.com/docs/puppet/7.4/lang_relationships.html
You may also want to consider:
file { $my_file:
  ensure  => present,
  source  => $my_file_template,
  require => Package['my-rpm'],
  notify  => Service['my-service'],
}
So the service provided by the RPM restarts when the config file is changed.
The problem is that the file resource also gets executed if no new version of the RPM was installed. This happens because I change $my_file afterwards using file_line.
Yes, File resources in a node's catalog are applied on every run. In fact, it's best to take the view that every resource that makes it into a node's catalog is applied on every run. Resources' attributes affect what applying them means and / or what it means for them to be in sync, not whether they are applied at all. In the case of File, for example, setting replace => false says that as long as the file initially exists, its content is in sync (and therefore should not be modified), whereas replace => true says that the file's content is in sync only if it is an exact match to the specified source or content.
Generally speaking, it does not work well to manage the same or overlapping physical resources via multiple Puppet resources, and that's what you're running into here. The most idiomatic approach when you run into a problem with that is often to write a custom resource type with which to manage the target object in detail. But in this case, it looks like you could work around the issue by using an Exec to perform the one-time post-update copy:
package { 'my-rpm':
  ensure => $version,
  source => $rpm_file,
}
~> exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  # this is important:
  refreshonly => true,
}
-> file { $my_file:
  ensure  => 'file',
  replace => false,
  # no source or content
  owner   => 'root', # or whatever
  group   => 'root', # or whatever
  mode    => '0644',
  # ...
}
-> file_line { 'disable_my_service':
  ensure => present,
  path   => $my_file,
  # ...
}
You can, of course, use relationship metaparameters instead of the chaining arrows if you prefer or have need.
That approach gives you:
management of the package via the package manager;
copying the packaged default file to the target file only when triggered by the package being updated (by Puppet -- you won't get this if the package is updated manually);
managing properties of the file other than its contents via the File resource; and
managing a specific line of the file's contents via the File_line resource.
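For completeness, the metaparameter version mentioned above might look roughly like this (same resources as before, with the non-relationship attributes trimmed):
exec { "Start with default ${my_file}":
  command     => "cp '${my_file_template}' '${my_file}'",
  refreshonly => true,
  subscribe   => Package['my-rpm'],
}
file { $my_file:
  ensure  => 'file',
  replace => false,
  require => Exec["Start with default ${my_file}"],
  # ...
}
file_line { 'disable_my_service':
  ensure  => present,
  path    => $my_file,
  require => File[$my_file],
  # ...
}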

Best practices for managing multiple environment variables in production

Although I am not familiar with DevOps best practices, I am trying to come up with a reliable and efficient method for managing multiple variables in production. The following represents my current approach:
/
|ENV_VAR.sh
|--/api1
|--/staging.api1
|--/api2
|--/staging.api2
Where:
ENV_VAR.sh
### API 1 variables ###
export API1_VAR_1=foo
export API1_VAR_2=foo2
export API1_STAG_VAR_1=foo_stag
export API1_STAG_VAR_2=foo2_stag2
### API 2 variables ###
export API2_VAR_1=foo
export API2_VAR_2=foo2
export API2_STAG_VAR_1=foo_stag
export API2_STAG_VAR_2=foo2_stag2
APIs 1 and 2 are two Node.js-based apps running on the same server using a reverse-proxy configuration.
If nothing goes bad with the server (e.g. an unexpected shutdown), I just have to (re)set the variables once in a while via source ENV_VAR.sh in order to make sure that new variables are defined.
Before proceeding with this approach, I would like to know whether it is correct at all, or if it has a big flaw.
If this approach is alright, how can I automatically (re)source the environment variables from package.json whenever a new version of either app is deployed (just to guarantee that the variables are still defined)?
Thanks in advance.
I like using Loren West's config package for these configuration parameters. I happen to like to extend it with the properties package: that way I don't have to put parameters in valid, comment-free, JSON format. JSON5 also helps solve the readability problem, but I haven't tried it.
Why do I like this?
It gives a structured way of dealing with development / test / staging / production environments. It keys off the NODE_ENV environment variable, which of course has values like development and production.
All properties files go into a single directory, typically ./config. Your production krewe can tell what they're looking at. default.properties, development.properties and production.properties are the names of typical files.
Most configuration parameters don't have to be secret, and therefore they can be committed to your repository.
Secrets (passwords, connection strings, API keys, etc) can be stored in local.properties files placed into ./config by your deployment system. (Mention local.properties in your .gitignore file.)
Secrets can also be loaded from environment variables, named in a file called ./config/custom_environment_variables.json.
It works nicely with pm2.
This is really easy to configure.
Your files:
default.properties (used when not overridden by another file)
[API1]
VAR_1 = foo
VAR_2 = foo2
[API2]
VAR_1 = foo
VAR_2 = foo_for_api2
staging.properties
[API1]
VAR_1 = foo_stag
VAR_2 = foo2_stag2
[API2]
VAR_1 = foo_stag
VAR_2 = foo2_stag2
custom_environment_variables.json
{
  "API1" : {
    "password": "API1_PASS"
  },
  "API2" : {
    "password": "API2_PASS"
  }
}
Your nodejs program:
const config = require( 'config' )
require( 'properties' )
const appConfig = config.get( 'API1' )
const var1 = appConfig.VAR_1
const password = appConfig.password
Then you run your program with API1_PASS=yaddablah nodejs program.js and you get all your configs.
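If you want the staging.properties overrides instead, select them through the environment the config package keys off, e.g. NODE_ENV=staging API1_PASS=yaddablah node program.js (assuming NODE_ENV is what selects the environment in your setup).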

How can I use Foreman host groups with Puppet?

I have this manifest:
$foremanlogin = file('/etc/puppetlabs/code/environments/production/manifests/foremanlogin.txt')
$foremanpass  = file('/etc/puppetlabs/code/environments/production/manifests/foremanpass.txt')
$query = foreman({
  foreman_user  => "$foremanlogin",
  foreman_pass  => "$foremanpass",
  item          => 'hosts',
  search        => 'hostgroup = "Web Servers"',
  filter_result => 'name',
})
$quoted = regsubst($query, '(.*)', '"\1"')
$query6 = join($quoted, ",")
notify { "The value is: ${query6}": }
node ${query6} {
  package { 'atop':
    ensure => 'installed',
  }
}
When I execute this on an agent I get this error:
Server Error: Could not parse for environment production: Syntax error at ''
Error in my node block
node ${query6} {
  package { 'atop':
    ensure => 'installed',
  }
}
I see the correct output from notify; my variable looks like this:
"test-ubuntu1","test-ubuntu2"
The variable is in the correct node manifest format.
I don't understand what's wrong; the variable query6 is correct.
How do I fix that?
I just want to apply this manifest to a Foreman host group. How do I do this right?
On the Puppet side, you create classes describing how to manage appropriate subunits of your machines' overall configuration, and organize those classes into modules. The details of this are far too broad to cover in an SO answer -- it would be analogous to answering "How do I program in [language X]?".
Having prepared your classes, the task is to instruct Puppet which ones to assign to each node. This is called "classification". Node blocks are one way to perform classification. Another is external node classifiers (ENCs). There are also alternatives based on ordinary top-level Puppet code in your site manifest. None of these are exclusive.
If you are running Puppet with The Foreman, however, then you should configure Puppet to use the ENC that Foreman provides. You then use Foreman to assign (Puppet) classes to nodes and / or node groups, and Foreman communicates the details to Puppet via its ENC. That does not require any classification code on the Puppet side at all.
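On the Puppet server side, that typically amounts to pointing Puppet at the node.rb ENC script that Foreman ships; the exact path depends on your installation, but the puppet.conf settings look something like:
[master]
  node_terminus  = exec
  external_nodes = /etc/puppetlabs/puppet/node.rb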
See also How does host groups work with foreman?

Checking for a custom file using Puppet

Part of my Puppet manifest checks for the existence of a custom sshd_config. If one is found, I use that. If not, I use my default. I'm just wondering if there is a more "Puppet" way of doing this.
if file("/etc/puppetlabs/puppet/files/${::fqdn}/etc/ssh/sshd_config", '/dev/null') != '' {
  $sshd_config_source = "puppet:///private/etc/ssh/sshd_config"
} else {
  $sshd_config_source = "puppet:///public/etc/ssh/sshd_config"
}
file { '/etc/ssh/sshd_config':
  ensure => 'present',
  mode   => '600',
  source => $sshd_config_source,
  notify => Service['sshd'],
}
This code works, but it's a little odd: for the file() function I have to give the full path on the Puppet master, but when assigning $sshd_config_source I have to use the Puppet fileserver path (puppet:///private/etc...).
Is there a better way of doing this?
It's a little-known feature of the file type that you can supply multiple source values.
From the docs:
Multiple source values can be specified as an array, and Puppet will use the first source that exists. This can be used to serve different files to different system types:
file { "/etc/nfs.conf":
source => [
"puppet:///modules/nfs/conf.$host",
"puppet:///modules/nfs/conf.$operatingsystem",
"puppet:///modules/nfs/conf"
]
}
So you should just specify both the specific and the generic file URL, in that order, and Puppet will do the right thing for you.
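Applied to your case, that might look something like the sketch below. The exact puppet:/// URLs are assumptions based on your snippet, in particular that your private mount point exposes a per-node path matching your file() check:
file { '/etc/ssh/sshd_config':
  ensure => 'present',
  mode   => '600',
  source => [
    "puppet:///private/${::fqdn}/etc/ssh/sshd_config",
    'puppet:///public/etc/ssh/sshd_config',
  ],
  notify => Service['sshd'],
}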

How to use return value from a Puppet exec?

How can I make the logic below work? My aim is to compare the value of the custom fact $environment with the content of the file /etc/facter/facts.d/oldvalue.
If the custom fact $environment is not equal to the content of the file /etc/facter/facts.d/oldvalue, then execute the following code:
exec { 'catenvchange':
  command => "/bin/cat /root/oldvalue",
}
if $environment != exec['catenvchange'] { #code# }
Exec resources do not work that way. In fact, no resource works that way, or any way remotely like that. Moreover, the directory /etc/facter/facts.d/ serves a special purpose, and your expectation for how it might be appropriate to use a file within is not consistent with that purpose.
What you describe wanting to do looks vaguely like setting up an external fact and testing its value. If you drop an executable script named /etc/facter/facts.d/anything by some means (manually, plugin sync, a File resource, ...) then that script will be executed before each Puppet run as part of the process of gathering node facts. The standard output generated by the script is parsed for key=value pairs, each defining a fact name and its value. The facts so designated, such as one named "last_environment", will be available during catalog building. You could then use it like so:
if $::environment != $::last_environment {
  # ...
}
Update:
One way to use this mechanism to memorialize the value that a given fact, say $::environment, has on one run, so that it can be read back on the next run, would be to declare a File resource managing an external fact script. For example:
file { '/etc/facter/facts.d/oldvalues':
  ensure  => 'file',
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  content => "#!/bin/bash\necho 'last_environment=${::environment}'\n",
}
