Is it possible to test, in a puppet manifest, for a dependency on the compiling node (the master or the applying node in case of a masterless setup)?
I've written a profile manifest for the base nodes in my network. All nodes should include this profile, including the Puppet masters. Some parts of this profile use PuppetDB, which gets installed via a puppetmaster profile manifest. I need a way to conditionally skip the parts of the base profile that use PuppetDB until it is installed.
Yes We Can :-)
I think your question alludes to the fact that Facter only gathers information about the agent node, rather than the master.
To retrieve state from the master, you can use the generate function like this:
if generate('/usr/local/bin/is-puppetdb-installed') =~ /^yes/ {
  $puppetdb_installed = true
}
You will need to write a generator script that produces appropriate output.
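For example, a minimal generator script might look like this (the path matches the manifest above, but the detection method is an assumption; adapt it to how PuppetDB is actually installed on your master):

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/is-puppetdb-installed
# Prints "yes" if PuppetDB appears to be installed on the compiling node,
# "no" otherwise, so the manifest's regex match on /^yes/ works.
if command -v puppetdb >/dev/null 2>&1; then
    echo "yes"
else
    echo "no"
fi
```

Remember that generate runs on the node compiling the catalog, so in a master/agent setup this checks the master, which is exactly what the question asks for.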
Is there a limitation of relationships?
We have several Puppet Modules which depend on each other (or at least depend on their packages).
I am asking because I want to subscribe some services so that they restart when a dependency is updated.
Problem:
Error: Failed to apply catalog: Could not find dependency Package[shibbolethsp] for Package[httpd] at /etc/puppetlabs/code/environments/development/modules/apache/manifests/instance.pp:39
Modules:
# Module someco-httpd, init.pp
package { 'httpd':
  ...
  require => Package['openssl', 'shibbolethsp'], # can find openssl but NOT shibbolethsp
}
# Module someco-openssl, init.pp
package { 'openssl':
  ...
}
# Module someco-shibbolethsp, init.pp
package { 'shibbolethsp':
  ...
}
The resource Package[shibbolethsp] IS present because if I remove the package and run puppet again I can see that it gets installed, but if I also want to configure Apache (which requires Package[shibbolethsp] to function properly) Puppet fails.
So the resource is present, but Puppet can't resolve the relationship properly, I guess? The same relationship to Package[openssl] works as expected, and Apache restarts if openssl is updated to a new version ...
Is this an ordering/multithreading problem? One relationship works, the other doesn't ...
The problem is cross-module dependencies. Resources in other modules live in a different namespace than the current module, so if you depend on a resource from another module you have to use the full path, e.g. Other_module::Other_class_or_defined_type['bla'], or use require other_module in your init.pp to ensure the right ordering!
NOTE: in site.pp you have to declare the resources in the right order!
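A minimal sketch of the require-in-init.pp pattern (module names come from the question; the class bodies are assumptions):

```puppet
# Module someco-httpd, manifests/init.pp
class httpd {
  # Declare the other modules' classes and order them before this one,
  # so their Package resources are guaranteed to be in the catalog.
  require openssl
  require shibbolethsp

  package { 'httpd':
    ensure => installed,
  }
}
```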
The resource Package[shibbolethsp] IS present because if I remove the package and run puppet again I can see that it gets installed,
Your observation does not support the conclusion.
For one thing, it is entirely possible for a package to be installed as a result of a Puppet run even though no Package resource for it is included in the catalog. This happens all the time, in fact. It is driven by dependencies expressed in the packages themselves, where the package manager (Yum, apt, etc.) identifies the dependencies of a package it is installing and arranges to install those, too. Puppet has no insight into that.
For another thing, it is entirely possible for Package[shibbolethsp] to be declared in the catalog for one node, but not in the catalog for a different node. Naturally, if you uninstall shibbolethsp from a node of the first kind, then Puppet will reinstall it on its next run. In no way does that demonstrate that the package is declared in a different node's catalog.
but if I also want to configure Apache (which requires Package[shibbolethsp] to function properly) Puppet fails.
Which tells me that no, you are not declaring Package[shibbolethsp] in the affected node's catalog, your protestations to the contrary notwithstanding.
So the resource is present but Puppet can't resolve them properly I guess? The same relationship to Package[openssl] works as expected and Apache restarts if openssl updated to a new version ...
I see no reason to think that either relationship fails to work as should be expected, but I suspect you have incorrect expectations.
In the first place, Puppet resource and class relationships are about order of application dependencies. E.g. File[/etc/foo.conf] needs to be ensured up to date before Service[foo] is managed because otherwise the foo service may not be managed into the correct state. This is largely, albeit not wholly, separate from functional dependency between managed components.
In the second place, I think you are assuming that your require relationships between Package resources will cause the required Packages to be declared in the event that their requirer is declared. This is completely incorrect. Again, Puppet resource relationships are about order of application. Puppet cannot autodeclare required resources because it relies on you to tell it what properties they have, and also because that would produce a serious risk of duplicate declarations.
Overall, it is rarely useful to declare relationships between Package resources at the Puppet level, as functional dependencies between packages are best handled at the package / package manager level, and there typically aren't any other bona fide order-of-application dependencies between pairs of packages.
If you want shibboleth to be on machines that get Apache, then you need to ensure that the appropriate class is declared. You may also have some order of application considerations at a different level than the packages -- for example, you may need to ensure that Shibboleth is installed before you manage the Apache service, and perhaps also you want to restart the service if ever the shibboleth package or configuration is updated. You would arrange for that best at the class level, not in individual resource declarations.
For example the httpd module class of module someco-httpd might include something like this:
# Nodes to which this class is applied require class ::shibbolethsp to be applied first
require '::shibbolethsp'
Unlike resources' require metaparameter, that does cause the named class, shibbolethsp to be evaluated (presumably producing a declaration of Package[shibbolethsp], among others), and it also creates an order-of-application relationship so that the shibbolethsp class is applied first. And again, the order-of-application is not for the purpose of a package / package relationship, but rather for a more general class / class relationship that covers the dependency of the httpd service on having shibboleth installed and configured.
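Concretely, the class-level arrangement might look something like this (a sketch; the service name and class bodies are assumptions):

```puppet
# Module someco-httpd, manifests/init.pp
class httpd {
  # Evaluate and apply class shibbolethsp before this class
  require shibbolethsp

  package { 'httpd':
    ensure => installed,
  }

  service { 'httpd':
    ensure    => running,
    enable    => true,
    require   => Package['httpd'],
    # Restart Apache whenever anything managed by class shibbolethsp changes
    subscribe => Class['shibbolethsp'],
  }
}
```

Note that the subscribe is expressed against the whole class, not against an individual Package resource, which is the class/class relationship the answer recommends.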
I'm in the process of learning puppet and I'm running into what I assume is a pretty basic issue. For the purposes of keeping things simple, I'm using a puppet forge module rather than anything custom written. Here's what I'm using version wise:
puppet server: 5.1.3
puppet agent: 5.3.2
puppetlabs-java: 2.1.0
When I install the puppetlabs-java module, I am using --modulepath /etc/puppetlabs/code/modules/.
I currently have my agent configured to point at my puppet server and an environment I have named example.
The environment currently looks like this:
├── environment.conf
├── hiera.yaml
├── manifests
│   └── init.pp
└── modules
    └── java.pp
I have done nothing to environment.conf and hiera.yaml, they are currently defaults. My init.pp contains:
node 'node1' {
}
node 'node2' {
}
and my java.pp contains:
class { 'java':
  distribution => 'jre',
}
My issue is twofold. First, if I place java.pp in the manifests folder, it applies Java to both nodes, regardless of whether I have an include java in either node. Second, if I put include java in either node block, that works in the sense that one node gets Java and the other does not, but it does not appear to respect any of the settings I have in java.pp. How can I install Java only on node1, with my custom settings from java.pp?
if I place the java.pp in the manifest folder it applies java to both nodes, regardless of if I have called out an include java in either node.
Yes. The catalog builder processes every manifest file in the environment's manifests directory for every node. Your java.pp file contains a top-scope declaration of class java, so if that file is processed by the catalog builder then it will include class 'java' in the target node's catalog.
If I put the include java on either node, that works correctly and one node will get java and the other will not, but it does not appear to be respecting any of the settings I have in the java.pp.
No, it wouldn't. include java (or class { 'java': ... }) declares class java, thereby instructing the catalog builder to include that class in the target node's catalog. The two forms are more or less alternatives, and both will cause the catalog builder to look for a definition of class java in the environment's module path. Your java.pp has neither the right name nor the right location to be found in that search, which is good, because it does not in fact contain the wanted definition.
To customize class parameters, you should start right away with external data and automatic data binding. This is not the only way to bind values to class parameters, but it's the mechanism you should prefer, especially for declaring classes across module boundaries.
Regarding Hiera details, a very basic Hiera config for this purpose might consist of these parts:
example/hiera.yaml
# example/hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Common data"
    path: "common.yaml"
example/data/common.yaml
---
java::distribution: jre
The "example" in those file names would be the main directory of your environment of the same name. Note, by the way, that although you can add a new hierarchy level for every module, that would be unusual. It's more typical to use hierarchies that map to a hierarchical categorization of the machines being configured; the naming above anticipates such an arrangement, in which as yet there is only one category.
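With that data in place, each node block only needs to include the class; Hiera supplies the parameter value (node names are taken from the question):

```puppet
# example/manifests/init.pp
node 'node1' {
  include java   # java::distribution is bound to 'jre' via Hiera
}

node 'node2' {
  # no Java here
}
```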
After having updated the module data, you might find that you need to make the master expire its environment cache. One way to do that would be to restart the service.
I am new to Puppet and I am facing an issue. How can I use ERB templates that live on the client node and process them through a Puppet class? I know templates normally need to be on the Puppet master, but my ERB templates are on the client node. Is there any way to accomplish this?
One more question: can I execute a command on the client node (again, not on the Puppet master) through Puppet?
Templates are evaluated during catalog compilation on the master; the master then sends the compiled catalog to the node. I suspect you want node-specific settings applied, which you can achieve with Facter and custom facts.
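A sketch of that approach using an external fact (the fact name, file paths, and template are all hypothetical):

```puppet
# On the agent, create an external fact, e.g. /etc/facter/facts.d/app.txt
# containing a line like:
#   app_role=web
#
# At compile time the master sees that fact and can feed it into a
# template that ships with the module on the master side:
file { '/etc/myapp/myapp.conf':
  ensure  => file,
  content => epp('myapp/myapp.conf.epp', { 'role' => $facts['app_role'] }),
}
```

The template itself still lives on the master; the node-specific data travels from agent to master as a fact, which is the supported direction of flow.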
I have been experimenting with using hiera for node classification. I followed this example: http://docs.puppetlabs.com/hiera/1/complete_example.html
I was able to assign a node of mine two classes as per this json file:
{
  "classes" : [ "ntp",
                "base" ],
  ...
I can see the effects of these class assignments in the puppet runs for my node, but when I look at the node with the Puppet Enterprise 3 Console I only see that the class pe_mcollective has been assigned to the node. Why is the Puppet Enterprise Console not aware that my node has been assigned the classes ntp and base?
See the Puppet documentation related to data sources: http://docs.puppetlabs.com/pe/latest/puppet_assign_configurations.html. The Puppet Enterprise Console does not ingest site.pp, node.pp or Hiera data and therefore such configuration data is not visible in the PE Console interface.
I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some inputs from the user (an example for input could be - environment where the app is getting installed - "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?
P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.
In general, RPM packages should not require user interaction. Time and time again, the RPM folks have stated that it is an explicit design goal of RPM not to have interactive installs. For packages that need some sort of input before first use, you typically ask for this information on first use, or you put it all in config files with macros or something and tell your users that they will have to configure the application before it is usable.
Even passing a parameter of some sort counts as end-user interaction. I think what you want is to have your pre or install scripts auto detect the environment somehow, maybe by having a file somewhere they can examine. I'll also point out that from an RPM user's perspective, having a package named *-qa.rpm is a lot more intuitive than passing some random parameter.
For your exact problem, if you are installing different content, you should create different packages. If you try to do things differently, you're going to end up fighting the RPM system more and more.
It isn't hard to create a build system that can spit out 20+ packages that are all mostly similar. I've done it with a template-ish spec file and some scripts run by make that will create the various spec files and build the RPMs. Without knowing the specifics, it sounds like you might even have a core package that all 20+ environment packages depend on, then the environment specific packages install whatever is specific to their target environment.
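A sketch of such a build loop (the template name and the @ENV@ placeholder are hypothetical):

```shell
#!/bin/sh
# Render one spec file per environment from a shared template by
# substituting the hypothetical @ENV@ placeholder, then build each RPM.
for env in dev qa uat prod; do
    sed "s/@ENV@/$env/g" myapp.spec.template > "myapp-$env.spec"
    # rpmbuild -bb "myapp-$env.spec"
done
```

In practice you would drive this from a Makefile so the spec files and RPMs are rebuilt only when the template changes.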
You could use the relocate option, e.g.
rpm -i --relocate /env=/uat somepkg.rpm
and have your script look up the variable data from a file located in the "env" directory.
I think this is a very valid question, especially as soon as you move into the application-development realm. There, configuring the application for different target systems is your daily bread: you need to configure for Development, Integration Test, Acceptance Test, Production, etc. I don't think building a separate package for each environment is the solution; basically it should be the same code running in different environments.
I know that this requirement is not supported by rpm. But what you can do as a workaround is to use a simple config file that the %pre script knows to look for. The config file could be a simple shell script that, for example, sets environment variables, which the different pre and post scripts can then use.
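A sketch of that workaround (the config path and variable name are hypothetical):

```shell
#!/bin/sh
# Hypothetical snippet for the %pre/%post scriptlets: source a config file
# the administrator pre-seeds before installing the RPM, e.g.
#   echo 'APP_ENV=uat' > /etc/myapp/install.conf
CONF=/etc/myapp/install.conf
if [ -r "$CONF" ]; then
    . "$CONF"
fi
# Fall back to a default when the file is absent
APP_ENV=${APP_ENV:-dev}
echo "Configuring for environment: $APP_ENV"
```

This keeps the install itself non-interactive, which is what RPM expects, while still letting each of your 20 environments use the same package.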