Is there a limitation on relationships?
We have several Puppet modules which depend on each other (or at least on each other's packages).
I am asking because I now want to subscribe some services so that they restart when a dependency is updated.
Problem:
Error: Failed to apply catalog: Could not find dependency Package[shibbolethsp] for Package[httpd] at /etc/puppetlabs/code/environments/development/modules/apache/manifests/instance.pp:39
Modules:
# Module someco-httpd, init.pp
package { 'httpd':
  ...
  require => Package['openssl', 'shibbolethsp'], # can find openssl but NOT shibbolethsp
}
# Module someco-openssl, init.pp
package { 'openssl':
  ...
}
# Module someco-shibbolethsp, init.pp
package { 'shibbolethsp':
  ...
}
The resource Package[shibbolethsp] IS present: if I remove the package and run Puppet again, I can see that it gets installed. But if I also want to configure Apache (which requires Package[shibbolethsp] to function properly), Puppet fails.
So the resource is present, but Puppet can't resolve the relationship properly, I guess? The same relationship to Package[openssl] works as expected, and Apache restarts when openssl is updated to a new version ...
Is this an ordering/multithreading problem? One relationship works, the other doesn't ...
The problem is cross-module dependencies. Resources in other modules live in a different namespace than the current module, so if you depend on a resource from another module you have to use the full path, e.g. Other_module::Other_class_or_defined_type['bla'], or use require other_module in your init.pp to ensure the right ordering!
NOTE: in site.pp you have to declare the resources in the right order!
The resource Package[shibbolethsp] IS present because if I remove the package and run puppet again I can see that it gets installed,
Your observation does not support the conclusion.
For one thing, it is entirely possible for a package to be installed as a result of a Puppet run even though no Package resource for it is included in the catalog. This happens all the time, in fact. It is driven by dependencies expressed in the packages themselves, where the package manager (Yum, apt, etc.) identifies the dependencies of a package it is installing and arranges to install those, too. Puppet has no insight into that.
For another thing, it is entirely possible for Package[shibbolethsp] to be declared in the catalog for one node, but not in the catalog for a different node. Naturally, if you uninstall shibbolethsp from a node of the first kind, then Puppet will reinstall it on its next run. In no way does that demonstrate that the package is declared in a different node's catalog.
but if I also want to configure Apache (which requires Package[shibbolethsp] to function properly) Puppet fails.
Which tells me that no, you are not declaring Package[shibbolethsp] in the affected node's catalog, your protestations to the contrary notwithstanding.
So the resource is present but Puppet can't resolve them properly I guess? The same relationship to Package[openssl] works as expected and Apache restarts if openssl updated to a new version ...
I see no reason to think that either relationship fails to work as should be expected, but I suspect you have incorrect expectations.
In the first place, Puppet resource and class relationships are about order of application dependencies. E.g. File[/etc/foo.conf] needs to be ensured up to date before Service[foo] is managed because otherwise the foo service may not be managed into the correct state. This is largely, albeit not wholly, separate from functional dependency between managed components.
In the second place, I think you are assuming that your require relationships between Package resources will cause the required Packages to be declared in the event that their requirer is declared. This is completely incorrect. Again, Puppet resource relationships are about order of application. Puppet cannot autodeclare required resources because it relies on you to tell it what properties they have, and also because that would produce a serious risk of duplicate declarations.
Overall, it is rarely useful to declare relationships between Package resources at the Puppet level, as functional dependencies between packages are best handled at the package / package manager level, and there typically aren't any other bona fide order-of-application dependencies between pairs of packages.
If you want shibboleth to be on machines that get Apache, then you need to ensure that the appropriate class is declared. You may also have some order of application considerations at a different level than the packages -- for example, you may need to ensure that Shibboleth is installed before you manage the Apache service, and perhaps also you want to restart the service if ever the shibboleth package or configuration is updated. You would arrange for that best at the class level, not in individual resource declarations.
For example, the httpd class of module someco-httpd might include something like this:
# Nodes to which this class is applied require class ::shibbolethsp to be applied first
require '::shibbolethsp'
Unlike resources' require metaparameter, that does cause the named class, shibbolethsp, to be evaluated (presumably producing a declaration of Package[shibbolethsp], among others), and it also creates an order-of-application relationship so that the shibbolethsp class is applied first. And again, the order of application is not for the purpose of a package/package relationship, but rather for a more general class/class relationship that covers the dependency of the httpd service on having Shibboleth installed and configured.
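Putting that together, the someco-httpd class might look roughly like this; the resource bodies are illustrative, not taken from the question:

```puppet
# Module someco-httpd, init.pp (sketch)
class httpd {
  # Evaluate class ::shibbolethsp and apply it before this class
  require '::shibbolethsp'

  package { 'httpd':
    ensure => installed,
  }

  service { 'httpd':
    ensure    => running,
    enable    => true,
    # Restart if the httpd package or anything in the shibbolethsp class changes
    subscribe => [Package['httpd'], Class['::shibbolethsp']],
  }
}
```

Note that the restart-on-update behavior the question asked about lives on the service resource, via subscribe, not on a package/package relationship.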
Related
We have multiple AWS modules in Git, and when we use a module in another project we specify the module's Git path as the source, like this:
module "module_name" {
  source = "git::https://gitlab_domain_name.com/terraform/modules/aws/module_name.git?ref=v1.0.0"
  ...
}
I want to know if there is a benefit to using a Terraform private registry to store our modules, similar to how, when developing in Java, we use a repository to store JAR packages, or when working with Docker images.
Yes, there are benefits to a private registry. Namely, you can add a description, documentation, and examples there, and you get a better visual representation of what the module does: its inputs, outputs and resources.
Apart from that, in terms of functionality the module behaves the same way. A Java registry (e.g. Nexus) makes sense because you do not want to force everyone to build the libraries themselves (maybe they can't at all), so having a place where pre-built libraries are stored makes sense. That reasoning does not apply to Terraform, since nothing is compiled.
It is a whole different story for custom providers: in that case you need a private registry to distribute the compiled Go binaries, but you can write one yourself without too much effort (since Terraform 0.13); it is just an HTTP REST API.
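For comparison, a registry-sourced module is referenced by a host/namespace/name/provider address, and the version becomes a first-class constraint instead of a ?ref= pin; the hostname and organization below are illustrative:

```hcl
module "module_name" {
  source  = "app.terraform.io/your-org/module_name/aws"
  version = "~> 1.0"
}
```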
I'm in the process of learning Puppet and I'm running into what I assume is a pretty basic issue. For the purposes of keeping things simple, I'm using a Puppet Forge module rather than anything custom written. Here's what I'm using, version-wise:
puppet server: 5.1.3
puppet agent: 5.3.2
puppetlabs-java: 2.1.0
When I install the puppetlabs-java module, I am using --modulepath /etc/puppetlabs/code/modules/.
I currently have my agent configured to point at my puppet server and an environment I have named example.
The environment currently looks like this:
├── environment.conf
├── hiera.yaml
├── manifests
│   └── init.pp
└── modules
    └── java.pp
I have done nothing to environment.conf and hiera.yaml; they are currently defaults. My init.pp contains:
node 'node1' {
}
node 'node2' {
}
and my java.pp contains:
class { 'java':
  distribution => 'jre',
}
My issue is twofold. First, if I place the java.pp in the manifests folder, it applies java to both nodes, regardless of whether I have called out an include java in either node. Second, if I put the include java on either node, that works correctly and one node will get java and the other will not, but it does not appear to respect any of the settings I have in java.pp. How can I accomplish installing java only on node1, with my custom settings from the java.pp file?
if I place the java.pp in the manifest folder it applies java to both nodes, regardless of if I have called out an include java in either node.
Yes. The catalog builder processes every manifest file in the environment's manifests directory for every node. Your java.pp file contains a top-scope declaration of class java, so if that file is processed by the catalog builder then it will include class java in the target node's catalog.
If I put the include java on either node, that works correctly and one node will get java and the other will not, but it does not appear to be respecting any of the settings I have in the java.pp.
No, it wouldn't. include java (or class { 'java': ... }) declares class java, thereby instructing the catalog builder to include that class in the target node's catalog. The two forms are more or less alternatives, and both will cause the catalog builder to look for a definition of class java in the environment's module path. Your java.pp has neither the right name nor the right location to be found in that search, which is good, because it does not in fact contain the wanted definition.
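If you do want to keep the per-node declaration in a manifest instead of external data, the resource-like class declaration belongs inside the node block in manifests/init.pp; a sketch using the question's node names:

```puppet
# manifests/init.pp (sketch)
node 'node1' {
  class { 'java':
    distribution => 'jre',
  }
}

node 'node2' {
  # no java here
}
```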
To customize class parameters, you should start right away with external data and automatic data binding. This is not the only way to bind values to class parameters, but it's the mechanism you should prefer, especially for declaring classes across module boundaries.
Regarding Hiera details, a very basic Hiera config for this purpose might consist of these parts:
example/hiera.yaml
# example/hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Common data"
    path: "common.yaml"
example/data/common.yaml
---
java::distribution: jre
The "example" in those file names would be the main directory of your environment of the same name. Note, by the way, that although you can add a new hierarchy level for every module, that would be unusual. It's more typical to use hierarchies that map to a hierarchical categorization of the machines being configured; the naming above anticipates such an arrangement, in which as yet there is only one category.
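If you later want node-specific values (for example, jre on node1 only), a per-node level above the common one is the usual next step; the certname-based path below is a conventional pattern, not something from the question:

```yaml
# example/hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"
```

A file such as example/data/nodes/node1.yaml containing `java::distribution: jre` would then apply that setting to node1 alone.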
After having updated the module data, you might find that you need to make the master expire its environment cache. One way to do that would be to restart the service.
I would like to be able to set some configuration values within a particular package that I'm developing for Laravel.
example:
"extra": {
    "maxminddbpath": "src/storage/db"
},
I need to access those values from within one of my classes. How can this be accomplished?
Pseudocode might look something like this:
public function fire()
{
    $this->getCompiler()->getPackage()->getExtra(); // returns the "extra" node from composer.json
}
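Outside of a Composer run there is no compiler/package object available, but composer.json is ordinary JSON, so application code can simply read and decode the file (in PHP: json_decode(file_get_contents('composer.json'), true)['extra']). A quick sketch of the idea, mirroring the snippet from the question:

```shell
# Create a composer.json like the one in the question (illustrative content).
cat > composer.json <<'EOF'
{
  "extra": {
    "maxminddbpath": "src/storage/db"
  }
}
EOF

# Any JSON reader can pull the "extra" block out; python3 is used here
# only to show the data is plain JSON.
python3 -c 'import json; print(json.load(open("composer.json"))["extra"]["maxminddbpath"])'
```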
For my example I will be accessing the value from within a class that extends \Command
I would tend to agree that to some eyes this is fanatical or silly because there is a better option; I understand that.
In that case, could you answer this: are there any instances where a software architect might find it better to locate provisional static values within that configuration file (composer.json)?
I think what is happening is that we are avoiding the question by stating that it shouldn't be done.
An argument could be made that the configuration in a JSON file is irrelevant to the application, which, by the nature of the composer.json configuration file, cannot be true.
Take a look at this line of code on github:
https://github.com/mente/MaxMindGeoIpBundle/blob/master/Composer/ScriptHandler.php#L22
This was designed for Symfony, not Laravel, but Laravel is built on Symfony components. I assume that there is something within the Laravel framework to handle this type of request.
Other uses might include:
Reading Grunt Files
Reading Ruby Configuration
Reading Node Configuration
Reading Deployment Settings
Reading Vagrant Configurations
Recommendations for a library?
Configuration in composer.json should only be configuration that is used during a Composer process. In such cases, you have to use Composer scripts, which have access to this extra config.
When a setting is not specifically used in a Composer process, it's part of the app configuration and belongs in the application's configuration file. I don't know Laravel well, but I guess it has nice configuration features.
Is it possible to test, in a Puppet manifest, for a dependency on the compiling node (the master, or the applying node in the case of a masterless setup)?
I've written a profile manifest for the base nodes in my network. All nodes should include this profile, including the Puppet masters. In this profile, some parts use PuppetDB, which gets installed via a puppetmaster profile manifest. I need a way to conditionally skip the parts of the base profile that use PuppetDB until it is installed.
Yes We Can :-)
I think your question alludes to the fact that Facter only gathers information about the agent node, rather than the master.
To retrieve state from the master, you can use the generate function like this:
if generate('/usr/local/bin/is-puppetdb-installed') =~ /^yes/ {
  $puppetdb_installed = true
}
You will need to write a generator script that produces appropriate output.
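The generator script itself is not part of Puppet; a minimal sketch of the hypothetical /usr/local/bin/is-puppetdb-installed might look like this (package query commands vary by platform; rpm and dpkg are shown as examples):

```shell
#!/bin/sh
# Prints "yes" if the puppetdb package is present, "no" otherwise.
if rpm -q puppetdb >/dev/null 2>&1 || dpkg -s puppetdb >/dev/null 2>&1; then
  echo yes
else
  echo no
fi
```

Remember that generate runs on the node compiling the catalog (the master), which is exactly what the question needs.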
So I'm setting up Puppet for a project I'm working on and I wanted to know the best way to share resources between environments. The problem is that I have a number of common packages that I want installed in a few different environments.
I read up on Puppet's support for environments, and it looked like all you can do is specify the module path and the manifest. If that's the case, what is even the point of environments?
What I'm thinking about doing is just having a shared module path containing a module with the shared packages to install, and then importing that into each environment's site manifest, but that seems like a hacky way of doing it, especially since modules are supposed to be standalone.
Is there a better way to implement this? Am I missing something?
Thanks.
You can use node definitions to configure the different environments:
# /etc/puppetlabs/puppet/manifests/site.pp
node 'dev' {
  include common
  include apache
  include squid
}

node 'prod' {
  include common
  include mysql
}
Here's a reference: http://docs.puppetlabs.com/puppet/2.7/reference/lang_node_definitions.html
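As for the "shared module path" idea from the question, it does not have to be a hack: Puppet supports it directly. basemodulepath in puppet.conf points at a directory of shared modules, and each environment's environment.conf appends it to its own modulepath. A sketch (paths are illustrative):

```ini
# /etc/puppetlabs/puppet/puppet.conf
[main]
basemodulepath = /etc/puppetlabs/code/modules

# /etc/puppetlabs/code/environments/<env>/environment.conf
modulepath = modules:$basemodulepath
```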