Chef - Attribute Precedence when using multiple roles

I am using Chef for my production environment. I am aware of the standard attribute precedence that Chef has implemented, defined here:
http://docs.chef.io/attributes.html#attribute-precedence
The default attribute precedence is as follows:
Attribute files -> Node / Recipe -> Environment -> Roles
But what happens when I have a run_list containing multiple roles? Example:
"run_list": [ "role[webserver]", "role[dbserver]" ]
Which of the above default_attributes (defined in both roles) has precedence over the other?
Thanks!

This depends on how the attribute structures are defined in your roles; the values are combined according to Chef's deep-merge rules: http://docs.chef.io/attributes.html#about-deep-merge. Hash attributes from both roles are deep-merged; where both roles set the same attribute at the same precedence level, the role listed later in the run_list wins.
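As a rough illustration of that deep-merge behavior, here is a plain-Ruby sketch (it only mimics Chef's hash merging and is not Chef's actual implementation; the role attribute hashes are hypothetical):

```ruby
# Hypothetical default_attributes from the two roles in the run_list.
webserver = { 'app' => { 'port' => 80, 'docroot' => '/var/www' } }
dbserver  = { 'app' => { 'port' => 5432 }, 'db' => { 'name' => 'prod' } }

# Minimal deep merge: hashes are merged recursively; for scalar
# conflicts the value from the hash merged last (the role listed
# later in the run_list) wins.
def deep_merge(a, b)
  a.merge(b) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

merged = deep_merge(webserver, dbserver)
# Keys from both roles survive; the conflicting 'port' comes from
# dbserver, which appears later in the run_list.
puts merged.inspect
```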

Related

In Chef how can you pass attributes with "include_recipe"?

I need to call a recipe and pass it specific attribute data, like:
include_recipe [nginx::passenger['my_attributeA' => 'foobar' , 'my_attributeB' => 'foofii']
i.e. in my wrapper, I have to pass data to a called cookbook.
Thanks
Node attributes in Chef are effectively global variables, so you should do this by setting them in the wrapper cookbook's attributes file:
my_cookbook/attributes/default.rb:
default['my_attributeA'] = 'foobar'
default['my_attributeB'] = 'foofii'
my_cookbook/recipes/default.rb:
include_recipe "nginx::passenger"
my_cookbook/metadata.rb:
name "my_cookbook"
version "1.2.3"
depends "nginx"
Note: generally you'd be setting node attributes like default['nginx']['some_nginx_cookbook_attribute'] in your wrapper to control the nginx cookbook; you probably wouldn't be setting something arbitrary like default['my_attributeA'].
There is no need to pass arguments for attribute assignment; instead, you overload the attribute before including the desired recipe.
Attributes set in the wrapper cookbook before including the dependee cookbook will be deep-merged.
If the attribute value is an array and the precedence levels are the same, the data is merged. If the precedence levels are different, the data is replaced. For all other value types (such as strings, integers, etc.), the data is replaced.
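For instance, overloading an attribute in the wrapper recipe before the include could look like this (a sketch; the attribute path shown is a common nginx cookbook attribute, but verify it against the cookbook version you use):

```ruby
# my_cookbook/recipes/default.rb
# Raise the precedence so the wrapper's value wins over the
# nginx cookbook's own default.
node.override['nginx']['worker_processes'] = 4

include_recipe 'nginx::passenger'
```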

Puppet: Class Ordering / Containment - always wrong order

I have read a lot about ordering Puppet classes with containment (I am using Puppet 6), but in one case it still does not work for me. Maybe my English is not good enough and I am missing something; maybe somebody knows what I am doing wrong.
I have a profile that installs a puppetserver (profile::puppetserver). This profile has three subclasses, which I contain within profile::puppetserver:
class profile::puppetserver(
) {
contain profile::puppetserver::install
contain profile::puppetserver::config
contain profile::puppetserver::firewall
}
That works fine for me. Now I want to expand this profile and install PuppetDB as well. For this I use the puppetdb module from the Puppet Forge.
So what I do is add profile::puppetserver::puppetdb and contain it in profile::puppetserver:
class profile::puppetserver::puppetdb(
) {
# Configure puppetdb and its underlying database
class { 'puppetdb': }
# Configure the Puppet master to use puppetdb
class { 'puppetdb::master::config': }
}
When I provision my puppetserver first and add profile::puppetserver::puppetdb afterwards, puppetdb installs and everything works fine.
If I add it directly with contain and provision everything at once, it crashes. This is because the puppetdb module (along with the postgresql server and so on) is installed at some arbitrary point while my master server is still being set up. The result is that my puppetserver is not running, my puppetdb generates no local SSL certificates, and the service does not come up.
What I tried first: I installed the puppetdb package directly in profile::puppetserver::puppetdb and used require. That works when I provision everything at once.
class profile::puppetserver::puppetdb (
) {
package { 'puppetdb':
ensure => installed,
require => Class['profile::puppetserver::config']
}
}
So I thought I could do the same in the code above:
class profile::puppetserver::puppetdb(
) {
# Configure puppetdb and its underlying database
class { 'puppetdb':
require => Class['profile::puppetserver::config']
}
# Configure the Puppet master to use puppetdb
class { 'puppetdb::master::config':
require => Class['profile::puppetserver::config']
}
}
But this does not work...
So I read about Puppet class containment and ordering via chaining arrows, and did this in my profile::puppetserver:
class profile::puppetserver(
) {
contain profile::puppetserver::install
contain profile::puppetserver::config
contain profile::puppetserver::firewall
contain profile::puppetserver::puppetdb
Class['profile::puppetserver::install'] ->
Class['profile::puppetserver::config'] ->
Class['profile::puppetserver::firewall'] ->
Class['profile::puppetserver::puppetdb']
}
But it still does not have any effect: Puppet still starts to install postgresql and the puppetdb package during the install, config, and firewall steps of my puppetserver provisioning.
How must I write the ordering so that everything from the puppetdb module that I call in profile::puppetserver::puppetdb only starts once the rest of the provisioning steps are finished?
I really don't understand it. I think maybe it has something to do with the fact that I declare classes from the puppetdb module inside profile::puppetserver::puppetdb rather than resource types directly, because when I use the package resource type with require, it seems to work. But I really don't know how to handle this. I think there must be a way, or?
I think maybe it has something to do with the fact that I declare
classes from the puppetdb module inside profile::puppetserver::puppetdb
rather than resource types directly, because when I use the package
resource type with require, it seems to work.
Exactly so.
Resources are ordered with the class or defined-type instance that directly declares them, as well as according to ordering parameters and instructions applying to them directly.
Because classes can be declared multiple times, in different places, ordering is more complicated for them. Resource-like class declarations such as you demonstrate (and which you really ought to avoid as much as possible) do not imply any particular ordering of the declared class. Neither do declarations via the include function.
Class declarations via the require function place a single-ended ordering constraint on the declared class relative to the declaring class or defined type, and declarations via the contain function place a double-ended ordering constraint similar to that applying to all resource declarations. The chaining arrows and ordering metaparameters can place additional ordering constraints on classes.
But I really don't know how to handle this. I think there must be a way, or?
Your last example shows a viable way to enforce ordering at the level of profile::puppetserver, but its effectiveness is contingent on each of its contained classes taking the same approach for any classes they themselves declare, at least where those third-level classes must be constrained by the order of the second-level classes. This appears to be where you are falling down.
Note also that although there is definitely a need to order some things relative to some others, it is not necessary or much useful to try to enforce an explicit total order over all resources. Work with the lightest hand possible, placing only those ordering constraints that serve a good purpose.
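Concretely, applying that advice to the example above means anchoring the third-level classes with contain (or require) inside profile::puppetserver::puppetdb, so the chaining arrows in profile::puppetserver actually constrain them. A sketch, not tested against the Forge module (and only directly usable when the classes need no parameters):

```puppet
class profile::puppetserver::puppetdb {
  # contain places a double-ended ordering constraint on these
  # classes, so the chain
  #   Class['profile::puppetserver::firewall'] -> Class['profile::puppetserver::puppetdb']
  # now also orders everything they declare.
  contain puppetdb
  contain puppetdb::master::config

  # PuppetDB itself must be set up before the master is reconfigured.
  Class['puppetdb'] -> Class['puppetdb::master::config']
}
```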

What is the difference between Override Attribute Initializers and Set Run State in Enterprise Architect and why does it behave differently?

this is my first question on SO, so please exercise some kindness on my path to phrasing perfect questions.
On my current project I try to model deployments in EA v14.0, where I want components to be deployed on execution environments and additionally have some values set on them.
However, depending on how I deploy (as a Deployment Artifact or as a Component Instance) I get different behaviours. On a Deployment Artifact I am offered Override Attribute Initializers; on a Component Instance I am offered Set Run State. When I try to set an attribute on the Deployment Artifact, I get an error message that there is no initializer to override. When I try to set the run state on the Component Instance, I can set a value; however, I then get a UML validation error message that I must not link a component instance to an execution environment:
MVR050002 - error ( (Deployment)): Deployment is not legal for Instance: Component1 --> ExecutionEnvironment1
This is how I started. I created a component with a deployment specification:
I then created a deployment diagram to deploy my component: Once as a Deployment Artifact and once as a Component Instance.
When I try to Override Attribute Initializers, I get the error message "DeploymentArtifact has no attribute initializers to override".
When I try to Set Run State I can actually enter values.
However, when I then validate my package, I get the aforementioned error message.
Can anyone explain what I am doing wrong or how this is supposed to work?
Thank you very much for your help!
Actually there are multiple questions here.
Your 2nd diagram is invalid (and EA probably should already have complained here, since it does so in V12).
You can deploy an artifact on a node instance and use a deployment spec as association class like shown on p. 654 of the UML 2.5 specs:
But you can not deploy something on something abstract. You will need instances - on both sides.
You can silence EA about warnings by turning off strict connector checking in the options:
To answer the question in your title: Override Attribute Initializers looks into the attribute list of the object's classifier and offers those attributes, so you can set run states (that is, values of attributes at runtime). Set Run State, moreover, allows setting arbitrary key/value pairs which are not classifier attributes. This is to express e.g. RAM size in nodes, or things like that.

Host group on CFEngine

I have to write a policy defining various host groups; for a particular thing it should check a set of parameters according to the host group.
For example, I have two different sets of web clusters: on one cluster httpd.conf is kept under /usr/local/apache/httpd.conf, and for the other set it is kept under /etc/httpd/httpd.conf.
I have a policy to check these configuration files for changes, but I want a way to define, for a particular host group, where exactly it should look.
Any hint or help would be much appreciated.
The general answer is that you define a class for each group, and assign the appropriate path onto a variable according to that. For example:
vars:
  group1::
    "httpd_conf" string => "/usr/local/apache/httpd.conf";
  group2::
    "httpd_conf" string => "/etc/httpd/httpd.conf";
Then you use $(httpd_conf) in the file operations, and it will have the correct value according to the group.
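For example, a file-change check using that variable might look like this (a sketch; detect_all_change is a changes body from the CFEngine standard library, and the variable reference assumes the vars promise above lives in the same bundle):

```cfengine
bundle agent check_httpd_conf
{
  files:
      # $(httpd_conf) resolves to the right path for this host's group.
      "$(httpd_conf)"
        changes => detect_all_change;
}
```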
The potentially trickier part is how to define those classes. In this case it depends on your setup and your preferences. For example, you could define the classes by explicitly listing the hosts in each group:
classes:
  "group1" or => { "host1", "host2", "host3" };
  "group2" or => { "host4", "host5", "host6" };
Or by matching against hostname patterns:
classes:
  "group1" expression => classmatch("grp1.*");
  "group2" expression => classmatch("grp2.*");
There are other possibilities. For a full treatment, please check Defining classes for groups of hosts in Chapter 6 of my book "Learning CFEngine 3".
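One such possibility, sketched here with a hypothetical naming scheme, is to match the fully qualified hostname with regcmp():

```cfengine
classes:
  # Define group1 on hosts whose FQDN matches the pattern.
  "group1" expression => regcmp("grp1-.*\.example\.com", "$(sys.fqdn)");
```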

Reusing Puppet Defined Type Parameter in Other Defined Type

Let's say I want to define a set of resources that depend on each other, where the dependent resources reuse parameters from their ancestors. Something like this:
server { 'my_server':
path => '/path/to/server/root',
...
}
server_module { 'my_module':
server => Server['my_server'],
...
}
The server_module resource both depends on my_server and wants to reuse its configuration, in this case the path where the server is installed. stdlib has functions for doing this, specifically getparam().
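Used inside the defined type, getparam() (from puppetlabs-stdlib) would look roughly like this; the server defined type and its path parameter are the ones from the question, the file resource is purely illustrative:

```puppet
define server_module ($server) {
  # Look up the 'path' parameter of the referenced server resource.
  $server_path = getparam($server, 'path')

  # The same reference also expresses the dependency.
  file { "${server_path}/modules/${title}":
    ensure  => directory,
    require => $server,
  }
}
```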
Is this the "puppet" way to handle this, or is there a better way to have this kind of dependency?
I don't think there's a standard "puppet way" to do this. If you can get it done using the stdlib and you're happy with it, then by all means do it that way.
Personally, if I have a couple of defined resources that both need the same data, I'll do one of the following:
1) Have a manifest that creates both resources and passes the data both need via parameters. The manifest will have access to all data both resources need, whether shared or not.
2) Have both defined resources look up the data they need in Hiera.
I've been leaning more towards #2 lately.
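A sketch of option 2, with a hypothetical Hiera key:

```puppet
define server_module () {
  # Both defined types look the shared value up in Hiera;
  # 'profiles::server::path' is a hypothetical key.
  $server_path = lookup('profiles::server::path')

  file { "${server_path}/modules/${title}":
    ensure => directory,
  }
}
```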
Dependency is only a matter of declaring it. So your server_module resource would have a require => Server['my_server'] parameter, or the server resource would have a before => Server_module['my_module'].
