Right way to add many facts based on hostname condition - puppet

I need to add multiple facts to many machines (100+) based on their hostnames. All the machines get a single app deployed, but the app configuration differs and the machines belong to different clusters. I am trying to find the right approach to implement this. One approach I found is in "How can I create a custom :host_role fact from the hostname?", where a condition inside the Facter.add code block checks the hostname.
I was also wondering whether there is a way to add multiple facts in one Facter.add call instead of one call per fact, since that would let me organize my code logically by cluster/machine instead of by fact.
Example:
Machine1 and its facts:
clustername: C1
Java version: 1.8.60
ProcessRuns: 3
Machine2 and its facts:
clustername: C1
Java version: 1.8.60
ProcessRuns: 1
Machine3 and its facts:
clustername: C2
Java version: 1.9.00
ProcessRuns: 1
Machine4 and its facts:
clustername: C2
Java version: 1.9.00
ProcessRuns: 2
Machine5 and its facts:
clustername: C3
Java version: 1.9.60
ProcessRuns: 1

I did not find a way to set multiple facts in one Facter.add call, so I went with one call per fact I wanted to add.
The fact file ends up looking something like this:
when "Machine1"
Facter.add(:processcount) do
setcode do
3
end
end
when "Machine2", "Machine3", "Machine5"
Facter.add(:processcount) do
setcode do
1
end
end
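One pattern that keeps the data organized per machine (or per cluster) while still issuing one Facter.add call per fact is to drive the calls from a hash. This is only a sketch: the file path, the fact names (clustername, javaversion, processcount), and the hash layout are assumptions, not something Facter prescribes.

# lib/facter/cluster_facts.rb (path is an assumption)
machine_facts = {
  "Machine1" => { clustername: "C1", javaversion: "1.8.60", processcount: 3 },
  "Machine2" => { clustername: "C1", javaversion: "1.8.60", processcount: 1 },
  "Machine3" => { clustername: "C2", javaversion: "1.9.00", processcount: 1 },
  "Machine4" => { clustername: "C2", javaversion: "1.9.00", processcount: 2 },
  "Machine5" => { clustername: "C3", javaversion: "1.9.60", processcount: 1 },
}

# Look up the facts for this node; unknown hosts simply get no extra facts.
facts = machine_facts[Facter.value(:hostname)] || {}

facts.each do |name, value|
  Facter.add(name) do
    setcode { value }
  end
end

With that layout, adding a machine or a cluster is a one-line change to the hash, and every node still ends up with individually addressable facts.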

Related

Group nodes in hiera by hiera-defined fact

I have a hierarchy like this:
- "nodes/%{::certname}"
- (what's here is my question)
- common
I'd like to assign a group to my nodes in their individual configuration in hiera, like this in nodes/hostname.yaml:
---
group: alpha
Now I'd like to have a file alpha.yaml where I state group-specific settings.
So my question is: how do I write the hierarchy to ask hiera for the filename of the group definition?
Is there another way to achieve this?
You can. Make sure you have group defined as a fact, then reference it in the hierarchy:
- "nodes/%{::certname}"
- "%{::group}"
- common
You can then test with the command below:
FACTER_group=alpha puppet apply your.pp
For custom facts, you can go through this document: Custom Facts Walkthrough
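If a full custom fact feels like too much, one common alternative (an assumption here, not part of the original answer) is an external fact: drop a key=value file into Facter's external facts directory on each node, for example:

# /etc/facter/facts.d/group.txt  (the file name is arbitrary; the directory may differ by Facter version)
group=alpha

After that, facter group on the node should return alpha, and the %{::group} level of the hierarchy resolves to alpha.yaml.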

How can I avoid "write everything twice" in my hiera data?

Is there a better way to format my hiera data?
I want to avoid the "write everything twice" problem.
Here is what I have now:
[root@puppet-el7-001 ~]# cat example.yaml
---
controller_ips:
- 10.0.0.51
- 10.0.0.52
- 10.0.0.53
controller::horizon_cache_server_ip:
- 10.0.0.51:11211
- 10.0.0.52:11211
- 10.0.0.53:11211
I was wondering if there is functionality available in hiera that is like Perl's map function.
If so then I could do something like:
controller::horizon_cache_server_ip: "%{hiera_map( {"$_:11211"}, %{hiera('controller_ips')})}"
Thanks
It depends on which Puppet version you are using. In Puppet 3.x, you can do the following:
common::test::var1: a
common::test::var2: b
common::test::variable:
- "%{hiera('common::test::var1')}"
- "%{hiera('common::test::var2')}"
common::test::variable2:
- "%{hiera('common::test::var1')}:1"
- "%{hiera('common::test::var2')}:2"
In Puppet 4.0 you can try using a combination of the zip and hash functions from stdlib with the built-in map function.
Something like:
$array3 = zip($array1, $array2)
$my_hash = hash($array3)
$my_hash.map |$key,$val|{ "${key}:${val}" }
The mutation (appending the port) is the problem. With truly identical data it is simple, thanks to YAML's referencing capability:
controller_ips: &CONTROLLERS
- 10.0.0.51
- 10.0.0.52
- 10.0.0.53
controller::horizon_cache_server_ip: *CONTROLLERS
You will need more logic so that the port can be stored independently.
controller::horizon_cache_server_port: 11211
The manifest needs to be structured in a way that allows you to combine the IPs with the port.
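One way to do that combination (a sketch only, assuming Puppet 4.x and the key names used above) is to build the list in the manifest rather than in Hiera:

# inside the controller profile/class (names are assumptions)
$ips  = hiera('controller_ips')
$port = hiera('controller::horizon_cache_server_port')

# Turns ["10.0.0.51", ...] into ["10.0.0.51:11211", ...]
$horizon_cache_server_ip = $ips.map |$ip| { "${ip}:${port}" }

The YAML anchor keeps the IP list defined once, and the manifest is the only place that knows how to glue IP and port together.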

How to implement different data for cucumber scenarios based on environment

I have an issue with executing cucumber-jvm scenarios in different environments. The data incorporated in the feature files belongs to one environment. To execute the scenarios in other environments, I need to update the data in the feature files to match the environment being executed.
For example, in the following scenario I have the search criteria included in the feature file. The search criteria are valid for, let's say, the QA environment.
Scenario: search user with valid criteria
Given user navigated to login page
And clicked search link
When searched by providing search criteria
|fname1 |lname1 |address1|address2|city1|state1|58884|
Then verify the results displayed
It works fine in the QA environment, but to execute the same scenario in other environments (UAT, stage, ...) I need to modify the search criteria in the feature files to match the data in those environments.
I'm thinking about maintaining the data for the scenarios in properties files for the different environments and reading it based on the execution environment.
If the data is in a properties file, the scenario will look like the one below. Instead of the search criteria, I will give a property name:
Scenario: search user with valid criteria
Given user navigated to login page
And clicked search link
When searched by providing search criteria
|validSearchCriteria|
Then verify the results displayed
Is there another way I could maintain the data for the scenarios across all environments and use it according to the environment the scenario is executed in? Please let me know.
Thanks
I understand the problem, but I don't quite understand the example, so allow me to provide my own example to illustrate how this can be solved.
Let's assume we test a library management software and that in our development environment our test data have 3 books by Leo Tolstoy.
We can have a test case like this:
Scenario: Search by Author
When I search for "Leo Tolstoy" books
Then I should get result "3"
Now let's assume we create our QA test environment and in that environment we have 5 books by Leo Tolstoy. The question is how do we modify our test case so it works in both environments?
One way is to use tags. For example:
@dev_env
Scenario: Search by Author
When I search for "Leo Tolstoy" books
Then I should get result "3"
@qa_env
Scenario: Search by Author
When I search for "Leo Tolstoy" books
Then I should get result "5"
The problem here is that we have lots of code duplication. We can solve that by using Scenario Outline, like this:
Scenario Outline: Search by Author
When I search for "Leo Tolstoy"
Then I should see "<number_of_books>" books
@qa_env
Examples:
| number_of_books |
| 5 |
@dev_env
Examples:
| number_of_books |
| 3 |
Now when you execute the tests, you should use the @dev_env tag in the dev environment and @qa_env in the QA environment.
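How you pass the tag depends on how the suite is launched; with a Maven-driven cucumber-jvm run it is commonly something like the line below (the exact property name varies between cucumber-jvm versions, so treat it as an assumption to verify against your setup):

mvn test -Dcucumber.options="--tags @qa_env"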
I'll be glad to hear some other ways to solve this problem.
You can do this in two ways
Push the programming up, so that you pass in the search criteria by the way you run cucumber
Push the programming down, so that your step definition uses the environment to decide where to get the valid search criteria from
Both of these involve writing a more abstract feature that does not specify the details of the search criteria. So you should end up with a feature like this:
Scenario: Search with valid criteria
When I search with valid criteria
Then I get valid results
I would implement this using the second method and write the step definitions as follows:
When "I search with valid criteria" do
search_with_valid_criteria
end
module SearchStepHelper
def search_with_valid_criteria
criteria = retrieve_criteria
search_with criteria
end
def retrieve_criteria
# get the environment you are working in
# use the environment to retrieve the search criteria
# return the criteria
end
end
World SearchStepHelper
Notice that all I have done is change the place where you do the work, from the feature, to a helper method in a module.
This means that as you are doing your programming in a proper programming language (rather than in the features) you can do whatever you want to get the correct criteria.
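A minimal sketch of what retrieve_criteria could look like, assuming the environment name arrives in an environment variable and the per-environment criteria live in a YAML file (the variable name TEST_ENV and the file path are assumptions):

require 'yaml'

module SearchStepHelper
  def retrieve_criteria
    env  = ENV.fetch('TEST_ENV', 'qa')                    # e.g. qa, uat, stage
    data = YAML.load_file('config/search_criteria.yml')   # hypothetical data file
    data.fetch(env)                                        # the criteria for this environment
  end
end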
This may have been answered elsewhere, but the team I currently work with tends to prefer pushing environment-specific pre-conditions down into the code behind the step definitions.
One way to do this is by setting the environment name as an environment variable in the process running the test runner class, for example ENV set to 'Dev'. Then, @Before each scenario is tested, it is possible to verify the environment in which the scenario is being executed and load any environment-specific data needed by the scenario:
@Before
public void before(Scenario scenario) throws Throwable {
    String scenarioName = scenario.getName();
    // "env" and "envHelper" are assumed to be fields on the test class
    env = System.getenv("ENV");
    if (env == null) {
        env = "Dev";
    }
    envHelper.loadEnvironmentSpecificVariables();
}
Here we set a default value of 'Dev' in case the test runner is run without the environment variable being set. envHelper points to a test utility class whose loadEnvironmentSpecificVariables() method could load data from a JSON, CSV, or XML file specific to the environment being tested against.
An advantage of this approach is that it helps de-clutter feature files of potentially distracting environment-specific metadata, which can hurt the readability of the feature outside the development and testing domains.
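A rough sketch of what such a helper could look like, assuming per-environment .properties files on the test classpath; the class name, file naming scheme, and the fact that the environment name is passed in explicitly are all assumptions rather than anything from the answer above:

import java.io.InputStream;
import java.util.Properties;

public class EnvHelper {
    private final Properties props = new Properties();

    // Loads e.g. testdata-Dev.properties or testdata-QA.properties from the classpath
    public void loadEnvironmentSpecificVariables(String env) throws Exception {
        String resource = "testdata-" + env + ".properties";
        try (InputStream in = getClass().getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IllegalStateException("Missing test data file: " + resource);
            }
            props.load(in);
        }
    }

    // Step definitions can then ask for values such as get("validSearchCriteria")
    public String get(String key) {
        return props.getProperty(key);
    }
}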

mcollective mco facts command blank result output

I have installed MCollective with ActiveMQ, but when I run the following command it completes successfully with a blank result. I want to see the report output.
[root@vsoslp01 tmp]# mco find
vsopspss01
vsopsmgs01
[root@vsoslp01 tmp]# mco facts architecture
Report for fact: architecture
Finished processing 2 / 2 hosts in 293.91 ms
Installed plugins
[root@vsoslp01 tmp]# mco plugin doc
Please specify a plugin. Available plugins are:
Agents:
puppet Run Puppet agent, get its status, and enable/disable it
rpcutil General helpful actions that expose stats and internals to SimpleRPC clients
Aggregate:
average Displays the average of a set of numeric values
boolean_summary Aggregate function that will transform true/false values into predefined strings.
sum Determine the total added value of a set of values
summary Displays the summary of a set of results
Data Queries:
agent Meta data about installed MCollective Agents
fstat Retrieve file stat data for a given file
puppet Information about Puppet agent state
resource Information about Puppet managed resources
Discovery Methods:
flatfile Flatfile based discovery for node identities
mc MCollective Broadcast based discovery
stdin STDIN based discovery for node identities
Validator Plugins:
array Validates that a value is included in a list
ipv4address Validates that a value is an ipv4 address
ipv6address Validates that a value is an ipv6 address
length Validates that the length of a string is less or equal to a specified value
puppet_resource Validates the validity of a Puppet resource type and name
puppet_server_address Validates that a string is a valid Puppet Server and Port for the Puppet agent
puppet_tags Validates that a comma seperated list of tags are valid Puppet class names
puppet_variable Validates that a variable name is a valid Puppet name
regex Validates that a string matches a supplied regular expression
shellsafe Validates that a string is shellsafe
typecheck Validates that a value is of a certain type
EDIT:
[root@vsoslp01 logs]# mco inventory vsopspss01
Inventory for vsopspss01:
Server Statistics:
Version: 2.4.1
Start Time: Tue Feb 18 13:40:58 -0500 2014
Config File: /etc/mcollective/server.cfg
Collectives: mcollective
Main Collective: mcollective
Process ID: 7694
Total Messages: 14
Messages Passed Filters: 14
Messages Filtered: 0
Expired Messages: 0
Replies Sent: 13
Total Processor Time: 0.32 seconds
System Time: 0.09 seconds
Agents:
discovery puppet rpcutil
Data Plugins:
agent fstat puppet
resource
Configuration Management Classes:
No classes applied
Facts:
mcollective => 1
Did you populate the facts file? MCollective reads a YAML file containing all the facts. If it's empty, then it won't see any facts.
Populating the facts file
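For reference, a common way to keep that file populated (an assumption about your setup, not something visible in the question) is to regenerate /etc/mcollective/facts.yaml from Facter via cron or a Puppet-managed resource, for example:

# dump current Facter facts into the file MCollective's YAML facts plugin reads
facter --yaml > /etc/mcollective/facts.yaml

Your mco inventory output showing only mcollective => 1 under Facts suggests the file currently contains just that single entry.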
Have you applied Puppet manifests using the puppet-apply command? Does querying inventory on those systems return anything at all?
$ mco inventory vsopspss01

Drupal module development

I have 2 questions in Drupal:
1. What is the meaning of vid in the node table?
2. I have three vocabularies:
Embassy
Organisation
Bussiness
How can I test in my node-address.tpl.php whether an address node belongs to the Embassy vocabulary?
Node revision ID
I would use hook_node_view() for D7, or hook_nodeapi() with the "view" $op in the case of D6, for this. Putting logic into a template file is usually a bad idea.
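A minimal D7 sketch of that idea; the module name mymodule, the term reference field field_address_type, and the vocabulary machine name embassy are all assumptions:

<?php
/**
 * Implements hook_node_view().
 */
function mymodule_node_view($node, $view_mode, $langcode) {
  if ($node->type != 'address') {
    return;
  }
  // "field_address_type" is assumed to be the term reference field on the node.
  $items = field_get_items('node', $node, 'field_address_type');
  if ($items) {
    $term = taxonomy_term_load($items[0]['tid']);
    if ($term && $term->vocabulary_machine_name == 'embassy') {
      // Expose a flag the node template can check instead of doing the lookup itself.
      $node->is_embassy = TRUE;
    }
  }
}

The template can then simply check the flag rather than querying taxonomy from the .tpl.php.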
vid depends on context: it can mean either Vocabulary ID in taxonomy or reVision ID for nodes. In the latter case it is only used if you have revisions enabled, and it is set to save older versions of nodes.
