I am trying to read keys from a hiera json file into an array.
The json is as follows:
{
"network::interfaces": {
"eth0": {
"ip": "10.111.22.10"
},
"eth1": {
"ip": "10.111.22.11"
},
"eth2": {
"ip": "10.111.22.12"
}
}
}
In my Puppet code, I am doing this:
$network_interfaces = hiera_array('network::interfaces')
notice($network_interfaces)
Which results in the following:
Notice: Scope(Class[Role::Vagrant]): {eth0 => {ip => 10.111.22.10}, eth1 => {ip => 10.111.22.11}, eth2 => {ip => 10.111.22.12}}
But what I want are just the interfaces: [eth0, eth1, eth2]
Can someone let me know how to do this?
The difference between hiera_array() and plain hiera() has to do with what happens when the requested key (network::interfaces in your case) is present at multiple hierarchy levels. It has very little to do with what form you want the data in, and nothing to do with selecting bits and pieces of data structures. hiera_array() requests an "array-merge" lookup. The more modern lookup() function refers to this as the "unique" merge strategy.
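For illustration, consider a hypothetical hierarchy where the same key appears at two levels; a plain lookup returns only the most specific value, while an array-merge ("unique") lookup combines the values from all levels:

```yaml
# nodes/web01.yaml (more specific level) -- hypothetical data
ntp::servers: ['ntp1.example.com']

# common.yaml (least specific level)
ntp::servers: ['ntp2.example.com']

# hiera('ntp::servers')       -> ['ntp1.example.com']
# hiera_array('ntp::servers') -> both values combined into one array
```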
It seems unlikely that an array-merge lookup is in fact what you want. In that case, the easiest thing to do is read the whole hash and extract the keys:
$network_interfaces = keys(hiera('network::interfaces'))
In Puppet 4 you'll need to use the keys() function provided by the puppetlabs/stdlib module. From Puppet 5 on, that function appears in core Puppet.
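On current Puppet, the same lookup is usually written with lookup() instead of the now-deprecated hiera(); a minimal sketch, assuming the JSON from the question:

```puppet
# Plain (first-found) lookup of the whole hash, then extract its keys.
# keys() needs puppetlabs/stdlib on Puppet 4; it is built in from Puppet 5 on.
$network_interfaces = keys(lookup('network::interfaces'))
notice($network_interfaces)
```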
Related
I'm trying to pass a list of hashes into a class. I need to pass one or more objects into my class; each object has a small set of data that is used to generate a templated script from the data points found in the hash.
I've successfully used Hiera Arrays & Hashes:
mymods::keyvaultcerts::array::certNames: ['cert1','cert2','cert3']
mymods::chocopackages::hash::packages:
  dotnetfx: '4.8'
  urlrewrite: '2.1'
  azure-cli: '2.31'
  vcredist: '12.0'
The above has worked great; I can access the Hash data very easily:
class mymods::chocopackages::hash (
  Hash $packages,
) {
  $packages.each | $packageName, $packageVersion | {
    package { $packageName:
      ensure => $packageVersion,
    }
  }
}
What I'm trying to do now builds on the above. I need to pass a list of objects into my class, so I'm trying to use an array of hash objects:
profile::octopus::worker::workerConfigurations:
  - Port: 10933
    Pool: "General Worker Pool"
    User: "Dummy1"
    DisplaySuffix: ""
  - Port: 10934
    Pool: "General Dev Pool"
    User: "Dummy2"
    DisplaySuffix: "Dev"
  - Port: 10935
    Pool: "General QA Pool"
    User: "Dummy3"
    DisplaySuffix: "_QA"
Then, in my class I'm trying to iterate through the list and access the data like so ...
class profile::octopus::worker(
Array $workerConfigurations
){
$workerConfigurations.each | Integer $workerIndex, Hash $workerConfigurationData | {
$workerConfigurationData.each | $workerPort, $workerPool, $workerUser, $workerDisplaySuffix | {
notify{"\$workerPort ${$workerPort}" :}
}
}
}
What I was expecting, was to see notifications of the ports (10933, 10934, 10935). However, I'm receiving an error on the client-side:
Error while evaluating a Method Call, 'each' block expects between 1 and 2 arguments, got 4
My question is: Is this possible? If not, are there any recommendations on how to pass in a list of one or more objects into my class?
The built-in each function accepts a lambda that takes either one or two parameters. The significance of those parameters depends on whether one or two are given and on whether the first argument to each() is an array or a hash.
In no event may the block accept four parameters:
$workerConfigurationData.each | $workerPort, $workerPool, $workerUser, $workerDisplaySuffix | {
notify{"\$workerPort ${$workerPort}" :}
}
Nor is it clear what you expect the each to be doing for you there. Rather than iterating, you seem to want to select and use one of the mappings from the hash. You do that by subscripting the hash with the wanted key(s):
notify { "Port: ${workerConfigurationData['Port']}" :}
That has nothing in particular to do with Hiera, by the way. You can declare and use hashes (and arrays) without Hiera being involved.
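Putting the two points together, a minimal sketch of the corrected class (names taken from the question): iterate the array with a one-parameter lambda, then subscript each hash for the field you want:

```puppet
class profile::octopus::worker (
  Array[Hash] $workerConfigurations,
) {
  $workerConfigurations.each |Hash $workerConfigurationData| {
    # Subscript the hash for the key you want; no inner each needed.
    notify { "Port: ${workerConfigurationData['Port']}": }
  }
}
```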
I am trying to develop a puppet class with a defined resource which creates the configuration for a website.
One of the things that the defined resource has to do is assign the IP address of the website to a dummy interface. Due to constraints of the project this is done with NetworkManager.
So I have to generate a file like
[connection]
id=dummydsr
uuid=50819d31-8967-4321-aa34-383f4a658789
type=dummy
interface-name=dummydsr
permissions=
[ipv4]
method=manual
#IP Addresses come here
ipaddress1=1.2.3.4/32
ipaddress2=5.6.7.8/32
ipaddress3=8.7.6.5/32
[ipv6]
method=ignore
There is to be a line ipaddressX=... for every instance of the defined resource.
My problem is: how do I track the number of times the defined resource has been instantiated, so I can somehow increment a counter and generate the ipaddress lines? Or, for each instantiated defined resource, append the IP address to an array which I can later use to build the file.
If I understand you correctly (and I'm not certain that I do), you would want to do something like this:
define mytype(
Integer $count,
...
) {
file { 'some_network_manager_file':
content => template(...)
}
}
And then you would have a loop:
$mystuff.each |$count, $data| {
mytype { ...:
count => $count,
...
}
}
The key insight here may be that the each function has some magic in it that allows you to get the index if you need it; see also this answer.
Now, I think that's how it will work, without my having spent time researching NetworkManager. If you provide more of your code, I may be able to update this to be more helpful.
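In isolation, that index "magic" looks like this: when the lambda given to each takes two parameters and the subject is an array, the first parameter receives the element's index (hypothetical data, for illustration):

```puppet
$addresses = ['1.2.3.4', '5.6.7.8']
$addresses.each |Integer $index, String $address| {
  # The index is zero-based, hence the + 1 for ipaddress1, ipaddress2, ...
  notice("ipaddress${$index + 1}=${address}/32")
}
```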
This is less than ideal since I would prefer to have it inside the defined resource, but since I instantiate the defined resource with the data from a hash I use said hash to iterate that part.
class xxx_corp_webserver (
Hash $websites ={}
){
create_resources('xxx_corp_webserver::website', $websites)
# This would be nicer inside the defined resource, but I did not find any other way
# Build an array with the IP addresses which are for DSR
# (filter first, so websites without enabledsr do not leave undef entries in the array)
$ipaddresses = $websites.filter | $r | {
$r[1]['enabledsr']
}.map | $r | {
$r[1]['ipaddress']
}
# For each DSR address add the line
$ipaddresses.each | Integer $index , String $ipaddress | {
$num = $index+1
file_line{"dummydsr-ipaddress${num}":
ensure => present,
path => '/etc/NetworkManager/system-connections/dummydsr',
line => "address${num} = ${ipaddress}/32",
match => "^address.* = ${ipaddress}/32",
after => '# IP Addresses come here',
notify => Service['NetworkManager'],
require => File['/etc/NetworkManager/system-connections/dummydsr'],
}
}
}
I am trying to set up a set of nodes running various parts of the ELK stack. In particular, I've got a set of systems running Elasticsearch and I'd like to fill out my logstash config file so that it knows which systems have ES on them.
I can see something like this in the logstash config (obviously untested):
output {
elasticsearch {
hosts => [
<% @es_hosts.each do |host| -%>
"<%= host %>",
<% end -%>
]
}
}
But what I can't figure out is how to collect the hostnames of the systems which are running Elasticsearch. I've got modules which apply RabbitMQ and ES, and they already export some resources, but this one looks like it just needs node names for merging into a list.
--------EDIT BELOW--------
I stumbled across datacat after examining some of the PF modules I use, and thought it might be a candidate. Here's what I've done, posted here because it's not working the way I would have expected.
On my elasticsearch nodes (there are several):
@@datacat_fragment { "${::hostname} in hosts":
tag => ['elasticsearch_cluster'],
target => '/etc/logstash/conf.d/master.conf',
data => {
host => ["${::hostname}" ],
}
}
Then, on the logstash node that needs to output to these ES nodes:
Datacat_fragment<| tag == 'elasticsearch_cluster' |>
datacat { '/etc/lostash/conf.d/master.conf':
template => "${module_name}/logstash-master.conf.erb",
}
Finally, the template itself:
input { [...snip...] }
filter {}
output {
elasticsearch {
<% @data.keys.sort.each do |host| %>
hosts => [
<%= @data[host].sort.join(',') %>
]
<% end %>
}
}
Sadly, the result of all this is
input { [...snip...] }
filter {}
output {
elasticsearch {
}
}
So at present, it looks like the exported resources aren't being instantiated as expected and I can't see why. If I add a datacat_fragment defined the same way but local to the logstash manifest, the data gets inserted into the .conf file just fine. It's just the ones from the ES nodes that are being ignored.
To further complicate matters, the input section needs to have a value inserted into it that's based on the system receiving the file. So there's one part that needs to behave like a traditional template, and another section that needs to have data inserted from multiple sources. Datacat looks promising, but is there another way to do this? Concat with an inline template somehow?
How to do translation dictionary dynamically in logstash based on field value?
For example my current configuration is:
if [host] == "1.1.1.1" {
translate {
field => "[netflow][input_snmp]"
destination => "[netflow][interface_in]"
dictionary_path => "/etc/logstash/yaml/1.1.1.1.yml"
}
}
if [host] == "2.2.2.2" {
translate {
field => "[netflow][input_snmp]"
destination => "[netflow][interface_in]"
dictionary_path => "/etc/logstash/yaml/2.2.2.2.yml"
}
}
Is there a generic way to achieve this?
Logstash version 2.2.4
Thanks
I guess you can use it as:
translate {
field => "[netflow][input_snmp]"
destination => "[netflow][interface_in]"
dictionary_path => "/etc/logstash/yaml/%{host}.yml"
}
See: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf
You can't load dictionary files dynamically depending on a field value; it's not a question of syntax.
At least for the moment (current logstash version is 7.6.2)
All dictionary files are loaded in memory at logstash startup (and I suppose after a logstash configuration reload), before any event is processed.
Then the contents of the existing dictionary files are dynamically reloaded according to the refresh_interval option.
The dictionary paths can't be modified "at run time" depending on the current event.
In the Elastic support forums you can find further explanation (the first link even has a reference to the source code involved) and workarounds, but in the end they all revolve around the same idea shown in your config:
set up a number of static dictionary file names and control their usage with conditionals. You may use environment variables in the dictionary_path, but they will be resolved once per Logstash startup/reload.
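For completeness, a sketch of the environment-variable variant (DICT_NAME is a hypothetical variable name); note that it is resolved once at startup/reload, not per event:

```
translate {
  field => "[netflow][input_snmp]"
  destination => "[netflow][interface_in]"
  dictionary_path => "/etc/logstash/yaml/${DICT_NAME}.yml"
}
```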
https://discuss.elastic.co/t/dynamic-dictionary/138798/5
https://discuss.elastic.co/t/logstash-translate-plugin-dynamic-dictionary-path/129889
I am looking at puppet code that looks something like
class {
users => {
'repl#%' => {
ensure => present,
.
}
}
}
What does "repl" do? I can't find much information online.
The amount of anonymization almost hides the important points, but I believe that this is supposed to be the declaration of a hash, meant for use with the create_resources function.
It works like this: if you have a large number of resources that should not take up all the space in your class (this reason is contrived), you can convert them to a hash structure instead. Take this resource declaration:
mysql_grant {
'repl#%':
ensure => present,
rights => 'REPLICATION CLIENT';
}
This becomes a hash, stored in a variable.
$users = {
'repl#%' => {
ensure => present,
rights => 'REPLICATION CLIENT',
}
}
This can then be used to declare this (and more resources in the hash, if there is more than one) in a simple line.
create_resources('mysql_grant', $users)
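On Puppet 4 and later, the same hash can also be expanded without create_resources, by iterating it and splatting each attribute hash onto a resource declaration; a minimal sketch:

```puppet
$users.each |$title, $attributes| {
  mysql_grant { $title:
    * => $attributes,
  }
}
```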
I'm guessing that you are looking at grants, because repl#% is typical MySQL notation meaning the user named "repl" connecting from any host.
TL;DR it is a domain specific identifier and has no special meaning to Puppet itself.