Puppet: How to gather selected node names in manifest?

I am trying to set up a set of nodes running various parts of the ELK stack. In particular, I've got a set of systems running Elasticsearch and I'd like to fill out my logstash config file so that it knows which systems have ES on them.
I can see something like this in the logstash config (obviously untested):
output {
  elasticsearch {
    hosts => [
      <% @es_hosts.each do |host| -%>
      "<%= host %>",
      <% end -%>
    ]
  }
}
But what I can't figure out is how to collect the hostnames of the systems which are running Elasticsearch. I've got modules which apply RabbitMQ and ES, and they already export some resources, but this case looks like it just needs node names for merging into a list.
--------EDIT BELOW--------
I stumbled across datacat after examining some of the PF modules I use, and thought it might be a candidate. Here's what I've done, posted here because it's not working the way I would have expected.
On my elasticsearch nodes (there are several):
@@datacat_fragment { "${::hostname} in hosts":
  tag    => ['elasticsearch_cluster'],
  target => '/etc/logstash/conf.d/master.conf',
  data   => {
    host => ["${::hostname}"],
  },
}
Then, on the logstash node that needs to output to these ES nodes:
Datacat_fragment <| tag == 'elasticsearch_cluster' |>

datacat { '/etc/lostash/conf.d/master.conf':
  template => "${module_name}/logstash-master.conf.erb",
}
Finally, the template itself:
input { [...snip...] }
filter {}
output {
  elasticsearch {
    <% @data.keys.sort.each do |host| %>
    hosts => [
      <%= @data[host].sort.join(',') %>
    ]
    <% end %>
  }
}
Sadly, the result of all this is:
input { [...snip...] }
filter {}
output {
  elasticsearch {
  }
}
So at present, it looks like the exported resources aren't being instantiated as expected and I can't see why. If I add a datacat_fragment defined the same way but local to the logstash manifest, the data gets inserted into the .conf file just fine. It's just the ones from the ES nodes that are being ignored.
To further complicate matters, the input section needs to have a value inserted into it that's based on the system receiving the file. So there's one part that needs to behave like a traditional template, and another section that needs to have data inserted from multiple sources. Datacat looks promising, but is there another way to do this? Concat with an inline template somehow?
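For what it's worth, here is a rough, untested sketch of the concat variant mentioned above, assuming the puppetlabs/concat module; the fragment titles and order values are arbitrary, and it only covers the output section.
On each ES node, export a fragment carrying that node's hostname:
@@concat::fragment { "es_host_${::hostname}":
  target  => '/etc/logstash/conf.d/master.conf',
  content => "      \"${::hostname}\",\n",
  order   => '20',
  tag     => 'elasticsearch_cluster',
}
On the logstash node, build the file and collect the exported fragments:
concat { '/etc/logstash/conf.d/master.conf': }

# Static header and footer around the collected host list.
concat::fragment { 'es_hosts_header':
  target  => '/etc/logstash/conf.d/master.conf',
  content => "output {\n  elasticsearch {\n    hosts => [\n",
  order   => '10',
}

concat::fragment { 'es_hosts_footer':
  target  => '/etc/logstash/conf.d/master.conf',
  content => "    ]\n  }\n}\n",
  order   => '30',
}

# Pull in the host fragments exported by the ES nodes.
Concat::Fragment <<| tag == 'elasticsearch_cluster' |>>
The per-host input section could then be an ordinary local concat::fragment whose content comes from template(), ordered before the output block.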

Related

How to read keys into array?

I am trying to read keys from a hiera json file into an array.
The json is as follows:
{
  "network::interfaces": {
    "eth0": {
      "ip": "10.111.22.10"
    },
    "eth1": {
      "ip": "10.111.22.11"
    },
    "eth2": {
      "ip": "10.111.22.12"
    }
  }
}
In my Puppet code, I am doing this:
$network_interfaces = hiera_array('network::interfaces')
notice($network_interfaces)
Which results in the following:
Notice: Scope(Class[Role::Vagrant]): {eth0 => {ip => 10.111.22.10}, eth2 => {ip => 10.111.22.11}, eth3 => {ip => 10.111.22.12}}
But what I want are just the interfaces: [eth0, eth1, eth2]
Can someone let me know how to do this?
The difference between hiera_array() and plain hiera() has to do with what happens when the requested key (network::interfaces in your case) is present at multiple hierarchy levels. It has very little to do with what form you want the data in, and nothing to do with selecting bits and pieces of data structures. hiera_array() requests an "array-merge" lookup. The more modern lookup() function refers to this as the "unique" merge strategy.
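To make the distinction concrete, here is a small illustration with a hypothetical ntp_servers key defined at two hierarchy levels:
# common.yaml
ntp_servers:
  - ntp0.example.com

# nodes/db01.yaml (a higher-priority level)
ntp_servers:
  - ntp1.example.com

# In Puppet:
$all   = hiera_array('ntp_servers')  # => ['ntp1.example.com', 'ntp0.example.com'] (merged across levels)
$first = hiera('ntp_servers')        # => ['ntp1.example.com'] (first match wins)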
It seems unlikely that an array-merge lookup is in fact what you want. In that case, the easiest thing to do is read the whole hash and extract the keys:
$network_interfaces = keys(hiera('network::interfaces'))
In Puppet 4 you'll need to use the keys() function provided by the puppetlabs/stdlib module. From Puppet 5 on, that function appears in core Puppet.
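On a Hiera 5 setup, a minimal equivalent with lookup(), assuming the same data as above, would be:
$network_interfaces = keys(lookup('network::interfaces'))
notice($network_interfaces)  # => [eth0, eth1, eth2]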

Logstash Filter not working when something has a period in the name

So I need to write a filter that changes all the periods in field names to underscores. I am using mutate, and I can do some things but not others. For reference, here is my current output in Kibana.
See those fields that say "packet.event-id" and so forth? I need to rename all of those. Here is the filter I wrote, and I do not know why it doesn't work:
filter {
  json {
    source => "message"
  }
  mutate {
    add_field => { "pooooo" => "AW CMON" }
    rename => { "offset" => "my_offset" }
    rename => { "packet.event-id" => "my_packet_event_id" }
  }
}
The problem is that I CAN add a field, and the renaming of "offset" WORKS. But when I try to do the packet one, nothing changes. I feel like this should be simple, and I am very confused as to why only the one with a period in it doesn't work.
I have refreshed the index in Kibana, and still nothing changes. Anyone have a solution?
When fields show up in dotted notation in Kibana, it's because the document you originally loaded in JSON format has nested structure.
To access that structure in Logstash, you need to use [packet][event-id] in your rename instead of packet.event-id.
For example:
filter {
  mutate {
    rename => {
      "[packet][event-id]" => "my_packet_event_id"
    }
  }
}
You can do the JSON parsing directly in Filebeat by adding a few lines of config to your filebeat.yml.
filebeat.prospectors:
  - paths:
      - /var/log/snort/snort.alert
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log
You shouldn't need to rename the fields. If you do need to access a field in Logstash, you can reference it as [packet][length], for example. See the Logstash field references documentation for the syntax.
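For instance, a conditional using that field-reference syntax might look like this (the 1500-byte threshold is just an illustration):
filter {
  if [packet][length] > 1500 {
    mutate { add_tag => ["large_packet"] }
  }
}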
And by the way, there is a de_dot filter for replacing dots in field names, but it shouldn't be needed in this case.

How to do translation dictionary dynamically in logstash based on field value?

For example my current configuration is:
if [host] == "1.1.1.1" {
  translate {
    field => "[netflow][input_snmp]"
    destination => "[netflow][interface_in]"
    dictionary_path => "/etc/logstash/yaml/1.1.1.1.yml"
  }
}
if [host] == "2.2.2.2" {
  translate {
    field => "[netflow][input_snmp]"
    destination => "[netflow][interface_in]"
    dictionary_path => "/etc/logstash/yaml/2.2.2.2.yml"
  }
}
Is there a generic way to achieve this?
Logstash version 2.2.4
Thanks
I guess you can use it like this:
translate {
  field => "[netflow][input_snmp]"
  destination => "[netflow][interface_in]"
  dictionary_path => "/etc/logstash/yaml/%{host}.yml"
}
Check that: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf
You can't load dictionary files dynamically depending on a field value; it's not a question of syntax.
At least for the moment (the current Logstash version is 7.6.2).
All dictionary files are loaded in memory at logstash startup (and I suppose after a logstash configuration reload), before any event is processed.
Then the contents of the existing dictionary files are dynamically reloaded according to the refresh_interval option.
The dictionary paths can't be modified "at run time" depending on the current event.
In the Elastic support forums you can find further explanation (the first link even references the source code involved) and workarounds, but in the end it comes down to the same idea shown in your config:
define a set of static dictionary file names and control their usage with conditionals. You may use environment variables in dictionary_path, but they are resolved once per Logstash startup/reload.
https://discuss.elastic.co/t/dynamic-dictionary/138798/5
https://discuss.elastic.co/t/logstash-translate-plugin-dynamic-dictionary-path/129889
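As a rough illustration of the environment-variable workaround, DICT_DIR below is a hypothetical variable set in Logstash's environment; it is resolved once at startup/reload, not per event:
translate {
  field => "[netflow][input_snmp]"
  destination => "[netflow][interface_in]"
  dictionary_path => "${DICT_DIR}/netflow-interfaces.yml"
}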

Puppet; Call another .pp

So I am using the https://forge.puppetlabs.com/pdxcat/nrpe module to try to figure out automation of NRPE across hosts.
One of the available usages is
nrpe::command {
  'check_users':
    ensure  => present,
    command => 'check_users -w 5 -c 10';
}
Is there any way to make a "group" of these commands and have them called on specific nodes?
For example:
you have 5 different nrpe::command resources, each defining a different check, and then call those specific checks?
I am basically trying to figure out if I could group certain checks/commands together instead of setting up a ton of text in the main sites.pp file. This would also allow for customized templates/configurations across numerous nodes.
Thanks!
EDIT:
This is the define and what it's supposed to do when called with the 'check_users' portion. If I could have a class with a set of nrpe::command resources and just call that class through the module, it should work. Sorry, though; still new at Puppet. Thanks again.
define nrpe::command (
  $command,
  $ensure       = present,
  $include_dir  = $nrpe::params::nrpe_include_dir,
  $libdir       = $nrpe::params::libdir,
  $package_name = $nrpe::params::nrpe_packages,
  $service_name = $nrpe::params::nrpe_service,
  $file_group   = $nrpe::params::nrpe_files_group,
) {
  file { "${include_dir}/${title}.cfg":
    ensure  => $ensure,
    content => template('nrpe/command.cfg.erb'),
    owner   => root,
    group   => $file_group,
    mode    => '0644',
    require => Package[$package_name],
    notify  => Service[$service_name],
  }
}
What version are you talking about? In the latest Puppet versions, inheritance is deprecated, so you shouldn't use it.
The easiest way would be to use "baselines".
Assuming you are using a manifests directory (manifest = $confdir/manifests inside your puppet.conf), simply create a $confdir/manifests/minimal.pp (or $confdir/manifests/nrpe_config.pp or whatever class name you want to use) with the content below:
class minimal {
  nrpe::command { 'check_users':
    ensure  => present,
    command => 'check_users -w 5 -c 10',
  }
}
Then just call this class inside your node definitions (let's say in $confdir/manifests/my_node.pp):
node 'my_node.foo.bar' {
  include minimal
}
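Following the same pattern, one class can group several nrpe::command resources so a node only needs a single include; the extra check names and thresholds below are just examples:
class nrpe_checks::baseline {
  nrpe::command {
    'check_users':
      command => 'check_users -w 5 -c 10';
    'check_load':
      command => 'check_load -w 15,10,5 -c 30,25,20';
    'check_root_disk':
      command => 'check_disk -w 20% -c 10% -p /';
  }
}

node 'web01.foo.bar' {
  include nrpe_checks::baseline
}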

Puppet: unable to get hiera variable

I've been using Hiera for several weeks now and all was working fine until a few days ago, when I started to get this kind of message:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item nom in any Hiera data file and no default supplied on node d0puppetclient.victor-buck.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
So I made a very simple test to check whether the problem came from my last code changes, and I'm still getting this message. I can't get any Hiera variable anymore.
Below is the test I made:
hiera.yaml:
---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - common
site.pp:
# /etc/puppet/manifests/site.pp
case $operatingsystem {
  'Solaris':           { include role::solaris }
  'RedHat', 'CentOS':  { include redhat::roles::common }
  /^(Debian|Ubuntu)$/: { include role::debian }
  # default:           { include role::generic }
}
case $hostname {
  /^d0puppetclient/: { include test }
}
test.pp:
class test {
  $nom = hiera('nom')
  file { "/root/test.txt":
    ensure => file,
    source => "/etc/puppet/test.txt.erb",
  }
}
test.txt.erb:
<%= nom %>
Any idea how to fix this? I thought this could be a file access rights issue, so I tried granting access (755) on some files, but it's not working...
You need to define nom in your common.yaml in order for it to hold a value. You can set a default value and conditionally create the file if you don't plan on setting it.
class test {
  $nom = hiera('nom', false)
  if $nom {
    file { '/root/test.txt':
      ensure  => file,
      content => template('test/test.txt.erb'),
    }
  }
}
Notice how I used content instead of source. When using ERB templates, you need to specify the content using the template() function.
Using Templates
If you use source, it expects a plain file rather than an ERB template.
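For completeness, the nom key itself would live in the common.yaml referenced by your hiera.yaml; the value here is just an example:
# /etc/puppet/hieradata/common.yaml
---
nom: 'some value'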
Hope this helps.
