In Puppet, I made my own module that adds administrator accounts to our management servers.
class admins::add_admin($username, $userkey) {
  $username.each |String $username| {
    file { "/home/${username}":
      ensure => directory,
      mode   => '0750',
      owner  => $username,
    }
    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }
    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}
$username is an array of the desired usernames and $userkey is an array of the SSH keys.
When the each loop runs, the users are created as expected, but the key is the same for every user (which makes sense, because I don't iterate over the keys yet).
What I want is for this Puppet module to iterate over the two arrays in parallel, but I don't know how to do that.
You could do it this way:
class admins::add_admin (
  Array[Hash[String, String]] $users_data = [], # declare data type and defaults
) {
  $users_data.each |Hash $user| {
    $username = $user['username']
    $userkey  = $user['userkey']
    file { "/home/${username}":
      ensure => directory,
      mode   => '0750',
      owner  => $username,
    }
    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }
    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}
And then you'd pass in data that looks like this:
class { 'admins::add_admin':
  users_data => [
    {
      'username' => 'bill',
      'userkey'  => 'keydata1',
    },
    {
      'username' => 'ted',
      'userkey'  => 'keydata2',
    },
  ],
}
I think it is much better here to restructure your input data than to try to juggle two parallel arrays.
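That said, if you really want to keep two parallel arrays rather than restructuring, each can also give you the element index, and you can use that index to look up the matching key. A rough sketch (I've renamed your parameters to plural forms and left out the home-directory file resource for brevity):
class admins::add_admin (
  Array[String] $usernames = [],
  Array[String] $userkeys  = [],
) {
  # With a two-parameter lambda, each() over an array yields the index and the
  # value, so the matching key can be looked up by position.
  $usernames.each |Integer $i, String $username| {
    $userkey = $userkeys[$i]

    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }

    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}
This only works if the two arrays are kept in the same order and have the same length, which is exactly why the hash-per-user structure above is the safer choice.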
I want to be able to get notified when a server is down.
Puppet: sensu/sensu-puppet v5.9.0
Based on https://github.com/sensu/sensu-go/issues/1960, I tried this code without success.
Since there is a special static handler called "keepalive", I created a set handler named "keepalive" and included my Telegram handler (telegram_ops) in it.
BACKEND Code
class { 'sensu':
  api_host                     => 'sensu3.mydomain.com',
  password                     => '****',
  agent_password               => '****',
  agent_entity_config_password => '****',
  ssl_ca_source                => 'puppet:///modules/common/ssl/ca.crt',
}

include sensu::cli

class { 'sensu::backend':
  ssl_cert_source => 'puppet:///modules/common/ssl/my.crt',
  ssl_key_source  => 'puppet:///modules/common/ssl/my.key',
  config_hash     => {
    'deregistration-handler' => 'deregistration',
    'event-log-file'         => '/var/log/sensu/events.log',
  },
}

sensu_bonsai_asset { 'sensu/check-cpu-usage':
  ensure  => 'present',
  version => 'latest',
}

sensu_check { 'check-cpu':
  ensure         => 'present',
  labels         => { 'contacts' => 'ops' },
  handlers       => ['telegram_ops'],
  command        => 'check-cpu-usage -w 75 -c 85',
  interval       => 60,
  subscriptions  => 'linux',
  publish        => true,
  runtime_assets => ['sensu/check-cpu-usage'],
}

sensu_bonsai_asset { 'sensu/sensu-go-has-contact-filter':
  ensure  => 'present',
  version => '0.2.0',
}

sensu_filter { 'contact_ops':
  ensure         => 'present',
  action         => 'allow',
  runtime_assets => ['sensu/sensu-go-has-contact-filter'],
  expressions    => ['has_contact(event, "ops")'],
}

sensu_filter { 'first_occurrence':
  ensure      => 'present',
  action      => 'allow',
  expressions => ['event.check.occurrences == 1'],
}

sensu_bonsai_asset { 'Thor77/sensu-telegram-handler':
  ensure => 'present',
}

sensu_handler { 'telegram_ops':
  ensure         => 'present',
  type           => 'pipe',
  command        => 'sensu-telegram-handler --api-token **** --chatid -****',
  timeout        => 10,
  runtime_assets => ['Thor77/sensu-telegram-handler'],
  filters        => [
    'is_incident',
    'not_silenced',
    'contact_ops',
    'first_occurrence',
  ],
}

sensu_handler { 'keepalive':
  ensure   => 'present',
  type     => 'set',
  handlers => ['telegram_ops'],
}
AGENT Code (very simple):
class { 'sensu::agent':
  subscriptions => ['default', 'linux', $hostname, 'nuc'],
}
It does not work. If I suddenly shut down a server, nothing happens.
What is the proper way to do this? Is there any other approach?
A long time ago there was another solution: the sensu class had a client_keepalive parameter, but it is not available anymore.
Thanks.
Hello, I have this configuration for a Logstash instance running on my computer:
input {
  exec {
    command  => "powershell -executionpolicy unrestricted -f scripts/windows/process.ps1 command logstash"
    interval => 30
    type     => "process_data"
    codec    => line
    tags     => [ "logstash" ]
  }
}
output {
  if "sometype-logs" in [tags] {
    elasticsearch {
      action             => "index"
      doc_as_upsert      => true
      index              => "sometype-logs-%{+YYYY.MM.dd}"
      hosts              => "localhost:9200"
      template_overwrite => true
    }
  } else {
    elasticsearch {
      action             => "index"
      doc_as_upsert      => true
      index              => "%{type}"
      hosts              => "localhost:9200"
      template_overwrite => true
    }
  }
}
When I list the indexes, I see an index literally named "%{type}".
Why is the index name "%{type}" and not "process_data"?
It is probably just a syntax issue. To reference a field of the event, you must use the field-reference syntax
%{[somefield]}
(see the examples on the documentation page).
So, in your case, try
"%{[type]}"
in place of
"%{type}"
I want to skip certain exec and file resources when there is no change in file content. It is working for the file and service combination.
For example,
file { 'configfile.cfg':
  ensure  => file,
  path    => '/etc/configfile.cfg',
  mode    => '0644',
  owner   => 'root',
  group   => 'root',
  content => template($template_file),
  require => Package[$main_package],
  notify  => Service[$service],
}

service { $service:
  ensure     => $ensure,
  enable     => $enable,
  hasrestart => true,
  hasstatus  => true,
  require    => [ Package[$main_package], File['configfile.cfg'] ],
}
The above code works as expected: the service restarts only if there is a change in /etc/configfile.cfg.
I am following the same approach for the file and exec combination, but it is not working; please see the code below.
exec { 'purge-config-files':
  before  => [File["${config_file_service}"], File["${config_file_host}"]],
  command => "/bin/rm -f ${baseconfigdir}/*",
  notify  => Domain_ip_map[$domain_ip_map_titles],
}

file { 'deployconfig.cfg':
  ensure  => file,
  path    => '/home/path/deployconfig.cfg',
  mode    => '0644',
  owner   => 'root',
  group   => 'root',
  content => "test",
  notify  => Exec['purge-config-files'],
}
This code does not work as expected: even when there is no change in
/home/path/deployconfig.cfg, Exec['purge-config-files'] always
runs. What could be the reason for this?
I found the answer:
exec { 'purge-config-files':
  before      => [File["${config_file_service}"], File["${config_file_host}"]],
  command     => "/bin/rm -f ${baseconfigdir}/*",
  notify      => Domain_ip_map[$domain_ip_map_titles],
  subscribe   => File['deployconfig.cfg'],
  refreshonly => true,
}
I had forgotten to add subscribe and refreshonly. With refreshonly => true, the exec only runs when it receives a refresh event, and subscribe => File['deployconfig.cfg'] sends that event only when the file's content actually changes.
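Stripped down to just the two resources involved, the pattern looks like this (the exec's command and target directory here are illustrative placeholders, not the exact values from my manifest):
file { 'deployconfig.cfg':
  ensure  => file,
  path    => '/home/path/deployconfig.cfg',
  content => 'test',
}

exec { 'purge-config-files':
  command     => '/bin/rm -f /tmp/generated-configs/*', # illustrative target directory
  refreshonly => true,                                  # never run on a normal catalog apply...
  subscribe   => File['deployconfig.cfg'],              # ...only when this file changes
}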
We are storing each of our data types in its own database in CouchDB. What format would the config file need to import data from multiple databases? Or do I need multiple config files, one for importing data from each database into an index? Will appreciate any help.
Thanks.
We use a single config file for multiple databases.
It's not perfect, but functional for now.
Currently looks like:
input {
  couchdb_changes {
    sequence_path => "db1.seq"
    db            => "db1"
    host          => "xxx.xxx.xxx.xxx"
    username      => "xxx"
    password      => "xxx"
    add_field     => {
      "organization" => "db1"
    }
  }
  couchdb_changes {
    sequence_path => "db2.seq"
    db            => "db2"
    host          => "xxx.xxx.xxx.xxx"
    username      => "xxx"
    password      => "xxx"
    add_field     => {
      "organization" => "db2"
    }
  }
}

filter {
  mutate {
    remove_field => [ "_attachments" ]
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    cluster     => "cluster0"
    host        => ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    protocol    => "http"
    index       => "%{[organization]}"
    document_id => "%{[@metadata][_id]}"
  }
}
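If you'd rather keep one file per database, that works too: when Logstash is pointed at a directory of config files, it concatenates them all into a single pipeline, so the effect is the same as the single file above. Roughly (the file names are just examples):
# conf.d/input-db1.conf
input {
  couchdb_changes {
    sequence_path => "db1.seq"
    db            => "db1"
    host          => "xxx.xxx.xxx.xxx"
    username      => "xxx"
    password      => "xxx"
    add_field     => { "organization" => "db1" }
  }
}
# conf.d/input-db2.conf would look the same with the db2 values,
# and conf.d/output.conf would hold the shared filter and elasticsearch output from above.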
We can send an email notification to a particular email address, but I want to send email to different addresses based on patterns in the logs.
For example, say I have three users with these email addresses:
userOne@something.com receives mail if the log contains [userOneModule]
userTwo@something.com receives mail if the log contains [userTwoModule]
userThree@something.com receives mail if the log contains [userThreeModule]
The Logstash version used is 1.3.3.
Is this possible in Logstash, or is there any workaround to achieve something like this?
This is my configuration. Although both 'Security' and 'Portal' match, email is sent to only one.
When I keep only one kind of logs, say Security logs or Portal logs, it works, but when I keep both kinds it only sends email for one of them.
output {
  if [module] == "Security" {
    email {
      to      => "userOne@somemail.com"
      from    => "dummy2161@somemail.com"
      match   => ["%{message}", "severity,ERROR"]
      subject => "Error Occured"
      body    => "%{message}"
      via     => "smtp"
      options => {
        starttls           => "true"
        smtpIporHost       => "smtp.gmail.com"
        port               => "587"
        userName           => "dummy2161@somemail.com"
        password           => "*******"
        authenticationType => "plain"
      }
    }
  }
  if [module] == "Portal" {
    email {
      to      => "userTwo@somemail.com"
      from    => "dummy2161@gmail.com"
      match   => ["%{message}", "severity,ERROR"]
      subject => "Error Occured"
      body    => "%{message}"
      via     => "smtp"
      options => {
        starttls           => "true"
        smtpIporHost       => "smtp.gmail.com"
        port               => "587"
        userName           => "dummy2161@somemail.com"
        password           => "*****"
        authenticationType => "plain"
      }
    }
  }
}
Thanks
You can either store the recipient email address in a field (using conditionals or grok filters to assign the value) and refer to that field in the email output's to parameter, or you can wrap multiple email outputs in conditionals.
Using a field for storing the address:
filter {
  # If the module name is the same as the recipient address's local part
  mutate {
    add_field => { "recipient" => "%{modulename}@example.com" }
  }
  # Otherwise you might have to use conditionals.
  if [modulename] == "something" {
    mutate {
      add_field => { "recipient" => "someuser@example.com" }
    }
  } else {
    mutate {
      add_field => { "recipient" => "otheruser@example.com" }
    }
  }
}

output {
  email {
    to => "%{recipient}"
    ...
  }
}
Wrapping outputs in conditionals:
output {
  if [modulename] == "something" {
    email {
      to => "someuser@example.com"
      ...
    }
  } else {
    email {
      to => "otheruser@example.com"
      ...
    }
  }
}
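Applied to your configuration, the field-based variant would look roughly like this; it reuses the option values from your question and is only a sketch for Logstash 1.3.3, not a tested config:
filter {
  if [module] == "Security" {
    mutate { add_field => { "recipient" => "userOne@somemail.com" } }
  }
  if [module] == "Portal" {
    mutate { add_field => { "recipient" => "userTwo@somemail.com" } }
  }
}

output {
  # A single email output; the recipient comes from the field set above.
  email {
    to      => "%{recipient}"
    from    => "dummy2161@somemail.com"
    match   => ["%{message}", "severity,ERROR"]
    subject => "Error Occured"
    body    => "%{message}"
    via     => "smtp"
    options => {
      starttls           => "true"
      smtpIporHost       => "smtp.gmail.com"
      port               => "587"
      userName           => "dummy2161@somemail.com"
      password           => "*******"
      authenticationType => "plain"
    }
  }
}
This keeps one output block regardless of how many modules you add; each new module only needs another mutate in the filter stage.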