Sensu-Go: Set handler for keepalive check with Puppet

I want to be able to get notified when a server is down.
Puppet: sensu/sensu-puppet v5.9.0
Based on https://github.com/sensu/sensu-go/issues/1960, I tried this code without success.
Since there is a special static handler called "keepalive", I created a set handler named "keepalive" and included my Telegram handler (telegram_ops) in it.
BACKEND Code
class { 'sensu':
  api_host                     => 'sensu3.mydomain.com',
  password                     => '****',
  agent_password               => '****',
  agent_entity_config_password => '****',
  ssl_ca_source                => 'puppet:///modules/common/ssl/ca.crt',
}

include sensu::cli

class { 'sensu::backend':
  ssl_cert_source => 'puppet:///modules/common/ssl/my.crt',
  ssl_key_source  => 'puppet:///modules/common/ssl/my.key',
  config_hash     => {
    'deregistration-handler' => 'deregistration',
    'event-log-file'         => '/var/log/sensu/events.log',
  },
}

sensu_bonsai_asset { 'sensu/check-cpu-usage':
  ensure  => 'present',
  version => 'latest',
}

sensu_check { 'check-cpu':
  ensure         => 'present',
  labels         => { 'contacts' => 'ops' },
  handlers       => ['telegram_ops'],
  command        => 'check-cpu-usage -w 75 -c 85',
  interval       => 60,
  subscriptions  => 'linux',
  publish        => true,
  runtime_assets => ['sensu/check-cpu-usage'],
}

sensu_bonsai_asset { 'sensu/sensu-go-has-contact-filter':
  ensure  => 'present',
  version => '0.2.0',
}

sensu_filter { 'contact_ops':
  ensure         => 'present',
  action         => 'allow',
  runtime_assets => ['sensu/sensu-go-has-contact-filter'],
  expressions    => ['has_contact(event, "ops")'],
}

sensu_filter { 'first_occurrence':
  ensure      => 'present',
  action      => 'allow',
  expressions => ['event.check.occurrences == 1'],
}

sensu_bonsai_asset { 'Thor77/sensu-telegram-handler':
  ensure => 'present',
}

sensu_handler { 'telegram_ops':
  ensure         => 'present',
  type           => 'pipe',
  command        => 'sensu-telegram-handler --api-token **** --chatid -****',
  timeout        => 10,
  runtime_assets => ['Thor77/sensu-telegram-handler'],
  filters        => [
    'is_incident',
    'not_silenced',
    'contact_ops',
    'first_occurrence',
  ],
}

sensu_handler { 'keepalive':
  ensure   => 'present',
  type     => 'set',
  handlers => ['telegram_ops'],
}
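As a sanity check of what this is supposed to produce (my assumption, not taken from the module docs): once the catalog has applied, the handler can be inspected on the backend with sensuctl. Assuming the default namespace, sensuctl handler info keepalive --format yaml should report something like:

type: Handler
api_version: core/v2
metadata:
  name: keepalive
  namespace: default
spec:
  type: set
  handlers:
  - telegram_ops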
AGENT Code (Very simple code.)
class { 'sensu::agent':
  subscriptions => ['default', 'linux', $hostname, 'nuc'],
}
It does not work. If I suddenly shut down a server, nothing happens.
What is the proper way to do this?
Is any other approach possible?
A long time ago there was another solution: the sensu class had a client_keepalive parameter, but it is not available anymore.
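One other approach I am wondering about (an assumption on my part, not something I have verified with this module version): Sensu Go 5.17+ agents accept a keepalive-handlers setting, so the Telegram handler could be attached to keepalive events from the agent side instead of via a "keepalive" set handler. If sensu::agent exposes config_hash, that might look roughly like:

class { 'sensu::agent':
  subscriptions => ['default', 'linux', $hostname, 'nuc'],
  config_hash   => {
    # assumption: agents run Sensu Go >= 5.17, which supports keepalive-handlers
    'keepalive-handlers' => ['telegram_ops'],
  },
}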
Thanks.

Related

How to map array inside message in Logstash HTTP Output

I am using Logstash to update existing Elasticsearch documents by query with an additional field that contains aggregate values extracted from a PostgreSQL table.
I use the elasticsearch output to load one index using document_id and the http output to update another index that has a different document_id, but I am receiving errors:
[2023-02-08T17:58:12,086][ERROR][logstash.outputs.http ][main][b64f19821b11ee0df1bd165920785876cd6c5fab079e27d39bb7ee19a3d642a4] [HTTP Output Failure] Encountered non-2xx HTTP code 400 {:response_code=>400, :url=>"http://localhost:9200/medico/_update_by_query", :event=>#LogStash::Event:0x19a14c08}
This is my pipeline configuration:
input {
  jdbc {
    # Postgres jdbc connection string to our database, mydb
    jdbc_connection_string => "jdbc:postgresql://handel:5432/mydb"
    statement_filepath => "D:\ProgrammiUnsupported\logstash-7.15.2\config\nota_sede.sql"
  }
}

filter {
  aggregate {
    task_id => "%{idCso}"
    code => "
      map['idCso'] = event.get('idCso')
      map['noteSede'] ||= []
      map['noteSede'] << {
        'id' => event.get('idNota'),
        'tipo' => event.get('tipoNota'),
        'descrizione' => event.get('descrizione'),
        'data' => event.get('data'),
        'dataInizio' => event.get('dataInizio'),
        'dataFine' => event.get('dataFine')
      }
      event.cancel()"
    push_previous_map_as_event => true
    timeout => 60
    timeout_tags => ['_aggregatetimeout']
  }
}
output {
  stdout { codec => rubydebug { metadata => true } }

  # this works
  elasticsearch {
    hosts => "https://localhost:9200"
    document_id => "STRUTTURA_%{idCso}"
    index => "struttura"
    action => "update"
    user => "user"
    password => "password"
    ssl => true
    cacert => "/usr/share/logstash/config/ca.crt"
  }

  http {
    url => "http://localhost:9200/medico/_update_by_query"
    user => "elastic"
    password => "changeme"
    http_method => "post"
    format => "message"
    content_type => "application/json"
    message => '{
      "query": {
        "term": {
          "idCso": "%{idCso}"
        }
      },
      "script": {
        "source": "ctx._source.noteSede=params.noteSede",
        "lang": "painless",
        "params": {
          "noteSede": "%{noteSede}"
        }
      }
    }'
  }
}
The stdout output shows me the documents sent to the output, like this:
{
  "query" => {
    "term" => {
      "idCso" => "859119"
    }
  },
  "script" => {
    "source" => "ctx._source.noteSede=params.noteSede",
    "lang" => "painless",
    "params" => {
      "noteSede" => "{dataFine=null, dataInizio=2020-02-13, descrizione=?, tipo=DB, id=6390644, data=2020-02-13 12:26:58.409},{dataFine=null, dataInizio=2020-02-13, descrizione=?, tipo=DE, id=6390645, data=2020-02-13 12:26:58.41}"
    }
  }
}
How can I set the noteSede array field in the message sent to _update_by_query?
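One approach that might work (a sketch, not verified against this pipeline): serialize the array to a JSON string in a separate field with a ruby filter, and interpolate that field in the message instead of the raw array. For example, after the aggregate filter:

filter {
  ruby {
    # hypothetical field name noteSedeJson: the array rendered as a JSON string
    init => "require 'json'"
    code => "event.set('noteSedeJson', event.get('noteSede').to_json)"
  }
}

Then in the http output message use "noteSede": %{noteSedeJson} (without surrounding quotes), since the interpolated value is already valid JSON.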

Logstash not replacing "%type" with value

Hello, I have this configuration for a Logstash instance running on my computer:
input {
  exec {
    command => "powershell -executionpolicy unrestricted -f scripts/windows/process.ps1 command logstash"
    interval => 30
    type => "process_data"
    codec => line
    tags => [ "logstash" ]
  }
}

output {
  if "sometype-logs" in [tags] {
    elasticsearch {
      action => "index"
      doc_as_upsert => true
      index => "sometype-logs-%{+YYYY.MM.dd}"
      hosts => "locahost:9200"
      template_overwrite => true
    }
  } else {
    elasticsearch {
      action => "index"
      doc_as_upsert => true
      index => "%{type}"
      hosts => "localhost:9200"
      template_overwrite => true
    }
  }
}
When displaying the indexes, I see an index literally named "%type".
Why is the index name "%type" and not "process_data"?
It's probably just a syntax issue. To use a field of the event data, you must use this syntax:
%{[somefield]}
(see the example on this documentation page)
So, in your case, try this:
"%{[type]}"
in place of
"%{type}"

Logstash Not Recognizing The Lat/Lon Fields in JSON Format

I have fields like A_Latitude, A_Longitude, B_Latitude and B_Longitude. I would like to make use of this data and create maps in Kibana. The problem is that data is getting into Elasticsearch, but the geojson fields created in the Logstash filter are not being recognized, and no data is being fed into geo_point1 and geo_point2.
Hence, I first created a geo_point mapping in Kibana Dev Tools as follows:
PUT cc-test
{
  "mappings": {
    "properties": {
      "geo_point1": {
        "type": "geo_point"
      },
      "geo_point2": {
        "type": "geo_point"
      }
    }
  }
}
I have configured my Logstash config file the following way:
input {
  jdbc {
    # Postgres jdbc connection string to our database, mydb
    jdbc_connection_string => "some string"
    # The user we wish to execute our statement as
    jdbc_user => "User"
    jdbc_password => "Password"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/apps/ELK/logstash/driver/ngdbc-2.4.56.jar"
    jdbc_driver_class => "com.sap.db.jdbc.Driver"
    # our query
    #jdbc_validate_connection => true
    #schedule => "* * * * *"
    #record_last_run => true
    # last_run_metadata_path => "login.txt"
    statement => "SELECT
      inputdata.A_LATITUDE, inputdata.A_LONGITUDE, inputdata.B_LATITUDE,
      inputdata.B_LONGITUDE, outputdata.BANDWIDTH, inputdata.SEQUENCEID,
      inputdata.REQUESTTIMESTAMP
      FROM inputdata, outputdata
      WHERE
      inputdata.SEQUENCEID = outputdata.SEQUENCEID
      AND inputdata.REQUEST_TIMESTAMP >= '2019-01-01 00:00:00'
      AND inputdata.SEQUENCEID IS NOT NULL
      AND inputdata.SEQUENCEID NOT IN ('N/A')
      ORDER BY inputdata.SEQUENCEID DESC "
    # jdbc_paging_enabled => "true"
    # jdbc_page_size => "10000"
  }
}

filter {
  mutate {
    convert => { "A_LONGITUDE" => "float" }
    convert => { "A_LATITUDE" => "float" }
    convert => { "B_LONGITUDE" => "float" }
    convert => { "B_LATITUDE" => "float" }
  }
  mutate {
    rename => {
      "A_LONGITUDE" => "[geo_point1][lon]"
      "A_LATITUDE" => "[geo_point1][lat]"
    }
  }
  mutate {
    rename => {
      "B_LONGITUDE" => "[geo_point2][lon]"
      "B_LATITUDE" => "[geo_point2][lat]"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://some server"]
    index => "cc-test"
    #document_type => "system_logs"
    user => "Username"
    password => "Password"
  }
  stdout { codec => rubydebug }
}
I don't understand what is wrong with the filter part and why data is not getting into the geo_point1 and geo_point2 fields. Can somebody please help?
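One thing worth checking (an assumption, since the actual events are not shown): the jdbc input lowercases column names by default (lowercase_column_names defaults to true), so the fields may arrive as a_latitude, a_longitude, and so on, in which case the mutate blocks above silently match nothing. Either set lowercase_column_names => false in the jdbc input, or work with the lowercase names, roughly like:

filter {
  # assumes the jdbc input delivered lowercase field names
  mutate {
    convert => {
      "a_longitude" => "float"
      "a_latitude" => "float"
      "b_longitude" => "float"
      "b_latitude" => "float"
    }
  }
  mutate {
    rename => {
      "a_longitude" => "[geo_point1][lon]"
      "a_latitude" => "[geo_point1][lat]"
      "b_longitude" => "[geo_point2][lon]"
      "b_latitude" => "[geo_point2][lat]"
    }
  }
}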

Loop over multiple variables in Puppet

In Puppet, I made my own module that adds administrator accounts to our management servers.
class admins::add_admin($username, $userkey) {
  $username.each |String $username| {
    file { "/home/${username}":
      ensure => directory,
      mode   => '0750',
      owner  => $username,
    }
    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }
    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}
$username is an array of the desired usernames and $userkey is an array of the SSH keys.
When the each loop is run, the users are created accordingly; however, the key is the same for every user (which is logical, because I don't yet iterate over the user keys).
What I want is for this Puppet module to iterate over two arrays, but I don't know how to do that.
You could do it this way:
class admins::add_admin (
  Array[Hash[String, String]] $users_data = [], # declare data type and defaults
) {
  $users_data.each |Hash $user| {
    $username = $user['username']
    $userkey  = $user['userkey']

    file { "/home/${username}":
      ensure => directory,
      mode   => '0750',
      owner  => $username,
    }
    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }
    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}
And then you'd pass data in that looks like this:
class { 'admins::add_admin':
  users_data => [
    {
      'username' => 'bill',
      'userkey'  => 'keydata1',
    },
    {
      'username' => 'ted',
      'userkey'  => 'keydata2',
    },
  ],
}
I think it is much better here to restructure your input data than to try to deal with two parallel arrays.
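That said, if you really do want to keep two arrays, the index form of each is one way to do it. A rough sketch (untested; the parameters are renamed to $usernames/$userkeys here, and it assumes both arrays are the same length and in matching order):

class admins::add_admin (
  Array[String] $usernames = [],
  Array[String] $userkeys  = [],
) {
  # each over an array with two block parameters yields |index, value|
  $usernames.each |Integer $i, String $username| {
    $userkey = $userkeys[$i]

    file { "/home/${username}":
      ensure => directory,
      mode   => '0750',
      owner  => $username,
    }
    user { $username:
      ensure => present,
      shell  => '/bin/bash',
    }
    ssh_authorized_key { $username:
      ensure => present,
      user   => $username,
      type   => 'ssh-rsa',
      key    => $userkey,
    }
  }
}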

Skip Puppet resources when there is no change in file content

I want to skip certain exec and file resources when there is no change in file content. It's working for the file and service combination.
For example:
file { 'configfile.cfg':
  ensure  => file,
  path    => '/etc/configfile.cfg',
  mode    => '0644',
  owner   => 'root',
  group   => 'root',
  content => template($template_file),
  require => Package[$main_package],
  notify  => Service[$service],
}

service { $service:
  ensure     => $ensure,
  enable     => $enable,
  hasrestart => true,
  hasstatus  => true,
  require    => [ Package[$main_package], File['configfile.cfg'] ],
}
The above code is working as expected: the service restarts only if there is a change in /etc/configfile.cfg.
I am following the same approach for the file and exec combination, but it's not working. Please see the code below:
exec { 'purge-config-files':
  before  => [File["${config_file_service}"], File["${config_file_host}"]],
  command => "/bin/rm -f ${baseconfigdir}/*",
  notify  => Domain_ip_map[$domain_ip_map_titles],
}

file { 'deployconfig.cfg':
  ensure  => file,
  path    => '/home/path/deployconfig.cfg',
  mode    => '0644',
  owner   => 'root',
  group   => 'root',
  content => "test",
  notify  => Exec['purge-config-files'],
}
This code is not working as expected. Even if there is no change in /home/path/deployconfig.cfg, Exec['purge-config-files'] always runs. What could be the reason for this?
I found the answer:
exec { 'purge-config-files':
  before      => [File["${config_file_service}"], File["${config_file_host}"]],
  command     => "/bin/rm -f ${baseconfigdir}/*",
  notify      => Domain_ip_map[$domain_ip_map_titles],
  subscribe   => File['deployconfig.cfg'],
  refreshonly => true,
}
I forgot to add subscribe and refreshonly. An exec runs on every Puppet run unless it is constrained; with refreshonly => true it only runs when it receives a refresh event, and subscribe (like the file's notify) is what sends that event when deployconfig.cfg changes.
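For what it's worth, subscribe here and the file's existing notify => Exec['purge-config-files'] describe the same refresh relationship, so one of them is redundant. A minimal variant (assuming nothing else needs to trigger the exec) keeps only the notify on the file resource and adds refreshonly:

exec { 'purge-config-files':
  before      => [File["${config_file_service}"], File["${config_file_host}"]],
  command     => "/bin/rm -f ${baseconfigdir}/*",
  notify      => Domain_ip_map[$domain_ip_map_titles],
  # only run when refreshed, i.e. when deployconfig.cfg actually changes
  refreshonly => true,
}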
