Print messages conditionally on resource synchronization - Puppet

Is there a way to print out a message based on resource synchronization? Something like: the required content of the file is as follows, and if it is updated (synchronized), print a message, e.g. "Please restart the system."
I tried the following:
file { 'disableselinux':
  ensure => present,
  path   => '/etc/selinux/config',
  mode   => '0644',
  source => "puppet:///modules/base/selinux",
}

notify { 'SElinuxChange':
  loglevel  => warning,
  message   => 'System needs restart',
  subscribe => File['disableselinux'],
}
But that message will be printed on every run, I guess. Is there an elegant way of doing this that avoids if-then-else flags, etc.?
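One common pattern (a sketch, not an answer taken from this thread) is to replace the notify with an exec that is refreshonly and subscribed to the file, so it only runs, and therefore only logs, when the file resource actually changes and sends a refresh event. The resource title and echo command below are illustrative:

exec { 'selinux-change-warning':
  command     => '/bin/echo "System needs restart"',
  refreshonly => true,
  logoutput   => true,
  subscribe   => File['disableselinux'],
}

On runs where /etc/selinux/config is already in sync, no refresh event is sent, so nothing is executed and nothing is logged.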

Related

How to let a mutex timeout?

I want to let my modem-AT-command-writing-thread only write to the modem's /dev/ttyUSB3 when the modem-AT-command-reading-thread has seen an "OK" or an "ERROR".
This initially sounds like a job for a Mutex<()>, but I have an additional requirement: If the modem-AT-command-reading-thread does not see an "OK" or "ERROR" within three seconds, the writing thread should just get on with sending the next AT command. i.e. If the reading thread gets nothing, the writing thread should still send one of its AT commands every three seconds. (Modems' AT command interfaces are often not nicely behaved.)
At the moment, I have a workaround using mpsc::channel:
Set-up:
let (sender, receiver) = channel::<()>();
modem-AT-command-reading-thread:
if line.starts_with("OK") || line.contains("ERROR") {
    debug!("Sending go-ahead to writing_thread.");
    sender.send(()).unwrap();
}
modem-AT-command-writing-thread:
/* This receive is just a way of blocking until the modem is ready. */
match receiver.recv_timeout(Duration::from_secs(3)) {
    Ok(_) => {
        debug!("Received go-ahead from reading thread.");
        /*
         * Empty the channel, in case the modem was too effusive. We don't want
         * to "bank" earlier OK/ERRORs to allow multiple AT commands to be sent in
         * quick succession.
         */
        while let Ok(_) = receiver.try_recv() {}
    }
    Err(err) => match err {
        RecvTimeoutError::Timeout => {
            debug!("Timed-out waiting for go-ahead from reading thread.");
        }
        RecvTimeoutError::Disconnected => break 'outer
    },
}
I cannot find a Mutex::lock_with_timeout().
How can I implement this properly, using a Mutex<()> or similar?
You can use parking_lot's Mutex; it has try_lock_for().
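A minimal sketch of that, assuming the parking_lot crate is added as a dependency (the shared () token and the thread layout are illustrative, not the question's real code):

use std::sync::Arc;
use std::thread;
use std::time::Duration;

use parking_lot::Mutex;

fn main() {
    // Shared "go-ahead" token guarded by a parking_lot Mutex.
    let gate = Arc::new(Mutex::new(()));

    // Stand-in for the reading thread: hold the lock until the modem answers
    // (simulated here by a 5-second sleep, i.e. no answer arrives in time).
    let reader_gate = Arc::clone(&gate);
    thread::spawn(move || {
        let _held = reader_gate.lock();
        thread::sleep(Duration::from_secs(5));
        // Dropping `_held` here is what releases the writing thread.
    });

    // Give the reading thread a moment to actually take the lock first.
    thread::sleep(Duration::from_millis(50));

    // Writing-thread side: wait at most three seconds, then send anyway.
    match gate.try_lock_for(Duration::from_secs(3)) {
        Some(_guard) => println!("modem acknowledged, send next AT command"),
        None => println!("timed out, send the next AT command anyway"),
    }
}

Unlike the standard library Mutex, parking_lot's lock() does not return a Result, and try_lock_for() returns Some(guard) on success or None when the timeout elapses, which maps directly onto the "acknowledged" and "send anyway" branches above.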

Logstash file input in read mode for gzip files is consuming very high memory

Currently I am processing gzip files in Logstash using the file input plugin. It consumes very high memory and keeps restarting even after being given a large heap size. Right now we are processing about 50 files per minute on average, and we plan to process thousands of files per minute. With 100 files the RAM requirement reaches 10 GB. What is the best way to tune this config, or is there a better way to process such a large volume of data in Logstash?
Is it advisable to write a processing engine in Node.js or another language?
Below is the Logstash conf.
input {
  file {
    id => "client-files"
    mode => "read"
    path => [ "/usr/share/logstash/plugins/message/*.gz" ]
    codec => "json"
    file_completed_action => log_and_delete
    file_completed_log_path => "/usr/share/logstash/logs/processed.log"
  }
}
filter {
  ruby {
    code => '
      monitor_name = event.get("path").split("/").last.split("_").first
      event.set("monitorName", monitor_name)
      split_field = []
      event.get(monitor_name).each do |x|
        split_field << Hash[event.get("Datapoints").zip(x)]
      end
      event.set("split_field", split_field)
    '
  }
  split {
    field => "split_field"
  }
  ruby {
    code => "event.get('split_field').each { |k, v| event.set(k, v) }"
    remove_field => ["split_field", "Datapoints", "%{monitorName}"]
  }
}
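One starting point worth trying (an assumption on my part, not a verified fix for this workload) is to bound how much data is in flight at once via the pipeline settings in logstash.yml, rather than only growing the heap; the values below are illustrative:

# logstash.yml -- illustrative values, tune for your hardware
pipeline.workers: 4        # fewer workers means fewer batches held in memory at once
pipeline.batch.size: 125   # smaller batches keep per-worker buffers small
pipeline.batch.delay: 50

Total in-flight events are roughly workers times batch size, so lowering either bounds how much decompressed gzip content sits in memory at any moment.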

How to list applications and get the currently selected one within Node.js

As the title suggests, I need to find a way to get the list of running applications (atom, chrome, etc.). I am currently using:
var exec = require('child_process').exec
exec('tasklist', (error, stdout, stderr) => {
  // stdout contains a list of running processes.
})
However, this also includes services and hidden applications (redis-server, etc.), and it doesn't indicate whether a window is currently active. Is there a way to do this? For reference, this is for a Windows system, but a cross-operating-system solution would be preferable.
I found that the wonderful winctl library allowed me to do what I needed. I used the following code:
const winctl = require('winctl')

// Iterate over all windows with a custom filter
winctl.FindWindows(win => win.isVisible() && win.getTitle()).then(windows => {
  console.log("Visible windows:");
  windows
    .sort((a, b) => a.getTitle().localeCompare(b.getTitle()))
    .forEach(window => console.log(" - %s [pid=%d, hwnd=%d, parent=%d]",
      window.getTitle(), window.getPid(), window.getHwnd(), window.getParent()));
});
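For the cross-platform part of the question, a minimal sketch (my own assumption, not part of the winctl answer) is to switch the process-listing command on process.platform; filtering out services and finding the active window still needs a platform-specific library such as winctl on Windows:

const { exec } = require('child_process')

// 'tasklist' on Windows, 'ps' elsewhere; both simply enumerate processes.
const cmd = process.platform === 'win32' ? 'tasklist' : 'ps -A -o comm='

exec(cmd, (error, stdout, stderr) => {
  if (error) {
    console.error('Failed to list processes:', error)
    return
  }
  // One entry per line; no distinction between apps and services here.
  stdout.split('\n').filter(Boolean).forEach(line => console.log(line.trim()))
})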

Logstash worker dies for no apparent reason

Using Logstash 2.3.4-1 on CentOS 7 with the kafka input plugin, I sometimes get:
{:timestamp=>"2016-09-07T13:41:46.437000+0000", :message=>#0, :events_consumed=>822, :worker_count=>1, :inflight_count=>0, :worker_states=>[{:status=>"dead", :alive=>false, :index=>0, :inflight_count=>0}], :output_info=>[{:type=>"http", :config=>{"http_method"=>"post", "url"=>"${APP_URL}/", "headers"=>["AUTHORIZATION", "Basic ${CREDS}"], "ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"", headers=>{..}, codec=>"UTF-8">, workers=>1, request_timeout=>60, socket_timeout=>10, connect_timeout=>10, follow_redirects=>true, pool_max=>50, pool_max_per_route=>25, keepalive=>true, automatic_retries=>1, retry_non_idempotent=>false, validate_after_inactivity=>200, ssl_certificate_validation=>true, keystore_type=>"JKS", truststore_type=>"JKS", cookies=>true, verify_ssl=>true, format=>"json">]>, :busy_workers=>1}, {:type=>"stdout", :config=>{"ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"\n">, workers=>1>]>, :busy_workers=>0}], :thread_info=>[], :stalling_threads_info=>[]}>, :level=>:warn}
This is the config:
input {
  kafka {
    bootstrap_servers => "${KAFKA_ADDRESS}"
    topics => ["${LOGSTASH_KAFKA_TOPIC}"]
  }
}
filter {
  ruby {
    code =>
      "require 'json'
       require 'base64'

       def good_event?(event_metadata)
         event_metadata['key1']['key2'].start_with?('good')
       rescue
         true
       end

       def has_url?(event_data)
         event_data['line'] && event_data['line'].any? { |i| i['url'] && !i['url'].blank? }
       rescue
         false
       end

       event_payload = JSON.parse(event.to_hash['message'])['payload']
       event.cancel unless good_event?(event_payload['event_metadata'])
       event.cancel unless has_url?(event_payload['event_data'])
      "
  }
}
output {
  http {
    http_method => 'post'
    url => '${APP_URL}/'
    headers => ["AUTHORIZATION", "Basic ${CREDS}"]
  }
  stdout { }
}
This is odd, since it is written to logstash.log and not logstash.err.
What does this error mean, and how can I avoid it? (Only restarting Logstash resolves it, until the next time it happens.)
According to this GitHub issue, your Ruby code could be causing the problem. Basically, any Ruby exception will cause the filter worker to die. Without seeing your Ruby code it's impossible to debug further, but you could try wrapping your Ruby code in an exception handler and logging the exception somewhere (at least until Logstash is updated to log it).
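A minimal sketch of that suggestion, keeping the method definitions from the config above and using the Logstash 2.x hash-style event API (the ruby_filter_error field name is my own illustrative choice, not from the thread):

filter {
  ruby {
    code =>
      "begin
         event_payload = JSON.parse(event.to_hash['message'])['payload']
         event.cancel unless good_event?(event_payload['event_metadata'])
         event.cancel unless has_url?(event_payload['event_data'])
       rescue => e
         # Record the failure on the event instead of letting the exception
         # propagate and kill the filter worker.
         event['ruby_filter_error'] = e.message
       end
      "
  }
}

Events that hit the rescue branch then carry the error message downstream, where you can route or inspect them instead of losing the worker.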

Logstash does not close file descriptors

I am using Logstash 2.2.4.
My current configuration:
input {
  file {
    path => "/data/events/*/*.txt"
    start_position => "beginning"
    codec => "json"
    ignore_older => 0
    max_open_files => 46000
  }
}
filter {
  if [type] not in ["A", "B", "C"] {
    drop {}
  }
}
output {
  http {
    http_method => "post"
    workers => 3
    url => "http://xxx.amazonaws.com/event"
  }
}
In the input folder, I have about 25,000 static (never updated) txt files.
I configured --pipeline-workers to 16. In the described configuration, the LS process runs 1,255 threads and opens about 2,560,685 file descriptors.
After some investigation, I found that LS keeps file descriptors open for all the files in the input folder, and the HTTP output traffic became very slow.
My question is: why does LS not close the file descriptors of already processed (transferred) files, or implement some kind of input file pagination?
Has anyone else run into the same problem? If so, please share your solution.
Thanks.
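One setting worth checking, assuming the file input plugin bundled with Logstash 2.2.4 supports it (an assumption, not a confirmed fix), is close_older, which closes the descriptor of any file that has not been read from for the given number of seconds:

input {
  file {
    path => "/data/events/*/*.txt"
    start_position => "beginning"
    codec => "json"
    max_open_files => 46000
    # Close handles of files that have had no reads for 60 seconds,
    # so descriptors for already-processed static files are released.
    close_older => 60
  }
}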
