Perl regex in Nagios check - linux

I am unfamiliar with perl, and I have a need to modify a Nagios check. I'd appreciate any advice on how to proceed. The check I'm using is check_smart, found here:
https://www.claudiokuenzler.com/nagios-plugins/check_smart.php
This script lets you check SMART values from hard drives and present the results in a simple form for monitoring. As it stands, the script can take a regex in the form /dev/sd[a-c] for one of the options; I believe that this is the section which allows this:
# list of devices for a loop
my(@dev);
if ( $opt_d ){
    # normal mode - push opt_d on the list of devices
    push(@dev, $opt_d);
} else {
    # glob all devices - try '?' first
    @dev = glob($opt_g);
}
foreach my $opt_dl (@dev){
    warn "Found $opt_dl\n" if $opt_debug;
    if (-b $opt_dl || -c $opt_dl){
        $device .= $opt_dl.":";
    } else {
        warn "$opt_dl is not a valid block/character special device!\n\n" if $opt_debug;
    }
}
I don't quite understand why the variable is $opt_dl when earlier it seems to be $opt_d. The result, however, is that the script returns something like:
OK: [/dev/sda] - Device is clean --- [/dev/sdb] - Device is clean --- [/dev/sdc] - Device is clean
EDIT: Here's the code where $opt_d is set; on further thought it seems like $opt_dl is just $opt_d while it's in a loop or something?
use vars qw($opt_b $opt_d $opt_g $opt_debug $opt_h $opt_i $opt_v);
Getopt::Long::Configure('bundling');
GetOptions(
    "debug" => \$opt_debug,
    "b=i"   => \$opt_b, "bad=i"       => \$opt_b,
    "d=s"   => \$opt_d, "device=s"    => \$opt_d,
    "g=s"   => \$opt_g, "global=s"    => \$opt_g,
    "h"     => \$opt_h, "help"        => \$opt_h,
    "i=s"   => \$opt_i, "interface=s" => \$opt_i,
    "v"     => \$opt_v, "version"     => \$opt_v,
);
The part of the code I'd like to change in a similar fashion is:
# Allow all device types currently supported by smartctl
# See http://www.smartmontools.org/wiki/Supported_RAID-Controllers
if ($opt_i =~ m/(ata|scsi|3ware|areca|hpt|cciss|megaraid|sat)/) {
    $interface = $opt_i;
} else {
    print "invalid interface $opt_i for $opt_d!\n\n";
    print_help();
    exit $ERRORS{'UNKNOWN'};
}
Specifically, I'd like to be able to pass the script something like "megaraid,[5-8]" and let it run for each. In this case, I would not be passing the regex for the device, it would just be /dev/sda.
If anyone could give me advice on this I'd appreciate it!

$opt_dl is probably poorly named and has nothing to do with your $opt_d; they are two separate variables.
From the if statement: if $opt_d is not set (that is, the script was not given any device name to act upon), then glob is called with the value of $opt_g, and it is glob that expands the pattern inside $opt_g (a shell-style glob pattern, not a regex) into the matching filenames.
After this if statement, the @dev array is filled with the names of the devices to handle.
Then you have a foreach statement, which loops over each item inside the @dev array. During the loop, the current item is available in the $opt_dl variable, because that is the name given in the foreach statement.
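To make the mechanics concrete, here is a minimal standalone sketch of the same glob-plus-foreach pattern (the pattern and device names are illustrative):
#!/usr/bin/perl
use strict;
use warnings;

# glob() expands a shell-style wildcard pattern (not a regex) into paths.
my @dev = glob('/dev/sd[a-c]');

# foreach aliases each element of @dev to $d in turn, exactly as the
# plugin's loop aliases each device to $opt_dl.
foreach my $d (@dev) {
    print "Found $d\n";
}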
However I was not able to understand what you wanted to do in your last paragraph.

I'm the maintainer of check_smart and it's funny I accidentally stumbled on that question now.
I don't quite understand why the variable is $opt_dl when earlier it seems to be $opt_d. The result, however, is that the script returns something like: OK: [/dev/sda] - Device is clean --- [/dev/sdb] - Device is clean --- [/dev/sdc] - Device is clean
So basically when you use the -g parameter, you tell the check_smart plugin to use glob (https://perldoc.perl.org/functions/glob.html) - this is not the same as a regular expression. The drives matching the glob expression (e.g. -g '/dev/sd[a-z]') will be collected into a list and the plugin will run through each drive ($opt_dl) in a for loop.
Specifically, I'd like to be able to pass the script something like "megaraid,[5-8]" and let it run for each. In this case, I would not be passing the regex for the device, it would just be /dev/sda.
This has been possible since release 5.0 (which was released in April 2014, way before your question ;-) ). You just need to change the syntax. Instead of using the glob expression on -d, you use it on the interface parameter (-i). Practical example: -i 'megaraid,[5-8]'.
Since the newest release (6.6, released a couple of days ago), the output for multiple drive checks (using -g) and hardware storage/raid controllers has slightly changed and now indicates the interface's device id rather than the logical drive path:
# ./check_smart.pl -g /dev/sda -i 'megaraid,[1-3]'
OK: [megaraid,1] - Device is clean --- [megaraid,2] - Device is clean --- [megaraid,3] - Device is clean|
This is all described in the official documentation, too.
More info:
https://www.claudiokuenzler.com/monitoring-plugins/check_smart.php
https://www.claudiokuenzler.com/blog/914/check_smart-6.6-multiple-drives-check-megaraid-3ware-cciss-controllers
I hope this answers your question, although I am probably 2 years late.


Terraform variable pointing to current file name

Is there some special variable available in Terraform configuration files which would point to the current file name?
I'd like to use it for description fields in various resources, so that someone seeing these resources in the systems would know where the master definition for them lives.
e.g.
in myinfra.tf
resource "aws_iam_policy" "my_policy" {
name = "something-important"
description = "Managed by Terraform at ${HERE_I_WOULD_LIKE_TO_USE_THE_VARIABLE}"
policy = <<EOF
[...]
EOF
}
And I would hope the description becomes:
description = "Managed by Terraform at myinfra.tf"
I tried ${path.module} but that only gives the "filesystem path of the module where the expression is placed", so, pragmatically speaking, everything but the file name I want.
Here's what I can share. Use the external data source to call an external script that gets the directory/file name and then returns it as a string (or any other type your resources require). Obviously it's not exactly what you wanted, as you get the dir/file name indirectly, but hopefully it helps others or even yourself for similar use-cases.
We use this only for azurerm and for very complex integrations that are not yet supported with the current provider versions. I have not tested it specifically for AWS, but since external is a core Terraform data source, I'm guessing it should work across the board.
data "external" "cwd" {
program = ["./script.sh"]
query = {
cwd = "${path.cwd}"
}
}
resource "aws_iam_policy" "my_policy" {
name = "something-important"
description = "Managed by Terraform at ${data.external.dir_script.result.filename}"
policy = <<EOF
[...]
EOF
This is what my script looks like:
#!/bin/bash
# test with: echo '{"cwd":"for_testing"}' | ./script.sh
PIPED=`cat`
echo "INFO: Got PIPED data: $PIPED" >&2
DIR=`jq -r .cwd <<< "$PIPED"`
cd "$DIR"
filename=`ls | grep '\.tf$' | xargs`
echo "INFO: Returning this as STDOUT: $filename" >&2
echo "{\"name\":\"$filename\"}"
You need to be aware that the data returned by the script needs to be a valid JSON object on stdout, which is why the INFO messages above go to stderr.
The program must then produce a valid JSON object on stdout, which will be used to populate the result attribute exported to the rest of the Terraform configuration. This JSON object must again have all of its values as strings. On successful completion it must exit with status zero.
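For example, the program can be exercised by hand the same way Terraform invokes it (the path and output shown are illustrative):
$ echo '{"cwd":"/path/to/config"}' | ./script.sh
{"name":"myinfra.tf"}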
Unfortunately, like the others mentioned, there's no other way to get the current file name being 'applied'.
I think you might benefit from using something like yor from Bridge Crew.
From the project's README:
Yor is an open-source tool that helps add informative and consistent tags across infrastructure-as-code frameworks such as Terraform, CloudFormation, and Serverless.
Yor is built to run as a GitHub Action automatically adding consistent tagging logics to your IaC. Yor can also run as a pre-commit hook and a standalone CLI.
So basically, it updates your resources' tags with things like:
tags = {
  env                  = var.env
  yor_trace            = "912066a1-31a3-4a08-911b-0b06d9eac64e"
  git_repo             = "example"
  git_org              = "bridgecrewio"
  git_file             = "applyTag.md"
  git_commit           = "COMMITHASH"
  git_modifiers        = "bana/gandalf"
  git_last_modified_at = "2021-01-08 00:00:00"
  git_last_modified_by = "bana@bridgecrew.io"
}
Maybe that would be good enough to provide what you're trying to do?
As for my own experience, I have not used yor, since my tagging uses a different approach. Instead of having "raw" tags, we use a label module that builds the tags for us and then merges in local tags, as sketched below.
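A minimal sketch of that label-module pattern, assuming a hypothetical local module that exposes a tags output:
module "label" {
  # hypothetical module that builds a standard tag map
  source = "./modules/label"
  env    = var.env
  name   = "something-important"
}

resource "aws_iam_policy" "my_policy" {
  name   = "something-important"
  policy = file("${path.module}/policy.json")
  # merge the module's standard tags with resource-local ones
  tags   = merge(module.label.tags, { team = "platform" })
}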
Just sharing this info FYI in case it helps.

Avoid triggering an update if Resource.Schema attribute changes in Terraform

I'm trying to figure out if it is possible to prevent resource updates when one of the Resource.Schema attributes changes.
Essentially I'm building a provider that manages infrastructure. I've got a resource that updates firmware. Something like:
resource "redfish_simple_update" "update" {
transfer_protocol = "HTTP"
target_firmware_image = "/home/mikeletux/BIOS_FXC54_WN64_1.15.0.EXE"
}
As you can see, target_firmware_image refers to the full path of my firmware package. I want to be able to change directories without triggering an update, i.e. changing the target_firmware_image above to /home/mikeletux/Downloads/BIOS_FXC54_WN64_1.15.0.EXE for instance.
I don't know if this is possible. I've done my own research and found the CustomDiff functions that can be added to the schema, but I don't think that matches my scenario.
Can you think of anything else I could do?
Thanks!
Just posting here how I finally did it.
To avoid triggering an update when the path changes but not the filename, I've found that the DiffSuppressFunc function comes in very handy here:
"target_firmware_image": {
Type: schema.TypeString,
Required: true,
Description: "Target firmware image used for firmware update on the redfish instance. " +
"Make sure you place your firmware packages in the same folder as the module and set it as follows: \"${path.module}/BIOS_FXC54_WN64_1.15.0.EXE\"",
// DiffSuppressFunc will allow moving fw packages through the filesystem without triggering an update if so.
// At the moment it uses filename to see if they're the same. We need to strengthen that by somehow using hashing
DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
if filepath.Base(old) == filepath.Base(new) {
return true
}
return false
},
}
By checking the old and new values with filepath.Base(), I can tell whether the filename is the same, no matter which path the file is placed in.
I'd like to improve that behavior in the future by implementing file hashing, so even the filename wouldn't matter, but that's something I'll leave for a new version; a sketch of the idea follows.
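A minimal sketch of that hashing idea, assuming both the old and new values are readable local paths on the machine running the plan (the fileSHA256 helper is hypothetical, not part of the provider):
import (
    "crypto/sha256"
    "encoding/hex"
    "io"
    "os"
)

// fileSHA256 returns the hex SHA-256 of a file's contents, or "" if the
// file cannot be read.
func fileSHA256(path string) string {
    f, err := os.Open(path)
    if err != nil {
        return ""
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        return ""
    }
    return hex.EncodeToString(h.Sum(nil))
}

// Suppress the diff when both paths point at byte-identical files.
DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
    oldSum, newSum := fileSHA256(old), fileSHA256(new)
    return oldSum != "" && oldSum == newSum
},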
Thanks!

LUA (ESP8266) How to call/enter a module command from a string

Please forgive me, I don't even know if what I am asking is the correct terminology.
So...here goes.
I built a custom firmware of NodeMCU Dali (from shackspace Stuttgart). This includes a dali lighting control "flavour", as they refer to it. I had to modify it to work with the most recent Lua version. Anyway, that works and the MODULE is built into the firmware.
From the Lua command line / interpreter (ESPlorer interface), I can call the module and it all works fine.
To use the module you enter:
dali.arc(address_Mode,0,parameter)
or
dali.send(Address_Mode,Command,Address,parameter)
Address_Mode can be: dali.SLAVE, dali.GROUP
Command can be dali.UP_200MS, dali.IMMEDIATE_OFF, dali.GO_TO_SCENE ... about 50 commands.
An Example command to send the light level 128 to all drivers would be as follows:
dali.arc(dali.BROADCAST,0,128) -- direct arc mode ( all lights,*ignored*,50% dimmed)
I want to use MQTT to control this thing.
I could use MQTT topics:
dali_topic/arc_broadcast -- for dali.arc(dali.BROADCAST,var1,var2)
dali_topic/group -- for dali.arc(dali.GROUP,var1,var2)
dali_topic/slave -- for dali.arc(dali.SLAVE,var1,var2)
and my payload string would only have to carry two variables, comma-separated, e.g. 0,128.
This I can all do day long but now I want to make it "better"...
I want to be able to rather send the message "dali.BROADCAST,0,128", which the code should then sort into a table with elements:
table[1] = dali.BROADCAST
table[2] = 0
table[3] = 128
and call dali.arc(table[1],table[2],table[3])
The table creation works, but I cannot get dali.BROADCAST passed to the module/function call. First off because it is a string, and second because it cannot be converted to a number or whatever substitute is required.
If this can be done then the Command field could also be sent with the MQTT payload rather than needing 50 MQTT topics.
I suppose I could also just try a lot of if statements or search a lookup table, but perhaps there is a simple way to just insert the command field into the function/module call?
Any assistance greatly appreciated
Edit: here is some Lua output:
table = {"dali.BROADCAST", 0, 128}
dali.arc(table[1], table[2], table[3])
result:
Lua error: stdin:1: bad argument #1 to 'arc' (number expected, got boolean)
since if you print("dali.BROADCAST") you get nil
However
table[4] = dali.BROADCAST
dali.arc(table[4],table[2],table[3])
result works fine.
print( type(dali.BROADCAST ))
gives number
so how to pass my mqtt string dali.BROADCAST which is received as "dali.BROADCAST" and convert it to just dali.BROADCAST?
Note I am not sending the quotes; the message is, however, sent by MQTT as a CSV string.
From the firmware source for the dali module, in the module folder: dali.c
LROT_BEGIN(dali, NULL, 0)
LROT_FUNCENTRY( setup, dali_setup )
LROT_FUNCENTRY( arc, dali_arc )
LROT_FUNCENTRY( send, dali_send )
LROT_NUMENTRY( BROADCAST, BROADCAST )
LROT_NUMENTRY( SLAVE, SLAVE )
LROT_NUMENTRY( GROUP, GROUP )
LROT_NUMENTRY( IMMEDIATE_OFF, DALI_IMMEDIATE_OFF)
LROT_NUMENTRY( GO_TO_SCENE, DALI_GO_TO_SCENE)
The shackspace github link is the correct one; it is simply based on LUA1.45 or something low like that. I only had to modify dali.c in Modules to work with the latest Lua.
The relevant dali files in the firmware are located in the following folders:
app/modules
app/include
app/dali
EDIT: Thinking about it, you probably always end up indexing dali, in which case you can do so directly by just structuring your table like this:
table[1] = "arc"
table[2] = "BROADCAST"
table[3] = 0
table[4] = 128
This way you can get to dali.BROADCAST by doing dali[table[2]] and to dali.arc by doing dali[table[1]].
HINT: You should probably still keep a whitelist of what is allowed where because someone could send any string and your program shouldn't just blindly index the dali table with that and return it.
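A minimal sketch of that whitelisted dispatch, assuming the MQTT payload arrives as a CSV string like "arc,BROADCAST,0,128" (the dispatch helper is illustrative):
-- Whitelists: only these names may come in from MQTT.
local functions = { arc = dali.arc, send = dali.send }
local constants = { BROADCAST = dali.BROADCAST, SLAVE = dali.SLAVE, GROUP = dali.GROUP }

-- Split a CSV payload and call the whitelisted function.
local function dispatch(payload)
  local fields = {}
  for f in payload:gmatch("[^,]+") do
    fields[#fields + 1] = f
  end
  local fn, mode = functions[fields[1]], constants[fields[2]]
  if fn and mode then
    fn(mode, tonumber(fields[3]), tonumber(fields[4]))
  end
end

dispatch("arc,BROADCAST,0,128")  -- same as dali.arc(dali.BROADCAST, 0, 128)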
You probably want something like this
Here's the relevant code:
function deepindex(tab, path)
  if type(path) ~= "string" then
    return nil, "path is not a string"
  end
  local index, rest = path:match("^%.?([%a%d]+)(.*)")
  if not index then
    index, rest = path:match("^%[(%d+)%](.*)")
    index = tonumber(index)
  end
  if index then
    if #rest > 0 then
      if tab[index] then
        return deepindex(tab[index], rest)
      else
        return nil, "full path not present in table"
      end
    else
      return tab[index]
    end
  else
    return nil, "malformed index-path string"
  end
end
Homework: this function also works with [] indexing for numbers, which you don't need. It should be easy to simplify the function to only do string indexing with the dot syntax.
You would use that on the global environment to index it with a single string:
deepindex(_G, "dali.BROADCAST")
-- Which is the same as
_G.dali.BROADCAST
-- And, unless dali is a local, also
dali.BROADCAST
Keep in mind though, that this lets you remote-index _G with anything, which is a huge security nightmare. Better do this:
local whitelist = {}
whitelist.dali = dali
deepindex(whitelist, "dali.BROADCAST") -- this works
deepindex(whitelist, "some.evil.submodule") -- This does nothing
Was looking for a Wiki entry with more details, but I found none. If you happen to know where documentation is that specifies more about the available Lua commands, please include a link in your question.
It appears that there may be other functions to approach the same outcome, but I'm not certain how they are worded in your particular build.
https://github.com/shackspace/nodemcu-firmware-dali/blob/master/app/dali/dali_encode.c
What's returned when you print( type( dali.BROADCAST )) ?
I was guessing it might be raw C userdata, the specific case-switch for that arc command; however, I just found a similar Lua project that lists it as hexadecimal 255:
https://github.com/a-lurker/Vera-Plugin-DALI-Planet/blob/master/Luup_device/L_DaliPlanet1.lua
Yea, it's likely just sending hexadecimal numbers.
https://en.wikipedia.org/wiki/Digital_Addressable_Lighting_Interface
try sending dali.arc( 0xFF, 0x00, 0x80 )
or dali.arc( 0xFE, 0x80 )
They make it sound like '1111 1110' ( 0xFE ) is directly followed by the brightness value, so that second command might light.
I'm not sure why it doesn't appear to be sending the correct codes when you place them in a table. What you've written appears to be correct, but it's likely a one-way broadcast, so you don't receive back any error messages...
If you can't get the arc command to work with tables, possibly you'll have better luck with the dali.send() command. Might just be a flaw in that app. If you can't get it resolved, submit a bug report to their GitHub page.

Using Logstash Aggregate Filter plugin to process data which may or may not be sequenced

Hello all!
I am trying to use the Aggregate filter plugin of Logstash v7.7 to correlate and combine data from two different CSV file inputs which represent API data calls. The idea is to produce a record showing a combined picture. As you can expect the data may or may not arrive in the right sequence.
Here is as an example:
/data/incoming/source_1/*.csv
StartTime, AckTime, Operation, RefData1, RefData2, OpSpecificData1
231313232,44343545,Register,ref-data-1a,ref-data-2a,op-specific-data-1
979898999,75758383,Register,ref-data-1b,ref-data-2b,op-specific-data-2
354656466,98554321,Cancel,ref-data-1c,ref-data-2c,op-specific-data-2
/data/incoming/source_2/*.csv
FinishTime,Operation,RefData1, RefData2, FinishSpecificData
67657657575,Cancel,ref-data-1c,ref-data-2c,FinishSpecific-Data-1
68445590877,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
55443444313,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
I have a single pipeline that is receiving both these CSVs and I am able to process and write them as individual records to a single index. However, the idea is to combine records from the two sources into one record, each representing a superset of Operation-related information.
Unfortunately, despite several attempts I have been unable to figure out how to achieve this via Aggregate filter plugin. My primary question is whether this is a suitable use of the specific plugin? And if so, any suggestions would be welcome!
At the moment, I have this:
input {
  file {
    path => ['/data/incoming/source_1/*.csv']
    tags => ["source1"]
  }
  file {
    path => ['/data/incoming/source_2/*.csv']
    tags => ["source2"]
  }
}
filter {
  # use the tags to do some source 1 and 2 related massaging, calculations, etc
  aggregate {
    task_id => "%{Operation}_%{RefData1}_%{RefData2}"
    code => "
      map['source_files'] ||= []
      map['source_files'] << { 'source_file' => event.get('path') }
    "
    push_map_as_event_on_timeout => true
    timeout => 600 # assuming this is the most far apart they will arrive
  }
  ...
}
output {
  elasticsearch { ... }
}
And other such variations. However, I keep getting individual records written to the index and am unable to get a combined one. Yet again, as you can see from the data set, there's no guarantee of the sequencing of records, so I am wondering if the filter is the right tool for the job to begin with? :-\
Or is it just me not being able to use it right! ;-)
In either case, any inputs/ comments/ suggestions welcome. Thanks!
PS: This message is being cross-posted over from Elastic forums. I am providing a link there just in case some answers pop up there too.
The answer is to use Elasticsearch in upsert mode; a minimal sketch of the specifics follows.
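This sketch uses the question's correlation fields as the document id so both sources update the same document (hosts and index name are illustrative):
output {
  elasticsearch {
    hosts         => ["localhost:9200"]
    index         => "operations"
    document_id   => "%{Operation}_%{RefData1}_%{RefData2}"
    action        => "update"
    doc_as_upsert => true  # create the document if absent, merge fields if present
  }
}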
I recommend, first, that you make sure the information reaches you in order so that the filter can handle it better; second, you could set pipeline.workers: 1 and pipeline.ordered: true in your pipeline.yml, thus guaranteeing the order of processing.

How to use return value from a Puppet exec?

How can I make the below logic work? My aim is to compare the value of custom fact $environment and the content of the file /etc/facter/facts.d/oldvalue.
If the custom fact $environment is not equal to the content of file /etc/facter/facts.d/oldvalue, then execute the following code.
exec { 'catenvchange':
  command => "/bin/cat /root/oldvalue",
}

if $environment != exec['catenvchange'] { #code# }
Exec resources do not work that way. In fact, no resource works that way, or any way remotely like that. Moreover, the directory /etc/facter/facts.d/ serves a special purpose, and your expectation for how it might be appropriate to use a file within is not consistent with that purpose.
What you describe wanting to do looks vaguely like setting up an external fact and testing its value. If you drop an executable script named /etc/facter/facts.d/anything by some means (manually, plugin sync, a File resource, ...), then that script will be executed before each Puppet run as part of the process of gathering node facts. The standard output generated by the script is parsed for key=value pairs, each defining a fact name and its value. The facts so designated, such as one named "last_environment", will be available during catalog building. You could then use it like so:
if $::environment != $::last_environment {
# ...
}
Update:
One way to use this mechanism to memorialize the value that a given fact, say $::environment, has on one run so that it can be read back on the next run would be to declare a File resource managing an external fact script. For example,
file { '/etc/facter/facts.d/oldvalues':
  ensure  => 'file',
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  content => "#!/bin/bash\necho 'last_environment=${::environment}'\n",
}
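For illustration, if the node's environment was "production" on the previous run, the managed script and the fact it provides would look like this (output is illustrative):
$ cat /etc/facter/facts.d/oldvalues
#!/bin/bash
echo 'last_environment=production'

$ facter -p last_environment
production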
