Zabbix active check log[] / logrt[] - Linux

I'm trying to create the right regexp for a log file I would like to monitor, but I have tried many things and none of them work. Here is what I would like to monitor.
My log file looks like this:
17-06-14  Name          Succ  Fail  Reject
11:36:33  BalanceCheck  2     6     10
          Connections   3     0     0
          Transfers     0     0     0
17-06-14  Name          Succ  Fail  Reject
11:37:33  BalanceCheck  2     6     1
          Connections   3     0     0
          Transfers     50    2     10
The value I'm trying to extract is, for example, "2": the number in the Succ column on the BalanceCheck row.
I was trying to use:
log[/tmp/logfile,,"BalanceCheck *([0-9]+)",,,,\1]
But Zabbix shows the item as not supported, because the key ends up with too many parameters.
I also can't get a regexp for the Fail value to work properly.

You are currently using:
log[/tmp/logfile,,"BalanceCheck *([0-9]+)",,,,\1]
Zabbix log[] key syntax is:
log[file,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
Notice that the second parameter should be the regexp, but your empty second parameter pushes it into the third slot (and \1 into an eighth slot that doesn't exist, which is why Zabbix rejects the key). Remove the first or second comma in your key.
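For example, the corrected key could look like this (a sketch, assuming the defaults are fine for encoding, maxlines, mode, and maxdelay):
log[/tmp/logfile,"BalanceCheck +([0-9]+)",,,,\1]
For the Fail value, a second capture group that skips over the Succ column should work, assuming the column layout in your sample:
log[/tmp/logfile,"BalanceCheck +[0-9]+ +([0-9]+)",,,,\1]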

Related

Bash Script Efficient For Loop (Different Source File)

First of all, I'm a beginner at bash scripting. I usually code in Java, but this particular task requires me to write some bash scripts on Linux. FYI, I've already made a working script, but I don't think it's efficient enough given the large files I'm dealing with.
The problem is simple: I have two logs that I need to compare, making some corrections to one of them. I'll call them logA and logB. The two logs have different formats; here is an example:
01/04/2015 06:48:59|9691427842139609|601113090635|PR|30.00|8.3|1.7| <- log A
17978712 2015-04-01 06:48:44 601113090635 SUCCESS DealerERecharge.<-log B
17978714 2015-04-01 06:48:49 601113090635 SUCCESS DealerERecharge.<-log B
As you can see, there is a gap in the timestamps. The line in log B that actually matches log A is the one with ID 17978714, because its time is the closest. The largest gap I've seen is one minute. I can't use simple range logic, because if more than one line in log B falls within the one-minute range, all of those lines will end up in my regenerated log.
The script I made contains a for loop that iterates over the timestamps of log A until it hits something in log B (the first hit is the closest).
Inside the for loop I have this line of code, which makes the loop slow:
LINEOUTPUT=$(less $File2 | grep "Validation 1" | grep "Validation 2" | grep "Timestamp From Log A")
I've read some samples using sed, but the problem is that I have two more validations to consider before matching on the timestamp.
The validations work as filters to narrow down the exact match between logs A and B.
Additional info: I benchmarked the script by timing the loop. One thing I've noticed is that even though I only use one pipeline in that line, each loop iteration is still slow.
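One direction worth sketching (an assumption on my part, since the real validation strings aren't shown): collapse the three chained greps into a single awk pass and drop the unnecessary less, so each iteration spawns one process instead of four:
# hypothetical filter strings stand in for "Validation 1"/"Validation 2";
# $timestamp holds the current timestamp taken from log A
LINEOUTPUT=$(awk -v ts="$timestamp" '/Validation 1/ && /Validation 2/ && index($0, ts)' "$File2")
Better still, filter log B down to the lines passing both validations once, before the loop, and search only that much smaller temporary file inside it.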

How to generate a pin code in Symfony

I'm working right now on a Symfony2 web app, and I need to automatically generate random PIN codes made of 6 alphanumeric characters, for example:
14gkf8
kfgh88
This code will be sent by mail to the user; that's how they will connect to the platform.
Does anyone have an idea how to do this, or is there maybe a ready-made tool for it? Thank you.
You can generate random codes with the following code:
substr(bin2hex(openssl_random_pseudo_bytes(100)), 0, 6)
openssl_random_pseudo_bytes() generates random binary data, bin2hex() turns that binary data into hexadecimal (e.g. 5c3aa…e55), and substr(…, 0, 6) keeps only the first 6 characters. Since hexadecimal uses the values 0 to 9 and a to f, there are 16 possible characters at each position, giving 16^6 = 16,777,216 possibilities (with 0 to 9 and a to z it's 36^6 = 2,176,782,336, only about 130 times more). If the user doesn't need to type the key, you can use more characters; with 12 characters, for example, you get far more possibilities: 16^12 ≈ 2.814×10¹⁴.
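If you want the full 0-9a-z alphabet rather than just hex, here is a minimal sketch (generatePin is a hypothetical name, and it assumes PHP 7+ for random_int(); on PHP 5 you would fall back to the openssl approach above):
<?php
// Picks $length characters uniformly from a 36-character alphabet.
function generatePin(int $length = 6): string
{
    $alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
    $pin = '';
    for ($i = 0; $i < $length; $i++) {
        $pin .= $alphabet[random_int(0, strlen($alphabet) - 1)];
    }
    return $pin;
}

echo generatePin(); // e.g. "k4gf8h"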
You can use uniqid() to generate a unique alphanumeric string

Loop through api call in bash script

I have an API that returns 50 users at a time.
Is there a way of looping through the API call?
expand=users%5B1%3A50%5D
URL-decoded, that parameter is users[1:50]: the 1 after %5B is the starting index, and it pulls through 50, the number after %3A.
I have a script that stores the responses in a text file, but how can I loop through this in increments of 50?
For example, by having variables in place of the numbers:
expand=users%5B$num1%3A$num2%5D
expand=( users%5B{1..50}%3A50%5D )
A brace expansion such as {1..5} expands to 1 2 3 4 5, for example:
$ echo abc{1..5}def
abc1def abc2def abc3def abc4def abc5def
Now all you need to do is loop over the expanded values (note the parentheses above: brace expansion doesn't happen in a plain scalar assignment, so an array is used):
for api in "${expand[@]}"
do
    # do something with "$api"
done
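If what you actually want is pages of 50 (1:50, 51:100, and so on), a minimal sketch using seq with a step of 50 might look like this; BASE_URL, the output file, and the upper bound of 951 are all hypothetical placeholders to adapt:
# walk the API in pages of 50: users[1:50], users[51:100], ...
for start in $(seq 1 50 951); do
    end=$((start + 49))
    curl -s "${BASE_URL}?expand=users%5B${start}%3A${end}%5D" >> users.txt
done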

Create an aggregate Nagios check based on values from other checks

I have multiple checks for ten different web servers. One of these checks monitors the number of established connections (using netstat and findstr to filter for ESTABLISHED). It works as expected on servers WEB1 through WEB10. I can graph the established TCP connection count (using pnp4nagios) because the output is an integer. Above a certain threshold the check goes into warning status; above another it becomes critical.
The individual checks are working the way I want.
However, what I'm looking to do is add all of these connection counts into one graph: some sort of aggregate, a SUM of all the others.
Is there a way to take the values/outputs from the other checks and add them together into one?
Server  TCP Connections
WEB1    223
WEB2    124
WEB3    412
WEB4    555
WEB5    412
WEB6    60
WEB7    0
WEB8    144
WEB9    234
WEB10   111
TOTAL   2275
I want to graph only the total.
Nagios itself does not use performance data in any way; it just takes it and passes it to whatever you specify in your config. So there's no good way to do this in Nagios itself. (You could pipe Nagios's performance output to some tee command that passes it both to pnp4nagios and to a separate script that sums everything up, but that would be horrible to maintain.)
If I had your problem, I'd do the following:
At the end of your current plugin, add something like
echo $nconnections > /some/dir/connections.$NAGIOS_HOSTNAME
where nconnections is the number of connections the plugin found. This example is shell; adapt it if your plugin is written in a different language. The important part is that it's easy for the plugin to write the number to a well-known file.
Then, create a new plugin with code similar to:
#!/bin/bash
WARN=1000
CRIT=2000
# Sum the per-host counts written by the individual checks.
sumconn=$(cat /some/dir/connections.* | awk '{sum += $1} END {print sum}')
if [ "$sumconn" -ge "$CRIT" ]; then
    echo "Connection sum CRITICAL: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 2
elif [ "$sumconn" -ge "$WARN" ]; then
    echo "Connection sum WARNING: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 1
else
    echo "Connection sum OK: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 0
fi
That way, whenever you probe an individual server, you'll save its count for the new plugin; the new plugin just picks up whatever data is there, which keeps it extremely short. Of course, the summary will lag behind a bit, but you can minimize that effect by setting normal_check_interval low enough on the individual services.
If you want to get fancy, add code to remove files older than a certain threshold from the cache directory. Or you could even remove the individual services from your Nagios configuration and have the summation plugin call the individual-server plugin for each server, if you're really not interested in the per-server connection count.
EDIT:
To solve the NRPE problem, create a check_nrpe_and_save wrapper like this:
#!/bin/bash
# Pass all arguments straight through to the real check_nrpe.
output=$($NAGIOS_USER1/check_nrpe "$@")
rc=$?
# Extract the connection count from the first line of the plugin output.
nconnections=$(echo "$output" | head -1 | sed 's/.*you have \([0-9]*\) connections.*/\1/')
echo "$nconnections" > "/some/dir/connections.$NAGIOS_HOSTNAME"
echo "$output"
exit $rc
Create a new define command entry for this script, and use the new command in your service definitions. You'll have to adjust the sed pattern to match what your plugin actually outputs; if the connection count only appears in the performance data, an expression like .*connections=\([0-9]*\);.* should work. check_nrpe_and_save behaves just like check_nrpe (it prints the same string and returns the same exit code) while also writing the count to the special file.
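The command definition might look like this (a sketch; the plugin path and the argument layout are assumptions to adapt to your setup):
define command {
    command_name  check_nrpe_and_save
    command_line  $USER1$/check_nrpe_and_save -H $HOSTADDRESS$ -c $ARG1$
}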

SPSS can't assign string variable

The first block of syntax runs fine, but where it says "value of F14a" I want to actually assign the string stored in F14a to F14a_pd3. The syntax below it, which should do exactly that, crashes SPSS for me. F14a is a string variable of length 50.
Working (but useless) syntax:
STRING F14a_pd3 (A50).
DO IF NOT F14a="missing" & papadex=3.
COMPUTE F14a_pd3="value of F14a".
ELSE.
COMPUTE F14a_pd3="missing".
END IF.
FREQUENCIES F14a_pd3.
Crashing Syntax:
STRING F14a_pd3 (A50).
DO IF NOT F14a="missing" & papadex=3.
COMPUTE F14a_pd3=F14a.
ELSE.
COMPUTE F14a_pd3="missing".
END IF.
FREQUENCIES F14a_pd3.
You should consider calling technical support. Your syntax is fine, and even if it weren't, it shouldn't make anything crash (at worst you should get an error message).
Run this reproducible example demonstrating your syntax. If it doesn't crash SPSS, then the problem possibly lies with the particular data file you are working on.
data list free / F14a (A10) papadex (F1.0).
begin data
missing 3
xxxxx 3
missing 0
yyyyy 1
zzzzz 0
end data.
STRING F14a_pd3 (A10).
DO IF NOT F14a="missing" & papadex=3.
COMPUTE F14a_pd3=F14a.
ELSE.
COMPUTE F14a_pd3="missing".
END IF.
LIST ALL.
Which produces this table:
F14a     papadex  F14a_pd3
missing  3        missing
xxxxx    3        xxxxx
missing  0        missing
yyyyy    1        missing
zzzzz    0        missing
That syntax works fine for me. When you say SPSS crashes, what exactly happens? What else is going on in that session? And what version and platform are you using?
