Get mounted disk space on Temp in Linux machine - linux

I am new to the Perl world. I have written a Perl script for calculating free disk space, but whenever it generates output, it gives me a different number than what df -h actually shows.
My requirement is to show the free disk space of a specific mount point. E.g. I want to show only the "Use%" figure for /boot, and it should match the figure from df -h.
Please find my script for reference via the link named Actual Script below.
Actual Script

The df function from the Filesys::Df module returns a reference to a hash (see perldoc perlreftut) with filesystem info fields.
Example:
$VAR1 = {
user_bavail => '170614.21875',
user_blocks => '179796.8203125',
user_fused => 408762,
used => '9182.6015625',
fused => 408762,
bavail => '170614.21875',
user_used => '9182.6015625',
su_bavail => '180077.20703125',
ffree => 11863876,
fper => 3,
user_favail => 11863876,
favail => 11863876,
user_files => 12272638,
blocks => '189259.80859375',
su_favail => 11863876,
files => 12272638,
per => 5,
su_blocks => '189259.80859375',
bfree => '180077.20703125',
su_files => 12272638
};
So your free space, in bytes, is:
my $ref = df($dir, 1);
print $ref->{bavail} . " bytes\n";
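The hash also has a per field (per => 5 in the dump above), which is the used-percentage figure that df -h prints under Use%. A minimal sketch for the /boot case from the question (assuming Filesys::Df is installed):

use strict;
use warnings;
use Filesys::Df;

# Query the filesystem that /boot is mounted on
my $ref = df("/boot") or die "df() failed for /boot\n";

# 'per' holds the used percentage, the same figure df -h shows under Use%
print "Use%: $ref->{per}%\n";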

Related

Logstash convert date duration from string to hours

I have a column like this:
business_time_left
3 Hours 24 Minutes
59 Minutes
4 Days 23 Hours 58 Minutes
0 Seconds
1 Hour
and so on..
What I want to do in Logstash is to convert this entirely into hours.
So my values should be converted to something like:
business_time_left
3.24
0.59
119.58
0
1
Is this possible?
My config file:
input {
http_poller {
urls => {
snowinc => {
url => "https://service-now.com"
user => "your_user"
password => "yourpassword"
headers => {Accept => "application/json"}
}
}
request_timeout => 60
metadata_target => "http_poller_metadata"
schedule => { cron => "* * * * * UTC"}
codec => "json"
}
}
filter
{
json {source => "result" }
split{ field => ["result"] }
}
output {
elasticsearch {
hosts => ["yourelastuicIP"]
index => "inc"
action => "update"
document_id => "%{[result][number]}"
doc_as_upsert => true
}
stdout { codec => rubydebug }
}
Sample JSON input data, when the URL is hit:
{"result":[
{
"made_sla":"true",
"Type":"incident resolution p3",
"sys_updated_on":"2019-12-23 05:00:00"
"business_time_left":" 59 Minutes"} ,
{
"made_sla":"true",
"Type":"incident resolution l1.5 p4",
"sys_updated_on":"2019-12-24 07:00:00"
"business_time_left":"3 Hours 24 Minutes"}]}
Thanks in advance!
Q: Is this possible?
A: Yes.
Assuming your json and split filters are working correctly and the field business_time_left holds a single value like you showed (e.g. 4 Days 23 Hours 58 Minutes), I personally would do the following:
First, make sure that your data follows a consistent pattern, i.e. standardize the quantity descriptions: minutes are always labeled as "Minutes", not Mins, min, or whatever.
Next up, you can parse the field with the grok filter like so:
filter{
grok{
match => { "business_time_left" => "(%{INT:calc.days}\s+Days)?%{SPACE}?(%{INT:calc.hours}\s+Hours)?%{SPACE}?(%{INT:calc.minutes}\s+Minutes)?%{SPACE}?(%{INT:calc.seconds}\s+Seconds)?%{SPACE}?" }
}
}
This will extract all available values into the desired fields, e.g. calc.days. The ? character prevents grok from failing if, e.g., there are no seconds. You can test the pattern on this site.
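For example, for an input of 4 Days 23 Hours 58 Minutes the pattern above should (assuming the grok filter is configured exactly as shown) yield roughly:
calc.days => "4"
calc.hours => "23"
calc.minutes => "58"
Note that grok captures are strings, which is why the ruby filter below calls to_i on them.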
With the data extracted, you can implement a ruby filter to aggregate the numeric values like so (untested though):
ruby{
code => '
days = event.get("calc.days")
hours = event.get("calc.hours")
minutes = event.get("calc.minutes")
sum = 0
if days
days_numeric = days.to_i
days_as_hours = days_numeric * 24
sum += days_as_hours
end
if hours
sum += hours.to_i
end
if minutes
# divide by 100.0 (not 100) so the minutes end up as the decimal part
# instead of being truncated to 0 by integer division
sum += (minutes.to_i / 100.0)
end
# seconds and so on ...
event.set("business_time_left_as_hours", sum)
'
}
So basically you check if the values are present and add them to a sum with your custom logic.
event.set("business_time_left_as_hours", sum) will set the result as a new field to the document.
These code snippets are not intended to work out of the box; they are just hints. So please check the documentation on the ruby filter, Ruby coding in general, and so on.
I hope I could help you.

logstash - output single event into multiple line output file

I have a jdbc input with a select statement. Each row in the result set has 3 columns: c1, c2, c3. The emitted event has the following structure:
{"c1":"v1", "c2":"v2", "c3":"v3", "file_name":"tmp.csv"}
I want to output the values in a file in the following manner:
output file:
v1
v2
v3
This is the output configuration:
file {
path => "/tmp/%{file_name}"
codec => plain { format => "%{c1}\n%{c2}\n%{c3}" }
write_behavior => "overwrite"
flush_interval => 0
}
But what is generated is:
output file:
v1\nv2\nv3
Is the plain codec plugin not the one I need? Is there another codec plugin for the file output plugin that I can use? Or is writing my own plugin my only option?
Thanks!
A bit late to the party, but maybe this helps others. Although it looks funky, you should be able to get away with simply hitting Enter within the format string (using the line codec).
file {
path => "/tmp/%{file_name}"
codec => line {
format => "%{c1}
%{c2}
%{c3}"
}
write_behavior => "overwrite"
flush_interval => 0
}
Not the prettiest approach, but it works. Not sure if there is a better way.
what you are looking for is the line codec plugin: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-line.html

Puppet read file content and generate a hash

I have a file called fstab.txt which contains:
UUID=86861354-d783-4b9e-a871-e9fbbfc35c22 /mnt/d1 ext4 defaults 1 2
UUID=ffa788ba-0802-4305-ab59-2a34dda3a706 /mnt/d2 ext4 defaults 1 2
UUID=993eec37-9c6d-4ba6-9ed3-77f2d7652256 /mnt/d3 ext4 defaults 1 2
UUID=36817374-0d46-4d5b-ac9b-2229268b0978 /mnt/d4 ext4 defaults 1 2
I want to read the file and generate a hash as below:
hash = {
"UUID=86861354-d783-4b9e-a871-e9fbbfc35c22" => "/mnt/d1",
"UUID=ffa788ba-0802-4305-ab59-2a34dda3a706" => "/mnt/d2",
"UUID=993eec37-9c6d-4ba6-9ed3-77f2d7652256" => "/mnt/d3",
"UUID=36817374-0d46-4d5b-ac9b-2229268b0978" => "/mnt/d4",
}
Currently I am thinking this way:
$output = generate("/bin/cat fstab.txt")
And then split out $output.
Could someone guide me to a better way to do this?
Thanks in advance.
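For reference, a minimal sketch of that read-and-split idea using Puppet 4+ built-ins (file(), split(), reduce()); the path to fstab.txt is illustrative:

# Read the whole file at catalog compilation time and split it into lines
$lines = split(file('/tmp/fstab.txt'), "\n")

# Build the hash: first whitespace-separated field => second field
$fstab_hash = $lines.reduce({}) |$memo, $line| {
  $fields = $line.split(/\s+/)
  $memo + { $fields[0] => $fields[1] }
}

notice($fstab_hash)

Keep in mind that file() reads the file on the Puppet server, not the agent, and a trailing empty line would need to be filtered out first.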

Force lshosts command to return megabytes for "maxmem" and "maxswp" parameters

When I type "lshosts" I am given:
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
server1 X86_64 Intel_EM 60.0 12 191.9G 159.7G Yes ()
server2 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
server3 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
I am trying to return maxmem and maxswp in megabytes, not gigabytes, when lshosts is called. I am trying to send Xilinx ISE jobs to my LSF cluster, but the software expects integer megabyte values for maxmem and maxswp. From debugging, it appears that the software grabs these parameters using the lshosts command.
I have already checked in my lsf.conf file that:
LSF_UNIT_FOR_LIMITS=MB
I have tried searching the IBM Knowledge Base, but to no avail.
Do you use a specific command to specify maxmem and maxswp units within the lsf.conf, lsf.shared, or other config files?
Or does LSF force return the most practical unit?
Any way to override this?
LSF_UNIT_FOR_LIMITS should work, if you completely drained the cluster of all running, pending, and finished jobs. According to the docs, MB is the default, so I'm surprised.
That said, you can use something like this to transform the results:
$ cat to_mb.awk
function to_mb(s) {
# position of the trailing unit letter in "KMG": K=1, M=2, G=3
e = index("KMG", substr(s, length(s)))
# numeric part without the unit letter (substr positions start at 1, not 0)
m = substr(s, 1, length(s) - 1)
# scale to megabytes: K -> 10^-3, M -> 10^0, G -> 10^3
return m * 10^((e-2) * 3)
}
{ print $1 " " to_mb($6) " " to_mb($7) }
$ lshosts | tail -n +2 | awk -f to_mb.awk
server1 191900 159700
server2 191900 191200
server3 191900 191200
The to_mb function should also handle 'K' or 'M' units, should those pop up.
If LSF_UNIT_FOR_LIMITS is defined in lsf.conf, lshosts will always print the output as a floating point number, and in some versions of LSF the parameter is defined as 'KB' in lsf.conf upon installation.
Try searching for any definitions of the parameter in lsf.conf and commenting them all out so that the parameter is left undefined, I think in that case it defaults to printing it out as an integer in megabytes.
(Don't ask me why it works this way)
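For example, a quick way to find every place the parameter is defined before commenting them out (assuming $LSF_ENVDIR points at your LSF config directory, as it usually does in a sourced LSF environment):

grep -n "LSF_UNIT_FOR_LIMITS" $LSF_ENVDIR/lsf.conf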

How to programmatically query senderbase.org?

I'm trying to programmatically query senderbase.org but it's really hard to find any information about it.
I tried to query with:
dig txt 8.8.8.8.query.senderbase.org
Which returns:
"0-0=1|1=Google Incorporated|2=3.7|3=4.0|4=3228772|6=1174353533|8=2880|9=1|20=google-public-dns-a.|21=google.com|22=Y|23=7.9|24=8.0|25=1049184000|40=3.7|41=4.0|43=3.8|44=0.06|45=N|46=24|48=24|49=1.00|50=Mountain View|51=CA|52=94043|53=US|54=-122.057|"
But none of these fields seems to indicate whether the IP is listed or not.
I found the following page with a description of the fields, but field 26, which seems to be what I need, is not present: http://web.archive.org/web/20040830010414/http://www.senderbase.org/dnsresponses.html
I also found some SpamAssassin extensions which query rf.senderbase.org, but it gives me inconsistent results. For the same field, sometimes it returns a float and sometimes it doesn't return anything.
Any ideas? Or is parsing their HTML the only option?
Thanks.
The key values are as follows:
'0-0' => 'version_number',
1 => 'org_name',
2 => 'org_daily_magnitude',
3 => 'org_monthly_magnitude',
4 => 'org_id',
5 => 'org_category',
6 => 'org_first_message',
7 => 'org_domains_count',
8 => 'org_ip_controlled_count',
9 => 'org_ip_used_count',
10 => 'org_fortune_1000',
20 => 'hostname',
21 => 'domain_name',
22 => 'hostname_matches_ip',
23 => 'domain_daily_magnitude',
24 => 'domain_monthly_magnitude',
25 => 'domain_first_message',
26 => 'domain_rating',
40 => 'ip_daily_magnitude',
41 => 'ip_monthly_magnitude',
43 => 'ip_average_magnitude',
44 => 'ip_30_day_volume_percent',
45 => 'ip_in_bonded_sender',
46 => 'ip_cidr_range',
47 => 'ip_blacklist_score',
50 => 'ip_city',
51 => 'ip_state',
52 => 'ip_postal_code',
53 => 'ip_country',
54 => 'ip_longitude',
55 => 'ip_latitude',
The "domain rating" specified in SenderBase DNS responses is something that was implemented but never utilized, or at least not enough to make it useful. Other fields that were originally specified are a little hit-or-miss as well, although most should be pretty fresh for higher-volume senders of email. You might want to check out the Perl Net::SenderBase library, either to use it directly or as a reference for your own implementation.
The rf.senderbase.org domain you referred to reflects SenderBase Reputation Scores (SBRS), which is mostly independent from what you see on http://www.senderbase.org. SBRS is not considered a public service, so it would be wise to receive permission from Cisco/IronPort before using it for anything serious.
