How to programmatically query senderbase.org? - dns

I'm trying to programmatically query senderbase.org but it's really hard to find any information about it.
I tried to query with:
dig txt 8.8.8.8.query.senderbase.org
Which returns:
"0-0=1|1=Google Incorporated|2=3.7|3=4.0|4=3228772|6=1174353533|8=2880|9=1|20=google-public-dns-a.|21=google.com|22=Y|23=7.9|24=8.0|25=1049184000|40=3.7|41=4.0|43=3.8|44=0.06|45=N|46=24|48=24|49=1.00|50=Mountain View|51=CA|52=94043|53=US|54=-122.057|"
But none of these fields seem to indicate whether the IP is listed or not.
I found the following page with a description of the fields, but field 26, which seems to be what I need, is not present: http://web.archive.org/web/20040830010414/http://www.senderbase.org/dnsresponses.html
I also found some SpamAssassin extensions which query rf.senderbase.org, but it gives me inconsistent results: for the same field, sometimes it returns a float and sometimes it returns nothing at all.
Any ideas? Or is parsing their HTML the only option?
Thanks.

The key values are as follows
'0-0' => 'version_number',
1 => 'org_name',
2 => 'org_daily_magnitude',
3 => 'org_monthly_magnitude',
4 => 'org_id',
5 => 'org_category',
6 => 'org_first_message',
7 => 'org_domains_count',
8 => 'org_ip_controlled_count',
9 => 'org_ip_used_count',
10 => 'org_fortune_1000',
20 => 'hostname',
21 => 'domain_name',
22 => 'hostname_matches_ip',
23 => 'domain_daily_magnitude',
24 => 'domain_monthly_magnitude',
25 => 'domain_first_message',
26 => 'domain_rating',
40 => 'ip_daily_magnitude',
41 => 'ip_monthly_magnitude',
43 => 'ip_average_magnitude',
44 => 'ip_30_day_volume_percent',
45 => 'ip_in_bonded_sender',
46 => 'ip_cidr_range',
47 => 'ip_blacklist_score',
50 => 'ip_city',
51 => 'ip_state',
52 => 'ip_postal_code',
53 => 'ip_country',
54 => 'ip_longitude',
55 => 'ip_latitude',

The "domain rating" specified in SenderBase DNS responses is something that was implemented but never utilized, or at least not enough to make it useful. Other fields that were originally specified are a little hit-or-miss as well, although most should be pretty fresh for higher-volume senders of email. You might want to check out the Perl Net::SenderBase library, either to use it directly or as a reference for your own implementation.
The rf.senderbase.org domain you referred to reflects SenderBase Reputation Scores (SBRS), which are mostly independent of what you see on http://www.senderbase.org. SBRS is not considered a public service, so it would be wise to obtain permission from Cisco/IronPort before using it for anything serious.
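For reference, here is a minimal sketch of how the TXT response can be fetched and split into the numbered fields above. It assumes the dnspython package (version 2.0 or later) and uses the same <ip>.query.senderbase.org name as the dig example; it is an illustration, not an official API.
import dns.resolver  # assumes dnspython >= 2.0 (pip install dnspython)

def query_senderbase(ip):
    """Return the pipe-delimited SenderBase TXT record as a dict keyed by field number."""
    name = "%s.query.senderbase.org" % ip   # same name form as the dig example above
    answer = dns.resolver.resolve(name, "TXT")
    # a TXT record may be split into several character strings; join them
    txt = "".join(part.decode() for part in answer[0].strings)
    fields = {}
    for item in txt.split("|"):
        if "=" in item:
            key, _, value = item.partition("=")
            fields[key] = value
    return fields

data = query_senderbase("8.8.8.8")
print(data.get("1"))   # org_name, e.g. "Google Incorporated"
print(data.get("23"))  # domain_daily_magnitude
The numeric keys can then be mapped to names using the table above.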

Related

Setting all inputs of an activity to 0 in wurst and brightway

When trying to set the existing exchanges (inputs) of an activity to zero and additionally adding a new exchange, the following errors are returned:
"MultipleResults("Multiple production exchanges found")"
"NoResults: No suitable production exchanges founds"
Firstly I set all the input amounts to zero except for the output:
for idx, item in enumerate(ds['exchanges']):
    item['amount'] = 0
ds['exchanges'][0]['amount'] = 1
Secondly, I add a new exchange:
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location']
})
Writing the database in the last step returns the errors:
w.write_brightway2_database(DB, NEW_DB_NAME)
Does anyone see where the problem could be or if there are alternative ways to replace multiple inputs with another one?
Thanks a lot for any hints!
Lukas
Full error traceback:
--------------------------------------------------------------------------
NoResults Traceback (most recent call last)
<ipython-input-6-d4f2dde2b33d> in <module>
2
3 NEW_DB_NAME = "ecoinvent_copy_new"
----> 4 w.write_brightway2_database(ecoinvent, NEW_DB_NAME)
5
6 # Check for new databases
~\Miniconda3\envs\ab\lib\site-packages\wurst\brightway\write_database.py in write_brightway2_database(data, name)
47
48 change_db_name(data, name)
---> 49 link_internal(data)
50 check_internal_linking(data)
51 check_duplicate_codes(data)
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in link_internal(data, fields)
11 input_databases = get_input_databases(data)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
---> 13 products = {
14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
~\Miniconda3\envs\ab\lib\site-packages\wurst\linking.py in <dictcomp>(.0)
12 get_tuple = lambda exc: tuple([exc[f] for f in fields])
13 products = {
---> 14 get_tuple(reference_product(ds)): (ds['database'], ds['code'])
15 for ds in data
16 }
~\Miniconda3\envs\ab\lib\site-packages\wurst\searching.py in reference_product(ds)
82 and exc['type'] == 'production']
83 if not excs:
---> 84 raise NoResults("No suitable production exchanges founds")
85 elif len(excs) > 1:
86 raise MultipleResults("Multiple production exchanges found")
NoResults: No suitable production exchanges found
It seems that setting the exchanges to zero caused the problem; the database cannot be written in this case. What I did now is set the exchanges to a very small number, so that they have no effect on the impact assessment but are not zero. Not the most elegant way, but it works for me. So if anyone has similar problems, that might be a quick solution.
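Judging by the traceback, wurst's reference_product helper appears to skip production exchanges whose amount is zero, so zeroing everything and relying on the production exchange being at index 0 is fragile. If that is the underlying issue, an alternative to the small-number workaround is to leave the production exchange untouched and zero only the other inputs. A minimal, untested sketch, reusing the same ds and new variables as in the question:
# Zero only the non-production exchanges so the production (reference)
# exchange keeps its original amount, wherever it sits in the list.
for exc in ds['exchanges']:
    if exc['type'] != 'production':
        exc['amount'] = 0

# Then append the replacement input as before.
ds['exchanges'].append({
    'amount': 1,
    'input': (new['database'], new['code']),
    'type': 'technosphere',
    'name': new['name'],
    'location': new['location'],
})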

Logstash convert date duration from string to hours

I have a column like this:
business_time_left
3 Hours 24 Minutes
59 Minutes
4 Days 23 Hours 58 Minutes
0 Seconds
1 Hour
and so on..
What I want to do in Logstash is to convert this entirely into hours.
So my values should be converted to something like:
business_time_left
3.24
0.59
119.58
0
1
Is this possible?
My config file:
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => {Accept => "application/json"}
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC"}
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => ["result"] }
}
output {
  elasticsearch {
    hosts => ["yourelasticIP"]
    index => "inc"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
Sample JSON input data when the URL is hit:
{"result":[
{
"made_sla":"true",
"Type":"incident resolution p3",
"sys_updated_on":"2019-12-23 05:00:00"
"business_time_left":" 59 Minutes"} ,
{
"made_sla":"true",
"Type":"incident resolution l1.5 p4",
"sys_updated_on":"2019-12-24 07:00:00"
"business_time_left":"3 Hours 24 Minutes"}]}
Thanks in advance!
Q: Is this possible?
A: Yes.
Assuming your json and split filters are working correctly and the field business_time_left holds a single value like you showed (e.g. 4 Days 23 Hours 58 Minutes), I personally would do the following:
First, make sure that your data follows a consistent pattern, i.e. standardize the quantity labels so that minutes are always labeled as "Minutes", not Mins, min or whatever.
Next, you can parse the field with the grok filter like so:
filter {
  grok {
    match => { "business_time_left" => "(%{INT:calc.days}\s+Days)?%{SPACE}?(%{INT:calc.hours}\s+Hours)?%{SPACE}?(%{INT:calc.minutes}\s+Minutes)?%{SPACE}?(%{INT:calc.seconds}\s+Seconds)?%{SPACE}?" }
  }
}
This will extract all available values into the desired fields, e.g. calc.days. The ? character prevents grok from failing if, for example, there are no seconds. For instance, "4 Days 23 Hours 58 Minutes" yields calc.days = 4, calc.hours = 23 and calc.minutes = 58. You can test the pattern with an online grok debugger.
With the data extracted, you can implement a ruby filter to aggregate the numeric values like so (untested though):
ruby {
  code => '
    days = event.get("calc.days")
    hours = event.get("calc.hours")
    minutes = event.get("calc.minutes")
    sum = 0
    if days
      days_as_hours = days.to_i * 24   # one day counts as 24 hours
      sum += days_as_hours
    end
    if hours
      sum += hours.to_i
    end
    if minutes
      # use float division, otherwise Ruby truncates e.g. 58 / 100 to 0
      sum += (minutes.to_i / 100.0)
    end
    # seconds and so on ...
    event.set("business_time_left_as_hours", sum)
  '
}
So basically you check if the values are present and add them to a sum with your custom logic.
event.set("business_time_left_as_hours", sum) will set the result as a new field to the document.
These code snippets are not intended to work out of the box; they are just hints. So please check the documentation on the ruby filter, Ruby coding in general, and so on.
I hope I could help you.
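If you want a quick sanity check of the conversion outside Logstash, the same aggregation can be sketched in plain Python (just an illustration of the arithmetic, not part of the pipeline; the hypothetical to_hours helper mirrors the grok captures and the ruby code above):
import re

def to_hours(text):
    """Convert e.g. "4 Days 23 Hours 58 Minutes" to 119.58 (days*24 + hours, minutes as decimals)."""
    units = {"Days": 0, "Hours": 0, "Minutes": 0}
    for amount, unit in re.findall(r"(\d+)\s+(Days?|Hours?|Minutes?)", text):
        units[unit.rstrip("s") + "s"] = int(amount)  # normalize "Hour" -> "Hours"
    return units["Days"] * 24 + units["Hours"] + units["Minutes"] / 100.0

print(to_hours("4 Days 23 Hours 58 Minutes"))  # 119.58
print(to_hours("59 Minutes"))                  # 0.59
print(to_hours("1 Hour"))                      # 1.0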

Get mounted disk space on Temp in Linux machine

I am new to the Perl world. I have written a Perl script for calculating free disk space, but whenever it generates output, it gives me a different number than what the df -h command actually shows.
So my requirement is to show the free disk space of a specific mount point. E.g. I want to show only the /boot "Use%" figure, and it should match the df -h figure.
Please find my script for reference by clicking the link named Actual Script.
Actual Script
The df function from the Filesys::Df module returns a reference to a hash (perldoc perlreftut) with filesystem info fields.
Example:
$VAR1 = {
    user_bavail => '170614.21875',
    user_blocks => '179796.8203125',
    user_fused => 408762,
    used => '9182.6015625',
    fused => 408762,
    bavail => '170614.21875',
    user_used => '9182.6015625',
    su_bavail => '180077.20703125',
    ffree => 11863876,
    fper => 3,
    user_favail => 11863876,
    favail => 11863876,
    user_files => 12272638,
    blocks => '189259.80859375',
    su_favail => 11863876,
    files => 12272638,
    per => 5,
    su_blocks => '189259.80859375',
    bfree => '180077.20703125',
    su_files => 12272638
};
So your free space is:
my $ref = df($dir, 1);              # block size 1 => values are in bytes
print $ref->{bavail} . " bytes\n";
If you want the Use% figure that df -h shows, that should correspond to the per key:
print $ref->{per} . "%\n";          # percent used, as in the df -h "Use%" column

Zurb Foundation for Apps - CLI Fails2

At the risk of posting a duplicate: I am new here and don't have a rating yet, so it wouldn't let me comment on the only relevant similar question I did find here:
Zurb Foundation for Apps - CLI Fails
However, I tried the answer there and I still get the same failure.
My message is (I don't have the reputation so I can't post images "!##") essentially the same as the other post, except mine mentions line 118 of foundationCLI.js where theirs notes line 139. Also, the answer there said to fix line 97, but in mine that code is on line 99.
92 // NEW
93 // Clones the Foundation for Apps template and installs dependencies
94 module.exports.new = function(args, options) {
95 var projectName = args[0];
96 var gitClone = ['git', 'clone', 'https://github.com/zurb/foundation-apps-template.git', args[0]];
97 var npmInstall = [npm, 'install'];
98 var bowerInstall = [bower, 'install'];
99 var bundleInstall = [bundle.bat];
100 if (isRoot()) bowerInstall.push('--allow-root');
101
102 // Show help screen if the user didn't enter a project name
103 if (typeof projectName === 'undefined') {
104 this.help('new');
105 process.exit();
106 }
107
108 yeti([
109 'Thanks for using Foundation for Apps!',
110 '-------------------------------------',
111 'Let\'s set up a new project.',
112 'It shouldn\'t take more than a minute.'
113 ]);
114
115 // Clone the template repo
116 process.stdout.write("\nDownloading the Foundation for Apps template...".cyan);
117 exec(gitClone, function(err, out, code) {
118 if (err instanceof Error) throw err;
119
120 process.stdout.write([
121 "\nDone downloading!".green,
122 "\n\nInstalling dependencies...".cyan,
123 "\n"
124 ].join(''));
I also posted an error log yesterday at https://github.com/npm/npm/issues/7024, as directed in the error message (which I am also unable to post an image of "!##"), but I have yet to receive a response there.
Any idea how I can get past this so I can start an app?
Thanks, A
You may also need to install the git command-line client. On line 117, foundation-cli.js is trying to run git clone, and this is failing.
Could you please run
git --version
and paste the text (not image) of the output you see?
If you have installed git already (e.g., because you have Github for Windows < https://windows.github.com/ > ) then you may need to use the Git Shell shortcut or close/re-open your command prompt window in order to use git on the command line.
Once you've installed git and closed/reopened your shell, try the command foundation-apps new myApp again.

Puppet have a defined resource fail if a variable is set to undef

I am writing a Puppet defined type as follows:
#--------------------------------------------------#
#-------------------WindowsLog---------------------#
#--------------------------------------------------#
# Type to set up a windows log                      #
#--------------------------------------------------#

define windows_log($size = '25MB', $overflowAction = 'OverwriteAsNeeded', $logName = $title)
{
  # Microsoft is stupid. Get-WinEvent has different names for logmode than Limit-EventLog.
  # The following selector (basically a ternary operator) should fix that.
  $overflowWinEventName = $overflowAction ? {
    OverwriteAsNeeded => "Circular",
    OverwriteOlder    => "AutoBackup",
    DoNotOverwrite    => "Retain",
    default           => undef,
  }

  if($overflowWinEventName == undef)
  {
    fail("${overflowAction} is not a valid overflow action")
  }
  else {
    exec { "Set maximum log size for ${logName}":
      provider => powershell,
      command  => "Limit-EventLog -LogName ${logName} -MaximumSize ${size} -OverflowAction ${overflowAction}",
      unless   => "\$log = Get-WinEvent -ListLog ${logName}; if(\$log.MaximumSizeInBytes -eq ${size} -and \$log.LogMode -eq '${overflowWinEventName}'){exit 0}else{exit 1}",
    }
  }
}
However the method 'fail' does not have the effect I want, and none of the methods listed at http://docs.puppetlabs.com/references/latest/function.html seem to be right either.
Basically, I am trying to get Puppet to throw an error for this specific resource only, stop applying it, and then continue applying everything else. fail() throws a parser error which kills everything, and the other methods (warn, error, etc.) seem to have no effect on the agent.
Any help would be greatly appreciated! I may have just stupidly overlooked something.
Your construct is basically sound. Defined resources cannot really 'fail' like native resources, but using your if/else construct, it will only do any work if there is no error.
Use fail() only if you detect an error that should make the whole catalog invalid. To just send a message to the agent, use a notify resource instead.
notify {
  "FATAL - ${overflowAction} is not a valid overflow action":
    loglevel => 'err',
    withpath => true; # <- include the fully qualified resource name
}
