How can I query Spamhaus's SBL with a domain name?

I want to query Spamhaus's SBL using a domain name. I know this is possible to do because this form (Find SBL Listings by ISP Domain Name) does it and SpamAssassin does it, but I can only seem to get it to work with IP addresses. I took a quick look at the SpamAssassin code, but it has been so generalized that I could probably spend a couple hours tracking down the code that actually does something. Right now I can successfully query SBL for IP addresses like this:
#returns 127.0.0.2, so 208.73.210.0 is on the blacklist
dig +short 0.210.73.208.sbl.spamhaus.org
#returns nothing, so 72.14.225.72 isn't on the blacklist
dig +short 72.225.14.72.sbl.spamhaus.org
Querying with domain names seems to have something to do with DNS TXT records, but I don't know the right hostname to lookup. When I try something like
dig oversee.net.sbl.spamhaus.org TXT
I don't get any useful information back, but if you search with the form you find that oversee.net is associated with 208.73.210.0 which was reported as spamming on 30-Jul-2009 21:17 GMT.

Domains are in the "Domain Block List", not the SBL. Use dbl.spamhaus.org as the domain suffix.
The particular search you linked to is based on the ISP's domain name, and I don't believe it uses the same DNSBL interface.
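For example, a DBL lookup from the command line looks just like an SBL lookup, except that the domain is used as-is (no octet reversing). The first query below uses dbltest.com, which (as far as I know) is Spamhaus's documented DBL test entry; example.com is just a placeholder for a domain you want to check:
#a listed domain returns a 127.0.1.x code
dig +short dbltest.com.dbl.spamhaus.org
#returns nothing, so the domain isn't on the blocklist
dig +short example.com.dbl.spamhaus.org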

Related

Find all domains that point to a specific nameserver

I wonder how services that perform reverse NS lookups work.
So basically, let's say we have a server with an IP address.
That server is a nameserver, and some other domains point their NS records at it.
So for example, here: https://viewdns.info/reversens/
When we specify ns1.example.com we see all domains pointing there.
How would one approach this programmatically?
How would one approach this programmatically?
You can't do it directly: the DNS has no reverse index from a nameserver to the domains that use it.
What people do is, more or less, the following:
start with a list of domains (do searches, try dictionary words, use social media, download gTLD zone files, etc.)
resolve their NS records; that gives you each domain's nameservers
record in some database the domain <-> nameservers mapping
Now, with all that data, you can trivially do reverse queries. This is basically how everyone does it (hence it is never real time: you first have to collect all the information).
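A minimal sketch of that collection step, using dig from a shell (domains.txt and ns1.example.com are placeholders for your own seed list and the nameserver you are searching for):
#build a domain -> nameserver map from a seed list of domains
while read -r domain; do
  for ns in $(dig +short NS "$domain"); do
    printf '%s\t%s\n' "$domain" "$ns"
  done
done < domains.txt > ns-map.tsv
#the "reverse" lookup is then just a search over the collected data
grep -i 'ns1\.example\.com\.' ns-map.tsv | cut -f1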

What is the use of having canonical names in computer networks? Can't we just use alias names to get the IP address directly?

For example, say we want to reach google.com (let us assume it is an alias name): we look it up in DNS, get its canonical name, and that in turn gives us the IP address. Why can't we just get the IP address from the alias name, since it is also unique?
It is not guaranteed that an alias name always resolves to the same IP address, and there is a very good reason for that. Let's say person A is browsing google.com from country A. Google has servers all over the world (for efficiency purposes), and it is beneficial if person A's requests are directed towards Google's servers in country A rather than towards some distant location. This is where CNAME records come into the picture: they are configured so that google.com resolves to servers specific to country A. Another case where the same alias name yields different addresses is mail: when you fetch MX records for a domain, the servers handling mail can be different from the ones handling web traffic.
The design of names/URLs is for convenience: when we want to change the server's IP, we don't need to tell all the users the website's new address. In other words, what we change on the server side makes no difference to users. That is the core idea in the design.
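As an illustration of such an alias chain, here is roughly what a dig answer looks like when a name is a CNAME for a CDN-hosted canonical name (www.example.org, the CDN name, the TTLs and the address below are all made up):
#the answer section shows the alias, its canonical name, and the canonical name's address
dig +noall +answer www.example.org A
#www.example.org.                  300 IN CNAME www.example.org.cdn.example.net.
#www.example.org.cdn.example.net.   60 IN A     203.0.113.25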

Is it possible to make fail2ban ignore google?

I need to use fail2ban because of the many attack attempts on my server, and I have also had to activate/create filters to block them.
But now I'm pretty sure that some Google IPs end up in my fail2ban jail...
I added some IPs to the ignoreip directive in jail.local, but only the ones I managed to identify as real Google IPs in my access.log (I also see many fake Googlebots).
It would be nice to be able to give fail2ban a list of IPs to ignore, but Google does not publish its IP list; see https://support.google.com/webmasters/answer/80553?hl=en
So the question is: is it possible to do a reverse DNS lookup to determine whether an IP belongs to Google, and tell fail2ban to ignore it?
Can it be done within fail2ban? Is an external script needed? Could it be too heavy or slow for the server?
Yes, you can identify Google bots using a reverse IP lookup.
All crawler bots resolve to hostnames ending in xxxxxx.google.com or xxxxxxx.googlebot.com,
e.g. crawl-203-208-60-1.googlebot.com.
Fail2ban cannot make this identification by itself, but you can whitelist the IP address once you know it is a Googlebot.
There are many ways to perform a reverse IP lookup.
You can use Python, Ruby or bash to find out; check the following article:
http://searchsignals.com/tutorials/reverse-dns-lookup/
There are also websites that will do a reverse IP lookup for you:
https://dnschecker.org/reverse-dns.php
http://reverseip.domaintools.com/
If you can code in Python, you can easily dump the reverse-lookup data for a list of IP addresses to a file.
Google does have a page about verifying Googlebot addresses by doing a reverse lookup on the IP address and checking that the resulting hostname matches a specific pattern (you would then resolve that hostname and double-check that it points back to the original source IP).
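A rough shell sketch of that reverse-then-forward check (the IP below is just a placeholder pulled from a log; the hostname suffixes are the ones Google documents):
#reverse-resolve the suspect address, then confirm the name resolves back to the same IP
ip=66.249.66.1
host=$(dig +short -x "$ip")
case "$host" in
  *.googlebot.com.|*.google.com.)
    dig +short "$host" | grep -qxF "$ip" && echo "verified Googlebot: $host" ;;
  *)
    echo "not Google: $host" ;;
esac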
There are also DNS TXT records that specify the IP ranges used for SPF (email), Google Compute Cloud, and the wider set of Google IP addresses (many of which are in use by GCP users' VMs and other services).
dig @8.8.8.8 +short TXT _spf.google.com
dig @8.8.8.8 +short TXT _cloud-netblocks.google.com
dig @8.8.8.8 +short TXT _cloud-netblocks.googleusercontent.com
The first query will return something like this:
"v=spf1 include:_netblocks.google.com include:_netblocks2.google.com include:_netblocks3.google.com ~all"
You would then parse it to get the IP address ranges, or do a sub-query on _netblocks.google.com etc. to get the other sets.
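A sketch of that parsing step from a shell, assuming the included zones keep publishing ip4:/ip6: mechanisms in their TXT records:
#expand the top-level SPF record's include: zones into their ip4:/ip6: netblocks
for zone in $(dig @8.8.8.8 +short TXT _spf.google.com | tr ' ' '\n' | grep '^include:' | cut -d: -f2); do
  dig @8.8.8.8 +short TXT "$zone" | tr ' ' '\n' | grep -E '^"?ip[46]:' | tr -d '"' | cut -d: -f2-
done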
The information in these records is not fixed and can change regularly. (AWS, for example, publishes a JSON file that is updated several times per week.)
I'm working on a system to automatically detect 'lying user-agents', with these, and some other techniques.

How many DNS records are necessary when changing DNS providers?

I changed DNS provider recently and I am trying to add DNS records to my new provider. However, I am unsure about how many records I should add.
My old nameserver had a whole bunch of auto-created records like "ftp.example.com", "cpanel.example.com", "_carddavs._tcp.example.com", "webdisk.example.com", "autodiscover.example.com", etc etc.
So my question is: can I just add the two A records below?
# ---> A Record pointing to my host IP address
www ---> A Record pointing to my host IP address
Any replies would be greatly appreciated!!
This question is akin to asking 'how many contacts do I need in my address book?'
If you only have one friend, then a single record is all you need. (I'm ignoring the required SOA and NS records.)
If you are going to have something talking to ftp.example.com then go ahead and add that record.
If you want to receive mail on that domain, then you will need at least one MX record.
If you want to host a website at www.example.com then you will need to add a www A record. (or if you want to host a website at notwww.example.com, then add that A record)
Fill your DNS up with whatever you need it to have.
The reason for all of the pre-created records is that they lead to revenue-generating pages for whoever hosts your domain.
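One way to decide which of those records your setup actually uses is to ask the old nameserver what it currently answers for the names you care about, and recreate only those (the nameserver and hostnames below are placeholders):
#query the old provider's nameserver for the records you might still need
for name in example.com www.example.com mail.example.com autodiscover.example.com; do
  for type in A MX TXT; do
    dig +noall +answer "$name" "$type" @ns1.old-provider.example
  done
done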

How does one implement SRV wildcard domains in IPv6?

I was looking to do something similar to
https://en.wikipedia.org/wiki/Reverse_DNS_lookup#Records_other_than_PTR_records
and place an SRV record in the reverse DNS tree.
In particular I was hoping to be able to add an SRV record for a chunk of the address space by using a wildcard, something like the following:
_service._tls.*.26.19.in-addr.arpa. IN SRV 1 1 443 service.example.com
However it turns out that my understanding of wildcard domains was inadequate according to:
SRV RRSet at a Wildcard Domain Name
https://www.rfc-editor.org/rfc/rfc4592#section-4.5
The above is confusing, but I think it basically explains that my single wildcard SRV record above won't work: I would need an SRV record for each and every IP address I wanted the wildcard domain to cover.
In IPv4 I know I can use things like BIND's $GENERATE directive to automate the creation of all the records. But how would something like this be handled in IPv6, particularly if I also wanted to use DNSSEC so that all the records are signed?
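For concreteness, the kind of per-address generation referred to above could be sketched with a small shell loop instead of $GENERATE (the 42 in the third-octet position and the names are hypothetical, extending the example record above and covering 19.26.42.0/24):
#emit one SRV record per host in 19.26.42.0/24, following the naming scheme above
for i in $(seq 0 255); do
  printf '_service._tls.%s.42.26.19.in-addr.arpa. IN SRV 1 1 443 service.example.com.\n' "$i"
done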
Any insights would be greatly appreciated.
