Lately I've been seeing CloudFlare and Akamai IPs show up as the requesting IPs in logs for a public-facing website. The user agent is suspicious: just Mozilla/5.0. Also, there are no other values in the X-Forwarded-For header, only a single IP. Adding to the mystery, it's only scraping images. Our business says they are not aware of any vendor using these IP ranges for scraping.
Is there a new bot/crawler that utilizes these networks? I'd like to block the traffic, though I'm not sure of the potential business impact.
Any feedback is appreciated.
I've also noticed this kind of traffic on our systems, and I think there's a good chance it is Apple's Mail Privacy Protection (MPP), based on the info at https://www.spamresource.com/2022/02/lets-track-apple-mpp-opens.html
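For what it's worth, one way to sanity-check a hunch like this is to compare the logged IPs against the CIDR ranges the CDN/relay operators publish (Cloudflare publishes its ranges at https://www.cloudflare.com/ips/, and Apple publishes egress ranges for its relay services as well). A minimal sketch with Python's ipaddress module; the ranges below are illustrative placeholders, so substitute the operators' current published lists:

```python
# Minimal sketch: check whether logged client IPs fall inside published
# CDN/relay CIDR ranges. The ranges below are illustrative placeholders;
# verify them against the lists the operators actually publish.
import ipaddress

PUBLISHED_RANGES = [
    "173.245.48.0/20",   # placeholder range to verify against cloudflare.com/ips
    "104.16.0.0/13",     # placeholder range
]

networks = [ipaddress.ip_network(r) for r in PUBLISHED_RANGES]

def in_published_ranges(ip: str) -> bool:
    """Return True if ip falls inside any of the published ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

for logged_ip in ["104.16.1.1", "203.0.113.7"]:
    print(logged_ip, in_published_ranges(logged_ip))
```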
Related
I am interested in creating an app hosted on Heroku. I am looking for a way to respond to requests only from a specific static IP; any requests made from any other IP will be ignored and won't be allowed to view content. Is that possible through Heroku natively or through any add-ons? If not, do you have any recommendations for services that let me specify a whitelist of static IPs that can access my app?
Thank you in advance!
You can check the value of the X-Forwarded-For request header, which is a comma-separated list of IP addresses; the last one is the IP you are searching for, in the best scenario.
I say "in the best scenario" because there are some security and network/proxy considerations to add; you can read about them here.
I can anticipate that if you or your ISP are not using any proxy, and you are not spoofing your IP for some reason (and in this scenario, where you want to recognize the client IP, you obviously are not), then you can safely rely on what I told you: use the last value of the X-Forwarded-For header as the client IP.
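To make that concrete, here is a minimal sketch of an IP allow-list built on that header, assuming Flask; the allowed IP is a placeholder, and the assumption that your platform's router appends the real client IP as the last X-Forwarded-For entry is something to verify for your setup:

```python
# Minimal sketch (assumes Flask): allow requests only from an allow-list of
# static IPs, using the last X-Forwarded-For entry as the client IP, as
# described above. Adjust to however your platform populates the header.
from flask import Flask, request, abort

app = Flask(__name__)

ALLOWED_IPS = {"203.0.113.10"}  # hypothetical static IP(s) to allow

def client_ip() -> str:
    forwarded = request.headers.get("X-Forwarded-For", "")
    if forwarded:
        # Comma-separated list; take the last (most recently appended) value.
        return forwarded.split(",")[-1].strip()
    return request.remote_addr or ""

@app.before_request
def restrict_by_ip():
    if client_ip() not in ALLOWED_IPS:
        abort(403)

@app.route("/")
def index():
    return "Hello from the allow-listed client!"
```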
As a short-URL service provider, what safety checks should I perform on URLs to keep the risk to my users to a minimum? For example, someone should not be able to use my service to shorten hacking, spamming, phishing, or similar links.
For example, I can do domain whitelisting so that only URLs from trusted domains can be shortened through my service. Beyond that, what other safety checks should I perform before shortening any URL?
As a URL shortener (service provider), you should run through this safety checklist before shortening any URL to keep the risk to a minimum. Please note that safety on the internet cannot be guaranteed by following a single checklist or a fixed set of rules; we have to keep trying and learning new things every day. This checklist is part of my own research on the internet. A minimal sketch that combines several of these checks appears after the list.
Domain or IP Whitelisting/Blacklisting - This is one of the best safety mechanisms we can implement: allow shortening only of URLs from trusted domains/IPs, and block known malicious domains/IPs. Blacklisting is sometimes easy to bypass, while whitelisting is much harder to bypass, so whitelisting is recommended; however, we can do both according to our needs.
Domain/IP Reputation Check - We can also check a domain's or IP's reputation against public reputation databases before shortening it. Malicious IPs/domains tend to have a poor reputation in these databases, so we can simply refuse to shorten those addresses.
References:
https://talosintelligence.com/reputation_center/
https://www.ipqualityscore.com/ip-reputation-check
https://ipremoval.sms.symantec.com/lookup
Check DNS Records (MxToolbox) - MxToolbox supports global internet operations by providing free, fast and accurate network diagnostic and lookup tools. It offers a long list of tools, including blacklist checks for any IP/host, WHOIS lookups, etc.
VirusTotal IP/Domain Check - VirusTotal is an online service that analyzes suspicious files and URLs to detect malware and malicious content using antivirus engines and website scanners.
HTTPS Protocol Use - Before shortening any URL, we should verify that the site presents a valid SSL/TLS certificate and that the URL uses the HTTPS protocol. Malicious sites/IPs are more likely to communicate over plain HTTP, so we should avoid shortening HTTP-only hosts.
Disallow known shortened links - Some users try to outsmart the security checks by shortening a URL multiple times. As a short-link service provider, we know the security risks behind short links, so we should refuse to shorten links that have already been shortened by another service, or even by our own.
Compare URLs to lists of known badware - There are many online databases that publish lists of suspected malicious IPs and domains. Use these databases to prevent shortening of such hosts.
References:
https://zeltser.com/malicious-ip-blocklists/
https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malicious-url
URL Filtering - We can implement content policies, similar to firewall policies, that block certain categories of content: spam, nudity, violence, weapons, drugs, hacking, etc.
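Here is the minimal sketch mentioned above, tying together a few of these checks (HTTPS-only, allow/deny lists, and rejection of already-shortened links); the domain lists are illustrative placeholders rather than real reputation data:

```python
# Minimal sketch of pre-shortening checks: HTTPS-only, domain allow/deny
# lists, and rejection of already-shortened links. The example lists are
# placeholders; a real deployment would pull them from reputation feeds.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example.com", "example.org"}      # optional allow-list
BLOCKED_DOMAINS = {"malicious.example"}               # deny-list
KNOWN_SHORTENERS = {"bit.ly", "t.co", "tinyurl.com"}  # refuse re-shortening

def is_safe_to_shorten(url: str) -> tuple[bool, str]:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    if parsed.scheme != "https":
        return False, "only HTTPS URLs are accepted"
    if host in BLOCKED_DOMAINS:
        return False, "domain is on the deny-list"
    if host in KNOWN_SHORTENERS:
        return False, "URL is already a shortened link"
    if ALLOWED_DOMAINS and host not in ALLOWED_DOMAINS:
        return False, "domain is not on the allow-list"
    return True, "ok"

print(is_safe_to_shorten("https://example.com/page"))
print(is_safe_to_shorten("http://example.com/page"))
print(is_safe_to_shorten("https://bit.ly/abc"))
```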
I apologize ahead of time for my lack of knowledge about the intricacies of how the web works.
I run a fairly large deal site (let's call it dealsite.com) and we send a lot of traffic to Amazon.com. Is there any way for me to hide from Amazon that the users are coming from dealsite.com? I do not want Amazon to know that we (dealsite.com) are the ones sending the traffic.
Maybe strip certain cookies?
Send outbound traffic through a proxy?
I am not doing anything illegal and these are real users not bots.
By using the noreferrer attribute on your links, you can prevent Amazon from learning that its traffic is coming from your site, and you don't need to set up a proxy, VPN, or cookie redirects.
The browser generally sends the referring page along with its request for the new page in the HTTP Referer request header, and that's how sites track where their visitors come from. So, for example, a user would click through to Amazon.com from Dealsite.com, and the request would include a Referer header telling Amazon.com that the user was linked from Dealsite.com.
To prevent web sites like Amazon from learning that their traffic came from your site, prevent your links from sending the HTTP Referer. In HTML5, just add rel="noreferrer" to your links, and then referral information will not be sent to the site that was linked. The noreferrer link type is only supported in newer browsers, so I suggest using knu's noreferrer polyfill to make sure it works on older browsers too.
So far this will prevent referrer information from being sent by 99.9% of your users; the only users that will still send it are those who are both using old browsers and have JavaScript disabled. To make it 100%, you could require users to have JavaScript enabled in order to click those particular links.
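As a complement to per-link rel="noreferrer" (a separate mechanism, not what the polyfill does), modern browsers also honor a site-wide Referrer-Policy response header. A minimal sketch of setting it, assuming Flask:

```python
# Minimal sketch (assumes Flask): send a site-wide "no-referrer" policy so
# browsers that honor Referrer-Policy omit the Referer header on outbound
# clicks. This complements, rather than replaces, rel="noreferrer" links.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_referrer_policy(response):
    response.headers["Referrer-Policy"] = "no-referrer"
    return response

@app.route("/")
def index():
    return '<a href="https://www.amazon.com/" rel="noreferrer">Deal link</a>'
```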
Disclaimer: this is not the thorough idea you're looking for; I ran out of space in the comments, so I posted it as an answer. A couple of possible solutions come to mind.
Proxy servers: Multiple distributed proxy servers, to be specific. You can round-robin your users through these servers to hit Amazon, so the inbound traffic to Amazon from dealsite.com keeps rotating. The disadvantage is that this can be slow, depending on where each proxy server resides, so it's not the most ideal solution for an e-commerce site, but it works. The major advantage is that implementation is very simple.
VPN tunneling: Extremely similar to the proxy servers. Tunnel through a VPN to another server and send the redirect to Amazon from there. You'll get a new (non-dealsite.com) IP from that network's VPN server, and your original IP will be masked.
Redirects from the user (still in the works): For this one, I was thinking you could store the info you need from dealsite.com in a cookie and then have the client redirect itself to Amazon. That way the inbound traffic to Amazon comes from the user's IP and not dealsite.com's. If you need to get back to the dealsite session from Amazon, you could use the previously saved cookie to do so. A minimal sketch of this idea is below.
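Here is the minimal sketch of that idea, assuming Flask; the /out endpoint and cookie name are hypothetical, and whether the Referer header survives the redirect depends on the browser and your referrer policy, so treat this as an illustration of the cookie-plus-redirect flow only:

```python
# Minimal sketch (assumes Flask): save the session info we want to keep in a
# cookie, then redirect the user's own browser to Amazon. The /out endpoint
# and cookie name are hypothetical.
from flask import Flask, request, redirect, make_response

app = Flask(__name__)

@app.route("/out")
def outbound():
    # In production, validate `target` against an allow-list to avoid open redirects.
    target = request.args.get("url", "https://www.amazon.com/")
    deal_id = request.args.get("deal", "")
    response = make_response(redirect(target, code=302))
    # Remember which deal the user left from so we can restore the session later.
    response.set_cookie("last_deal", deal_id, max_age=3600)
    return response
```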
I'll add to this answer if I find something better.
Edit 1: A few more hours of research brought me to the Tor project. This might be useful, but be wary: many security experts advise against using Tor. See here.
I work for Johns Hopkins University, and our web culture here has been an unruled wilderness for many years. We're trying to get a handle on the enormous number of registered subdomains across our part of the web-universe, and even our IT department is having some trouble tracking down the unabridged list.
Is there a tool or a script that would do this quickly and semi-easily? I'm a developer and would write something but I want to find out if this wheel has been created already.
Alternatively, is there a fancy way to do a Google search, beyond just *.jhu.edu or site: .jhu.edu? Those searches turn up tons of sites that merely have "jhu.edu" at the end of their URLs (e.g. www.keywordspy.com/organic/domain.aspx?q=cer.jhu.edu).
Thanks for your thoughts on this one!
The Google search site:*.jhu.edu seems to work well for me.
That said, you can also use Wolfram Alpha. Using this search, in the third box click "Subdomains" and then in the new subdomains section that is created click "More".
As Mark B alluded to in his comment, the only way a domain name (sub or otherwise) has any real value is if a DNS service maps it to a server so that a browser can send it a request. The only way to track down all of the sub-domains is to track down their DNS entries. Thankfully, DNS servers are fairly easy to find, depending on the level of access you have to the network infrastructure and to the authoritative DNS server for the parent domain.
If you are able to, you can pull DNS traffic from firewall logs in and around your network. That will let you find DNS servers that are being sent requests for your sub-domains.
Easier though would be to simply follow the DNS trail. The authoritative DNS server for your domain (jhu.edu) will have pointers to the other DNS servers that are authoritative for sub-domains (if your main one is not authoritative already).
If you have access to the domain registrar and have the proper authorization, you should be able to contact technical support and request the zone file or even export it yourself depending on the provider.
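If you do have that level of access, here is a hedged sketch of listing the names in a zone from an authoritative server using the dnspython package; zone transfers are normally disabled for unauthorized hosts, so this only works where the server explicitly allows your IP, and the nameserver address below is a placeholder:

```python
# Minimal sketch (assumes the dnspython package and an authoritative server
# that permits zone transfers from your host): list names in the jhu.edu zone.
import dns.query
import dns.zone

NAMESERVER = "192.0.2.53"   # hypothetical authoritative server IP
DOMAIN = "jhu.edu"

zone = dns.zone.from_xfr(dns.query.xfr(NAMESERVER, DOMAIN))
for name in sorted(zone.nodes.keys()):
    if str(name) != "@":          # skip the zone origin itself
        print(f"{name}.{DOMAIN}")
```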
Our company develops a web application that other companies can license. Typically, our application runs on:
www.company.example
And a client's version of the application is run on:
client.company.example
Usually, a client runs their own site at:
www.client.example
Sometimes, clients request to have their version of the application available from:
application.client.example
This kind of setup is often seen with blogs (Wordpress, Blogger, Kickapps).
Technically, achieving this "DNS Masking" with a CNAME/A Record and some application configuration is straightforward. I've thought out some potential issues related to this, however, and wonder if you can think of any others that I've missed:
1) Traffic statistics (as measured by external providers, e.g., compete.com) will be lower since the traffic for company.example won't include that of application.client.example. (Local stats would not be affected, of course)
2) Potential cookie disclosure from application.client.example to company.example. If the client is setting cookies at .client.example, those cookies could be read by the company.example server.
3) Email Spoofing. Email could be sent from company.example with the domain application.client.example, possibly causing problems with spam blacklisting due to incompatible SPF records.
Thanks for any thoughts on this.
CNAMEs have been widely used for a long time, especially by hosting companies. There are no major issues.
The biggest problem for us is when you have to use HTTPS. It's very difficult to support multiple CNAMEd hostnames on the same server: we use aliases in the certificate (the SAN extension), so we have to get a new cert every time a new CNAME is added in DNS. Other than that, everything works great for us.
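When we debug that, a quick way to see which aliases a served certificate actually covers is to read its subjectAltName entries; a minimal sketch using Python's standard ssl module (the hostname is a placeholder):

```python
# Minimal sketch: connect to a host and print the subjectAltName entries of
# the certificate it serves, to confirm a newly added CNAME is covered.
import socket
import ssl

HOST = "client.company.example"  # hypothetical CNAMEd hostname
PORT = 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        for kind, value in cert.get("subjectAltName", ()):
            print(kind, value)
```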
As to the issues you mentioned,
This should be an advantage. It's a lot easier to combine the stats than to separate them. So we prefer granular reports.
Cookies are not shared between domains, even if they are hosted on the same IP. As long as the apps are properly sandboxed on the server, they can't read each other's cookies.
You should rate-limit your own outgoing SMTP traffic on the server end so you don't get blacklisted.
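On the SPF point, it's easy to check what a client domain currently publishes before you start sending mail on its behalf; a minimal sketch using the dnspython package (the domain is a placeholder):

```python
# Minimal sketch (assumes the dnspython package): print any SPF (TXT) records
# published for a domain, so you can confirm your sending hosts are authorized.
import dns.resolver

DOMAIN = "application.client.example"  # placeholder domain

for record in dns.resolver.resolve(DOMAIN, "TXT"):
    text = b"".join(record.strings).decode()
    if text.lower().startswith("v=spf1"):
        print(text)
```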