Honey tokens using AWS API Gateway and Lambda functions - security

I want to set up fake URLs, or honeytokens, to trick an attacker into following those links, and have a script automatically block the attacker's IPs using AWS WAF.
Security is a big concern these days, and our web infrastructure has already been the target of massive brute-force and DDoS attempts. I want to lay tripwires so that attackers running directory-traversal scans can be spotted. For example, common directory-listing attacks probe URLs like /admin and /wp-admin while scanning a target website. I want to set up a mechanism that alerts me whenever one of these non-existent URLs is browsed.
Questions:
1. Is it possible to redirect part of the web traffic, e.g. www.abc.com/admin, to API Gateway while the rest of www.abc.com keeps going to my existing servers?
2. How would I set up the DNS entries for this, if it is possible?
3. Is there a different/easier way to achieve this?
Any suggestion is welcome, as I am open to ideas. Thanks.

You can first set up a CloudFront distribution with WAF IP blacklisting, then set up a honeypot using API Gateway and Lambda that becomes the origin for URLs like /admin and /wp-admin.
The following example of bad-bot blocking with honeypots can give you a head start.
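As a rough illustration (a sketch, not AWS's official solution), the Lambda handler behind API Gateway could capture the caller's address and insert it into a WAF Classic IP set that a blocking rule on your CloudFront distribution references. The IP set ID below is a placeholder, and the handler assumes an API Gateway REST API proxy integration and appropriate WAF permissions on the Lambda role:

# Hypothetical Lambda honeypot handler (Python, boto3). Assumes a WAF
# Classic IP set that is already referenced by a block rule on the
# CloudFront distribution.
import boto3

waf = boto3.client("waf")
IP_SET_ID = "YOUR-IP-SET-ID"  # placeholder: your WAF IP set

def lambda_handler(event, context):
    # With a proxy integration, the caller's IP is in the request context.
    source_ip = event["requestContext"]["identity"]["sourceIp"]

    # WAF Classic requires a fresh change token for every update.
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=IP_SET_ID,
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPDescriptor": {"Type": "IPV4", "Value": f"{source_ip}/32"},
        }],
    )
    # Return something bland so the scanner learns nothing useful.
    return {"statusCode": 200, "body": "OK"}

Once the IP lands in the set, CloudFront+WAF drops all further requests from it, not just requests to the honeypot paths.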

Related

TYPO3 block IP addresses

Somebody tried to gain access to my TYPO3 backend. I already have the IP address of the attacker and want to block it from the backend.
I already tried to block the IP with .htaccess, but this doesn't work. I think the rules are overridden by something else in the .htaccess file that I haven't been able to pin down yet.
A captcha is not a suitable solution at the moment.
Are there any good extensions for blocking IP addresses, or is there another way to fend off these brute-force attacks?
If you are really concerned about somebody being able to gain access to the system, I suggest going the whitelist path instead of blacklisting single IPs.
TYPO3 has a built-in feature to block backend access for ALL IPs except some whitelisted ones.
To do this, just add the following to AdditionalConfiguration.php, substituting your own IP and the IPs (or subnets) of your other users:
$GLOBALS['TYPO3_CONF_VARS']['BE']['IPmaskList'] = 'x.x.x.x,y.y.y.*,z.z.*.*';
Other than that, just make sure you take the basic steps to make your backend more secure:
1) Force SSL for the backend:
$GLOBALS['TYPO3_CONF_VARS']['BE']['lockSSL'] = 2;
2) Implement a secure password policy for the backend users, e.g. with EXT:be_secure_pw
3) Give session cookies the Secure and HttpOnly attributes:
$GLOBALS['TYPO3_CONF_VARS']['SYS']['cookieHttpOnly']=1;
$GLOBALS['TYPO3_CONF_VARS']['SYS']['cookieSecure']=1;
4) And last but not least: make sure you are running the most recent release of your TYPO3 branch, ideally a maintained LTS version.
Ideally you should block requests before PHP/MySQL is even involved, so .htaccess is the correct way in my eyes. If it does not work, you should ask your hoster.
It sounds like you want to block the attacker's IP and put measures in place to block known bad IPs. One of the main issues with blocking the attacker's IP is that it is fairly easy for an attacker to obtain a new IP address and launch a new attack.
There are services that provide lists of known bad IPs if you want to implement your own firewall rules.
Alternatively, you can place your site behind a solution such as Cloudflare, which can block IPs or whole countries. I know of businesses that block traffic from China and Russia, since they identified that most of their attacks came from those countries.
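If you do roll your own, the check itself is simple. A minimal sketch (the file name and feed format are assumptions):

# Check a request IP against a blocklist of CIDR ranges (Python).
import ipaddress

with open("bad_ips.txt") as f:  # hypothetical feed: one CIDR per line
    blocklist = [ipaddress.ip_network(line.strip())
                 for line in f if line.strip()]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in blocklist)

print(is_blocked("203.0.113.7"))  # True if the feed lists 203.0.113.0/24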

Hide referral information when my site users click on external links

I apologize ahead of time for my lack of knowledge about the intricacies of how the web works.
I run a fairly large deal site (let's call it dealsite.com) and we send a lot of traffic to Amazon.com. Is there any way for me to hide from Amazon that the users are coming from dealsite.com? I do not want Amazon to know that we (dealsite.com) are the ones sending the traffic.
Maybe strip certain cookies?
Send outbound traffic through a proxy?
I am not doing anything illegal, and these are real users, not bots.
By using the noreferrer link type on your links, you can prevent Amazon from learning that their traffic is coming from your site, and you don't need to set up a proxy, VPN, or cookie redirects.
Browsers generally send the referring page along with the request for a new page in the HTTP Referer header, and that's how sites track where their visitors come from. So, for example, a user would click through to Amazon.com from Dealsite.com, and the request would include a Referer header telling Amazon.com that the user was linked from Dealsite.com.
To prevent sites like Amazon from learning that their traffic came from your site, stop your links from sending the Referer header. In HTML5, just add rel="noreferrer" to your links, and referral information will not be sent to the site being linked. The noreferrer link type is only supported in newer browsers, so I suggest using knu's noreferrer polyfill to make sure it works on older browsers too.
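For example, an outbound link would then look like this (URL purely illustrative):

<a href="https://www.amazon.com/dp/EXAMPLE" rel="noreferrer">View this deal</a>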
So far this will prevent referrer information from being sent for 99.9% of your users; the only users who will still send it are those who are both using an old browser and have JavaScript disabled. To make it 100%, you could require users to have JavaScript enabled in order to click those particular links.
Disclaimer: this is not the thorough idea you're looking for. I ran out of space in the comments, so I posted it as an answer. A couple of possible solutions come to mind.
Proxy servers: multiple distributed proxy servers, to be specific. You can round-robin your users through these servers to hit Amazon, so the inbound traffic to Amazon from dealsite.com keeps rotating. The disadvantage is that this will be slow, depending on where each proxy server resides, so it is not the most ideal solution for an e-commerce site, but it works. The major advantage is that implementation is very simple.
VPN tunneling: extremely similar to the proxy-server approach. Tunnel to another server over a VPN and issue the redirect to Amazon from there. You'll get a new (non-dealsite.com) IP from that network's VPN server, and your original IP will be masked.
Redirects from the user (still a work in progress): for this one I was thinking you could store the info you need from dealsite.com in a cookie and then have the client redirect itself to Amazon. The inbound traffic to Amazon would then come from the user's IP and not dealsite.com's. If you need to get back to the dealsite session from Amazon, you could use the previously saved cookie to do so.
I'll add to this answer if I find something better.
Edit 1: a few more hours of research brought me to the Tor project. This might be useful, but be wary; many security experts advise against using Tor. See here.

Is there a way to find all existing subdomains of one main domain?

I work for Johns Hopkins University, and our web culture here has been an unruled wilderness for many years. We're trying to get a handle on the enormous number of registered subdomains across our part of the web universe, and even our IT department is having trouble tracking down the complete list.
Is there a tool or a script that would do this quickly and semi-easily? I'm a developer and would write something, but I want to find out whether this wheel has already been invented.
Alternatively, is there a fancy way to search Google, beyond just *.jhu.edu or site:.jhu.edu, since those searches turn up tons of sites that merely contain "jhu.edu" at the end of their URLs (e.g. www.keywordspy.com/organic/domain.aspx?q=cer.jhu.edu)?
Thanks for your thoughts on this one!
The Google search site:*.jhu.edu seems to work well for me.
That said, you can also use Wolfram Alpha. Using this search, click "Subdomains" in the third box, and then click "More" in the new subdomains section that appears.
As @Mark B alluded to in his comment, the only way a domain name (sub or otherwise) has any real value is if a DNS service maps it to a server so that a browser can send it a request. The only way to track down all of the subdomains is therefore to track down their DNS entries. Thankfully, DNS servers are fairly easy to find, depending on the level of access you have to the network infrastructure and to the authoritative DNS server for the parent domain.
If you are able to, you can pull DNS traffic from firewall logs in and around your network. That will let you find the DNS servers that are being sent requests for your subdomains.
Easier, though, would be to simply follow the DNS trail. The authoritative DNS server for your domain (jhu.edu) will have pointers to the other DNS servers that are authoritative for subdomains (if your main one is not already authoritative for them).
If you have access to the domain registrar and have the proper authorization, you should be able to contact technical support and request the zone file, or even export it yourself, depending on the provider.
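If you administer the authoritative servers (or they permit transfers to your host), a zone transfer lists everything in one shot. A sketch using the dnspython library; note that AXFR only works where it is explicitly allowed, which it normally is not for outsiders:

# Enumerate subdomains via a DNS zone transfer (Python, dnspython).
# Run this only with authorization; most servers refuse AXFR by default.
import dns.query
import dns.resolver
import dns.zone

domain = "jhu.edu"

# Find the authoritative name servers, then resolve each to an address.
for ns in dns.resolver.resolve(domain, "NS"):
    addr = dns.resolver.resolve(str(ns.target), "A")[0].to_text()
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(addr, domain))
        for name in zone.nodes:
            print(f"{name}.{domain}")  # "@" denotes the zone apex itself
        break  # one successful transfer is enough
    except Exception as exc:
        print(f"AXFR from {ns.target} refused or failed: {exc}")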

SaaS DNS Records Design

This question is an extension to a previously answered question:
How to give cname forward support to saas software
Sample sites -
client1.mysite.com
client2.mysite.com
...
clientN.mysite.com
Create affinity by, say, having client[1-10].mysite.com forwarded to europe.mysite.com, which in turn maps to an IP address.
Another criterion is that it should require as few proxy, firewall, and network changes as possible. In essence, the solution I am attempting is data-dependent routing (based on the URL, login information, etc.).
However, the approaches I have found all imply a token-based authentication system that authenticates the user and then redirects them to a new URL. I am afraid that can become a single point of failure, and it would need a site separate from my core app to do such routing. It would also mean quite a bit of refactoring of existing code. Another concern is that the solution may not be entirely transparent to the end user, since it relies on an HTTP 301 redirect.
Keeping in mind that the application can be served from load-balanced web servers (IIS) behind an LB switch and other network appliances, I would greatly appreciate it if someone could simplify this for me and explain how it should be designed.
Another resource I have been looking at is:
http://en.wikipedia.org/wiki/DNAME#DNAME_record
You could put routing information into a cookie, so that the various intermediary systems can detect that cookie and redirect the user accordingly without there being a single point of failure.
If the user forges a cookie of his own, he might get redirected to a server where he does not belong, but that server would then check whether the cookie is actually valid and prevent unauthorized access.
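A sketch of what such a tamper-evident routing cookie could look like; the shard names and cookie format are made up for illustration:

# Sign and verify a routing cookie so intermediaries can trust its value
# (Python). SECRET must be shared only among your own servers.
import hashlib
import hmac
from typing import Optional

SECRET = b"rotate-this-secret"  # placeholder key

def make_routing_cookie(client_id: str, shard: str) -> str:
    payload = f"{client_id}|{shard}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def shard_from_cookie(cookie: str) -> Optional[str]:
    # Returns the shard if the signature checks out, else None.
    try:
        client_id, shard, sig = cookie.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{client_id}|{shard}".encode(),
                        hashlib.sha256).hexdigest()
    return shard if hmac.compare_digest(sig, expected) else None

cookie = make_routing_cookie("client7", "europe.mysite.com")
assert shard_from_cookie(cookie) == "europe.mysite.com"

A forged or altered cookie fails the HMAC check, so a misrouted request can be rejected at the receiving server, which matches the validity check described above.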

Can I find out what domain made a request that triggered an HttpModule?

How do I find out from within an HttpModule what domain made a particular request?
Say I only want to allow site1.com and site2.com to use images from my server; how do I check that it is really them making the request?
There's no way to do this reliably in every case. Consider that UrlReferrer may not be set. Also consider that you could be called by a client that does not have a DNS address.
Instead, you should consider configuring IIS to authenticate using client certificates. If you've only got a small number of sites calling you, generate a certificate, register it with IIS, map it to the user you want, and then give the certificate to those two machines to be installed on them.
Request.UrlReferrer will tell you, but it can easily be spoofed.
