Dynamic IP-based blacklisting - security

Folks, we all know that IP blacklisting doesn't work - spammers can come in through a proxy, and legitimate users might get affected... That said, blacklisting seems to me to be an efficient mechanism for stopping a persistent attacker, given that the actual list of IPs is determined dynamically, based on the application's feedback and user behavior.
For example:
- someone trying to brute-force your login screen
- a poorly written bot issuing very strange HTTP requests to your site
- a script-kiddie using a scanner to look for vulnerabilities in your app
I'm wondering if the following mechanism would work, and if so, do you know if there are any tools that do it:
In a web application, the developer has a hook to report an "offense". An offense can be minor (invalid password), in which case it takes dozens of such offenses to get blacklisted; or it can be major, where a couple of such offenses in a 24-hour period kicks you out.
Some form of web-server-level block kicks in before every page is loaded and determines whether the user comes from a "bad" IP.
There's a "forgiveness" mechanism built-in: offenses no longer count against an IP after a while.
Thanks!
Extra note: it'd be awesome if the solution worked in PHP, but I'd love to hear your thoughts about the approach in general, for any language/platform
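To make the mechanism described above concrete, here is a rough, language-agnostic sketch (shown in TypeScript purely for illustration; the names, weights, and thresholds are hypothetical, and a real setup would keep the counters in a shared store and do the actual blocking at the web server or firewall):

```typescript
// Rough sketch of the offense-scoring idea (hypothetical names, in-memory storage).
// A real deployment would keep this state in a shared store (DB/Redis) and do the
// actual blocking at the web server or firewall rather than in application code.

type Severity = "minor" | "major";

interface Offense { weight: number; at: number; }

const WEIGHTS: Record<Severity, number> = { minor: 1, major: 20 };
const BAN_THRESHOLD = 40;                    // score at which an IP gets blacklisted
const FORGIVENESS_MS = 24 * 60 * 60 * 1000;  // offenses expire after 24 hours

const offenses = new Map<string, Offense[]>();

export function reportOffense(ip: string, severity: Severity): void {
  const list = offenses.get(ip) ?? [];
  list.push({ weight: WEIGHTS[severity], at: Date.now() });
  offenses.set(ip, list);
}

export function isBlacklisted(ip: string): boolean {
  const cutoff = Date.now() - FORGIVENESS_MS;
  // "Forgiveness": drop offenses older than the window before scoring.
  const recent = (offenses.get(ip) ?? []).filter(o => o.at >= cutoff);
  offenses.set(ip, recent);
  const score = recent.reduce((sum, o) => sum + o.weight, 0);
  return score >= BAN_THRESHOLD;
}
```

The idea would be that the front controller calls isBlacklisted() before serving a page, while failed logins, malformed requests, and scanner hits call reportOffense() with the appropriate severity.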

Take a look at fail2ban, a Python framework that lets you raise iptables blocks by tailing log files for patterns of errant behaviour.

Are you on a *nix machine? This sort of thing is probably better left to the OS level, using something like iptables.
Edit:
In response to the comment, yes (sort of). However, the idea is that iptables can work independently. You can set a certain threshold to throttle (for example, block requests on port 80 TCP that exceed x requests/minute), and that is all handled transparently (i.e., your application really doesn't need to know anything about it for dynamic blocking to take place).
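For illustration, that kind of transparent throttling might look like this with the iptables "recent" module (the list name and thresholds are arbitrary):

```
# Track every new TCP connection to port 80 per source IP...
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
# ...and drop it if the same IP opened 20 or more new connections in the last 60 seconds.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP
```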
I would suggest the iptables method if you have full control of the box and would prefer to let your firewall handle throttling (the advantages are that you don't need to build this logic into your web app, and it can save resources, as requests are dropped before they hit your webserver).
Otherwise, if you expect blocking won't be a huge component (or your app is portable and can't guarantee access to iptables), then it would make more sense to build that logic into your app.

I think it should be a combination of username plus IP block, not just the IP.

You're looking at custom lockout code. There are applications in the open-source world that contain various flavors of such code. Perhaps you should look at some of those, although your requirements are pretty trivial: mark an IP/username combo and use that to block the IP for x amount of time. (Note I said block the IP, not the user. The user may still get in via a valid IP/username/password combo.)
Matter of fact, you could even keep traces of user logins, and when someone logging in from an unknown IP racks up three strikes of bad username/password combos, lock that IP out for that username for however long you like. (Do note that a lot of ISPs share IPs, thus....)
You might also want to place a delay in authentication, so that an IP cannot attempt a login more than once every 'y' seconds or so.
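A rough sketch of that per-IP delay, assuming in-memory state and hypothetical names:

```typescript
// Sketch: enforce a minimum gap between login attempts per IP (names and values are hypothetical).
const MIN_DELAY_MS = 5000;                     // the "y seconds" between attempts
const lastAttempt = new Map<string, number>(); // IP -> timestamp of last login attempt

export function mayAttemptLogin(ip: string): boolean {
  const now = Date.now();
  const last = lastAttempt.get(ip) ?? 0;
  lastAttempt.set(ip, now);
  return now - last >= MIN_DELAY_MS;           // reject (or queue) attempts that come too fast
}
```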

I have developed a system for a client which kept track of hits against the web server and dynamically banned IP addresses at the operating system/firewall level for variable periods of time for certain offenses, so, yes, this is definitely possible. As Owen said, firewall rules are a much better place to do this sort of thing than in the web server. (Unfortunately, the client chose to hold a tight copyright on this code, so I am not at liberty to share it.)
I generally work in Perl rather than PHP, but, so long as you have a command-line interface to your firewall rules engine (like, say, /sbin/iptables), you should be able to do this fairly easily from any language which has the ability to execute system commands.
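Purely as an illustration (this is not the client's code), shelling out to the firewall from application code might look roughly like this; shown here in TypeScript/Node, but the same pattern applies to Perl or PHP, and the process needs the privileges to modify firewall rules:

```typescript
import { execFile } from "child_process";

// Sketch: ban an IP at the firewall by shelling out to iptables
// (illustrative only; requires root or appropriate sudo rules).
function banIp(ip: string): void {
  execFile("/sbin/iptables", ["-I", "INPUT", "-s", ip, "-j", "DROP"], (err) => {
    if (err) console.error(`failed to ban ${ip}:`, err);
  });
}
```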

Err, this sort of system is easy and common; I can give you mine easily enough.
It's simply and briefly explained here: http://www.alandoherty.net/info/webservers/
The scripts as written aren't downloadable {as no commentary is currently added}, but drop me an e-mail from the site above and I'll fling the code at you and gladly help with debugging/tailoring it to your server.

Related

Creating a honeypot for nodejs / hapi.js

I have a hapijs application, and checking some logs I have found entries from automated site scanners and hits to URLs like /admin.php and similar.
I found the article How to Block Automated Scanners from Scanning your Site and thought it was great.
I am looking for guidance on what the best strategy would be to create honey pots for a hapijs / nodejs app to identify suspicious requests, log them, and possibly ban the IPs temporarily.
Do you have any general or specific (to node and hapi) recommendations on how to implement this?
My thoughts include:
Create the honeypot route with a non-obvious name
Add a robots.txt to disallow search engines on that route
Create the content of the route (see the article and discussions for some of the recommendations)
Write to a special log or tag the log entries for easy tracking and later analysis
Possibly create some logic so that if traffic from an IP address exceeds a certain threshold (say, five hits on the honeypot route), the IP gets banned for X hours or permanently (a rough sketch follows below)
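For the honeypot route itself, a rough sketch in hapi might look like this (v17+ API assumed; the path, threshold, and in-memory counter are hypothetical, and the actual ban is better left to the firewall or fail2ban):

```typescript
import * as Hapi from "@hapi/hapi";

// Sketch of a honeypot route in hapi (hypothetical path and threshold).
const honeypotHits = new Map<string, number>(); // in-memory; use a shared store across instances

export function registerHoneypot(server: Hapi.Server): void {
  server.route({
    method: "GET",
    path: "/old-admin.php",               // non-obvious name, also listed under Disallow in robots.txt
    handler: (request, h) => {
      const ip = request.info.remoteAddress;
      const hits = (honeypotHits.get(ip) ?? 0) + 1;
      honeypotHits.set(ip, hits);
      // Tag the log entry so it is easy to grep / feed to fail2ban later.
      request.log(["honeypot"], { ip, hits, path: request.path });
      // Serve something plausible; banning itself is handled outside the app.
      return h.response("Not Found").code(404);
    },
  });
}
```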
A few questions I have:
How can you ban an IP address using hapi.js?
Are there any other recommendations to identify automated scanners?
Do you have specific suggestions for implementing a honeypot?
Thanks!
Let me start by saying that this idea sounds really cool, but I'm not sure it is very practical.
First, the chances of blocking legit bots/users are small but still exist.
Even if you ignore true mistakes, the potential for abuse and denial of service is quite big. Once I know you're blocking users who hit this route, I can try to make legit users touch it (with an iframe / img / redirect) and cause them to be banned from the site.
Then, its effectiveness is small. Sure, you're going to stop automated bots that scan your site (I'm sure the first thing they do is check the Disallow info, and it's the first thing you do in a pentest). But only unsophisticated attacks are going to be blocked, because anyone actively targeting you will blacklist the endpoint and get a different IP.
So I'm not saying you shouldn't do it, but I am saying you should think about whether the pros outweigh the cons here.
Actually getting it done is quite simple. It seems like you're looking for a specific case of rate limiting. I wouldn't do it directly in your hapi app, since you want the ban to be shared between instances and you probably want it to be persistent across restarts (you can do it from your app, but it's too much logic for something that is already solved).
The article you mentioned actually suggests using fail2ban, which is a great solution for rate limiting. You'll need to make sure your app logs to a file fail2ban can read, and write a filter and jail conf specifically for your app, but it should work with hapi with no issues.
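A hedged sketch of what that filter and jail might look like (the file names, failregex, and assumed JSON-ish log format are hypothetical; adjust them to whatever your app actually logs):

```
# /etc/fail2ban/filter.d/hapi-honeypot.conf  (hypothetical; tune failregex to your log format)
[Definition]
failregex = ^.*honeypot.*"remoteAddress":\s*"<HOST>".*$

# /etc/fail2ban/jail.d/hapi-honeypot.conf
[hapi-honeypot]
enabled  = true
filter   = hapi-honeypot
logpath  = /var/log/myapp/app.log
maxretry = 5
findtime = 3600
bantime  = 14400
port     = http,https
```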
Specifically for hapi, I maintain an npm module for rate limiting called ralphi; it has a hapi plugin. But unless you need proper rate limiting (which you should have for logins, sessions, and other tokens), fail2ban might be a better option in this case.
In general, honeypots are not hard to implement, but as with any security-related solution you should consider who your potential attacker is and what you are trying to protect.
Also, honeypots are mostly used to notify you about an existing or imminent breach. Though they can also be used to trigger a lockdown, your main take from them is to get visibility once a breach has happened but before the attacker has had too much time to abuse the system (you don't want to discover the breach two months later, when your site has been defaced and all valuable data has already been taken).
A few ideas for honeypots:
Have an 'admin' user with a relatively average password (random 8 chars) but no privileges at all; when this user successfully logs in, notify the real admin.
Notice that you're not locking the attacker out on the first attempt to log in, even if you know he is doing something wrong (he will just get a different IP and use another account). But if he actually managed to log in, maybe there's an error in your login logic? Maybe password reset is broken? Maybe rate limiting isn't working? So much more info to follow through on.
Now that you know you have a semi-competent attacker, maybe try and see what he is trying to do; maybe you'll learn who he is or what his end goal is (highly valuable, since he is probably going to try again).
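The decoy-admin idea might be wired up roughly like this (names are hypothetical; notifyRealAdmin is a placeholder for whatever alerting you use):

```typescript
// Sketch: after a successful authentication, alert on the decoy account (hypothetical names).
const HONEYPOT_USERS = new Set(["admin"]); // decoy account with no privileges

async function notifyRealAdmin(message: string): Promise<void> {
  // Placeholder: wire this to email, Slack, PagerDuty, etc.
  console.warn("[SECURITY ALERT]", message);
}

export async function onLoginSuccess(username: string, ip: string): Promise<void> {
  if (HONEYPOT_USERS.has(username)) {
    await notifyRealAdmin(`Honeypot account "${username}" logged in from ${ip}`);
  }
}
```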
Find sensitive places you don't want users to play with and plant some canary tokens in them. This can be just a file that sits with all your other uploads on the system, it can be AWS creds on your dev machine, or it can be a link in your admin panel that says "technical documentation". The idea is that regular users should not care about or have any access to these files, but attackers will find them too tempting to ignore. The moment they touch one, you know that area has been compromised and you need to start blocking and investigating.
Just remember, before implementing any security measure, to think about who you expect is going to attack you. Honeypots are probably one of the last security measures you should consider, and there are a lot more common and basic security issues that need to be addressed first (there are endless lists of node.js security best practices, and the OWASP Top 10 is the de facto standard for general web app security).

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where I should look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over years and be hacked one day (from my own experience), and you can pay a lot for a security solution and never be hacked. So you always need to compare effort with impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not using HTTP; for example, via an SSH port with a weak password, or by taking your computer or hard disk, etc. In some cases you need to properly encrypt the volume.)
Points 1 and 4 are not specific to Prolog, but point 4 is the only one that has anything specific to the local-server case.
Protecting at the HTTP protocol level means not allowing requests that can take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy like nginx, which can prevent attacks at this level, including some types of DoS. The browser will contact nginx, and nginx will forward the request to your server if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL key and enable SSL (HTTPS) in your reverse proxy server, not in your SWI-Prolog server. HTTPS will encrypt all information, and the proxy will communicate with SWI-Prolog over plain HTTP.
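A minimal illustration of such an nginx front end (certificate paths, the upstream port, and the rate limits are placeholders):

```
# limit_req_zone belongs in the http block; everything below is illustrative only.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name example.local;

    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        limit_req zone=perip burst=20;        # crude protection against request floods
        proxy_pass http://127.0.0.1:8000;     # the SWI-Prolog HTTP server, plain HTTP internally
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```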
Think about authorization. There are methods that can be broken very easily. You need to study this topic; there is a lot of information out there. I think it is the most important part.
The application input problem - the famous example is "SQL injection". Study the examples. All good web frameworks have "entry" procedures to clean up all possible injections. Take existing code and rewrite it in Prolog.
Also, test all input fields with very long strings, different charsets, etc.
As you can see, security is not so easy, but you can choose the appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically interested in getting your information, all the methods mentioned are good. But that is the rare case. Most often hackers just scan the Internet and try to apply known hacks to every server they find. In this case your best friends should be honeypots and Prolog itself, because the probability of a hacker being interested in SWI-Prolog internals is extremely low. (The hacker needs to study the server code well to find a door.)
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords made of combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason you shouldn't give your users access to all information; protection should be part of the app-level design.
The cases specific to a local server are a good firewall, proper network setup, and encryption of the hard drive partition if your local server could be stolen by a "hacker".
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort; mainly you need to check your router/firewall setup and the 4th door in my list.
If you have a very limited number of known users, you can just have them use a VPN and not protect your server as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
I think the SWI-Prolog HTTP package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed it would be strange to rely on SQL when you have Prolog's power at your fingertips...
Of course, you need to properly manage the http access in your server...
Just this morning there has been an interesting post in SWI-Prolog mailing list, about this topic: Anne Ogborn shares her experience...

How to protect a website from DoS attacks

What are the best methods for protecting a site from DoS attacks? Any idea how popular sites/services handle this issue?
What are the tools/services at the application, operating system, networking, and hosting levels?
It would be nice if someone could share real experience of dealing with this.
Thanks
Are you sure you mean DoS, not injection? There's not much you can do on the web programming end to prevent them, as it's more about tying up connection ports and blocking them at the physical layer than at the application layer (web programming).
As for how most companies prevent them: a lot of companies use load balancing and server farms to absorb the incoming bandwidth. Also, a lot of smart routers monitor activity from IPs and IP ranges to make sure there aren't too many requests coming in (and if so, perform a block before it hits the server).
The biggest intentional DoS I can think of is woot.com during a woot-off, though. I suggest trying Wikipedia ( http://en.wikipedia.org/wiki/Denial-of-service_attack#Prevention_and_response ) to see what it has to say about prevention methods.
I've never had to deal with this yet, but a common method involves writing a small piece of code to track IP addresses that are making a large number of requests in a short amount of time and denying them before processing actually happens.
Many hosting services provide this along with hosting; check with them to see if they do.
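The "small piece of code" mentioned above might look roughly like this (fixed-window counting, in-memory state, hypothetical thresholds):

```typescript
// Sketch: deny IPs that exceed a request threshold within a time window (hypothetical values).
const WINDOW_MS = 60_000;   // 1-minute window
const MAX_REQUESTS = 120;   // allowed requests per window per IP

const counters = new Map<string, { windowStart: number; count: number }>();

export function shouldDeny(ip: string): boolean {
  const now = Date.now();
  const entry = counters.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(ip, { windowStart: now, count: 1 });
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_REQUESTS;   // call this before any expensive processing
}
```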
I implemented this once in the application layer. We recorded all requests served by our server farms through a service which each machine in the farm could send request information to. We then processed these requests, aggregated by IP address, and automatically flagged any IP address exceeding a threshold of a certain number of requests per time interval. Any request coming from a flagged IP got a standard Captcha response; if they failed too many times, they were banned forever (dangerous if you get a DoS from behind a proxy). If they proved they were human, the statistics related to their IP were "zeroed".
Well, this is an old one, but people looking to do this might want to look at fail2ban.
http://go2linux.garron.me/linux/2011/05/fail2ban-protect-web-server-http-dos-attack-1084.html
That's more of a serverfault sort of answer, as opposed to building this into your application, but I think it's the sort of problem which is most likely better tackled that way. If the logic for what you want to block is complex, consider having your application just log enough info to base the banning policy action on, rather than trying to put the policy into effect.
Consider also that depending on the web server you use, you might be vulnerable to things like a slow loris attack, and there's nothing you can do about that at a web application level.

Obfuscating server headers

I have a WSGI application running in PythonPaste. I've noticed that the default 'Server' header leaks a fair amount of information ("Server: PasteWSGIServer/0.5 Python/2.6").
My knee jerk reaction is to change it...but I'm curious what others think.
Is there any utility in the server header, or benefit in removing it? Should I feel uncomfortable about giving away information on my infrastructure?
Thanks
Well "Security through Obscurity" is never a best practice; your equipment should be able to maintain integrity against an attacker that has extensive knowledge of your setup (barring passwords, console access, etc). Can't really stop a DDOS or something similar, but you shouldn't have to worry about people finding out you OS version, etc.
Still, no need to give away information for free. Fudging the headers may discourage some attackers, and, in cases like this where you're running an application that may have a known exploit crop up, there are significant benefits in not advertising that you're running it.
I say change it. Internally, you shouldn't see much benefit in leaving it alone, and externally you have a chance of seeing benefits if you change it.
Given the requests I find in my log files (like requests for IIS-specific bugs in Apache logs, and I'm sure IIS server logs show Apache-specific requests as well), there are many bots out there that don't care about any such header at all. I guess almost everything is brute force nowadays.
(And actually, as for example I've set up quite a few instances of Tomcat sitting behind IIS, I guess I would not take the headers into account either, if I were to try to hack my way into some server.)
And above all: when using free software, I kind of find it appropriate to give the makers some credit in statistics.
Masking your version number is a very important security measure. You do not want to give the attacker any information about what software you are running. This security feature is available in mod_security, the open-source web application firewall for Apache:
http://www.modsecurity.org/
Add this line to your mod_security configuration file:
SecServerSignature "IIS/6.0"

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat someone can find out the URL? Don't take the employees of the hosting company and eavesdroppers on the line into account, because it is a toy site, not something important and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way of someone finding this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site but why not practice setting up security properly as it won't matter if you get it wrong. So hopefully, if you do have a site which you need to secure in future you'll have already made all your mistakes.
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs's principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decent-working solution, security-wise, compared to implementing some form of proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when loading external images or linking to somewhere else, your referrer is going to publicize your URL.
If you change http into https, then at least the url will not be visible to anyone sniffing on the network.
(The caveat here is that a very obscure login system can leave interesting traces to be found: in network captures (MITM), somewhere on the site/target that enables privilege elevation, or on the system you use to log in if that one is no longer secure. Some prefer an admin login that looks no different from a standard user login to avoid that.)
You could require that some action be taken a number of times, with some number of seconds of delay between them. After this action, delay, action, delay, action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".
