How to minimize sign-in scripting attacks

I'll jump right into the problem.
Let's say my website has sign-in capability and a lot of users.
I'm getting sign-in scripting attacks where the attacker has a list of username/password pairs (let's say he got these from elsewhere) and is running them against my website to see which of them are valid.
I have a monitoring tool that catches cases where the same IP address attempts "x" logins within "y" amount of time, and it successfully detects attacks that fall into that pattern. However, this has its limits: I can't come up with all possible cases, and it feels impossible to stop these types of attacks entirely.
I'm curious how companies like Amazon and other giants handle these types of attacks.
Thanks in advance.

They apply various techniques to determine if the login is done by a human or not - this is known as a Turing test.
For example, many sites lock users out if they make too many login attempts within a fixed time period. This means that if it is a bot, its later login attempts will be ignored. A variation on this is to increase the lockout time as the number of login attempts increases.
A CAPTCHA is used on sites like Ticketmaster.
Brute-force attacks such as this are unfortunately a fact of life.

Related

How should I implement a system to block password cracking?

I want to put a measure in place to stop people from trying to hack user accounts on my website. What would be the best process for this, without annoying a customer who just needs a few tries to remember their password?
I notice Google shows a CAPTCHA image after a couple of failed attempts. I've never tried hard enough, but I'm sure they must block you after quite a few attempts.
What would be the best practice to ensure that someone doesn't use a brute-force approach to gain access to an account?
A CAPTCHA?
Blocking their IP address (does this work if they're on a shared IP)?
Your best bet is to lock out (10 min, 15 min, etc.) on a per-username basis with a relatively high number of tries possible (10 or 20 or so) in a set period (e.g. a rolling 30-minute window). By setting the number of tries higher than 3 or 5, the average user will either give up or attempt to reset their password before the lockout hits.
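A minimal sketch of that per-username rolling-window lockout, assuming an in-memory store; the class name and all thresholds here are illustrative, not prescriptive:

    import time
    from collections import defaultdict, deque

    # Illustrative thresholds: 10 failures within a rolling 30-minute window
    # triggers a 15-minute lockout. All values are assumptions; tune to taste.
    MAX_FAILURES = 10
    WINDOW_SECONDS = 30 * 60
    LOCKOUT_SECONDS = 15 * 60

    class FailedLoginTracker:
        def __init__(self):
            self.failures = defaultdict(deque)  # username -> timestamps of recent failures
            self.locked_until = {}              # username -> unlock timestamp

        def is_locked(self, username):
            return time.time() < self.locked_until.get(username, 0)

        def record_failure(self, username):
            now = time.time()
            window = self.failures[username]
            window.append(now)
            # Drop failures that have fallen out of the rolling window.
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()
            if len(window) >= MAX_FAILURES:
                self.locked_until[username] = now + LOCKOUT_SECONDS
                window.clear()

        def record_success(self, username):
            self.failures.pop(username, None)

The login handler would call is_locked() before even checking the password, then record_failure() or record_success() afterwards.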
You may consider logging failed-attempt data (IP, username, timestamps, ...) to understand the differences between normal user behavior and brute-force attempts. This will allow you to refine your policy over time.
Also consider a strong password policy (at minimum 8+ characters with at least one number).
You may also consider some form of multi-factor authentication. You mentioned CAPTCHAs, but there are many other techniques you may find useful. One site I work with will email a token to a user's email address if it does not recognize the user's IP address, and the user must present that token before they can access the site from the new IP address.
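A sketch of that email-token flow, assuming an in-memory store and a hypothetical send_email hook; the names and storage here are purely illustrative:

    import secrets

    known_ips = {}       # username -> set of IPs the user has already verified from
    pending_tokens = {}  # (username, ip) -> token that was emailed to the user

    def begin_login(username, ip, send_email):
        """Call after the password has been verified; gate unseen IPs behind a token."""
        if ip in known_ips.get(username, set()):
            return "ok"
        token = secrets.token_urlsafe(16)
        pending_tokens[(username, ip)] = token
        send_email(username, token)  # hypothetical mail hook supplied by the caller
        return "token_required"

    def confirm_token(username, ip, token):
        if pending_tokens.get((username, ip)) == token:
            known_ips.setdefault(username, set()).add(ip)
            del pending_tokens[(username, ip)]
            return True
        return False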
Schemes that lock a user out after a certain number of attempts and/or extend the time before further login attempts are accepted are certainly a good idea. As are CAPTCHAs (aside from being annoying :) But, in my opinion, they only make sense if you have strong hardware backing you.
The reason why I believe this should only be tried if you have the resources to do so is that you have to keep in mind that a scheme like that requires you to remember the attempts recently made for potentially every user in your system. Certainly, there are numerous ways of persisting the information, varying in their effectiveness: in-memory cache, database, etc.
But no matter what, such a mechanism will put additional load on your application, and there's the downside: if an attacker gets bored or annoyed by your app, they might as well try to take it down with a denial of service attack. And complicated login schemes that need to persist a lot of information will help a lot in achieving that goal.
If you decide to apply such a feature, I would recommend you stress test it a lot in a lab first to get a feeling for "how much you can take" - this way you'll find out if you need to upgrade your hardware :)
An easier way that can do without the need for persistence is to apply a password hash like PBKDF2, bcrypt or scrypt. These artificially slow attackers down enough to make it as hard as possible for them. But be aware that these, too, put additional computational strain on your application (although presumably less than the aforementioned measures), so again I would do some stress tests first.
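For instance, Python's standard library exposes PBKDF2 directly; a minimal sketch, where the iteration count is an assumption you would tune against your own hardware:

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # assumption: tune so one hash takes on the order of 100 ms

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest  # store both alongside the user record

    def verify_password(password, salt, expected):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)  # constant-time comparison

The deliberate cost of each hash is what slows a guessing attack down; it also means every legitimate login costs you the same CPU, hence the advice to stress test first.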

How to defend against TabNabbing?

I got very concerned reading this genius post by Aza Raskin.
What are the non-browsers solutions to defend against TabNabbing? Are there any?
"Tab Nabbing" is not a new attack, Mr Raskin is ripping off other researchers work. PDP from GnuCitizen discovered this back in 2008.
The biggest threat as I see it is phishing. To be honest, I don't think there is a good solution to stop phishing. This particular issue, I think, should be fixed by the browser; eventually Firefox and Chrome will get around to fixing it. Honestly, SSLStrip is a bigger threat that all browsers face, and it can be used alongside this redirection attack. Currently Chrome has a fix in the form of STS and Firefox in the form of HTTPS Everywhere. Using NoScript will also help mitigate this redirection attack.
One thing that will prevent this sort of thing from happening is two factor authentication using something like an RSA token (unfortunately only one bank in this country provides this method).
The RSA token is a little USB-stick-sized gadget that has a continuously changing serial/sequence number on it, and it is issued to you (each stick has a different sequence of numbers). When you log on to your bank's website, you have to supply your login/password, and also the current number on the RSA token - that number changes every two minutes. That means that if the bad guys collect your login details they have less than two minutes to log in to your account before the current RSA sequence number changes and the captured login details become impossible to reuse.
This two-factor authentication is not the silver bullet though; I don't see Google rolling this out for your random Gmail account, and neither will Facebook. It should be mandatory for financial institutions and online government departments, which would cut the scope of this type of attack. It is a commonly used protection mechanism for remote access to company web portals and remote network logins, and it is quite successful for this.
This still hasn't answered your question though - how can you as a website author or owner prevent this? You can't, unless you don't run third party scripts, and regularly check your pages to make sure you haven't been compromised and had a script inserted. You shouldn't count on being able to scan any third party scripts, because they can be obfuscated to a degree you can't possibly scan for. If you do run third party scripts and feel strongly enough about this, then you might want to set up a machine that does nothing but run automated UI tests against your web site - it is an easy enough thing to set up with some basic tests, and just leave it testing your live site every 30 or 60 minutes looking for unexpected results.
Like he suggests, use a password manager. There are quite a few other problems that can arise if you type your password every time. For sites where the password manager doesn't work, you're screwed. Client certificates FTW.
I just visited the page which you mention and my free virus checker (AVG) immediately detected a threat (I presume that he has an example on the page) and warned me of a Tabnapping Exploit.
So that's one easy possibility.

Is it immoral to put a captcha on a login form?

In a recent project I put a captcha test on a login form, in order to stop possible brute force attacks.
The immediate reaction of other coworkers was a request to remove it, saying that it was inappropriate for that purpose, and that it was quite exotic to see a captcha in that place.
I've seen captcha images on signup, contact, and password recovery forms, etc., so I personally don't see it as inappropriate to put a captcha in a place like that as well. It obviously hurts usability a little, but it's a matter of time and getting used to it.
Without a captcha test, one would have to put some sort of blacklist / account-locking mechanism in place, which also has its drawbacks.
Is it a good choice for you? Am I getting somewhat captcha-aholic and need some sort of group therapy?
Thanks in advance.
Just add a CAPTCHA test for cases where there have been failed login attempts for a given user. This is what lots of websites currently do (all popular email services, for instance) and it is much less invasive.
Yet it completely thwarts brute-force attacks, as long as the attacker cannot break your CAPTCHA.
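A sketch of that gate; the threshold, the captcha_is_valid verifier and the check_password lookup are placeholders standing in for whatever your site actually uses:

    FAILED_ATTEMPTS_BEFORE_CAPTCHA = 3  # illustrative threshold

    failed_attempts = {}  # username -> consecutive failed logins

    def captcha_is_valid(response):
        # Placeholder: call your CAPTCHA provider's verification API here.
        return response is not None

    def check_password(username, password):
        # Placeholder: verify against the stored password hash here.
        return False

    def login(username, password, captcha_response=None):
        attempts = failed_attempts.get(username, 0)
        # Only demand a CAPTCHA once this account has seen a few failures.
        if attempts >= FAILED_ATTEMPTS_BEFORE_CAPTCHA and not captcha_is_valid(captcha_response):
            return "captcha_required"
        if check_password(username, password):
            failed_attempts.pop(username, None)
            return "ok"
        failed_attempts[username] = attempts + 1
        return "invalid_credentials"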
It's not immoral per se. It's bad usability.
Consider security implications: the users will consider logging in to be time consuming and will:
be less likely to use your system at all
never log out of your system and leave open sessions unattended.
Consider other forms of brute-force attack detection and prevention.
A captcha isn't a very traditional choice on login forms. The traditional protection against brute-force attacks seems to be account locking. As you said, it has its drawbacks; for example, if your application is vulnerable to account enumeration, an attacker could easily perform a denial-of-service attack.
I would tend to agree with your co-workers. A captcha can be necessary on forms where you do not have to be authorized to submit data, because otherwise spambots will bomb them, but I fail to see what kind of abuse you are preventing by adding a captcha to a login form.
A captcha does not provide any form of security the way your other options, like the blacklist, would. It just verifies that the user is a human being, and hopefully the username/password fields already verify that.
If you want to prevent brute-force attacks, then almost any other form of protection would be more useful - throttling the requests if there are too many, or banning IPs if they enter wrong passwords too many times, for instance.
Also, I think you are underestimating the impact on usability. A lot of browsers provide a lot of utilities to deal with username/password forms and all of these utilities are rendered useless if you add a captcha.
I would like to address the question in the title—the question of morality.
I would consider a captcha immoral under the following circumstances:
It excludes participation in the application to those with physical or mental challenges, when the main portion and purpose of the application would otherwise not make such an exclusion.
The mechanism of the captcha exposes users to distressing language or images beyond what would normally be expected in the application.
The captcha mechanism as presented to the user is deceptive or misleading in some way.
A captcha may also be considered immoral if its intent is to exclude genuinely sentient machine intelligences from participation for reasons of prejudice against non-humans. Of course, technology has not yet advanced to the level at which this is an issue, and, further, when it does become an issue, I expect human-excluding gates will be more feasible and common.
Many popular (most-used) mail servers don't have it?!

Simple way to prevent a flood of login requests?

If my website uses a POST form for login, what is a quick and easy way to prevent a rogue client from flooding my web server with POST requests trying to brute force crack my user accounts?
PHP/MySQL/Apache.
Preventing brute force cracking is trickier than it may at first seem. The solution will be to combine controls - one single control will not cut the mustard. And remember the goal: you want to slow down a brute force attack to the point where it will either be ineffective, or you can detect it and take action. The second option is generally more effective than the first.
You could use a captcha (this is currently a popular technique), but captchas can often be automatically read, and when they can't be read by a computer, farms of people can be obtained by paying low-wage workers or by using the captcha to protect "free" porn (both techniques have been used).
The advice of others to use a secret value in the form won't really help; an attacker simply has to parse the HTML to find the secret value, and include it in their post. This is pretty simple to automate, so it's not really a good defense. Oh, and if the value turns out to be easily predictable (using a poor or broken PRNG or a bad seed) you're up the creek, again.
Tracking the IP address is okay, but only if your users aren't behind NAT. With NAT, distinct valid users will appear to be duplicates. And remember that attackers can impersonate other systems; a single attack system can use other IP addresses, and even intercept the traffic to that system (ARP poisoning is one good mechanism for this).
You could use a maximum number of failed attempts in a given period of time (like 3 within 1 hour). This slows the attacker down, but doesn't necessarily stop them. You might include an automated unlock, but you'll need to do some math, and make sure that the unlock time is actually useful.
Exponential backoff is another useful mechanism. This could be tied to a session (which the attacker doesn't have to return to the server), to the IP address (which breaks with NAT), or to the account (which doesn't account for brute forcing across multiple accounts).
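A minimal sketch of per-account exponential backoff, where the base delay and the cap are assumptions to be tuned:

    import time

    BASE_DELAY = 1       # seconds after the first failure (assumption)
    MAX_DELAY = 15 * 60  # cap the delay at 15 minutes (assumption)

    state = {}  # account -> (failure_count, time_of_last_failure)

    def seconds_until_next_attempt(account):
        """How long the caller must still wait before another try is accepted."""
        failures, last_failure = state.get(account, (0, 0.0))
        if failures == 0:
            return 0
        delay = min(BASE_DELAY * 2 ** (failures - 1), MAX_DELAY)  # 1s, 2s, 4s, 8s, ...
        return max(0, (last_failure + delay) - time.time())

    def record_failure(account):
        failures, _ = state.get(account, (0, 0.0))
        state[account] = (failures + 1, time.time())

    def record_success(account):
        state.pop(account, None)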
For the other defenses to be useful, you have to have strong passwords. If your passwords are easy to guess (are they in a dictionary? are they short? do they lack complexity?), the attack will succeed. It's a good idea to implement minimum password strength requirements, and an "illegal passwords" dictionary (combined with common character substitutions for that dictionary). Alternatively, you might use a system like OATH, certificate login, or hardware tokens (like RSA's SecurID).
I think it was Burt Kaliski who discussed client puzzles. Basically, you give the client a challenge that's easy for the server, but difficult for the client; the client DoSes itself by wasting its own resources trying to solve the puzzle. The difficulty, here, would be in determining the right complexity for the puzzle. It might, for example, be factoring a large number. Whatever it is, you'd have to assume the most efficient possible algorithm, and you'd have to be able to handle different performance of different browsers on different machines (potentially slow) while slowing down automated attacks outside of browsers (potentially faster than your javascript). Did I mention that you'd have to implement a solution in JavaScript?
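A sketch of the hash-based flavour of such a puzzle - the server hands out a random challenge and only verifies, while the client burns CPU finding a nonce; the difficulty setting is an assumption:

    import hashlib
    import os

    DIFFICULTY_BITS = 16  # assumption: raise until solving costs the client roughly a second

    def issue_challenge():
        return os.urandom(16).hex()

    def is_valid_solution(challenge, nonce):
        """Server-side check: hash of challenge+nonce must start with enough zero bits."""
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

    def solve(challenge):
        """What the client does: brute-force a nonce (cheap to verify, costly to find)."""
        nonce = 0
        while not is_valid_solution(challenge, nonce):
            nonce += 1
        return nonce

    challenge = issue_challenge()
    print(is_valid_solution(challenge, solve(challenge)))  # True

In practice the solver would live in the browser's JavaScript, which is exactly the performance-variance problem described above.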
But you're still stuck with an attack that works across multiple accounts. I'm not aware of any publicly used controls that work well against this, unless you can track IP addresses.
Then, you'll want to protect usernames. An attacker who doesn't know usernames (requiring a system that doesn't indicate when usernames are valid) will have to learn both the username and the password, instead of easily confirming a username, then just attacking passwords.
And you'll need to be careful that error messages and server timing don't give away whether the username or the password was the invalid part, either.
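One common way to keep both uniform is to always return the same message and always run the expensive password hash, even when the username does not exist - a sketch, reusing PBKDF2 as the hash and with illustrative names:

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # assumption: match whatever work factor you use for real hashes

    # Dummy salt/hash hashed against when the username is unknown, so timing stays comparable.
    DUMMY_SALT = os.urandom(16)
    DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"placeholder", DUMMY_SALT, ITERATIONS)

    def authenticate(users, username, password):
        record = users.get(username)  # users: username -> (salt, stored_hash)
        salt, stored = record if record else (DUMMY_SALT, DUMMY_HASH)
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        ok = record is not None and hmac.compare_digest(candidate, stored)
        # Same message whether the username or the password was wrong.
        return "ok" if ok else "invalid username or password"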
And when you deal with error messages, make sure that password recovery mechanisms don't give anything away. Even in otherwise good systems, password recovery can blow the whole thing.
But, all that said, the attack is ultimately dependent upon the server's performance. You might simply implement a very slow mechanism for authentication (it has to be slow for both valid and invalid authentications). An online attack is guaranteed to go no faster than the server can process requests.
Then, you need to detect brute force attacks, so your system needs a good audit trail. But you'll need to be careful not to log too many messages or you'll open up an easy way to DoS the server by filling up disk space. Something like syslog's "the previous message has been received 1000 times" message would be good.
Once you're all done designing things, and again when you're done implementing things, you'll want to examine the whole system, and all features of the system, mathematically model it given the current settings and the server's performance and determine the average amount of time it would take an attacker to brute force (a) a single account, and (b) any account (brute forcing across accounts to avoid account-specific controls).
One approach is to keep track of the IP address (or even of the first 3 octets of the IP Address) of each request and to add a significant time in responding to (or even to drop) the requests coming from IPs that have had more than x requests in the last y minutes.
This is ineffective (or less effective) against distributed attacks, but otherwise works quite well for a relatively simple implementation.
For stronger protection, one can also blacklist offending IPs (IPs, or again the first 3 octets of the IP, which have submitted more than, say, 6 bad attempts in the last 2 minutes or less) by systematically denying access to such IPs for a period of, say, 15 minutes.
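A sketch combining both ideas - throttling keyed on the first three octets and a temporary ban - using the thresholds from the paragraph above and a plain in-memory store purely for illustration:

    import time
    from collections import defaultdict, deque

    BAD_ATTEMPT_LIMIT = 6    # 6 bad attempts...
    ATTEMPT_WINDOW = 2 * 60  # ...within 2 minutes...
    BAN_SECONDS = 15 * 60    # ...earns a 15-minute ban

    bad_attempts = defaultdict(deque)  # /24 prefix -> timestamps of bad attempts
    banned_until = {}                  # /24 prefix -> unban timestamp

    def prefix(ip):
        """First three octets, e.g. '203.0.113.7' -> '203.0.113'."""
        return ".".join(ip.split(".")[:3])

    def is_banned(ip):
        return time.time() < banned_until.get(prefix(ip), 0)

    def record_bad_attempt(ip):
        now = time.time()
        window = bad_attempts[prefix(ip)]
        window.append(now)
        while window and now - window[0] > ATTEMPT_WINDOW:
            window.popleft()
        if len(window) >= BAD_ATTEMPT_LIMIT:
            banned_until[prefix(ip)] = now + BAN_SECONDS
            window.clear()

In a real PHP/MySQL deployment the two maps would live in a table or a shared cache rather than in process memory, but the logic is the same.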
Well, if you know the IP of the source of the login attempt, you could allow it 5 attempts, then make that IP wait through a 5 minute "cool-off" period. If you think 5 minutes is too short, raise it until it's something more suitable (could go as high as 24 hours, or more if you think it's necessary). This might not work as well if they have dozens of coordinated nodes in a botnet.

Detecting login credentials abuse

I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website.
The problem is that our organization's membership includes both big companies and amateur "clubs" (it's a relatively new industry…).
It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all of its members get on the website).
I have thought about logging, along with each sign-on, the IP address as well as the OS and browser used; if the OS/browser combination stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers.
But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated widely, and there would then be cause for action, such as changing the password.
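A sketch of that heuristic - counting distinct IPs and distinct OS/browser strings per account; the thresholds are the ones mentioned above and purely illustrative:

    from collections import defaultdict

    MAX_DISTINCT_IPS = 10     # illustrative threshold from the description above
    MAX_DISTINCT_AGENTS = 50  # illustrative threshold from the description above

    seen_ips = defaultdict(set)     # account -> set of source IPs observed
    seen_agents = defaultdict(set)  # account -> set of OS/browser strings observed

    def record_signon(account, ip, user_agent):
        seen_ips[account].add(ip)
        seen_agents[account].add(user_agent)

    def looks_shared(account):
        """Flag accounts whose sign-ons come from suspiciously many places."""
        return (len(seen_ips[account]) > MAX_DISTINCT_IPS
                or len(seen_agents[account]) > MAX_DISTINCT_AGENTS)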
Of course, it is extremely annoying when your password is being unilaterally changed. So, for this problem, I thought about allowing the “clubs” to manage their own list of sub-accounts, and therefore if abuse is suspected, the user responsible would be easily pinned-down, and this “sub-member” alone would face the annoyance of a password change.
Question:
What potential problems would anyone see with such an approach?
Any particular reason why you can't force each club member to register (just straight-up, not necessarily as a sub or a similar complex structure)? Perhaps give each club some sort of code to use just when the users register so you can automatically create their accounts and affiliate them with a club, but you then have direct accounting of each member without an onerous process that the club has to manage themselves. Then it's much easier to determine if a given account is being spread around (disparate IP accesses in given periods of time).
Clearly then you can also set a limit on the number of affiliated accounts per club, should you want to do so. This is basically what you've suggested, I suppose, but I would try to keep any onerous management tasks out of the hands of your users if at all possible. If you can manage club-affiliated signups, you should, rather than forcing someone at the club to manage them for you.
Also, while some sort of heuristic based on IP and credentials is probably fine, I would stay away from incorporating user-agent, or at least caring too much about it. Seeing a few different UAs from the same IP - depending on your expected userbase, I suppose - isn't really that unusual. I use several browsers in the course of my day due to website bugs, etc. and unless someone is using a machine as a proxy, it's not evidence of anything nefarious.
