Simple way to prevent a flood of login requests?

If my website uses a POST form for login, what is a quick and easy way to prevent a rogue client from flooding my web server with POST requests trying to brute force crack my user accounts?
PHP/MySQL/Apache.

Preventing brute force cracking is trickier than it may at first seem. The solution will be to combine controls - one single control will not cut the mustard. And remember the goal: you want to slow down a brute force attack to the point where it will either be ineffective, or you can detect it and take action. The second option is generally more effective than the first.
You could use a captcha (this is currently a popular technique), but captchas can often be read automatically, and when they can't be read by a computer, farms of people can be obtained by paying low-wage workers or by using the captcha to protect "free" porn (both techniques have been used).
The advice of others to use a secret value in the form won't really help; an attacker simply has to parse the HTML to find the secret value and include it in their post. This is pretty simple to automate, so it's not really a good defense. Oh, and if the value turns out to be easily predictable (using a poor or broken PRNG or a bad seed), you're up the creek again.
Tracking the IP address is okay, but only if your users aren't behind NAT. With NAT, distinct valid users will appear to be duplicates of one another. And remember that attackers can impersonate other systems; a single attacking system can use other IP addresses, and even intercept the traffic to those systems (ARP poisoning is one good mechanism for this).
You could use a maximum number of failed attempts in a given period of time (like 3 within 1 hour). This slows the attacker down, but doesn't necessarily stop them. You might include an automated unlock, but you'll need to do some math and make sure that the unlock time is actually useful.
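To sketch this in the asker's PHP/MySQL stack (the failed_logins table, the threshold, and the window are illustrative assumptions, not recommendations):

```php
<?php
// Assumed table: failed_logins(username VARCHAR(64), attempted_at DATETIME).
// The window doubles as the automated unlock: old failures simply age out.
function isLockedOut(PDO $db, string $username, int $maxFailures = 3, int $windowSeconds = 3600): bool
{
    $stmt = $db->prepare(
        'SELECT COUNT(*) FROM failed_logins
         WHERE username = ? AND attempted_at > DATE_SUB(NOW(), INTERVAL ? SECOND)'
    );
    $stmt->execute([$username, $windowSeconds]);
    return (int) $stmt->fetchColumn() >= $maxFailures;
}

function recordFailure(PDO $db, string $username): void
{
    $db->prepare('INSERT INTO failed_logins (username, attempted_at) VALUES (?, NOW())')
       ->execute([$username]);
}
```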
Exponential backoff is another useful mechanism. It might be possible to tie this to a session (which the attacker doesn't have to return to the server), to the IP address (which breaks with NAT), or to the account (which doesn't account for brute forcing across multiple accounts).
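A minimal sketch of the backoff calculation itself (the base and the cap are arbitrary assumptions):

```php
<?php
// Delay doubles with each recent failure, capped so a legitimate user who
// merely fat-fingered their password is never locked out for long.
function backoffSeconds(int $recentFailures, int $capSeconds = 900): int
{
    if ($recentFailures <= 0) {
        return 0;
    }
    return (int) min($capSeconds, 2 ** $recentFailures); // 2, 4, 8, 16, ...
}

// Usage: refuse to even check the password until the delay has elapsed.
// if (time() < $lastFailureAt + backoffSeconds($failureCount)) { deny(); }
```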
For the other defenses to be useful, you have to have strong passwords. If your passwords are easy to guess (are they in a dictionary? are they short? do they lack complexity?) the attack will succeed. It's a good idea to implement minimum password strength requirements, and an "illegal passwords" dictionary (combined with common character substitutions for that dictionary). Alternatively, you might use a system like OATH, certificate login, or hardware tokens (like RSA's SecurID).
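For illustration, a minimal strength-plus-dictionary check might look like this (the word list and substitution map are tiny placeholders; a real deployment would load a proper dictionary):

```php
<?php
// Reject short passwords and banned words, including "l33t" variants:
// normalize common substitutions (p@ssw0rd -> password) before comparing.
function isPasswordAcceptable(string $password): bool
{
    if (strlen($password) < 10) {
        return false;
    }
    $banned = ['password', 'letmein', 'qwerty']; // illustrative only
    $subs = ['@' => 'a', '4' => 'a', '0' => 'o', '1' => 'l', '3' => 'e', '$' => 's', '5' => 's'];
    $normalized = strtolower(strtr($password, $subs));
    foreach ($banned as $word) {
        if (strpos($normalized, $word) !== false) {
            return false;
        }
    }
    return true;
}
```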
I think it was Burt Kaliski who discussed client puzzles. Basically, you give the client a challenge that's cheap for the server to check but expensive for the client to solve; the client DoSes itself by wasting its own resources trying to solve the puzzle. The difficulty here would be in determining the right complexity for the puzzle. It might, for example, be factoring a large number. Whatever it is, you'd have to assume the most efficient possible algorithm, and you'd have to be able to handle the varying performance of different browsers on different machines (potentially slow) while still slowing down automated attacks outside of browsers (potentially faster than your JavaScript). Did I mention that you'd have to implement a solution in JavaScript?
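For illustration, here is the server side of one possible hashcash-style realization of the idea (an assumption on my part, not Kaliski's construction): the client must find a nonce such that SHA-256(challenge . nonce) begins with a given number of zero bits.

```php
<?php
// Issue a fresh challenge with the login form (store it in the session).
function makeChallenge(): string
{
    return bin2hex(random_bytes(16));
}

// Cheap for the server to verify; the client had to grind nonces to pass.
function verifyPuzzle(string $challenge, string $nonce, int $difficultyBits = 20): bool
{
    $hash = hash('sha256', $challenge . $nonce, true);
    for ($i = 0, $bits = $difficultyBits; $bits > 0; $i++, $bits -= 8) {
        $mask = $bits >= 8 ? 0xFF : (0xFF << (8 - $bits)) & 0xFF;
        if ((ord($hash[$i]) & $mask) !== 0) {
            return false;
        }
    }
    return true;
}
```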
But you're still stuck with an attack that works across multiple accounts. I'm not aware of any publicly used controls that work well against this, unless you can track IP addresses.
Then, you'll want to protect usernames. An attacker who doesn't know valid usernames (because your system doesn't indicate when a username is valid) has to learn both the username and the password, instead of first confirming a username and then just attacking its password.
And you'll need to be careful that error messages and server timing don't give away (in)valid usernames, either.
And when you deal with error messages, make sure that password recovery mechanisms don't give anything away. Even in otherwise good systems, password recovery can blow the whole thing.
But, all that said, the attack is ultimately dependent upon the server's performance. You might simply implement a very slow mechanism for authentication (it has to be equally slow for both valid and invalid authentication attempts). An online attack is guaranteed to go no faster than the server can process requests.
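One way to sketch this with PHP's password API (the dummy-hash trick keeps valid and invalid usernames equally slow; the cost and names here are assumptions):

```php
<?php
// $storedHash is the user's bcrypt hash from the DB, or null if no such user.
function slowAuthenticate(?string $storedHash, string $password): bool
{
    // In production, precompute the dummy once; inline here for self-containment.
    static $dummyHash = null;
    $dummyHash ??= password_hash('dummy-filler-password', PASSWORD_BCRYPT, ['cost' => 12]);

    // Cost-12 bcrypt takes tens to hundreds of milliseconds per attempt,
    // whether or not the username exists.
    $ok = password_verify($password, $storedHash ?? $dummyHash);
    return $storedHash !== null && $ok;
}
```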
Then, you need to detect brute force attacks, so your system needs a good audit trail. But you'll need to be careful not to log too many messages, or you'll open up an easy way to DoS the server by filling up disk space. Something like syslog's "last message repeated 1000 times" behavior would be good.
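A toy version of that suppression idea, with in-memory state purely for illustration:

```php
<?php
// Collapse identical consecutive log lines into a single counter message,
// so a login flood can't fill the disk with millions of duplicate entries.
class DedupLogger
{
    private ?string $last = null;
    private int $repeats = 0;

    public function log(string $message): void
    {
        if ($message === $this->last) {
            $this->repeats++;
            return;
        }
        $this->flush();
        error_log($message);
        $this->last = $message;
    }

    // Call once more at shutdown to emit any trailing repeat count.
    public function flush(): void
    {
        if ($this->repeats > 0) {
            error_log("last message repeated {$this->repeats} times");
            $this->repeats = 0;
        }
    }
}
```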
Once you're done designing, and again when you're done implementing, you'll want to examine the whole system and all of its features, model it mathematically given the current settings and the server's performance, and determine the average amount of time it would take an attacker to brute force (a) a single account, and (b) any account (brute forcing across accounts to avoid account-specific controls).

One approach is to keep track of the IP address (or even just the first 3 octets of the IP address) of each request, and to add a significant delay in responding to (or even to drop) requests coming from IPs that have made more than x requests in the last y minutes.
This is ineffective (or less effective) against distributed attacks, but otherwise works quite well for a relatively simple implementation.
For stronger protection, one can also blacklist offending IPs (again, IPs or their first 3 octets which have submitted more than, say, 6 bad attempts in the last 2 minutes) by systematically denying access to those IPs for a period of, say, 15 minutes.
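A sketch of that prefix throttle, assuming APCu as the shared counter store (any shared cache such as Redis or memcached would do; the thresholds mirror the numbers above):

```php
<?php
// Group requests by the first 3 octets (the /24) of the client's IPv4 address.
function ipPrefix(string $ip): string
{
    return implode('.', array_slice(explode('.', $ip), 0, 3));
}

// Returns true if this request should be denied (or heavily delayed).
function shouldBlock(string $ip, int $maxAttempts = 6, int $windowSec = 120, int $banSec = 900): bool
{
    $prefix = ipPrefix($ip);
    if (apcu_exists("ban:$prefix")) {
        return true; // still serving out its 15-minute ban
    }
    apcu_add("attempts:$prefix", 0, $windowSec);   // create the counter with a TTL
    if (apcu_inc("attempts:$prefix") > $maxAttempts) {
        apcu_store("ban:$prefix", true, $banSec);
        return true;
    }
    return false;
}
```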

Well, if you know the IP of the source of the login attempt, you could allow it 5 attempts, then make that IP wait through a 5-minute "cool-off" period. If you think 5 minutes is too short, raise it until it's something more suitable (it could go as high as 24 hours, or more if you think it's necessary). This might not work as well if they have dozens of coordinated nodes in a botnet.

Related

UUID on database level used as a security measure instead of a true rights control?

Consider a web application where all servlets implement "normal" access control by having a session id connected to the user calling them (through the web client). All users are therefore authenticated.
The next level of security needed is whether an authenticated user actually "owns" the data being changed. In a web application this could, for example, be editing some text in a form. The client uses JavaScript to make sure a user doesn't do something wrong by accident. The issue, of course, is that any number of network tools could easily repeat the call made by the browser and, by only changing the ID, edit a different row in the database table behind the servlet that the user does not "own".
My question is whether it would be sufficient to use UUIDs as keys in the database table, thereby making it practically impossible to guess a valid ID (https://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates). As far as I know, similar approaches are used in Google Photos (http://www.theverge.com/2015/6/23/8830977/google-photos-security-public-url-privacy-protected), but I'm not sure it is 100% comparable.
Another option is of course to have every servlet verify that the user is only performing an action on its own data, but in a big application with 200+ servlets and 50-100 tables this would be a very cumbersome task where mistakes could easily happen. In my mind this weakens the security far more, but I'm not sure if that is true.
I'm leaning towards the UUID solution, but I'm also curious if there are other obvious approaches to this problem that I ought to consider.
Update:
I should probably have clarified that my plan is to use UUIDv4, which is supposed to be random. I know that entropy comes into play here with regard to how random the UUIDs actually are, but as far as I have read, Java (which is the selected platform/language) uses SecureRandom, which is supposed to be "cryptographically strong" (link).
And in that case wiki states (link):
In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%.
Using UUIDs in this manner has two major issues:
1. If there are no additional authentication methods, any attacker could simply guess UUIDs until they find one belonging to someone else. Google Photos doesn't need to worry about this as much, because they only use UUIDs to obfuscate publicly-shared photo views; you still need to authenticate to modify the photos. This is especially dangerous because:
2. UUIDs are intended to be unique, not random. There are likely to be predictable patterns in your UUIDs that an attacker would be able to observe and take advantage of. In addition, even without a clear pattern, the number of UUIDs an attacker needs to test to find a valid one swiftly decreases as your userbase grows.
I will always recommend using secure, continuously-checked authentication. However, if you have a fairly small userbase, and you are only using this to obfuscate public data access, then using UUIDs in this manner might be alright. Even then, you should be using actual random strings, and not UUIDs.
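For illustration, a minimal version of that (PHP for consistency with the rest of this page; Java's SecureRandom would give the equivalent):

```php
<?php
// 128 bits from the OS CSPRNG, hex-encoded: unguessable by construction,
// with no version bits or timestamp structure like a UUID has.
function newOpaqueId(): string
{
    return bin2hex(random_bytes(16)); // 32 hex characters
}
```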
Another option is of course to have every servlet verify that the user is only performing an action on its own data, but in a big application with 200+ servlets and 50-100 tables this could be a very cumbersome task where mistakes could easily happen. In my mind this weakens the security far more, but I'm not sure if that is true.
With a large legacy application adding in security later is always a complex task. And you're right - the more complicated an application, the harder it is to verify security. Complexity is the main enemy of security.
However, this is the best way to go, rather than trying to obscure insecure direct object reference problems.
If you are using these UUIDs in the query string, then they may be logged in various locations, including the user's browser, the web server, and any forward or reverse proxy servers between the two endpoints. URLs may also be displayed on-screen, bookmarked or emailed around by users, and they may be disclosed to third parties via the Referer header when any off-site links are followed. Placing direct object references into the URL increases the risk that they will be captured by an attacker.
If an existing user of the application then has their access to certain bits of data revoked, they will still be able to access that data by using a previously bookmarked URL (or their browser history). Even where the ID is passed outside of the URL mechanism, a local attacker who knows (or has figured out) how your system works could have purposely saved IDs just for the occasion.
As said by other answers, GUIDs/UUIDs are not meant to be unguessable, they are just meant to be unique. Granted, the Java implementation does actually generate cryptographically secure random numbers. However, what if this implementation changes in future releases, or what if your system is ported elsewhere where this functionality is different? If you're going to do this, you might as well generate your own cryptographically secure random numbers with your own implementation and use those as identifiers. If you have 128 bits of entropy in your identifiers, it is completely infeasible for anyone to ever guess them (even with all of the world's computing power).
However, for the above reasons I recommend you implement access checks instead.
You are trying to bypass authorisation controls by hoping that the key is unguessable. This is a security no-no. Depending on whom you ask, they may refer to it as an insecure direct object reference or a violation of the complete mediation principle.
As noted by F. Stephen Q, your assumption that UUIDs are unique does not imply that they are unpredictable. The threat here is that if a user knows a few UUIDs, say his own, does that allow him to predict other people's UUIDs? This is a very real threat; see: Cautionary note: UUIDs generally do not meet security requirements. Especially note what the UUID RFC says:
Do not assume that UUIDs are hard to guess; they should not be used as security capabilities (identifiers whose mere possession grants access), for example.
You can use UUIDs for keys, but you still need to do authorisation checks. When a user wants to access his data, the database should identify the owner of the data, and the server logic needs to enforce that the current user is the same as the database claims the owner is.
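A sketch of that enforcement (PHP/PDO for consistency with the rest of this page, though the asker's platform is Java, where the shape is identical; the documents table and owner_id column are hypothetical):

```php
<?php
// Load the row's owner and compare it to the session user before updating.
function updateDocument(PDO $db, int $sessionUserId, string $docId, string $newBody): void
{
    $stmt = $db->prepare('SELECT owner_id FROM documents WHERE id = ?');
    $stmt->execute([$docId]);
    $ownerId = $stmt->fetchColumn();

    if ($ownerId === false || (int) $ownerId !== $sessionUserId) {
        http_response_code(403);                 // same response whether the row
        throw new RuntimeException('Forbidden'); // is missing or simply not owned
    }

    $db->prepare('UPDATE documents SET body = ? WHERE id = ?')
       ->execute([$newBody, $docId]);
}
```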

How to minimize sign-in scripting attacks

I'll jump right into the problem.
Let's say my website has sign-in capability and has a lot of users.
I'm getting sign-in scripting attacks where the hacker has a list of "username" and "password" pairs (let's say he got this from elsewhere) and he's running it against my website to see if the information is valid.
I have a monitoring tool/app to catch when the same IP address attempts "x" logins within "y" amount of time, and it successfully detects attacks which fall into those categories. However, there are limitations to this, as I'm unable to come up with all possible cases; I feel like it's impossible to stop these types of attacks.
I'm curious how other companies like Amazon or other giants handle these types of attacks.
Thanks in advance.
They apply various techniques to determine if the login is done by a human or not - this is known as a Turing test.
For example, many sites lock users out if they try too many login attempts within a fixed time period. This means that if it is a bot, its later login attempts will be ignored. A variation on this is to increase the lockout time as the number of login attempts increases.
A CAPTCHA is used on sites like Ticketmaster.
Brute-force attacks such as this are unfortunately a fact of life.

What can be done to protect against brute-force of compromised password databases?

In light of the recent data breach at Blizzard I want to ask about brute-force and salted-hash password storage.
Ars Technica has a good article on why even the salted-hash passwords that Blizzard stores can be cracked in short order.
Because of the salting and hashing used, we know that a brute force attack is the only viable way to crack the "complicated" passwords (dictionary/plain-word passwords are trivial)... However Ars Technica makes a good point that the vast improvement in computational power (both local and in the cloud) makes brute-force cracking more viable.
For a website, Jeff Atwood notes that forcing delays in authentication attempts can realistically thwart brute-force attempts.... But in the case of the Blizzard breach, hackers have physical control of the DB, so no such accessibility limit can be imposed.
Consequently, Jeff also recommends pass-phrases because of the increased entropy facing a brute-force attacker.... But this, too, will eventually fade as computational power becomes greater and more accessible.
So the question is: What brute-force protection schemes can be implemented that aren't vulnerable due to increasing computation power?
Two-stage authentication is often considered, but I've heard that some of these algorithms are also being broken, and a physical authenticator likely has a static algorithm, so once it's cracked all users would be vulnerable.
What about scheduled rolling salts that apply to the entire authentication DB? This would add a lot of overhead, but it seems like it would remain secure even in cases where the physical DB is leaked.
Security is a combination of a few things (there is much more than this list, but rather than turning this post into a book, I'll keep it at these for now):
Encryption - complexity; making it difficult to know what the original content is
Obfuscation - unclear/protected; making it difficult for other scripts/users to know or guess how your security scheme works.
Intrusion Prevention/Response - determining when a security breach (or attempted breach) has occurred, and responding to the incident
Encryption will be things like hashing, salts, SSL, keys, etc. Obfuscation will be things like steganography, using rotating salts, separating the passwords off into another server that no script can access, etc. Intrusion Prevention/Response will be things like rate limiting, delays, shutting down servers once the breach is made known, etc.
Now looking at your question: What brute-force protection schemes can be implemented that aren't vulnerable due to increasing computation power?
My answer: none. Unless someone builds a quantum computer or a mathematician writes an expansion to group theory in a way that would blow all of our minds out of our heads, then any and all "brute-force protection schemes" will be vulnerable to increasing computational power (especially distributed processing, such as cloud servers or bot-nets).
It seems like your fear is the case of Blizzard, where the database had been accessed, and the hashed passwords were seen by the hackers. If someone has the hash, and knows your salts/hashing procedure, then it's only a matter of time before they can get the password. At this point, we are talking only about encryption, because everything else is known and/or moot.
It's a matter of math: the longer and more complicated the password, the larger the search space by orders of magnitude; the problem grows exponentially with each added character. But if you exponentially increase the computational power of the brute-force algorithm, you're back to square one.
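To make the exponential concrete, assuming a 62-symbol mixed-case alphanumeric alphabet:

```php
<?php
// Each added character multiplies the search space by the alphabet size,
// so modest extra length outpaces even large hardware speedups.
$alphabet = 62; // a-z, A-Z, 0-9
foreach ([8, 10, 12, 14] as $length) {
    printf("length %2d: %.2e candidate passwords\n", $length, $alphabet ** $length);
}
// length  8: 2.18e+14
// length 10: 8.39e+17
// length 12: 3.23e+21
// length 14: 1.24e+25
```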
If a hacker gets hold of the hashes that are stored in your database, then immediately lock the database, figure out how they got in, fix that security hole, add a step to your authentication procedure, update the database with the new authentication procedure, and turn everything back on.
In other words, make sure your authentication server/database is secure on every level so that hackers can't get access to it.
If you just want to "buy more time", then add complexity. But keep in mind that this doesn't make your database more secure. It would be better to analyze how to lock the database down to prevent someone from getting the hashes in the first place.

How should I implement a system to block password cracking?

I want to put a measure in place to stop people from trying to hack user accounts on my website. What would be the best process for this without annoying a customer who just needs a few tries to remember their password?
I notice Google shows a captcha image after a couple of failed attempts. I've never tried hard enough, but I'm sure they must block you after quite a few attempts.
What would be the best practice to ensure that someone doesn't try a brute force approach to gain access to an account?
A captcha?
Blocking their IP address (does this work if they're on a shared IP)?
Your best bet is to lock out (10 min, 15 min, etc.) on a per-username basis, with a relatively high number of tries possible (10 or 20 or so) in a set period (e.g. a rolling 30-minute window). By setting the number of tries higher than 3 or 5, the average user will either give up or attempt to reset their password before the lockout hits.
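A sketch of that rolling-window check (how you persist the per-username timestamp list - cache, session, or DB - is left open):

```php
<?php
// $failures is the list of recent failure timestamps for one username.
// Returns true once the account should lock.
function recordFailureAndCheckLock(array &$failures, int $maxTries = 10, int $windowSec = 1800): bool
{
    $now = time();
    $failures[] = $now;
    // Drop anything older than the rolling 30-minute window.
    $failures = array_values(array_filter(
        $failures,
        fn (int $t): bool => $t > $now - $windowSec
    ));
    return count($failures) >= $maxTries;
}
```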
You may consider logging failed attempt data (IP, username, timestamps, ...) to understand the differences between normal user behavior and brute force attempts. This will allow you to refine your policy over time.
Also consider a strong password policy (at minimum 8+ characters with at least one number).
You may also consider some form of multi-factor authentication. You mentioned captcha, but there are many other techniques you may find useful. One site I work with will email a token to a user's email address if they do not recognize the user's IP address, and the user must present that token before they are able to access the site from the new IP address.
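A rough sketch of that email-token step (the ip_tokens table and the 15-minute lifetime are assumptions):

```php
<?php
// Called when a login arrives from an IP we haven't seen for this user:
// store only the token's hash, and mail the token itself to the user.
function issueNewIpToken(PDO $db, int $userId, string $email): void
{
    $token = bin2hex(random_bytes(16));
    $db->prepare(
        'INSERT INTO ip_tokens (user_id, token_hash, expires_at)
         VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 15 MINUTE))'
    )->execute([$userId, hash('sha256', $token)]);

    mail($email, 'Confirm your new sign-in location', "Your one-time code: $token");
}
```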
Schemes that lock a user out after a certain number of attempts and/or extend the time until further login attempts are accepted again are certainly a good idea. As are CAPTCHAs (aside from being annoying :) But, in my opinion, they only make sense if you have strong hardware backing you.
The reason why I believe this should only be tried if you have the resources to do so is that you have to keep in mind that a scheme like that requires you to remember the attempts recently made for potentially every user in your system. Certainly, there are numerous ways of persisting the information, varying in their effectiveness: in-memory cache, database, etc.
But no matter what, such a mechanism will put additional load on your application, and there's the downside: if an attacker gets bored or annoyed by your app, they might as well try to take it down with a denial of service attack. And complicated login schemes that need to persist a lot of information will help a lot in achieving that goal.
If you decide to apply such a feature, I would recommend you stress test it a lot in a lab first to get a feeling for "how much you can take" - this way you'll find out if you need to upgrade your hardware :)
An easier approach that avoids the need for persistence is to apply a password hash like PBKDF2, bcrypt or scrypt. These artificially slow attackers down enough to make it as hard as possible for them. But be aware that these, too, put additional computational strain on your application (although presumably less than the aforementioned measures), so again I would do some stress tests first.
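One way to choose the work factor is to benchmark on your own hardware and take the highest cost you can tolerate per login (a common target is somewhere around 100-250 ms; the cost values below are just a starting range):

```php
<?php
// Time bcrypt at increasing cost factors; each +1 roughly doubles the work
// for you and, crucially, for an offline attacker as well.
foreach ([10, 11, 12, 13] as $cost) {
    $start = microtime(true);
    password_hash('benchmark-only-password', PASSWORD_BCRYPT, ['cost' => $cost]);
    printf("cost %d: %.0f ms\n", $cost, (microtime(true) - $start) * 1000);
}
```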

How to defend against TabNabbing?

I got very concerned reading this genius post by Aza Raskin.
What are the non-browsers solutions to defend against TabNabbing? Are there any?
"Tab Nabbing" is not a new attack, Mr Raskin is ripping off other researchers work. PDP from GnuCitizen discovered this back in 2008.
The biggest threat as I see it is Phishing. To be honest I don't think there is a good solution to stop phishing. This particular issues I think should be fixed by the browser. Eventually Firefox and Chrome will get around to fixing it. To be honest SSLStrip is a bigger threat that all browsers face, which can be used along side this redirection attack. Currently chrome has a fix in the form of STS and Firefox in the form of HTTPs Everywhere. Using noscript will also help mitigate this redirection attack attack.
One thing that will prevent this sort of thing from happening is two-factor authentication using something like an RSA token (unfortunately only one bank in this country provides this method).
The RSA token is a little USB-stick-sized gadget that has a continuously changing serial/sequence number on it, and it is issued to you (each stick has a different sequence of numbers). When you log on to your bank's website, you have to supply your login/password, and also the current number on the RSA token - that number changes every two minutes. That means that if the bad guys collect your login details, they have less than two minutes to log in to your account before the current RSA sequence number changes and the captured login details become impossible to reuse.
This two-factor authentication is not a silver bullet though; I don't see Google rolling this out for your random Gmail account, and neither will Facebook. It should be mandatory for financial institutions and online government departments, as this would cut the scope of this type of attack. It is a commonly used protection mechanism for remote access to company web portals and remote network logins, and it is quite successful for this.
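RSA's SecurID algorithm is proprietary, but TOTP (RFC 6238) is the openly specified scheme built on the same idea; a minimal verification sketch (provisioning of the shared secret is out of scope here):

```php
<?php
// TOTP: HMAC-SHA1 over the current 30-second interval count, dynamically
// truncated to a 6-digit code. $secret is the raw shared key.
function totp(string $secret, int $time, int $step = 30, int $digits = 6): string
{
    $counter = pack('J', intdiv($time, $step));        // 64-bit big-endian
    $hmac = hash_hmac('sha1', $counter, $secret, true);
    $offset = ord($hmac[19]) & 0x0F;                   // dynamic truncation
    $value = unpack('N', substr($hmac, $offset, 4))[1] & 0x7FFFFFFF;
    return str_pad((string) ($value % (10 ** $digits)), $digits, '0', STR_PAD_LEFT);
}

function verifyTotp(string $secret, string $userCode): bool
{
    // Accept the current and the previous interval to tolerate clock drift.
    foreach ([0, -30] as $drift) {
        if (hash_equals(totp($secret, time() + $drift), $userCode)) {
            return true;
        }
    }
    return false;
}
```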
This still hasn't answered your question though - how can you as a website author or owner prevent this? You can't, unless you don't run third party scripts and regularly check your pages to make sure you haven't been compromised and had a script inserted. You shouldn't bother trying to scan the third party scripts themselves, because they can be obfuscated to an incredible degree which you can't possibly scan for. If you do run third party scripts and feel strongly enough about this, then you might want to set up a machine whose only job is to run automated UI tests against your web site - it is an easy enough thing to set up with some basic tests, and you can just leave it testing your live site every 30 or 60 minutes looking for unexpected results.
Like he suggests, use a password manager. There are quite a few other problems that can happen if you type your password every time. For sites where the password manager doesn't work, you're screwed. Client certificates ftw.
I just visited the page which you mention and my free virus checker (AVG) immediately detected a threat (I presume that he has an example on the page) and warned me of a Tabnapping Exploit.
So that's one easy possibility.
