In some cases, you might want to block attackers from your system by IP address.
However, this is sometimes difficult because of ISP proxies.
From the system's point of view, we see a lot of traffic, connections, brute-force attempts, and wrong passwords coming from the same IP, but that IP could be an HTTP proxy, an IPv6 gateway, or something similar, and the system may not be smart enough to tell whether the activity is normal or abnormal.
What is the suggested way to block that bad access without degrading the experience (e.g. with too many captchas) for users who are innocent?
I don't know if you'd consider this "degrading user experience", but you can implement a MAX_TRIES limit for the login, giving the user only a few attempts; if all of them fail, he is blocked from logging in for a while. This prevents brute-forcing of the login.
And for other connections you can install mod_bw for Apache, then limit the number of connections per IP using this .htaccess directive:
MaxConnection all 3
You should limit the login rate for each user ID.
After X mistakes, you can block a user ID until the user replies to a special e-mail. This way, the user also finds out that someone is trying to log into his account.
You can map the source IP address to a specific country and allow a user to log in only from a predefined list of countries (chosen by the user).
You can temporarily block a group of IP addresses (for example 172.16.254.*) if there are many failed attempts from the same group. Many attackers just change the last octet.
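A minimal sketch of that last idea, counting failures per /24 block; the threshold and the in-memory structure are illustrative assumptions, not part of the answer:

```python
from collections import defaultdict

GROUP_THRESHOLD = 20  # assumed: failures allowed per /24 before blocking

failures_per_group = defaultdict(int)
blocked_groups = set()

def subnet_of(ip):
    """Group an IPv4 address by its first three octets (its /24 block)."""
    return ip.rsplit(".", 1)[0]

def record_failure(ip):
    group = subnet_of(ip)
    failures_per_group[group] += 1
    if failures_per_group[group] >= GROUP_THRESHOLD:
        blocked_groups.add(group)

def is_blocked(ip):
    return subnet_of(ip) in blocked_groups
```

Because the counter is keyed on the prefix rather than the full address, an attacker who only rotates the last octet still trips the same counter.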
Related
A client of mine has a site where you login with just a password (no user name). There are no users, but instead there's a user class system. So the admin class has a certain password, editor class different password etc. When you log in, your user class is detected based on the password.
I know this looks quite insecure, but the client thinks it's simple and he likes it that way (I didn't build the initial site).
Is it a good idea to limit the number of login attempts per IP to, say, 5 per hour? Would this significantly increase security? I believe it's very hard for an attacker to get a new IP again and again.
This isn't a good idea: it's quite simple for an attacker to change his IP, so this will not give you extra security. Meanwhile, behind a corporate network all the machines share the same external address, so in practice you'll probably run into problems with your legitimate users.
The best way to improve security here is to create some kind of whitelist.
There should be no harm in limiting the number of login attempts per hour. It would make things a little more secure, but keep in mind that it's rather easy to use proxy servers these days to get a different IP address.
You could also think about this in reverse: if your client always uses the site from the same IP, you could make an IP whitelist instead of a blacklist.
It would increase security, but not as much as introducing a username parameter. Every attempt an attacker makes targets all users (classes) at once, drastically increasing the chance of a correct guess, and you also have no way of knowing which of your classes is being targeted.
Your current approach also prohibits the use of salted hashes, which would increase security by hashing each password in your system differently. This is because there is no username record available for the appropriate salt to be selected, which in itself weakens security.
What you should do
You should introduce a username field - to keep it simple this could be the name of the class.
Log both the remote IP and username of every attempt and you should rate limit the number of attempts allowed from the same IP address or against the same username.
If more than your set threshold of bad attempts occurs from the same IP or against the same username, you should throttle responses by introducing an artificial delay that hinders the attack. This delay should apply across all threads if simultaneous attempts occur from the same IP or against the same username; those requests should then be processed in serial, which slows down a brute-force attempt that uses multiple threads with simultaneous HTTP connections.
Your suggestion of limiting of 5 per hour would only upset the most casual attacker. As other answers have mentioned, getting a new IP would be a trivial task for a determined attacker and something more than IP would be needed to properly secure your system and reduce the scope of the attack to a single resource (user) at any one time.
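A rough sketch of the rate limiting described above, keyed on either the remote IP or the username, with per-key serialisation and an artificial delay; the threshold and delay values are made up for illustration:

```python
import threading
import time
from collections import defaultdict

BAD_ATTEMPT_THRESHOLD = 3   # assumed threshold of bad attempts
ARTIFICIAL_DELAY = 0.5      # assumed delay in seconds once throttled

failed_attempts = defaultdict(int)
# one lock per key, so simultaneous attempts for the same key run in serial
locks = defaultdict(threading.Lock)

def attempt_login(key, password_ok):
    """key is the remote IP or the targeted username; password_ok stands in
    for the real credential check."""
    with locks[key]:                      # serialise attempts for this key
        if failed_attempts[key] >= BAD_ATTEMPT_THRESHOLD:
            time.sleep(ARTIFICIAL_DELAY)  # throttle once threshold is crossed
        if password_ok:
            failed_attempts[key] = 0
            return True
        failed_attempts[key] += 1
        return False
```

The per-key lock is what defeats a multi-threaded attacker: his simultaneous connections queue up behind one another instead of running in parallel.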
I am trying to determine possible vulnerabilities in a proposed site implementation.
We need to be able to determine whether the user is logging into the site from a local IP address or an external one. I know the IP address can be spoofed, though the spoofer won't be able to get much information back.
I was thinking it could be possible for a person to spoof a local IP and perform a POST action to modify data on the server, though this would be difficult (it requires predicting TCP sequence numbers).
If the site used validation tokens on all POST requests, this might help. In particular, I am using .NET MVC 4's AntiForgeryToken. I am not sure how the token is keyed to the user.
My question is: if the spoofer went to a page normally to get the token, then spoofed his IP and used the token to do a POST, would this succeed?
I know we're getting into the realm of the implausible, but maybe an example will help. Let's say that when a user logs in, the application detects the IP (not using HTTP_X_FORWARDED_FOR) and marks the session as local or remote. Could a malicious user load the login screen, get the token, spoof their IP address (assume they are able to determine the sequence number and post), then post the login with that IP address, marking them as local?
Any insight would be appreciated.
Thanks,
Phillip
Your site's firewall should block any incoming traffic with a source IP inside your own address block, to prevent IP spoofing in the first place.
Let's say you create a password reset system for your webapp. The system requires either a username or an email to send out the reset link to an account's email.
Consider these conflicting requirements:
Cracker A inputs into the system's form potential usernames (or emails) in an attempt to discover matches currently in the system.
Ideally, the system should neither confirm nor deny the presence of existing usernames and emails, giving exactly the same feedback to either case to prevent revealing matches.
User B tries to reset their password, but misspells, or worse, misremembers, their user name, such that it does not match any account on file.
As such, their reset request will never be fulfilled.
Ideally, their mistake would be made plain to them seconds after they request a reset, with a friendly message like "I'm sorry, we have no such username (or email) on file. You could try checking your spelling, or go ahead and create a new account." Otherwise, they may check their email and find nothing, wait, still nothing, reset again, nothing (because there is no match to send to), and perhaps take their business elsewhere, or, if you're lucky, call customer service.
What ways are there to resolve these conflicting goals?
Edit:
After thinking the problem through, I'm considering that one way to solve it may be to use the email address only: if that email doesn't exist in the system, send out a "that account doesn't exist, here is a link to create a new account" message to the address instead of the reset link.
That way, the user would always get informed, and a cracker could only get emails sent to accounts that they already had access to, which wouldn't be useful to them.
Make sense? Problems with that approach?
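The email-only flow from that edit could be sketched like this; the function name and the mail helper are hypothetical, not from the question:

```python
def handle_reset_request(email, known_emails, send_mail):
    """Always responds identically on screen; the difference lives only in
    the email that gets sent, which an attacker cannot see."""
    if email in known_emails:
        send_mail(email, "Password reset",
                  "Click this link to reset your password: <reset link>")
    else:
        send_mail(email, "Password reset",
                  "No account exists for this address. "
                  "You can create one here: <signup link>")
    # identical feedback either way, so there is no account enumeration
    return "If that address is on file, instructions have been sent."
```

The user always gets informed by mail, while the on-screen response never confirms or denies that an account exists.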
The usability for User B probably trumps the security risk for Cracker A, so the key is probably to limit what Cracker A can find out.
One way of handling Cracker A is rate-limiting the responses. User B will submit one request every 10 seconds (say) as they retype their mistyped name, and will do so from the same IP address. Cracker A will be trying to submit as many requests as possible in as short a time as possible, possibly from a botnet of many infected PCs under his command. If you always take (at least) 5 seconds to respond to a request, even when your system is perfectly capable of responding quicker, then Cracker A can only search a limited portion of the namespace in a reasonable time. Actually implementing this might be harder than I'd like to think.
Your system might need to be aware of attack patterns, and if there is a wide-spread attack fishing for responses, it should increase the time to respond. Such techniques require more intelligence in the reset response system, to detect where requests are coming from and how frequently. You might need to spot bad patterns in the IP addresses sending requests. If the same address sends many requests, especially if it does so after getting a match (response sent to given email address), you become rather suspicious of the IP address.
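The "always take at least 5 seconds" idea above could be wrapped around the real reset handler roughly like this; only the 5-second figure comes from the answer, the helper itself is illustrative:

```python
import time

MIN_RESPONSE_SECONDS = 5.0  # from the answer: always take at least this long

def respond_with_floor(handler, *args, min_seconds=MIN_RESPONSE_SECONDS):
    """Run the real reset handler, then pad out the remaining time so every
    response takes (at least) the same minimum duration."""
    start = time.monotonic()
    result = handler(*args)
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        time.sleep(min_seconds - elapsed)
    return result
```

Padding to a fixed floor, rather than sleeping a fixed amount, also hides any timing difference between the match and no-match code paths.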
We are designing a security system to prevent brute force attacks to get into an account.
One option proposed is blacklisting by IP. If an IP address attempts to login too many times, any further attempts by that IP address are blocked for a given time.
Another option is to do a more traditional account lockout, where too many attempts on a given account locks out the account until the password is reset.
The issue with the first approach is customer service - if a legitimate user calls to get back in, they just have to wait it out - their IP is blacklisted for the time period.
The issue with the second is that it opens a DoS attack, given knowledge of a legitimate user name, anyone can put in bogus passwords to lock them out.
What experiences have you had in different approaches to preventing brute force attacks against user accounts?
Lists of tens of thousands of proxy servers can be bought or easily obtained by scanning (e.g. with YAPH). There is software like THC-Hydra which can use lists of proxy servers for brute forcing. That is not to say that IP address blacklists are bad; they do raise the bar for the attacker.
Account-based locking can be used against you. An attacker needs a list of user accounts, and often doesn't care which account gets broken. The first phase of an attack against this system would be to try as many usernames as possible; once you have a list of names, you go back and try weak passwords for each one.
The solution I like is to force the user to solve a captcha after maybe 5 failed logins. You can go with a blended approach of IP addresses and account names: if someone makes 5 failed login attempts from an IP address, force that IP address to solve a captcha; if there are 5 failed attempts against ANY login name, require a captcha for that login name. But there is a potential problem: if someone is trying to brute-force account names, you cannot behave differently when the account doesn't exist. Thus you will also have to ask the user to solve a captcha for non-existent accounts that are "locked" due to brute force.
Leaking account names to an attacker greatly aids in brute force. I recommend looking over your application to make sure you aren't leaking login names.
I second the idea of adding a delay before the next login attempt, with the addition of checking the incoming IP as well as the attempted user account. So regardless of the username used, if attempts come from the same IP more than some number of times, start an exponentially growing login delay. This also ensures that attackers gain no information about whether they are trying legitimate or non-existent usernames.
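A sketch of that exponentially growing per-IP delay; the base delay and growth factor are assumptions chosen for illustration:

```python
from collections import defaultdict

BASE_DELAY = 1.0   # assumed: delay after the first failure, in seconds
FACTOR = 2.0       # assumed: growth factor per additional failure

failures_by_ip = defaultdict(int)

def delay_for(ip):
    """Seconds the next attempt from this IP must wait. The delay grows
    exponentially with the failure count and is keyed only on the IP,
    so it leaks nothing about which usernames exist."""
    n = failures_by_ip[ip]
    return 0.0 if n == 0 else BASE_DELAY * FACTOR ** (n - 1)

def register_failure(ip):
    failures_by_ip[ip] += 1
```

After a handful of failures the delay dwarfs any realistic guessing rate, while a legitimate user who mistypes once or twice barely notices.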
Blacklisting IPs is essentially useless (already discussed here - see TOR/Proxy).
Locking the user account is bad customer relations (it causes more headaches for legitimate users than it solves problems).
Adding captchas is just silly (http://www.theinquirer.net/inquirer/news/1040158/captchas-easily-hackable). Haven't you heard how easily captchas are broken these days? Plus they are even more of a pain for legitimate users.
Delaying the next login makes brute force attempts impractical while still allowing legit users full access to their account when needed.
Blacklisting IP's is impractical these days. You'll alienate users of Tor or other similar proxies.
I'd suggest locking the account.
If DoS attacks are a concern then make it a self-expiring lock, or limit the login rate attempt.
Sample Rules
if failed_attempts > 5 and last_attempt < 30 seconds ago:
    error("You must wait 30 seconds before your next login attempt")
else:
    authenticate(user, pass)
You should couple this with some good-sense password strength requirements.
I think that blacklisting by IP addresses and user names is the best option. It defends against two types of attacks: specific (for one user) and generic password guessing.
The issue with user-based blocking is that someone can automate this kind of attack to cause a DoS, denying all users access to the resource. So on its own, without IP blocking, it is not useful.
Btw, take a look at http://www.ossec.net . It automates "active responses" based on any type of log, with a low timeout period (10 minutes) to avoid those issues.
I'd like to prevent bots from brute-forcing weakly password-protected accounts (e.g. this happened to eBay and other big sites).
So I'll set a (mem)cached value with the IP, the number of tries, and the timestamp of the last try (expiring via memcache fall-out).
But what about bots trying to open any account with just one password? For example, a bot tries all 500,000 user accounts with the password "password123". Maybe 10 will open.
So my approach was to just cache the IP with its try count and set max tries to ~50, then delete the entry after a successful login. But then a bot could simply log in with a valid account every 49 tries to reset the lock.
Is there any way to do this right?
What do big platforms do about this?
What can I do to prevent idiots from blocking all the users behind one proxy by retrying 50 times?
If there is no best practice, does that mean any platform is brute-forceable, at least given a hint about when the counters are reset?
I think you can mix your solution with captchas:
Count the number of tries per IP
In case there are too many tries from a given IP address within a given time, add a captcha to your login form.
Some sites give you maybe two or three tries before they start making you enter a captcha along with your username/password. The captcha goes away once you successfully log in.
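The IP-plus-captcha threshold above might be tracked like this, using an in-memory dict as a stand-in for the memcached counter from the question; the limits are illustrative assumptions:

```python
import time
from collections import defaultdict

MAX_TRIES = 3   # assumed: free tries before a captcha is required
WINDOW = 600    # assumed: counting window in seconds

tries = defaultdict(list)  # ip -> timestamps of recent failed tries

def needs_captcha(ip, now=None):
    now = time.time() if now is None else now
    recent = [t for t in tries[ip] if now - t < WINDOW]
    tries[ip] = recent               # drop expired entries, like a cache TTL
    return len(recent) >= MAX_TRIES

def record_try(ip, now=None):
    tries[ip].append(time.time() if now is None else now)

def successful_login(ip):
    tries[ip].clear()                # the captcha goes away after a login
```

Clearing only on a genuine success, combined with the time window, avoids the "log in with a valid account every 49 tries" loophole doing much good: the bot would still hit the captcha quickly from the same IP.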
There was a relatively good article on Coding Horror a few days ago.
While the code is focused on Django there is some really good discussion on the best practice methods on Simon Willison’s blog. He uses memcached to track IPs and login failures.
You could use a password strength checker when a user sets their password to make sure they're not using an easily brute-forced password.
EDIT: Just to be clear, this shouldn't be seen as a complete solution to the problem you're trying to solve, but it should be considered in conjunction with some of the other answers.
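A very rough strength check of the kind mentioned; the length and character-class thresholds here are arbitrary illustrations, not a standard:

```python
import re

def is_strong(password, min_length=10):
    """Crude strength check: minimum length plus variety of character
    classes. Real deployments would use a proper strength library."""
    if len(password) < min_length:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    # require at least three of the four character classes
    return sum(bool(re.search(c, password)) for c in classes) >= 3
```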
You're never going to be able to prevent a group of bots from trying this from lots of different IP addresses.
From the same IP address: I would say if you see an example of "suspicious" behavior (invalid username, or several valid accounts with incorrect login attempts), just block the login for a few seconds. If it's a legitimate user, they won't mind waiting a few seconds. If it's a bot this will slow them down to the point of being impractical. If you continue to see the behavior from the IP address, just block them -- but leave an out-of-band door for legitimate users (call phone #x, or email this address).
PLEASE NOTE: IP addresses can be shared among thousands or even millions of users! For example, most AOL users appear to come from a very small set of IP addresses due to AOL's network architecture. Most ISPs map their large user bases to a small set of public IP addresses.
You cannot assume that an IP address belongs to only a single user.
You cannot assume that a single user will be using only a single IP address.
Check the following question discussing best practices against distributed brute force and dictionary attacks:
What is the best Distributed Brute Force countermeasure?