Best way to limit (and record) login attempts - security

Obviously some sort of mechanism for limiting login attempts is a security requisite. While I like the concept of an exponentially increasing time between attempts, what I'm not sure about is how to store that information. I'm also interested in alternative solutions, preferably not including CAPTCHAs.
I'm guessing a cookie wouldn't work, since cookies can be blocked or cleared automatically, but would sessions work? Or does it have to be stored in a database? I'm unaware of what methods can be or are being used, so I simply don't know what's practical.

Use two columns in your users table, 'failed_login_attempts' and 'failed_login_time'. The first increments on each failed login and resets on a successful login. The second lets you compare the current time with the time of the last failure.
Your code can use this data in the DB to determine how long to lock users out, the time between allowed logins, etc.
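A minimal sketch of that scheme, assuming MySQL and a PDO connection in $pdo; the table and column names match the answer, but the thresholds are arbitrary:

<?php
// Sketch: lockout logic built on the two columns described above.
const MAX_ATTEMPTS = 5;
const LOCKOUT_SECONDS = 300; // 5 minutes

function isLockedOut(PDO $pdo, string $username): bool {
    $stmt = $pdo->prepare(
        'SELECT failed_login_attempts, failed_login_time
           FROM users WHERE username = ?');
    $stmt->execute([$username]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$row || $row['failed_login_attempts'] < MAX_ATTEMPTS) {
        return false;
    }
    // Still inside the lockout window?
    return (time() - strtotime($row['failed_login_time'])) < LOCKOUT_SECONDS;
}

function recordFailure(PDO $pdo, string $username): void {
    $pdo->prepare('UPDATE users
                      SET failed_login_attempts = failed_login_attempts + 1,
                          failed_login_time = NOW()
                    WHERE username = ?')->execute([$username]);
}

function recordSuccess(PDO $pdo, string $username): void {
    $pdo->prepare('UPDATE users SET failed_login_attempts = 0
                    WHERE username = ?')->execute([$username]);
}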

Assuming Google has done the necessary usability testing (not an unfair assumption) and decided to use CAPTCHAs, I'd suggest going along with them.
Increasing timeouts are frustrating when I'm a genuine user who has forgotten my password (with so many websites and their associated passwords, that happens a lot, especially to me).

Storing attempts in the database is the best solution IMHO, since it gives you an audit trail of attempted security breaches. Depending on your application, this may or may not be a legal requirement.
By recording all bad attempts you can also gather higher-level information, such as whether the requests are coming from one IP address (i.e. someone or something is attempting a brute-force attack), so you can block that IP address. This can be VERY useful information.
Once a threshold is reached, why not force the user to request a reset link sent to their email address (i.e. similar to 'I have forgotten my password'), or go for the CAPTCHA approach?

Answers in this post prioritize database-centered solutions because they provide a structured record that makes auditing and lockout logic convenient.
While the answers here address guessing attacks on individual users, a major concern with this approach is that it leaves the system open to denial-of-service attacks. Any and every request from the world should not trigger database work.
An alternative (or additional) layer of security should be implemented earlier in the request/response cycle to protect the application and database from performing lockout operations that are expensive and unnecessary.
Express-Brute is an excellent example: it uses Redis caching to filter out malicious requests while letting honest ones through.
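Express-Brute itself is Node-specific, but the idea ports anywhere. A rough PHP sketch of the same pattern using the phpredis extension; the key name, window, and threshold are all made-up values:

<?php
// Sketch: reject abusive clients before any database work happens.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$key = 'login_attempts:' . $_SERVER['REMOTE_ADDR'];
$attempts = $redis->incr($key);    // atomic per-IP counter
if ($attempts === 1) {
    $redis->expire($key, 900);     // counting window: 15 minutes
}
if ($attempts > 20) {              // threshold before we ever touch the DB
    http_response_code(429);       // Too Many Requests
    exit('Too many login attempts. Try again later.');
}
// ...only now proceed to the (expensive) database-backed login check...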

You know which user ID is being hit: keep a counter, and when it reaches a threshold value simply stop accepting anything for that user. But that means you store an extra data value for every user.
I like the concept of an exponentially increasing time between attempts, [...]
Instead of exponentially increasing times, you could use a randomized lag between successive attempts.
Maybe if you explain what technology you are using, people here will be able to help with more specific examples.

A lockout policy is all well and good, but there is a balance.
One consideration is the construction of usernames: are they guessable? Can they be enumerated at all?
I was on an external app pen test for a dotcom with an employee portal that served Outlook Web Access, intranet services, and certain apps. It was easy to enumerate users (the exec/management team from the website itself, and through the likes of Google, Facebook, LinkedIn, etc.). Once I had the format of the username logon (first name then surname, entered as a single string), I had the capability to lock hundreds of users out thanks to their three-strikes-and-out policy.

Store the information server-side. This would allow you to also defend against distributed attacks (coming from multiple machines).

You may want to block login for some time, say 10 minutes after 3 failed attempts. Exponentially increasing times sound good to me. And yes, store the information server-side, in the session or the database. The database is better. No cookies, as they are easy for the user to manipulate.
You may also want to track such attempts against the client IP address, as it is quite possible that a valid user gets a blocked message while someone else is trying to guess that user's password with failed attempts.
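For the exponential variant, a small sketch; the base delay, starting threshold, and cap are arbitrary:

<?php
// Sketch: exponentially increasing lockout, capped so a forgetful but
// legitimate user is never locked out for hours.
function lockoutSeconds(int $failedAttempts): int {
    if ($failedAttempts < 3) {
        return 0;                                   // first mistakes are free
    }
    $delay = 10 * (2 ** ($failedAttempts - 3));     // 10s, 20s, 40s, ...
    return min($delay, 600);                        // cap at 10 minutes
}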

Related

Detecting people sharing login / account information for a website

I have a website that contains a secure area, accessible by logging in with account info. Within the secure area is some expensive intellectual property. I have been finding that people are sharing their passwords with other people. Are there any existing technologies/solutions/methods that I can implement to detect fraud patterns?
Thanks in advance for the help.
Check geographical region. If, within some timeframe, there are logins from regions geographically far apart, you know those credentials have been shared.
Friday morning a login from NY, Friday evening a login from China.
Bandwidth consumption: if your site offers lots of content and a user goes over some high limit, it suggests the credentials have been shared.
For example, with a max bandwidth of 5 MB/s, 60*60*24*5 MB (about 432 GB) is your upper limit per user per day.
Keep a counter of live sessions so you can see how many people are logged in at the same time. This is imprecise, because the same person can log in through multiple browsers from the same IP and have a session on each one.
If an account has 100 sessions in a day (roughly 4 per hour), that seems like more than one person can do, unless your site expects this behaviour.
There are several ways to approach this. But it's really going to boil down to the type of content and how often a given user really is grabbing new content. For adult websites, obviously the primary purpose of the logins is to download new content. I'm not sure about your site.
One way, and perhaps the easiest, is to simply limit the number of simultaneous downloads and/or rate limit each download.
If the files are large enough, you can impose a rate limit on how fast the data transfer takes place. Pick something that's a little slow, but not slow enough to make people mad. I would guess taking 30 seconds to download a file isn't too bad.
Then, only allow them to download 1 or 2 documents at a time per login id. People will be a bit less likely to share their password if they know that they may not be able to download something because someone else is.
Another approach would be to capture the IP address when the user signs in. Yes, I know this changes, but it gives you a starting point. If multiple users are active with the same login ID but from different IPs, you can send them an alert stating that their account has been "hacked" ;) and that you are changing the password. Change it, kick everyone out, and send the new password to the email address you have on file.
Bear in mind that you don't want to stop a user from accessing the site at work and then going home and accessing it there. So you have to make sure they are essentially online at the same time, meaning requests from different IPs within a minute or two of each other.
A twist on this would be to detect if multiple session ids are associated with the same login. For example, when they log in, save the current session id to a table. After they log out or a timeout is reached, clear that session id.
Don't let them log in again while another session id is active. Inform them they have to wait xx minutes until the session is cleared OR that another user is currently logged in with their account.
Ask them if they want to reset the session. This allows for situations where someone accidentally closes the browser and goes back to your site. If they pick yes, then stop the currently active session, change the password and send it to the email address on file.
I guarantee this last one will make people stop sharing their passwords. After all, if I can't log in because someone I gave my password to is currently online, that's a pain point I'll want to stop. And if I'm the one who borrowed the password and just locked myself out because the password changed, then I'll either get my own account or go elsewhere: both of which are usually acceptable outcomes.
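A minimal sketch of the session-tracking twist described above, assuming MySQL, a PDO handle, and an active_sessions(user_id, session_id, last_seen) table; all names and the timeout are illustrative:

<?php
// Sketch: allow only one live session per account.
session_start();

function tryClaimSession(PDO $pdo, int $userId): bool {
    $stmt = $pdo->prepare('SELECT session_id, last_seen
                             FROM active_sessions WHERE user_id = ?');
    $stmt->execute([$userId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    $timedOut = $row && (time() - strtotime($row['last_seen'])) > 1800;
    if ($row && $row['session_id'] !== session_id() && !$timedOut) {
        return false;  // another session is live: refuse, or offer a reset
    }
    $pdo->prepare('REPLACE INTO active_sessions (user_id, session_id, last_seen)
                   VALUES (?, ?, NOW())')->execute([$userId, session_id()]);
    return true;
}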
https://softwareengineering.stackexchange.com/a/442073/422609 has some detailed suggestions on this topic.
Signals that may be useful:
IP
Device identifier (via fingerprinting or other means depending on platform)
Location
User behaviour
You can also look at other means, such as using login links or multi-factor auth, which add some friction to sharing.
I would think more about what you intend to do once you detect someone sharing it. Is the outcome to get them to pay per user or per organization?
It is quite a tricky issue:
If your users change location several times a day, their IP will change, but it's still the same person.
If your user has the same location throughout the day but connects several times, it could very well be different users, say, in an internet café.
You will have to use a combination of these: if the user changes IP frequently, check the map location of that IP and see whether it's possible to travel that distance in the time between the two connections. If it's not, it's fraud.
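That travel-time test reduces to a speed check. A sketch using the haversine formula; the coordinates would come from an IP geolocation lookup, which is assumed to happen elsewhere:

<?php
// Sketch: flag logins whose implied travel speed is physically implausible.
function impliedSpeedKmh(float $lat1, float $lon1, int $t1,
                         float $lat2, float $lon2, int $t2): float {
    $dLat = deg2rad($lat2 - $lat1);
    $dLon = deg2rad($lon2 - $lon1);
    $a = sin($dLat / 2) ** 2
       + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * sin($dLon / 2) ** 2;
    $km = 2 * 6371.0 * asin(sqrt($a));      // haversine distance in km
    $hours = max(abs($t2 - $t1), 1) / 3600; // avoid division by zero
    return $km / $hours;
}

// NY to Shanghai is ~11,900 km; covering it in 12 hours implies ~990 km/h,
// faster than a typical airliner cruise, so flag the account as shared.
$shared = impliedSpeedKmh(40.7, -74.0, strtotime('friday 09:00'),
                          31.2, 121.5, strtotime('friday 21:00')) > 950;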

Possible solutions for keeping track of anonymous users

I'm currently developing a web application that has one feature which allows input from anonymous users (no authorization required). I realize this may pose security risks, such as repeated arbitrary inputs (e.g. spam) or users posting malicious content. To remedy this, I'm trying to create a system that keeps track of what each anonymous user has posted.
So far all I can think of is tracking by IP, but that may not be viable due to dynamic IPs. Are there any other solutions for anonymous user tracking?
I would recommend requiring them to answer a CAPTCHA before posting, or after an unusual number of posts from a single IP address.
"A CAPTCHA is a program that protects websites against bots by generating and grading tests that humans can pass but current computer programs cannot. For example, humans can read distorted text, but current computer programs can't."
That way, the spammers are actual humans. That will slow the firehose to a level where you can weed out any spam that does get through.
http://www.captcha.net/
There are two main ways: client-side and server-side. Tracking IP is all I can think of server-side; client-side there are more accurate options, but they are all under the user's control, and the user can re-anonymise themselves (it's their machine, after all): cookies and storage come to mind.
Drop a cookie with an ID on it. Sure, cookies can be deleted, but this at least gives you something.
My suggestion is:
Use cookies to track user identity. As you yourself have said, dynamic IP addresses mean you can't reliably use IPs for tracking identity.
To detect and curb spam, use the IP + browser user-agent combination.
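A rough sketch combining both pieces (a cookie for identity, IP + user agent for spam throttling); the cookie name and hashing are arbitrary choices:

<?php
// Sketch: cookie-based identity plus an IP+User-Agent key for spam checks.
if (!isset($_COOKIE['anon_id'])) {
    $anonId = bin2hex(random_bytes(16));                 // random 128-bit ID
    setcookie('anon_id', $anonId, time() + 86400 * 365, '/');
} else {
    $anonId = $_COOKIE['anon_id'];
}

// Coarser key that survives cookie clearing; use it only for rate limits.
$spamKey = hash('sha256',
    $_SERVER['REMOTE_ADDR'] . '|' . ($_SERVER['HTTP_USER_AGENT'] ?? ''));
// Count posts per $anonId and per $spamKey in your datastore and
// throttle or require a CAPTCHA when either count is excessive.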

How to defend against excessive login requests?

Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after several failed attempts, mainly to defend against automated robots.
Here are my questions:
1. How do I write a program or script that can make excessive requests to our website? I need it to help test our web application.
2. How do I restrict a user who made some unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of the requester's activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (which make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that CSS tricks make invisible to the user.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legitimate user and can ignore the request.
This is not a complete security solution, but it's one trick you can add to the arsenal.
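A sketch of that honeypot trick; the field name "website" and the CSS are arbitrary:

<?php
// Sketch: hidden honeypot field. Humans never see it; naive bots fill it.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!empty($_POST['website'])) {   // a human would have left it blank
        http_response_code(400);
        exit;                          // silently drop the bot's attempt
    }
    // ...proceed with the real login check...
}
?>
<style>.hp { position: absolute; left: -9999px; }</style>
<form method="post">
  <input name="username">
  <input name="password" type="password">
  <input name="website" class="hp" tabindex="-1" autocomplete="off">
  <button>Log in</button>
</form>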
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on your platform and language of choice, this can be simple).
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there is none, it should be simple enough to create a solution.
If session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that matters depends largely on your business needs).
If the attacker is malicious, you may need to use firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
RoR provides means for testing your application, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think sufficient) would work and not be too much data to store. Unless some attacker has access to a great many IP addresses... but in such situations you'd need more serious security measures anyway, I guess.

Preventing Brute Force Logins on Websites

As a response to the recent Twitter hijackings and Jeff's post on Dictionary Attacks, what is the best way to secure your website against brute force login attacks?
Jeff's post suggests putting in an increasing delay for each attempted login, and a suggestion in the comments is to add a captcha after the 2nd failed attempt.
Both these seem like good ideas, but how do you know what "attempt number" it is? You can't rely on a session ID (because an attacker could change it each time) or an IP address (better, but vulnerable to botnets). Simply logging it against the username could, using the delay method, lock out a legitimate user (or at least make the login process very slow for them).
Thoughts? Suggestions?
I think a database-persisted short lockout period for the given account (1-5 minutes) is the only way to handle this. Each user record in your database contains a timeOfLastFailedLogin and numberOfFailedAttempts. When numberOfFailedAttempts > X, you lock out for some minutes.
This means you're locking the user ID in question for some time, but not permanently. It also means you're updating the database for each login attempt (unless it is locked, of course), which may cause other problems.
There is at least one whole country in Asia that is NAT'ed, so IPs cannot be used for anything.
In my eyes there are several possibilities, each with pros and cons:
Forcing secure passwords
Pro: Will prevent dictionary attacks
Con: Will also hurt popularity, since most users are not able to remember complex passwords, even if you explain how to remember them easily. For example, by remembering sentences: "I bought 1 Apple for 5 Cent in the Mall" leads to "Ib1Af5CitM".
Lockouts after several attempts
Pro: Will slow down automated tests
Con: It's easy for third parties to lock out users
Con: Making lockouts persistent in a database can result in a lot of writes for services as huge as Twitter or comparable ones.
Captchas
Pro: They prevent automated testing
Con: They consume computing time
Con: Will "slow down" the user experience
HUGE CON: They are NOT barrier-free
Simple knowledge checks
Pro: Will prevent automated testing
Con: "Simple" is in the eye of the beholder.
Con: Will "slow down" the user experience
Different login and username
Pro: This is one technique that is rarely seen, but in my eyes a pretty good start for preventing brute-force attacks.
Con: Depends on the user's choice of the two names.
Use whole sentences as passwords
Pro: Increases the size of the searchable space of possibilities.
Pro: Easier for most users to remember.
Con: Depends on the user's choice.
As you can see, the "good" solutions all depend on the user's choice, which again reveals the user as the weakest link in the chain.
Any other suggestions?
You could do what Google does: after a certain number of tries they show a CAPTCHA. Then, after a couple of failures with the CAPTCHA, you lock them out for a couple of minutes.
I tend to agree with most of the other comments:
Lock after X failed password attempts
Count failed attempts against username
Optionally use CAPTCHA (for example, attempts 1-2 are normal, attempts 3-5 are CAPTCHA'd, further attempts blocked for 15 minutes).
Optionally send an e-mail to the account owner to remove the block
What I did want to point out is that you should be very careful about forcing "strong" passwords, as this often means they'll just be written on a post-it stuck to the desk or monitor. Also, some password policies lead to more predictable passwords. For example:
If the password cannot be any previously used password and must include a number, there's a good chance it'll be a common password with a sequential number appended. If you have to change your password every 6 months, and a person has been there two years, chances are their password is something like password4.
Say you restrict it even more: must be at least 8 characters, cannot have any sequential letters, must have a letter, a number and a special character (this is a real password policy that many would consider secure). Trying to break into John Quincy Smith's account? Know he was born March 6th? There's a good chance his password is something like jqs0306! (or maybe jqs0306~).
Now, I'm not saying that letting your users have the password password is a good idea either; just don't kid yourself thinking that your forced "secure" passwords are secure.
To elaborate on the best practice:
As krosenvold said: log num_failed_logins and last_failed_time in the user table (except when the user is suspended), and once the number of failed logins reaches a threshold, suspend the user for 30 seconds or a minute. It is the best practice.
That method effectively eliminates single-account brute-force and dictionary attacks. However, it does not prevent an attacker from switching between usernames, i.e. keeping the password fixed and trying it against a large number of usernames. If your site has enough users, that kind of attack can be kept going for a long time before it runs out of unsuspended accounts to hit. Hopefully the attacker will be running this attack from a single IP (not likely, though, as botnets are really becoming the tool of the trade these days) so you can detect that and block the IP, but if the attack is distributed... well, that's another question (which I just posted here, so please check it out if you haven't).
One additional thing to remember about the original idea is that you should of course still try to let the legitimate user through, even while the account is being attacked and suspended -- that is, IF you can tell the real user and the bot apart.
And you CAN, in at least two ways.
If the user has a persistent login ("remember me") cookie, just let him pass through.
When you display the "I'm sorry, your account is suspended due to a large number of unsuccessful login attempts" message, include a link that says "secure backup login - HUMANS ONLY (bots: no lying)". Joke aside, when they click that link, give them a reCAPTCHA-authenticated login form that bypasses the account's suspend status. That way, IF they are human AND know the correct login+password (and are able to read CAPTCHAs), they will never be bothered by delays, and your site will be impervious to rapid-fire attacks.
Only drawback: some people (such as the vision-impaired) cannot read CAPTCHAs, and they MAY still be affected by annoying bot-produced delays IF they're not using the autologin feature.
What ISN'T a drawback: that the autologin cookie doesn't have a similar security measure built in. Why isn't this a drawback, you ask? Because as long as you've implemented it wisely, the secure token (the password equivalent) in your login cookie has twice as many bits (heck, make it ten times as many!) as your password, so brute-forcing it is effectively a non-issue. But if you're really paranoid, set up a one-second delay on the autologin feature as well, just for good measure.
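For scale, a sketch of such a token in PHP; 32 random bytes is 256 bits, far beyond any password, and storing only a hash keeps a leaked database from exposing live tokens (cookie name and lifetime are arbitrary):

<?php
// Sketch: a high-entropy "remember me" token.
$token = bin2hex(random_bytes(32));          // 256 bits of entropy
setcookie('autologin', $token, [
    'expires'  => time() + 86400 * 30,
    'httponly' => true,                      // keep it away from JavaScript
    'secure'   => true,                      // HTTPS only
    'samesite' => 'Lax',
]);
$tokenHash = hash('sha256', $token);         // persist this, never $token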
You should implement a cache in the application, not associated with your backend database, for this purpose.
First and foremost, delaying only legitimate usernames causes you to "give up" your valid customer base en masse, which can itself be a problem even if usernames are not closely guarded secrets.
Second, depending on your application, you can be a little smarter with application-specific delay countermeasures than you might want to be when storing the data in a DB.
It's resistant to high-speed attempts that would otherwise leak a DoS condition into your backend DB.
Finally, it is acceptable to make some decisions based on IP. If you see single attempts from one IP, chances are it's an honest mistake; with multiple IPs from who knows how many systems, you may want to take other precautions or notify the end user of shady activity.
It's true that large proxy federations can have massive numbers of IP addresses reserved for their use, but most do make a reasonable effort to maintain your source address for a period of time for legacy purposes, as some sites have a habit of tying cookie data to IP.
Do what most banks do: lock out the username/account after X login failures. But I wouldn't be as strict as a bank, where you must call in to unlock your account. I would just make a temporary lockout of 1-5 minutes. Unless, of course, the web application is as data-sensitive as a bank. :)
This is an old post. However, I thought I'd put my findings here so they might help any future developer.
We need to prevent brute-force attacks so that an attacker cannot harvest usernames and passwords from a website login. Many systems have some open-ended URLs which do not require an authentication token or API key for authorization. Most of these APIs are critical. For example, Signup, Login and Forgot Password APIs are often open (i.e. they do not require validation of an authentication token). We need to ensure these services are not abused. As stated earlier, I am just putting my findings here from studying how to prevent a brute-force attack efficiently.
Most of the common prevention techniques are already discussed in this post. I would like to add my concerns regarding account locking and IP address locking. I think locking accounts is a bad idea as a prevention technique. I am putting some points here to support my case.
Account locking is bad
An attacker can cause a denial of service (DoS) by locking out large numbers of accounts.
Because you cannot lock out an account that does not exist, only valid account names will lock. An attacker could use this fact to harvest usernames from the site, depending on the error responses.
An attacker can cause a diversion by locking out many accounts and flooding the help desk with support calls.
An attacker can continuously lock out the same account, even seconds after an administrator unlocks it, effectively disabling the account.
Account lockout is ineffective against slow attacks that try only a few passwords every hour.
Account lockout is ineffective against attacks that try one password against a large list of usernames.
Account lockout is ineffective if the attacker is using a username/password combo list and guesses correctly on the first couple of attempts.
Powerful accounts such as administrator accounts often bypass lockout policy, but these are the most desirable accounts to attack. Some systems lock out administrator accounts only on network-based logins.
Even once you lock out an account, the attack may continue, consuming valuable human and computer resources.
Consider, for example, an auction site on which several bidders are fighting over the same item. If the auction web site enforced account lockouts, one bidder could simply lock the others' accounts in the last minute of the auction, preventing them from submitting any winning bids. An attacker could use the same technique to block critical financial transactions or e-mail communications.
IP address locking for an account is a bad idea too
Another solution is to lock out an IP address with multiple failed logins. The problem with this solution is that you could inadvertently block large groups of users by blocking a proxy server used by an ISP or large company. Another problem is that many tools utilize proxy lists and send only a few requests from each IP address before moving on to the next. Using widely available open proxy lists at websites such as http://tools.rosinstrument.com/proxy/, an attacker could easily circumvent any IP blocking mechanism. Because most sites do not block after just one failed password, an attacker can use two or three attempts per proxy. An attacker with a list of 1,000 proxies can attempt 2,000 or 3,000 passwords without being blocked. Nevertheless, despite this method's weaknesses, websites that experience high numbers of attacks, adult Web sites in particular, do choose to block proxy IP addresses.
My proposition
Do not lock the account. Instead, consider adding an intentional server-side delay to login/signup attempts after consecutive wrong attempts.
Track user location based on the IP address of login attempts; this is a common technique used by Google and Facebook. Google sends an OTP, while Facebook presents other security challenges, such as detecting the user's friends from photos.
Use Google reCAPTCHA for web applications, SafetyNet for Android, and a proper mobile-application attestation technique for iOS, on login or signup requests.
Use a device cookie.
Build an API-call monitoring system to detect unusual calls to a given API endpoint.
Propositions Explained
Intentional delay in response
The password authentication delay significantly slows down the attacker, since the success of the attack is dependent on time. An easy solution is to inject random pauses when checking a password. Adding even a few seconds' pause will not bother most legitimate users as they log in to their accounts.
Note that although adding a delay could slow a single-threaded attack, it is less effective if the attacker sends multiple simultaneous authentication requests.
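The random pause is a one-liner in most stacks. A PHP sketch, with the bounds chosen arbitrarily:

<?php
// Sketch: random pause on every failed password check. Cheap for one human,
// costly for a serial guesser; parallel attacks need the other defenses.
if (!$passwordOk) {                          // assumed result of the check
    usleep(random_int(500000, 3000000));     // 0.5 to 3 seconds
    // ...then return the generic "invalid username or password" error...
}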
Security challenges
This technique can be described as adaptive security challenges based on the actions performed by the user in using the system earlier. In case of a new user, this technique might throw default security challenges.
We might consider when to throw security challenges. There are several possible trigger points:
When the user is trying to log in from a location they have not logged in from before.
On wrong login attempts.
What kind of security challenge might the user face?
If the user has set up security questions, we might consider asking for the answers to those.
For applications like WhatsApp, Viber, etc., we might consider taking some random contact names from the phonebook and asking the user to supply their numbers, or vice versa.
For transactional systems, we might consider asking the user about their latest transactions and payments.
API monitoring panel
Build a monitoring panel for API calls.
Look for the conditions that could indicate a brute-force attack or other account abuse in the API monitoring panel.
Many failed logins from the same IP address.
Logins with multiple usernames from the same IP address.
Logins for a single account coming from many different IP addresses.
Excessive usage and bandwidth consumption from a single user.
Failed login attempts from alphabetically sequential usernames or passwords.
Logins with suspicious passwords hackers commonly use, such as ownsyou (ownzyou), washere (wazhere), zealots, hacksyou etc.
For internal system accounts we might consider allowing login only from certain IP addresses. If the account locking needs to be in place, instead of completely locking out an account, place it in a lockdown mode with limited capabilities.
Here are some good reads.
https://en.wikipedia.org/wiki/Brute-force_attack#Reverse_brute-force_attack
https://www.owasp.org/index.php/Blocking_Brute_Force_Attacks
http://www.computerweekly.com/answer/Techniques-for-preventing-a-brute-force-login-attack
I think you should log against the username. This is the only constant (anything else can be spoofed). And yes, it could lock out a legitimate user for a day. But if I must choose between a hacked account and a locked account (for a day), I definitely choose the lock.
By the way, after a third failed attempt (within a certain time) you can lock the account and send a release mail to the owner. The mail contains a link to unlock the account. This is a slight burden on the user, but the cracker is blocked. And if even the mail account is hacked, you could set a limit on the number of unlockings per day.
A lot of online message boards that I log into give me 5 attempts at logging into an account; after those 5 attempts the account is locked for an hour or fifteen minutes. It may not be pretty, but this would certainly slow down a dictionary attack on one account. Nothing stops a dictionary attack against multiple accounts at the same time, though: i.e. try 5 times, switch to a different account, try another 5 times, then circle back. But it sure does slow the attack down.
The best defense against a dictionary attack is to make sure the passwords are not in a dictionary! Basically, set up some sort of password policy that checks passwords against a dictionary and requires a number or symbol in the password. This is probably the best defense against a dictionary attack.
You could add some form of CAPTCHA test. But beware that most of them make access more difficult for visually or hearing-impaired people. An interesting form of CAPTCHA is asking a question:
What is the sum of 2 and 2?
And if you record the last login failure, you can skip the CAPTCHA if it is old enough. Only do the CAPTCHA test if the last failure was within the last 10 minutes.
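A sketch of that question-style check, armed only when the last failure is recent; $lastFailureTime is assumed to come from the user's record:

<?php
// Sketch: trivial-arithmetic CAPTCHA, skipped if the last failure is old.
session_start();

$needChallenge = isset($lastFailureTime)
    && (time() - $lastFailureTime) < 600;    // failure in the last 10 minutes

if ($needChallenge && !isset($_SESSION['challenge'])) {
    $a = random_int(1, 9);
    $b = random_int(1, 9);
    $_SESSION['challenge'] = $a + $b;
    echo "What is the sum of $a and $b?";
}
// On submit, compare (int)($_POST['challenge_answer'] ?? -1) against
// $_SESSION['challenge'] before even checking the password.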
For .NET Environment
Dynamic IP Restrictions
The Dynamic IP Restrictions Extension for IIS provides IT professionals and hosters a configurable module that helps mitigate or block denial-of-service attacks or brute-force password cracking by temporarily blocking the Internet Protocol (IP) addresses of HTTP clients that follow a pattern conducive to such attacks. The module can be configured so that analysis and blocking are done at the web server or the website level.
Reduce the chances of a Denial of Service attack by dynamically blocking requests from malicious IP addresses
Dynamic IP Restrictions for IIS allows you to reduce the probabilities of your Web Server being subject to a Denial of Service attack by inspecting the source IP of the requests and identifying patterns that could signal an attack. When an attack pattern is detected, the module will place the offending IP in a temporary deny list and will avoid responding to the requests for a predetermined amount of time.
Minimize the possibilities of Brute-force-cracking of the passwords of your Web Server
Dynamic IP Restrictions for IIS is able to detect request patterns that indicate attempts to crack the web server's passwords. The module will place the offending IP on a deny list for a predetermined amount of time. In situations where authentication is done against Active Directory Services (ADS), the module maintains the availability of the web server by avoiding the need to issue authentication challenges to ADS.
Features
Seamless integration into IIS 7.0 Manager.
Dynamic blocking of requests from an IP address based on either of the following criteria:
The number of concurrent requests.
The number of requests over a period of time.
Support for list of IPs that are allowed to bypass Dynamic IP Restriction filtering.
Blocking of requests can be configurable at the Web Site or Web Server level.
Configurable deny actions allow IT administrators to specify what response is returned to the client. The module supports returning status codes 403 and 404, or closing the connection.
Support for IPv6 addresses.
Support for web servers behind a proxy or firewall that may modify the client IP address.
http://www.iis.net/download/DynamicIPRestrictions
Old post, but let me share what I had in place at the end of 2016. Hope it still helps.
It's a simple approach, but I think it's a powerful way to deter login attacks. At least I always use it on every site of mine. We don't need a CAPTCHA or any other third-party plugin.
When the user first loads the login page, we create a session value (after calling session_start()):
$_SESSION['loginFail'] = 10; // any number you prefer
If login succeeds, we destroy it and let the user log in.
unset($_SESSION['loginFail']); // put it after creating the login session
But if the user fails, then alongside the usual error message we reduce the counter by 1:
$_SESSION['loginFail']--; // subtract 1 for every error
And if the user fails 10 times, we redirect them to another website or web page.
if (isset($_SESSION['loginFail']) && $_SESSION['loginFail'] < 1) {
    header('Location: https://google.com/'); // or any web page
    exit();
}
This way, the user cannot open or reach our login page any more, because they are redirected to another website.
The user has to close the browser (discarding the session cookie that carries loginFail) and open it again to see our login page again.
Is it helpful?
There are several aspects to consider to prevent brute force. Consider the following:
Password Strength
Force users to create a password that meets specific criteria:
The password should contain at least one uppercase letter, one lowercase letter, one digit and one symbol (special character).
The password should have a minimum length, defined according to your criteria.
The password should not contain the username or the public user ID.
With a minimum password-strength policy in place, brute force will take longer to guess the password; meanwhile, your app can identify such an attack and mitigate it.
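A sketch of those checks; the length minimum and the dictionary word list are assumed inputs:

<?php
// Sketch: enforce the password policy listed above.
function passwordIsAcceptable(string $pw, string $username,
                              array $dictionary): bool {
    if (strlen($pw) < 10)                   return false; // minimum length
    if (!preg_match('/[A-Z]/', $pw))        return false; // uppercase
    if (!preg_match('/[a-z]/', $pw))        return false; // lowercase
    if (!preg_match('/\d/', $pw))           return false; // digit
    if (!preg_match('/[^A-Za-z0-9]/', $pw)) return false; // symbol
    if (stripos($pw, $username) !== false)  return false; // contains username
    return !in_array(strtolower($pw), $dictionary, true); // dictionary word
}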
reCAPTCHA
You can use reCAPTCHA to stop bot scripts that have brute-force functionality. It's fairly easy to implement reCAPTCHA in a web application. You can use Google reCAPTCHA; it comes in several flavors, such as Invisible reCAPTCHA and reCAPTCHA v3.
Dynamic IP Filtering Policy
You can dynamically identify request patterns and block an IP if the pattern matches attack-vector criteria. One of the most popular techniques for filtering login attempts is throttling. Read up on the throttling technique in PHP to learn more. A good dynamic IP filtering policy also protects you from attacks like DoS and DDoS. However, it doesn't help prevent DRDoS.
CSRF Prevention Mechanism
CSRF stands for cross-site request forgery: other sites submitting forms to your PHP script/controller. Laravel has a pretty well-defined approach to preventing CSRF. However, if you are not using such a framework, you have to design your own (e.g. JWT-based) CSRF prevention mechanism. If your site is CSRF-protected, there is no way to launch a brute-force attack through any form on your website. It's like locking the main gate.
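A minimal session-based sketch of the idea (a JWT-based variant would follow the same shape; names are illustrative):

<?php
// Sketch: session-based CSRF token for the login form.
session_start();

if (empty($_SESSION['csrf'])) {
    $_SESSION['csrf'] = bin2hex(random_bytes(32));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!hash_equals($_SESSION['csrf'], $_POST['csrf'] ?? '')) {
        http_response_code(403);
        exit('Bad CSRF token.');   // the form did not come from our own page
    }
    // ...proceed with the login attempt...
}
?>
<form method="post">
  <input type="hidden" name="csrf"
         value="<?= htmlspecialchars($_SESSION['csrf']) ?>">
  <!-- username/password fields... -->
</form>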

Best practices against password-list attacks on web applications

I'd like to prevent bots from cracking weakly password-protected accounts (e.g. this happened to eBay and other big sites).
So I'll set a (mem)cached value with the IP, the number of tries, and the timestamp of the last try (for memcache fall-out).
But what about bots trying to open any account with just one password? For example, the bot tries all 500,000 user accounts with the password "password123". Maybe 10 will open.
So my attempt was to just cache the IP with its tries and set max-tries to ~50. Then I would delete the entry after a successful login. So a good bot would just log in with a valid account every 49 tries to reset the lock.
Is there any way to do this right?
What do big platforms do about this?
What can I do to prevent idiots from blocking all users behind a proxy by retrying 50 times?
If there is no best practice, does this mean any platform is brute-forceable? At least with a hint on when counters are reset?
I think you can mix your solution with captchas:
Count the number of tries per IP
In case there are too many tries from a given IP address within a given time, add a captcha to your login form.
Some sites give you maybe two or three tries before they start making you enter a captcha along with your username/password. The captcha goes away once you successfully log in.
There was a relatively good article on Coding Horror a few days ago.
While the code is focused on Django, there is some really good discussion of best-practice methods on Simon Willison's blog. He uses memcached to track IPs and login failures.
You could use a password strength checker when a user sets their password to make sure they're not using an easily brute-forced password.
EDIT: Just to be clear, this shouldn't be seen as a complete solution to the problem you're trying to solve, but it should be considered in conjunction with some of the other answers.
You're never going to be able to prevent a group of bots from trying this from lots of different IP addresses.
From the same IP address: I would say if you see an example of "suspicious" behavior (invalid username, or several valid accounts with incorrect login attempts), just block the login for a few seconds. If it's a legitimate user, they won't mind waiting a few seconds. If it's a bot this will slow them down to the point of being impractical. If you continue to see the behavior from the IP address, just block them -- but leave an out-of-band door for legitimate users (call phone #x, or email this address).
PLEASE NOTE: IP addresses can be shared among THOUSANDS or even MILLIONS of users!!! For example, most/all AOL users appear as a very small set of IP addresses due to AOL's network architecture. Most ISPs map their large user bases to a small set of public IP addresses.
You cannot assume that an IP address belongs to only a single user.
You cannot assume that a single user will be using only a single IP address.
Check the following question discussing best practices against distributed brute-force and dictionary attacks:
What is the best Distributed Brute Force countermeasure?
