Chrome Instant invalid URLs triggering website lockout - security

My website uses obscure, random URLs to provide some security for sensitive documents. E.g. a URL might be http://example.com/<random 20-char string>. The URLs are not linked to by any other pages, have META tags to opt out of search engine crawling, and have short expiration periods. For top-tier security some of the URLs are also protected by a login prompt, but many are simply protected by the obscure URL. We have decided that this is an acceptable level of security.
We have a lockout mechanism implemented where an IP address will be blocked for some period of time following several invalid URL attempts, to discourage brute-force guessing of URLs.
However, Google Chrome has a feature called "Instant" (enabled in Options -> Basic -> Search), that will prefetch URLs as they are typed into the address bar. This is quickly triggering a lockout, since it attempts to fetch a bunch of invalid URLs, and by the time the user has finished, they are not allowed any more attempts.
Is there any way to opt out of this feature, or ignore HTTP requests that come from it?
Or is this lockout mechanism just stupid and annoying for users without providing any significant protection?
(Truthfully, I don't really understand how this is a helpful feature for Chrome. For search results it can be interesting to see what Google suggests as you type, but what are the odds that a subset of your intended URL will produce a meaningful page? When I have this feature turned on, all I get is a bunch of 404 errors until I've finished typing.)

Without commenting on the objective, I ran into a similar problem (unwanted page loads from Chrome Instant), and discovered that Google does provide a way to avoid this problem:
When Google Chrome makes the request to your website server, it will send the following header:
X-Purpose: preview
Detect this, and return an HTTP 403 ("Forbidden") status code.
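In PHP, for example, the check could look roughly like this minimal sketch (the header name is the one documented above; the response wording is just illustrative):

// Chrome Instant prefetches carry "X-Purpose: preview"; PHP exposes the
// header as HTTP_X_PURPOSE. Refuse these so they never hit the lockout logic.
$purpose = isset($_SERVER['HTTP_X_PURPOSE']) ? $_SERVER['HTTP_X_PURPOSE'] : '';
if (strcasecmp($purpose, 'preview') === 0) {
    http_response_code(403); // Forbidden
    exit('Prefetch requests are not served.');
}
// ...normal request handling (and lockout accounting) continues here...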

Or is this lockout mechanism just stupid and annoying for users without providing any significant protection?
You've potentially hit the nail on the head there: Security through obscurity is not security.
Instead of trying to "discourage brute-force guessing", use URLs that are actually hard to guess: the obvious example is using a cryptographically secure RNG to generate the "random 20 character string". If you use base64url (or a similar URL-safe base64) you get 64^20 = (2^6)^20 = 2^120 possibilities, i.e. 120 bits of entropy. Not quite 128 (or 160 or 256) bits, so you can make it longer if you want, but also note that the expected bandwidth cost of a correct guess is going to be enormous, so you don't really have to worry until your bandwidth bill becomes huge.
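As a sketch of what that means in PHP (the function name is illustrative; random_bytes() requires PHP 7, otherwise substitute openssl_random_pseudo_bytes()):

// Generate an unguessable URL token from the OS CSPRNG, base64url-encoded.
function generateUrlToken($bytes = 16) {
    // swap '+' and '/' for the URL-safe '-' and '_', and drop the '=' padding
    return rtrim(strtr(base64_encode(random_bytes($bytes)), '+/', '-_'), '=');
}

echo generateUrlToken(); // 16 bytes = 128 bits of entropy, ~22 URL-safe chars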
There are some additional ways you might want to protect the links:
Use HTTPS to reduce the potential for eavesdropping (they'll still be unencrypted between SMTP servers, if you e-mail the links)
Don't link to anything, or if you do, link through a HTTPS redirect (last I checked, many web browsers will still send a Referer:, leaking the "secure" URL you were looking at previously). An alternative is to have the initial load set an unguessable secure session cookie and redirect to a new URL which is only valid for that session cookie.
Alternatively, you can alter the "lockout" to still work without compromising usability:
Serve only after a delay. Every time you serve a document or HTTP 404, increase the delay for that IP. There's probably an easy algorithm to asymptotically approach a rate limit but be more "forgiving" for the first few requests.
For each IP address, only allow one request at a time. When you receive a new request, return HTTP 503 ("Service Unavailable") on any existing requests from that IP.
Even if the initial delays are exponentially increasing (i.e. 1s, 2s, 4s), the "current" delay isn't going to be much greater than the time taken to type in the whole URL. If it takes you 10 seconds to type in a random URL, then another 16 seconds to wait for it to load isn't too bad.
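A rough PHP sketch of that delay scheme (documentExists() is a hypothetical lookup, and APCu stands in for whatever per-IP store you actually use):

$ip  = $_SERVER['REMOTE_ADDR'];
$key = "miss_count_$ip";

if (!documentExists($_SERVER['REQUEST_URI'])) { // hypothetical URL lookup
    $misses = (int) apcu_fetch($key);
    apcu_store($key, $misses + 1, 3600); // forget misses after an hour
    sleep(min(pow(2, $misses), 60));     // 1s, 2s, 4s, ... capped at 60s
    http_response_code(404);
    exit;
}
apcu_delete($key); // a valid hit resets the delay
// ...serve the document...

Note that sleep() ties up a worker for the duration, which is why the one-request-per-IP rule above matters in practice.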
Keep in mind that someone who wants to get around your IP-based rate limiting can just rent a (fraction of the bandwidth of a) botnet.
Incidentally, I'm (only slightly) surprised by the view from an Unnamed Australian Software Company that low-entropy randomly-generated passwords are not a problem, because there should be a CAPTCHA in your login system. Funny, since some of those passwords are server-to-server.


is it safe to fetch an image on plain http on a bank's homebanking website?

I ask here instead of https://security.stackexchange.com/ because I don't think this question is on a professional level.
I just saw something weird on my bank's website: they are fetching an image from another domain, using HTTP instead of HTTPS. On Firefox it raises a "mixed content" security alert; on Chrome it just shows a warning in the security tab.
This is the site: https://www.bancoprovincia.com.ar/Principal/BipPersonal
The unsafe content (an image) happens to be on the page just before the user logs in to his home banking. I was worried that some attacker could intercept the content and replace it with something that could be a security risk.
Any chance this is a security risk for the bank and its clients?
It's not a direct vulnerability, but still bad practice.
Some risks that come to mind:
An attacker having access to users' connections (man in the middle) could replace the image with a malicious one, exploiting potentially zero-day (as yet unknown) flaws in browser or operating system image processor libraries. This could lead to remote code execution on the client.
Replacing the image could also be used to facilitate phishing. The malicious image could tell the user to call a phone number because of some kind of a problem, etc.
It is an information leak. An attacker may learn which users are browsing to the bank website; and if the image is in a header included on all pages, they may also learn what the user does there. This is inherently the case for every external site whose images are hotlinked, even over HTTPS, but over HTTP it also applies to any MitM attacker.
It is a potential availability problem. If the image on the external site times out (takes too long to download), the page will not load for some time in some browsers, and an attacker could exploit that. This, however, is not specific to plain HTTP; it would affect an externally linked HTTPS image as well.
It's also a very bad practice because, instead of strengthening good security habits in users, like always checking the browser's indications of a secure website, it teaches them that it's OK if there are warnings. It is not.

Web security: What prevents a hacker from spoofing a bank's site and grabbing data before form submission? (with example)

Assume http://chaseonline.chase.com is a real URL with a web server sitting behind it, i.e., this URL resolves to an IP address, or probably several, so that a lot of identical servers can load-balance client requests.
I guess that Chase probably buys up URLs that are "close" in the URL namespace (how would one even define "close" here? Lexicographically? I think that is not trivial, because it depends on an order that one defines on top of URL strings... never mind this comment).
Suppose that one of the URLs (http://mychaseonline.chase.com, http://chaseonline.chase.ua, http://chaseonline.chase.ru, etc.) is "free" (not bought). I buy one of these free URLs, write my phishing/spoofing server that sits behind my URL, and render a copy of the following screen => https://chaseonline.chase.com/
I work to get my URL indexed (hopefully) at least as high as, or higher than, the real one (http://chaseonline.chase.com). Chances are (hopefully) most bank clients/users won't notice my bogus URL, and I start collecting credentials. I then use my server as a client toward the real bank server http://chaseonline.chase.com, and use my collected list of <user id, password> tuples to log in to each account and create mischief.
Is this a cross-site request forgery? How would one prevent this from occurring?
What I'm hearing in your description is a phishing attack, albeit with slightly more complexity. Let's address some of these points:
2) It's really hard to buy up all the URLs, especially when you take into consideration different variations such as Unicode look-alikes, or even just simple kerning hacks. For example, an "r" and "n" next to each other look a lot like an "m" when you glance quickly. Welcome to chаse.rnobile.com! So with that said, I'd guess that most companies just buy the obvious domains.
4) Getting your URL indexed higher than the real one is, I'll posit, impossible. Google et al. are likely sophisticated enough to prevent that type of thing. One approach to getting above Chase in the SERPs would be to buy AdWords for something like "Bank Online With Chase." But there again, I'd assume the search engines have a decent filtering/fraud-prevention mechanism to catch this type of thing.
Mostly you'd be better off keeping your server from being indexed, since being indexed would simply attract attention. Because this type of thing gets shut down, I presume most phishing attacks go for large numbers of small 'fish' (larger ROI) or small numbers of large 'fish' (think targeted phishing of execs, bank employees, etc.).
I think you offer up an interesting idea in point 4: there's nothing to stop a man-in-the-middle attack wherein your site delegates out to the target site for each request. The difficulty with that approach is that you'd spend a ton of resources creating a replica website. When you think of most hacking as a business trying to maximize its ROI, a lot of the "this is what I'd do if I were a hacker" ideas go away.
If I were to do this type of thing, I'd provide a login facade, have the user provide me their credentials, and then redirect to the main site on POST to my server. This way I get your credentials and you think there's just been an error on the form. I'm then free to pull all the information off of your banking site at my leisure.
There's nothing cross-site about this. It's a simple forgery.
It fails for a number of reasons: lack of security (your site isn't HTTPS), malware-protection vendors explicitly checking for this kind of abuse, Google not ranking your forgery above highly popular sites, and finally, banks with a real sense of security using two-factor authentication. The login token you'd get for my bank account is valid for literally a few seconds and can't be used for anything but logging in.

Identifying requests made by Chrome extensions?

I have a web application that has some pretty intuitive URLs, so people have written some Chrome extensions that use these URLs to make requests to our servers. Unfortunately, these extensions cause problems for us, hammering our servers, issuing malformed requests, etc., so we are trying to figure out how to block them, or at least make it difficult enough to craft requests to our servers that these extensions are dissuaded (we provide an API they should use instead).
We've tried adding some custom headers to requests and junk-json-preamble to responses, but the extension authors have updated their code to match.
I'm not familiar with chrome extensions, so what sort of access to the host page do they have? Can they call JavaScript functions on the host page? Is there a special header the browser includes to distinguish between host-page requests and extension requests? Can the host page inspect the list of extensions and deny certain ones?
Some options we've considered are:
Rate-limiting QPS per user, but the problem is that not all queries are equal, and extensions typically kick off several expensive queries that look like user-entered queries.
Restricting the amount of server time a user can consume, but the problem is that users might hit this limit just by navigating around or running expensive queries several times.
Adding static custom headers/response text, but they've updated their code to mimic ours.
Figuring out some sort of token (probably cryptographic in some way) we include in our requests that the extensions can't easily guess. We minify/obfuscate our JS, so we are OK with embedding it in the JS source code (since the variable name it would have would be hard to guess).
I realize this may not be a 100% solvable problem, but we hope to either gain an upper hand in combating it, or make it sufficiently hard to scrape our UI that fewer people do it.
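For what it's worth, a sketch of that last option: an expiring HMAC token embedded in the page and verified per request. The key, the function names, and the 5-minute lifetime are all illustrative:

define('SECRET_KEY', '...server-side secret...');

// Embed the result of issueToken() in the page's JS; require it on each request.
function issueToken($sessionId) {
    $expires = time() + 300; // short-lived: 5 minutes
    $sig = hash_hmac('sha256', $sessionId . '|' . $expires, SECRET_KEY);
    return $expires . '.' . $sig;
}

function verifyToken($token, $sessionId) {
    $parts = explode('.', $token, 2);
    if (count($parts) !== 2 || (int) $parts[0] < time()) {
        return false; // malformed or expired
    }
    $expected = hash_hmac('sha256', $sessionId . '|' . $parts[0], SECRET_KEY);
    return hash_equals($expected, $parts[1]); // constant-time comparison
}

A determined extension author can still extract the token from the page, so this only raises the bar, which matches the stated goal.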
Welp, guess nobody knows. In the end we just sent a custom header and started tracking who wasn't sending it.

Secure Token URL - How secure is it? Proxy authentication as alternative?

I know it as a secure-token URL; maybe there are other names out there, but I think you all know it.
It's a technique mostly applied if you want to restrict content delivery to a certain client that you have handed a specific URL in advance.
You take a secret token, concatenate it with the resource you want to protect, and hash it. When the client requests this URL on one of your servers, the hash is reconstructed from the information gathered from the request and compared. If it's the same, the content is delivered; otherwise the user gets redirected to your website or something else.
You can also put a timestamp in the hash to put a time-to-live on the URL, or include the user's IP address to restrict delivery to his connection.
This technique is used by Amazon (S3 and CloudFront), Level 3 CDN, RapidShare and many others. It's also a basic part of HTTP digest authentication, although there it is taken a step further with link invalidation and other stuff.
Here is a link to the Amazon docs if you want to know more.
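In code, the scheme is roughly the following (shown with an HMAC rather than a bare hash of secret+data, which also avoids length-extension issues; the names are illustrative):

define('URL_SIGNING_SECRET', '...');

function signUrl($path, $ttl = 3600) {
    $expires = time() + $ttl;
    $token   = hash_hmac('sha256', $path . '|' . $expires, URL_SIGNING_SECRET);
    return $path . '?expires=' . $expires . '&token=' . $token;
}

function checkSignedUrl($path, $expires, $token) {
    if ($expires < time()) {
        return false; // link has expired
    }
    $expected = hash_hmac('sha256', $path . '|' . $expires, URL_SIGNING_SECRET);
    return hash_equals($expected, $token); // constant-time comparison
}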
Now my concern with this method is that if one person cracks the token behind one of your links, the attacker has your secret token in plain text and can sign any URL in your name himself.
Or worse, in the case of Amazon, access your services at an administrative scope.
Granted, the string hashed here is usually pretty long, and you can include a lot of stuff, or even force the data to have a minimum length by adding some unnecessary data to the request: maybe some pseudo-variable in the URL that is not used, filled up with random data.
Therefore brute-force attacks to crack the SHA-1/MD5 or whatever hash you use are pretty hard. But the protocol is open, so you only have to fill in the gap where the secret token goes and fill up the rest with the data known from the request. AND today's hardware is awesome and can calculate MD5s at a rate of multiple tens of megabytes per second. This sort of attack can be distributed across a computing cloud, and you are not limited to something like "10 tries per minute by a login server", which is what usually makes hashing approaches quite secure. And now with Amazon EC2 you can even rent the hardware for a short time (beat them with their own weapons, haha!).
So what do you think? Do my concerns have a basis, or am I being paranoid?
However, I am currently designing an object storage cloud for special needs (integrated media transcoding and special delivery methods like streaming and so on).
Now Level 3 has introduced an alternative approach to secure-token URLs. It's currently in beta and only open to clients who specifically request it. They call it "proxy authentication".
What happens is that the content-delivery server makes a HEAD request to a server specified in your (the client's) settings and mimics the user's request. So the same GET path and IP address (as X-Forwarded-For) are passed on. You respond with an HTTP status code that tells the server whether or not to go ahead with the content delivery.
You can also introduce some secure-token process into this, and you can put more restrictions on it, like letting a URL be requested only 10 times or so.
It obviously comes with a lot of overhead, because an additional request and calculations take place, but I think it's reasonable, and I don't see any caveats with it. Do you?
You could basically reformulate your question as: how long does a secret token need to be to be safe?
To answer this, consider the number of possible characters (lowercase alphanumeric plus uppercase is already 62 options per character). Secondly, ensure that the secret token is random, and not in a dictionary or something. Then, for instance, if you took a secret token 10 characters long, it would take 62^10 (= 839,299,365,868,340,224) attempts to brute-force it (worst case; the average case would be half of that, of course). I wouldn't really be scared of that, but if you are, you could always ensure that the secret token is at least 100 characters long, in which case it takes 62^100 attempts to brute-force (which is a number three lines long in my terminal).
In conclusion: just take a token big enough, and it should suffice.
Of course, proxy authentication does offer your clients extra control, since they can much more directly control who can look and who cannot, and this would, for instance, defeat email sniffing as well. But I don't think brute-forcing needs to be a concern, given a long enough token.
It's called a MAC (message authentication code), as far as I understand.
I don't understand what's wrong with hashes. Simple calculations show that a SHA-1 hash, at 160 bits, gives us very good protection. E.g., even if you have a super-duper cloud that does a billion billion attempts per second, you need on the order of thousands of billions of billions of years to brute-force it.
You have many ways to secure a token:
Block the IP after X failed token attempts.
Add a timestamp to your token (hashed or encrypted) to revoke it after X days or X hours.
My favorite: use a fast database system such as Memcached or, better, Redis to store your tokens (see the sketch after this list).
Like Facebook: generate a token with timestamp, IP etc... and encrypt it!
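A sketch of the Redis variant (assuming the phpredis extension; the key layout and TTL are illustrative). Since the token is random and only ever looked up server-side, there is no hash for an attacker to crack offline:

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Issue: store token -> payload with a TTL; expiry is automatic.
$token = bin2hex(random_bytes(16));
$redis->setex('token:' . $token, 3600, json_encode(array('user' => 42)));

// Verify: one lookup; a missing key means invalid or expired.
$payload = $redis->get('token:' . $token);
if ($payload === false) {
    http_response_code(403);
    exit('Invalid or expired token');
}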

Preventing Brute Force Logins on Websites

As a response to the recent Twitter hijackings and Jeff's post on Dictionary Attacks, what is the best way to secure your website against brute force login attacks?
Jeff's post suggests putting in an increasing delay for each attempted login, and a suggestion in the comments is to add a captcha after the 2nd failed attempt.
Both these seem like good ideas, but how do you know what "attempt number" it is? You can't rely on a session ID (because an attacker could change it each time) or an IP address (better, but vulnerable to botnets). Simply logging it against the username could, using the delay method, lock out a legitimate user (or at least make the login process very slow for them).
Thoughts? Suggestions?
I think a database-persisted short lockout period for the given account (1-5 minutes) is the only way to handle this. Each userid in your database gets a timeOfLastFailedLogin and a numberOfFailedAttempts. When numberOfFailedAttempts > X, you lock out for some minutes.
This means you're locking the userid in question for some time, but not permanently. It also means you're updating the database for each login attempt (unless it is locked, of course), which may cause other problems.
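A sketch of that bookkeeping (PDO, a MySQL-style NOW(), and the exact table layout are assumptions; the column names are the ones from the answer):

define('MAX_ATTEMPTS', 5);
define('LOCKOUT_SECONDS', 120); // somewhere in the 1-5 minute range

function isLockedOut(PDO $db, $userId) {
    $stmt = $db->prepare('SELECT numberOfFailedAttempts, timeOfLastFailedLogin
                          FROM users WHERE id = ?');
    $stmt->execute(array($userId));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row
        && $row['numberOfFailedAttempts'] >= MAX_ATTEMPTS
        && time() - strtotime($row['timeOfLastFailedLogin']) < LOCKOUT_SECONDS;
}

function recordFailedLogin(PDO $db, $userId) {
    $db->prepare('UPDATE users SET numberOfFailedAttempts = numberOfFailedAttempts + 1,
                  timeOfLastFailedLogin = NOW() WHERE id = ?')
       ->execute(array($userId));
}

// On a successful login, reset numberOfFailedAttempts to 0.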
At least one whole country in Asia is NAT'ed, so IPs cannot be used for anything.
In my eyes there are several possibilities, each having cons and pros:
Forcing secure passwords
Pro: Will prevent dictionary attacks
Con: Will also hurt adoption, since most users are not able to remember complex passwords, even if you explain how to remember them easily, for example by remembering sentences: "I bought 1 Apple for 5 Cent in the Mall" leads to "Ib1Af5CitM".
Lockouts after several attempts
Pro: Will slow down automated tests
Con: It's easy to lock out users for third parties
Con: Making the lockouts persistent in a database can result in a lot of write operations in services as huge as Twitter or comparable ones.
Captchas
Pro: They prevent automated testing
Con: They are consuming computing time
Con: Will "slow down" the user experience
HUGE CON: They are NOT barrier-free
Simple knowledge checks
Pro: Will prevent automated testing
Con: "Simple" is in the eye of the beholder.
Con: Will "slow down" the user experience
Different login and username
Pro: This is one technique that is rarely seen, but in my eyes a pretty good start for preventing brute-force attacks.
Con: Depends on the user's choice of the two names.
Use whole sentences as passwords
Pro: Increases the size of the searchable space of possibilities.
Pro: Are easier to remember for most users.
Con: Depends on the user's choice.
As you can see, the "good" solutions all depend on the user's choice, which again reveals the user as the weakest link in the chain.
Any other suggestions?
You could do what Google does: after a certain number of tries, they have a CAPTCHA show up. Then, after a couple of failures with the CAPTCHA, they lock you out for a couple of minutes.
I tend to agree with most of the other comments:
Lock after X failed password attempts
Count failed attempts against username
Optionally use CAPTCHA (for example, attempts 1-2 are normal, attempts 3-5 are CAPTCHA'd, further attempts blocked for 15 minutes).
Optionally send an e-mail to the account owner to remove the block
What I did want to point out is that you should be very careful about forcing "strong" passwords, as this often means they'll just be written on a post-it on the desk/attached to the monitor. Also, some password policies lead to more predictable passwords. For example:
If the password cannot be any previous used password and must include a number, there's a good chance that it'll be any common password with a sequential number after it. If you have to change your password every 6 months, and a person has been there two years, chances are their password is something like password4.
Say you restrict it even more: must be at least 8 characters, cannot have any sequential letters, must have a letter, a number and a special character (this is a real password policy that many would consider secure). Trying to break into John Quincy Smith's account? Know he was born March 6th? There's a good chance his password is something like jqs0306! (or maybe jqs0306~).
Now, I'm not saying that letting your users have the password password is a good idea either, just don't kid yourself thinking that your forced "secure" passwords are secure.
To elaborate on the best practice:
What krosenvold said: log num_failed_logins and last_failed_time in the user table (except when the user is suspended), and once the number of failed logins reaches a threshold, suspend the user for 30 seconds or a minute. It is the best practice.
That method effectively eliminates single-account brute-force and dictionary attacks. However, it does not prevent an attacker from switching between user names - ie. keeping the password fixed and trying it with a large number of usernames. If your site has enough users, that kind of attack can be kept going for a long time before it runs out of unsuspended accounts to hit. Hopefully, he will be running this attack from a single IP (not likely though, as botnets are really becoming the tool of the trade these days) so you can detect that and block the IP, but if he is distributing the attack... well, that's another question (that I just posted here, so please check it out if you haven't).
One additional thing to remember about the original idea is that you should of course still try to let the legitimate user through, even while the account is being attacked and suspended -- that is, IF you can tell the real user and the bot apart.
And you CAN, in at least two ways.
If the user has a persistent login ("remember me") cookie, just let him pass through.
When you display the "I'm sorry, your account is suspended due to a large number of unsuccessful login attempts" message, include a link that says "secure backup login - HUMANS ONLY (bots: no lying)". Joke aside, when they click that link, give them a reCAPTCHA-authenticated login form that bypasses the account's suspend status. That way, IF they are human AND know the correct login+password (and are able to read CAPTCHAs), they will never be bothered by delays, and your site will be impervious to rapid-fire attacks.
Only drawback: some people (such as the vision-impaired) cannot read CAPTCHAs, and they MAY still be affected by annoying bot-produced delays IF they're not using the autologin feature.
What ISN'T a drawback: that the autologin cookie doesn't have a similar security measure built-in. Why isn't this a drawback, you ask? Because as long as you've implemented it wisely, the secure token (the password equivalent) in your login cookie is twice as many bits (heck, make that ten times as many bits!) as your password, so brute-forcing it is effectively a non-issue. But if you're really paranoid, set up a one-second delay on the autologin feature as well, just for good measure.
You should implement a cache in the application not associated with your backend database for this purpose.
First and foremost, delaying only legitimate usernames causes you to "give up" your valid customer base en masse, which can in itself be a problem, even if usernames are not a closely guarded secret.
Second, depending on your application, you can be a little smarter with application-specific delay countermeasures than you might want to be when storing the data in a DB.
It's resistant to high-speed attempts that would otherwise leak a DoS condition into your backend DB.
Finally, it is acceptable to make some decisions based on IP. If you see a single attempt from one IP, chances are it's an honest mistake, versus attempts from multiple IPs from who knows how many systems, where you may want to take other precautions or notify the end user of shady activity.
It's true that large proxy federations can have massive numbers of IP addresses reserved for their use, but most make a reasonable effort to keep your source address stable for a period of time for legacy purposes, as some sites have a habit of tying cookie data to the IP.
Do what most banks do: lock out the username/account after X login failures. But I wouldn't be as strict as a bank, where you must call in to unlock your account; I would just make a temporary lockout of 1-5 minutes. Unless, of course, the web application is as data-sensitive as a bank. :)
This is an old post. However, I thought of putting my findings here so that it might help any future developer.
We need to prevent brute-force attacks so that an attacker cannot harvest the user names and passwords of a website login. Many systems have open-ended URLs which do not require an authentication token or API key for authorization. Most of these APIs are critical; for example, signup, login and forgot-password APIs are often open (i.e., they do not require validation of an authentication token). We need to ensure that these services are not abused. As stated earlier, I am just putting my findings here from studying how to prevent a brute-force attack efficiently.
Most of the common prevention techniques are already discussed in this post. I would like to add my concerns regarding account locking and IP address locking. I think locking accounts is a bad idea as a prevention technique. I am putting some points here to support my case.
Account locking is bad
An attacker can cause a denial of service (DoS) by locking out large numbers of accounts.
Because you cannot lock out an account that does not exist, only valid account names will lock. An attacker could use this fact to harvest usernames from the site, depending on the error responses.
An attacker can cause a diversion by locking out many accounts and flooding the help desk with support calls.
An attacker can continuously lock out the same account, even seconds after an administrator unlocks it, effectively disabling the account.
Account lockout is ineffective against slow attacks that try only a few passwords every hour.
Account lockout is ineffective against attacks that try one password against a large list of usernames.
Account lockout is ineffective if the attacker is using a username/password combo list and guesses correctly on the first couple of attempts.
Powerful accounts such as administrator accounts often bypass lockout policy, but these are the most desirable accounts to attack. Some systems lock out administrator accounts only on network-based logins.
Even once you lock out an account, the attack may continue, consuming valuable human and computer resources.
Consider, for example, an auction site on which several bidders are fighting over the same item. If the auction web site enforced account lockouts, one bidder could simply lock the others' accounts in the last minute of the auction, preventing them from submitting any winning bids. An attacker could use the same technique to block critical financial transactions or e-mail communications.
IP address locking for an account is a bad idea too
Another solution is to lock out an IP address with multiple failed logins. The problem with this solution is that you could inadvertently block large groups of users by blocking a proxy server used by an ISP or large company. Another problem is that many tools utilize proxy lists and send only a few requests from each IP address before moving on to the next. Using widely available open proxy lists at websites such as http://tools.rosinstrument.com/proxy/, an attacker could easily circumvent any IP blocking mechanism. Because most sites do not block after just one failed password, an attacker can use two or three attempts per proxy. An attacker with a list of 1,000 proxies can attempt 2,000 or 3,000 passwords without being blocked. Nevertheless, despite this method's weaknesses, websites that experience high numbers of attacks, adult Web sites in particular, do choose to block proxy IP addresses.
My proposition
Do not lock the account. Instead, we might consider adding an intentional server-side delay to login/signup attempts after consecutive wrong attempts.
Track user location based on the IP address of login attempts, a common technique used by Google and Facebook. Google sends an OTP, while Facebook provides other security challenges, like having the user pick out friends from photos.
Use Google reCAPTCHA for web applications, SafetyNet for Android, and a proper mobile application attestation technique for iOS, in login or signup requests.
Use a device cookie.
Build an API call monitoring system to detect unusual calls to a certain API endpoint.
Propositions Explained
Intentional delay in response
The password authentication delay significantly slows down the attacker, since the success of the attack is dependent on time. An easy solution is to inject random pauses when checking a password. Adding even a few seconds' pause will not bother most legitimate users as they log in to their accounts.
Note that although adding a delay could slow a single-threaded attack, it is less effective if the attacker sends multiple simultaneous authentication requests.
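A minimal sketch of the delay itself (passwordIsCorrect() is a hypothetical check; random_int() requires PHP 7):

if (!passwordIsCorrect($username, $password)) { // hypothetical check
    usleep(random_int(1000000, 3000000)); // random 1-3 second pause, in microseconds
    http_response_code(401);
    exit('Invalid username or password');
}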
Security challenges
This technique can be described as adaptive security challenges based on the actions the user performed while using the system earlier. For a new user, this technique might throw default security challenges.
When should we throw security challenges? There are several points where we can:
When the user is trying to log in from a location he has not been near before.
On consecutive wrong login attempts.
What kind of security challenge might the user face?
If the user has set up security questions, we might consider asking him to answer them.
For applications like WhatsApp, Viber, etc., we might consider taking some random contact names from the phonebook and asking for their numbers, or vice versa.
For transactional systems, we might consider asking the user about latest transactions and payments.
API monitoring panel
Build a monitoring panel for API calls.
Look for conditions in the API monitoring panel that could indicate a brute-force attack or other account abuse:
Many failed logins from the same IP address.
Logins with multiple usernames from the same IP address.
Logins for a single account coming from many different IP addresses.
Excessive usage and bandwidth consumption from a single user.
Failed login attempts from alphabetically sequential usernames or passwords.
Logins with suspicious passwords hackers commonly use, such as ownsyou (ownzyou), washere (wazhere), zealots, hacksyou etc.
For internal system accounts, we might consider allowing login only from certain IP addresses. If account locking needs to be in place, then instead of completely locking out an account, place it in a lockdown mode with limited capabilities.
Here are some good reads.
https://en.wikipedia.org/wiki/Brute-force_attack#Reverse_brute-force_attack
https://www.owasp.org/index.php/Blocking_Brute_Force_Attacks
http://www.computerweekly.com/answer/Techniques-for-preventing-a-brute-force-login-attack
I think you should log against the username. This is the only constant (anything else can be spoofed). And yes, it could lock out a legitimate user for a day. But if I must choose between a hacked account and a locked account (for a day), I definitely choose the lock.
By the way, after a third failed attempt (within a certain time) you can lock the account and send a release mail to the owner. The mail contains a link to unlock the account. This is a slight burden on the user, but the cracker is blocked. And if even the mail account is hacked, you could set a limit on the number of unlockings per day.
A lot of online message boards that I log into give me 5 attempts at logging into an account; after those 5 attempts, the account is locked for an hour or fifteen minutes. It may not be pretty, but this would certainly slow down a dictionary attack on one account. Nothing stops a dictionary attack against multiple accounts at the same time (i.e., try 5 times, switch to a different account, try another 5 times, then circle back), but it sure does slow the attack down.
The best defense against a dictionary attack is to make sure the passwords are not in a dictionary! Basically, set up some sort of password policy that checks the password against a dictionary and requires a number or symbol in it. This is probably the best defense against a dictionary attack.
You could add some form of CAPTCHA test, but beware that most of them make access more difficult for visually or hearing impaired people. An interesting form of CAPTCHA is asking a question:
What is the sum of 2 and 2?
And if you record the last login failure, you can skip the CAPTCHA if it is old enough: only do the CAPTCHA test if the last failure was within the last 10 minutes.
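That logic is nearly a one-liner (assuming $lastFailureTime is pulled from the user's record; captchaIsCorrect() is a hypothetical check):

// Only demand the CAPTCHA if the last failure was within the last 10 minutes.
$needsCaptcha = $lastFailureTime !== null && (time() - $lastFailureTime) < 600;

if ($needsCaptcha && !captchaIsCorrect($_POST['captcha_answer'])) { // hypothetical check
    exit('Please answer the question first.');
}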
For .NET Environment
Dynamic IP Restrictions
The Dynamic IP Restrictions Extension for IIS provides IT professionals and hosters a configurable module that helps mitigate or block denial-of-service attacks and brute-force password cracking by temporarily blocking the IP addresses of HTTP clients that follow a request pattern conducive to such attacks. The module can be configured to perform its analysis and blocking at the web server or web site level.
Reduce the chances of a denial-of-service attack by dynamically blocking requests from malicious IP addresses
Dynamic IP Restrictions for IIS allows you to reduce the probability of your web server being subject to a denial-of-service attack by inspecting the source IP of requests and identifying patterns that could signal an attack. When an attack pattern is detected, the module places the offending IP on a temporary deny list and avoids responding to its requests for a predetermined amount of time.
Minimize the possibility of brute-force cracking of your web server's passwords
Dynamic IP Restrictions for IIS is able to detect request patterns that indicate an attempt to crack the web server's passwords. The module will place the offending IP on a deny list for a predetermined amount of time. In situations where authentication is done against Active Directory Services (ADS), the module helps maintain the availability of the web server by avoiding authentication challenges to ADS.
Features
Seamless integration into IIS 7.0 Manager.
Dynamic blocking of requests from an IP address based on either of the following criteria:
The number of concurrent requests.
The number of requests over a period of time.
Support for list of IPs that are allowed to bypass Dynamic IP Restriction filtering.
Blocking of requests can be configurable at the Web Site or Web Server level.
Configurable deny actions allow IT administrators to specify what response is returned to the client. The module supports returning status codes 403 and 404, or closing the connection.
Support for IPv6 addresses.
Support for web servers behind a proxy or firewall that may modify the client IP address.
http://www.iis.net/download/DynamicIPRestrictions
Old post, but let me share what I have at the end of 2016. I hope it can still help.
It's a simple way, but I think it's powerful for preventing login attacks. At least I always use it on every site of mine. We don't need CAPTCHA or any other third-party plugins.
When a user first hits the login page, we create a session counter:
$_SESSION['loginFail'] = 10; // any number you prefer
If login succeeds, we destroy it and let the user log in:
unset($_SESSION['loginFail']); // put this after creating the login session
But if the user fails, then at the same time as we send the usual error message, we reduce the counter by 1:
$_SESSION['loginFail']--; // subtract 1 for every error
And if the user fails 10 times, we redirect them to another website or page:
if (isset($_SESSION['loginFail'])) {
    if ($_SESSION['loginFail'] < 1) { // counter exhausted: all 10 attempts used
        header('Location: https://google.com/'); // or any web page
        exit();
    }
}
This way, the user cannot open our login page anymore, because they get redirected to the other website.
The user has to close the browser (destroying the loginFail session we created) and open it again to see our login page again.
Is it helpful?
There are several aspects to consider to prevent brute force. Consider the following:
Password Strength
Force users to create a password that meets specific criteria:
The password should contain at least one uppercase letter, one lowercase letter, one digit and one symbol (special character).
The password should have a minimum length, defined according to your criteria.
The password should not contain the user name or the public user ID.
By enforcing a minimum password-strength policy, you make brute force take far longer to guess a password; meanwhile, your app can identify such activity and mitigate it.
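A sketch of such a policy check (the rule set is just the criteria listed above; the function name and minimum length are illustrative):

function passwordMeetsPolicy($password, $username, $minLen = 10) {
    return strlen($password) >= $minLen
        && preg_match('/[A-Z]/', $password)         // at least one uppercase letter
        && preg_match('/[a-z]/', $password)         // at least one lowercase letter
        && preg_match('/\d/', $password)            // at least one digit
        && preg_match('/[^A-Za-z0-9]/', $password)  // at least one symbol
        && stripos($password, $username) === false; // must not contain the username
}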
reCaptcha
You can use reCAPTCHA to stop bot scripts with brute-force functionality. It's fairly easy to implement reCAPTCHA in a web application. You can use Google reCAPTCHA; it comes in several flavors, like Invisible reCAPTCHA and reCAPTCHA v3.
Dynamic IP filtering Policy
You can dynamically identify request patterns and block an IP if the pattern matches attack-vector criteria. One of the most popular techniques to filter login attempts is throttling; read up on throttling techniques in PHP to learn more. A good dynamic IP filtering policy also protects you from attacks like DoS and DDoS. However, it doesn't help prevent DRDoS (distributed reflection DoS).
CSRF Prevention Mechanism
CSRF stands for cross-site request forgery, meaning other sites submitting forms to your PHP script/controller. Laravel has a pretty well-defined approach to preventing CSRF. However, if you are not using such a framework, you have to design your own token-based (e.g., JWT) CSRF prevention mechanism. If your site is CSRF-protected, then other sites have no way to launch a brute force through the forms on your website. It's like closing the main gate.
