Preventing registered users from sharing passwords - security

Below is a proposal for dealing with a situation of website security. I am wondering whether it seems feasible, from both a technical and usability point of view. I want to make sure that the proposal does not contain any glaring errors.
A. THE WEBSITE
The website in question is a school website where students may purchase various items. These students get an account on the website with a username and password, which they can use to log in. Once they log in, they have access to protected pages with private content that are unavailable to the public at large.
B. THE SECURITY CONCERN
The owners of the website wish to prevent a situation where a single student signs up on the website, obtains a username and password, and then circulates those credentials to a circle of friends who are then able to login to the website and illegally view the private content.
C. THE SOLUTION
The central idea we have come up with to deal with this security problem is to permit each student to log in to the website on two devices only. Once a student logs in on two different devices, they are restricted to those two devices. If they then attempt to log in on a third device, the system would simply not permit them to do so. It is our understanding that other websites offering private content, such as Netflix, use such an approach.
D. IMPLEMENTATION
Two ideas come to mind to implement the above security measure: a. IP address. b. Cookies. We rule out IP addresses, which can change, and choose cookies. Websites such as amazon.com allow their customers to log in once, and whenever they return to the website they are recognized. This is almost certainly achieved through cookies.
Thus each time a student logs in, we will store a cookie on their device, and we will also store this cookie in our database under that student's account. Then each time a student logs in on any device, we will check whether that device contains the cookie we have stored for that student. If it does not, we will know that the student is logging in on a different device. We will thus be able to tell how many devices the student is trying to log in on.
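A minimal sketch of this check in PHP, assuming an existing PDO connection $pdo and a user_devices table with user_id and token columns (all names are illustrative):

<?php
// Run on every successful login: identify the device by a random token cookie.
session_start();
$userId = $_SESSION['user_id'];
$token  = $_COOKIE['device_token'] ?? null;

$known = false;
if ($token !== null) {
    $q = $pdo->prepare('SELECT 1 FROM user_devices WHERE user_id = ? AND token = ?');
    $q->execute([$userId, $token]);
    $known = (bool) $q->fetchColumn();
}

if (!$known) {
    $q = $pdo->prepare('SELECT COUNT(*) FROM user_devices WHERE user_id = ?');
    $q->execute([$userId]);
    if ((int) $q->fetchColumn() >= 2) {
        exit('Device limit reached - please contact the site administrators.');
    }
    // New device within the limit: mint a token, store it in the DB and as a cookie.
    $token = bin2hex(random_bytes(32));
    $pdo->prepare('INSERT INTO user_devices (user_id, token) VALUES (?, ?)')
        ->execute([$userId, $token]);
    setcookie('device_token', $token, time() + 60 * 60 * 24 * 365, '/', '', true, true);
}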
E. DRAWBACKS
We have identified at least three possible drawbacks to this approach:
Clearing Cookies. People can, for a variety of reasons, choose to clear the cookies from their computer.
A bona fide person may occasionally not have access to their usual device, and wish to login on a different computer.
People do purchase new devices from time to time.
These are examples of situations where a bona fide user, for legitimate reasons, wishes to login, but will be unable to, due to the website's security restriction of two devices.
We have some ideas as to how to build logic into the system to deal with such situations, which we may implement in the future, but for the time being, we feel that such situations are sufficiently rare that we do not need to handle them programmatically.
Rather, for now, in the event that a student is locked out, they will get a screen with a message explaining why we have not allowed them into the system, and a button they can click to automatically generate an email to the site administrators.
The email will inform them that a student wishes to log in on a third device. The administrators can then contact the student and, if they are satisfied that the need is bona fide, take steps from the CMS to allow that student in.
The size of the student body is sufficiently manageable that the above approach should be feasible.
F. FAIR WARNING
We will inform the students of these security measures when their account is activated, in order to prevent unpleasant surprises.

This is a typical example of trying to fix a real-world policy problem via IT; in this case, forbidding students from sharing their credentials.
The solution you are proposing limits usability (for the reasons you already listed under Drawbacks) and does not ensure security, because cookies - and especially content - can be copied around. In short, it is feasible, but can easily be circumvented. It's up to you to decide whether it is worth implementing. It would probably be better to enforce the "no sharing credentials" rule directly, e.g. set up random checks and send any student caught violating it to the Dean.

Related

How should signup form error responses be displayed

I have a subscription based application that is built using MERN. I've recently submitted the application to be security tested and one of the responses that I received was that the application should not specifically tell the user why their signup application has been rejected for all cases. For example, if they enter a username or email that has already been registered, I shouldn't return an error message that says "Sorry, this username is already registered", as this would allow the user to build a list of users and emails that have registered with our site.
I understand why we need to prevent this, but I don't understand how I can tell the user why their signup submission failed without telling them that it's because that email has already been registered. It seems pointless to reject their signup form without giving them a specific reason. Does anyone know what the best thing to do here is?
I have a subscription based application that is built using MERN
The fact you're using MongoDB, Express, React and NodeJS is irrelevant to how your end-users and visitors use your product.
I've recently submitted the application to be security tested...
Watch out - most "security consultants" I've come across that offer to do "analysis" just run some commodity scripts and vulnerability scanners against a website and then lightly touch-up the generated reports to make them look hand-written.
one of the responses that I received was that the application should not specifically tell the user why their signup application has been rejected for all cases
Hnnnng - not in "all" cases, no - but unfortunately usability and security tend to be opposite ends of a seesaw that you need to balance carefully.
If you're a non-expert or otherwise inexperienced, I'd ask your security-consultant for an exhaustive list of those cases where they consider harmful information-disclosure is possible and then you should run that list by your UX team (and your legal team) to have them weigh-in.
I'll add (if not stress) that the web-application security scene is full of security-theatre and cargo-cult-programming practices, and bad and outdated advice sticks around in people's heads for too long (e.g. remember how everyone used to insist on changing your password every ~90 days? Not anymore: it turns out that, for human-factors reasons, changing passwords frequently is often less secure).
For example, if they enter a username or email that has already been registered, I shouldn't return an error message that says "Sorry, this username is already registered", as this would allow the user to build a list of users and emails that have registered with our site.
Before considering any specific scenarios, first consider the nature of your web-application and your threat-model and ask yourself if the damage to the end user-experience is justified by the security gains, or even if there's any actual security gained at all.
For example, and using that issue specifically (i.e. not informing users on the registration page if a username and/or e-mail address is already in use), I'd argue that for a public Internet website with a general audience, usernames (i.e. login-names, screen-names, etc.) are not particularly sensitive, and they're usually mutable, so there is no real end-user harm in disclosing whether a username is already taken.
...but the existence or details of an e-mail address in your user-accounts database generally should not be disclosed to unauthenticated visitors. However, I don't think this is really possible to hide from visitors. If someone completes your registration form with completely valid data (excepting an already-in-use e-mail address) and the website rejects the attempt with a vague or completely useless error message, then a novice user is going to be frustrated and give up (and think your website is just broken), while a malicious user with even a basic knowledge of how web-applications work will instantly know it's because the e-mail address is in use, because the form will work when they submit a different e-mail address. Ergo: you haven't actually gained any security benefit, while simultaneously losing business because your registration process is made painfully difficult.
However, consider alternative approaches:
One possible alternative approach for this problem specifically is to make it appear that the registration was successful, but to not let the malicious user in until they verify the e-mail address via an emailed link (which they won't be able to do if it isn't their address). If it is just a novice user who is already registered and didn't realise it, then just send them an email reminding them of that fact. This approach might be preferable on a social-media site where it's important to not disclose anything relating to any other users' PII - but this approach probably wouldn't be appropriate for a line-of-business system.
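A hedged sketch of that approach in PHP, assuming an existing PDO connection $pdo; the mail-sending and account-creation helpers are hypothetical placeholders:

<?php
// Registration handler: respond identically whether or not the e-mail is
// taken, and move the real signal into the e-mail channel.
$email = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);
if (!$email) {
    exit('Please enter a valid e-mail address.'); // generic input errors are fine
}

$q = $pdo->prepare('SELECT id FROM users WHERE email = ?');
$q->execute([$email]);

if ($q->fetchColumn()) {
    sendAlreadyRegisteredMail($email);  // hypothetical helper: "you already have an account"
} else {
    createPendingAccount($pdo, $email); // hypothetical helper
    sendVerificationMail($email);       // hypothetical helper: emailed activation link
}

// Same response either way, so the page itself leaks nothing.
echo 'Thanks! Check your inbox to finish setting up your account.';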
Another alternative approach: don't have your own registration system: just use OIDC and let users authenticate and register via Google, Facebook, Apple, etc. This also saves your users from having to remember another password.
As for the risk of information-harvesting: I appreciate that never revealing information sounds like a good match for bots that brute-force large numbers of form-submissions, but a better solution is to just add a CAPTCHA and rate-limit clients, both by limiting total requests-per-hour and by adding artificial delays to user registration processing (humans generally don't care if a registration form POST takes 500ms or 1500ms, but that 1000ms difference will drastically affect bots).
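A rough sketch of that rate-limit-plus-delay idea in PHP, assuming the APCu extension is available (the thresholds are illustrative):

<?php
// Throttle registration POSTs per client IP, with a rolling one-hour window.
$key = 'reg_attempts_' . $_SERVER['REMOTE_ADDR'];
if (!apcu_exists($key)) {
    apcu_store($key, 0, 3600); // entry expires after an hour
}
if (apcu_inc($key) > 20) {
    http_response_code(429);
    exit('Too many registration attempts; please try again later.');
}

// Humans won't notice an extra 0.5-1.5 seconds; bulk-submitting bots will.
usleep(random_int(500000, 1500000));

// ... proceed with normal registration processing ...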
In all my time building web-applications, I've never encountered any serious attempts at information-harvesting via automated registration form or login form submissions: it's always just marketing spam, and adding a CAPTCHA (even without rate-limiting) was all that was needed to put an end to that.
(The "non-serious" attempts at information-harvesting that I have seen were things like non-technical human-users manually "brute-forcing" themselves by typing through their keyboard: they all give-up after a few dozen attempts).
I understand why we need to prevent this, but I don't understand how I can tell the user why their signup submission failed without telling them that it's because that email has already been registered. It seems pointless to reject their signup form without giving them a specific reason. Does anyone know what the best thing to do here is?
I'm getting the feeling you may have been scammed by your security "consultants" making up overstated risks in their report - rather than your web-application actually being at risk of being exploited.

How can I keep spambots from getting past multiple web security measures?

I am trying to stop spam accounts from being created on my website. I run a website that has approximately 50-80k pageviews per month. It's a social media website. Users sign up and communicate with one another for free. We've been battling with spam as of late even though we have implemented multiple security measures to counteract bots. I'd like to get any further suggestions of tips and tricks that I can try and also some help to see if I can identify if these are people coming from clickfarms, etc. (i.e. real people or computers)
Problem:
Signup form being completed and users posting spam in their profile information. The spammer signs up for the website by completing the signup form, activates their account via an email account, logs into their account, and then completes their profile, putting spam in the description box with a link/URL to the website they are advertising (everything from ##$%S enlargement to random blogs to web-developer websites). If there were a single link they were posting we could detect it and ban them, but there isn't: they are coming from multiple IPs, posting various links, using addresses from multiple email providers to activate the accounts, registering with information from multiple countries, and creating about 10-30 accounts per day.
Before implementing many security measures we were getting around 100-200 fake accounts per day, but now we're down to 10-30, so we've seen some improvement. The issue is still annoying me, though. I'm half thinking now that the security measures are helping quite a bit, but that these are possibly humans still targeting our website, perhaps getting paid per signup or something similar. Even if so, is there any way I could confirm they are humans versus bots?
Security measures:
I won't get into all of the details here (for security reasons), but I'll just indicate what we've done to counteract the spambots:
Created honeypots at various areas of our website which automatically ban based on IP
IP banning - based on known botter/spammer ip addresses
Duration detection from signup-form pageload to form submission -- if it takes less than 5 seconds to complete our signup form, we treat the visitor as a bot and prevent the signup
Hidden checkbox in signup form -- there is a hidden checkbox in the signup form that is invisible to regular users (if a bot checks it, we automatically detect that and prevent the signup); see the sketch after this list
Google reCAPTCHA - We've enabled Google reCAPTCHA on our signup form as well
Email activation link - We send our users an activation email with a link that they have to click on to signup -- they are not able to sign into our website until they've activated their account.
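A minimal sketch of the timing and hidden-checkbox checks from the list above; the field names and the banIp helper are hypothetical, and a hardened version would sign the timestamp server-side, since a bot can forge a hidden field:

<?php
// When rendering the form, embed the generation time and the honeypot,
// e.g. an <input type="hidden" name="form_ts"> holding time(), and an
// <input type="checkbox" name="website_url"> hidden with CSS.

// When handling the submission:
$elapsed = time() - (int) ($_POST['form_ts'] ?? 0);
if ($elapsed < 5) {
    exit('Signup rejected.'); // filled in faster than any human could
}
if (!empty($_POST['website_url'])) {
    banIp($_SERVER['REMOTE_ADDR']); // hypothetical helper
    exit('Signup rejected.');       // only a bot ticks an invisible checkbox
}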
Future actions include:
Detecting what users are posting in their descriptions in their profiles and banning based on that -- string detection for banned words, etc.
Any other suggestions or tips or tricks? In all honesty, if spam bots are getting through all of those security measures above --
do you think they are just that intelligent?
do you think we're being targeted?
Also, any way I can determine if they are bots or real humans? Suggestions?
This is a perennial problem; over the years I've found that as I add more anti-spam measures, the spammers continually get better at circumventing my measures.
I recommend doing an analysis of your spam to figure out how you can detect it. The spam itself contains the key to outsmarting it. Look at the patterns and the structure, and decide what information is most useful and what the easiest way is to filter it out. Your spam detection doesn't need to be perfect, but generally you want to catch as much as possible while getting as few false positives as possible.
Also, to answer one of your questions: you can make your bot-detection perfect, but there will always be humans submitting spam. Humans are tough to outsmart, and you may always need some manual attention to deal with them.
You are already implementing a lot of measures. Here are some more I would suggest:
When a signup form is generated, put in a hidden field with a unique hash generated from the user's browser info, including the user's IP, HTTP user agent, and the date. Then, when the form is submitted, check the hash. This one method eliminated a surprising amount of spam (see the sketch after this list).
If you want to take the previous method even farther, use a custom, time-sensitive hash in the URL of your contact form, and have the link to this form be dynamically generated. This way, if a spammer stores the form's URL, it won't work, but the link will work for every legitimate user of the site.
Make it so newly created, non-trusted users cannot display any public profile information, such as URLs or even free text. With a site as small as yours you could require manual approval of each user, and if your userbase got bigger, you could use an automated reputation system, a lot like Stack Overflow and the other Stack Exchange sites use. This removes the incentive for spam. Also, I found an overwhelming majority of spammers only ever logged onto the site once. If you wait to do the manual approval of users until they have logged on twice, or even have returned to the site on another day (using a persistent cookie), you will filter out the vast majority of spammers and will only have to do a small amount of manual approval work. Then have the system delete the unvalidated/inactive accounts after a certain amount of time.
Check for certain keywords or structures of info. I found an overwhelming majority of my spammers would use certain words or phrases that were never used by my legitimate users. Another one was entering a phone number in their profile, a common pattern in spammers, that no legitimate user ever did. Also look for signs of foul play like XSS attacks. A huge portion of spammers will, at some point, submit something that has a ton of HTML tags in it; you can either use the tags themselves to filter them out, or you can do something like stripping the HTML tags, comparing string lengths, and banning them if the difference is more than a small amount (i.e. allow someone to do something simple like a few <em></em> or <strong></strong> tags). Usually, if there are HTML tags in the entry, there's a ton of them. Also look for material with weird encodings or characters that don't make sense. These are often attempts at sophisticated SQL injection, XSS, or other types of hacking.
Use external IP blacklists. AbuseIPDB is one example; it has an API that you can use to check new IPs before storing them in your temporary database. Their free plan allows checking of up to 1000 IPs a day and you can pay for more than that. It won't catch all the manual spam, but I find they catch a ton of the automated spam.
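A hedged sketch of the hidden-hash idea from the first suggestion above, using an HMAC so the client cannot forge the field (the secret and field names are illustrative):

<?php
const FORM_SECRET = 'replace-with-a-long-random-secret';

// Fingerprint of IP + user agent + date, keyed with a server-side secret.
function formHash(): string {
    $data = $_SERVER['REMOTE_ADDR'] . '|'
          . ($_SERVER['HTTP_USER_AGENT'] ?? '') . '|'
          . date('Y-m-d');
    return hash_hmac('sha256', $data, FORM_SECRET);
}

// When generating the form, emit formHash() in a hidden "fingerprint" field.
// On submission, recompute and compare: a stored/replayed form, or one
// submitted from a different IP or browser, will not match.
if (!hash_equals(formHash(), $_POST['fingerprint'] ?? '')) {
    exit('Signup rejected.');
}

One caveat: forms generated just before midnight will fail the date part of the check once; also accepting yesterday's hash closes that gap.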
Are they targeting you? Yes. They are targeting everyone. Any site with 50k+ pageviews a month is high enough volume to be an attractive target, and the higher your traffic, the more attractive a target you will be. Even some of my tiny sites have been targeted with surprisingly sophisticated attacks these days. Everyone needs to be on guard.
Good luck. I wish this weren't so much of a problem, but it is.

Possible solutions for keeping track of anonymous users

I'm currently developing a web application that has one feature which allows input from anonymous users (no authentication required). I realize that this may prove to have security risks, such as repeated arbitrary inputs (e.g. spam) or users posting malicious content. So to remedy this I'm trying to create a sort of system that keeps track of what each anonymous user has posted.
So far all I can think of is tracking by IP, but it seems as though that may not be viable due to dynamic IPs. Are there any other solutions for anonymous user tracking?
I would recommend requiring them to answer a captcha before posting, or after an unusual number of posts from a single ip address.
"A CAPTCHA is a program that protects websites against bots by generating and grading tests >that humans can pass but current computer programs cannot. For example, humans can read >distorted text as the one shown below, but current computer programs can't"
That way the spammers are actual humans. That will slow the firehose to a level where you can weed out any that do get through.
http://www.captcha.net/
There are two main ways: client-side and server-side. Tracking IP is all I can think of server-side; client-side there are more accurate options, but they are all under the user's control, and the user can re-anonymise themselves (it's their machine, after all): cookies and storage come to mind.
Drop a cookie with an ID on it. Sure, cookies can be deleted, but this at least gives you something.
My suggestion is:
Use cookies for tracking user identity. As you yourself have said, dynamic IP addresses mean you can't reliably use IPs for that.
To detect and curb spam, use the IP + browser user-agent combination.
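A small sketch combining both suggestions, assuming an existing PDO connection $pdo (table and column names are illustrative):

<?php
// Identity: a random ID cookie, set on first visit.
if (empty($_COOKIE['anon_id'])) {
    $anonId = bin2hex(random_bytes(16));
    setcookie('anon_id', $anonId, time() + 60 * 60 * 24 * 365, '/');
} else {
    $anonId = $_COOKIE['anon_id'];
}

// Spam detection: a coarser IP + user-agent fingerprint.
$fingerprint = hash('sha256',
    $_SERVER['REMOTE_ADDR'] . ($_SERVER['HTTP_USER_AGENT'] ?? ''));

// Store both alongside every anonymous post.
$pdo->prepare('INSERT INTO posts (anon_id, fingerprint, body) VALUES (?, ?, ?)')
    ->execute([$anonId, $fingerprint, $_POST['body']]);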

How to Check for Shared Accounts

We have an application that includes a voting component.
To try and minimise voter fraud we allow N votes from the same IP address within a specific period. If this limit is hit, we ignore the IP address for a while.
The issue with this approach is that if a group of people from a school or similar vote, they quickly hit the limit. Their voting can also occur very quickly (e.g. a user in a class asks his classmates to vote, which causes a large number of votes in a short period).
We can look to set a cookie on the user's computer to help determine if they are sharing accounts or check the user agent string and use that too.
Apart from tracking by IP, what other strategies do people use to determine if a user is a legitimate or a shared account when the actual IP is shared?
If your goal is to prevent cheating in on-line voting, the answer is: you can't, unless you use something like SSL client certificates (cumbersome).
Some techniques to make it harder would be using some kind of one-time token sent through e-mail or SMS. Every smart kid knows how to evade cookie-based controls using the private-browsing mode of modern web browsers.

Preventing Brute Force Logins on Websites

As a response to the recent Twitter hijackings and Jeff's post on Dictionary Attacks, what is the best way to secure your website against brute force login attacks?
Jeff's post suggests putting in an increasing delay for each attempted login, and a suggestion in the comments is to add a captcha after the 2nd failed attempt.
Both these seem like good ideas, but how do you know what "attempt number" it is? You can't rely on a session ID (because an attacker could change it each time) or an IP address (better, but vulnerable to botnets). Simply logging it against the username could, using the delay method, lock out a legitimate user (or at least make the login process very slow for them).
Thoughts? Suggestions?
I think a database-persisted short lockout period for the given account (1-5 minutes) is the only way to handle this. Each userid in your database gets a timeOfLastFailedLogin and numberOfFailedAttempts. When numberOfFailedAttempts > X, you lock out for some minutes.
This means you're locking the userid in question for some time, but not permanently. It also means you're updating the database for each login attempt (unless it is locked, of course), which may cause other problems.
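A hedged sketch of that logic in PHP/PDO, assuming failed_attempts and last_failed_at columns on the users table (all names are illustrative):

<?php
const MAX_ATTEMPTS    = 5;
const LOCKOUT_SECONDS = 180; // 3 minutes

$q = $pdo->prepare('SELECT id, password_hash, failed_attempts, last_failed_at
                    FROM users WHERE username = ?');
$q->execute([$_POST['username']]);
$user = $q->fetch(PDO::FETCH_ASSOC);

// Refuse to even check the password while the account is locked.
if ($user
    && $user['failed_attempts'] >= MAX_ATTEMPTS
    && time() - strtotime($user['last_failed_at']) < LOCKOUT_SECONDS) {
    exit('Account temporarily locked; try again in a few minutes.');
}

if ($user && password_verify($_POST['password'], $user['password_hash'])) {
    $pdo->prepare('UPDATE users SET failed_attempts = 0 WHERE id = ?')
        ->execute([$user['id']]);
    // ... log the user in ...
} elseif ($user) {
    $pdo->prepare('UPDATE users SET failed_attempts = failed_attempts + 1,
                       last_failed_at = NOW() WHERE id = ?')
        ->execute([$user['id']]);
    // fall through to the same generic "wrong username or password" message
}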
At least one whole country in Asia is entirely NAT'ed, so IPs cannot be used for anything.
In my eyes there are several possibilities, each having cons and pros:
Forcing secure passwords
Pro: Will prevent dictionary attacks
Con: Will also hurt adoption, since most users are not able to remember complex passwords, even if you explain how to remember them easily, for example by deriving them from sentences: "I bought 1 Apple for 5 Cent in the Mall" leads to "Ib1Af5CitM".
Lockouts after several attempts
Pro: Will slow down automated tests
Con: It's easy to lock out users for third parties
Con: Making them persistent in a database can result in a lot of writes for services as huge as Twitter or comparable sites.
Captchas
Pro: They prevent automated testing
Con: They consume computing time
Con: Will "slow down" the user experience
HUGE CON: They are NOT barrier-free
Simple knowledge checks
Pro: Will prevent automated testing
Con: "Simple" is in the eye of the beholder.
Con: Will "slow down" the user experience
Different login and username
Pro: This is a technique that is rarely seen, but in my eyes a pretty good start for preventing brute-force attacks.
Con: Depends on the user's choice of the two names.
Use whole sentences as passwords
Pro: Increases the size of the searchable space of possibilities.
Pro: Are easier to remember for most users.
Con: Depends on the user's choice.
As you can see, the "good" solutions all depend on the user's choice, which again reveals the user as the weakest link in the chain.
Any other suggestions?
You could do what Google does: after a certain number of tries, a CAPTCHA shows up. Then, after a couple more failures with the CAPTCHA, you lock them out for a couple of minutes.
I tend to agree with most of the other comments:
Lock after X failed password attempts
Count failed attempts against username
Optionally use CAPTCHA (for example, attempts 1-2 are normal, attempts 3-5 are CAPTCHA'd, further attempts blocked for 15 minutes).
Optionally send an e-mail to the account owner to remove the block
What I did want to point out is that you should be very careful about forcing "strong" passwords, as this often means they'll just be written on a post-it on the desk/attached to the monitor. Also, some password policies lead to more predictable passwords. For example:
If the password cannot be any previously used password and must include a number, there's a good chance that it'll be a common password with a sequential number after it. If you have to change your password every 6 months, and a person has been there two years, chances are their password is something like password4.
Say you restrict it even more: must be at least 8 characters, cannot have any sequential letters, must have a letter, a number and a special character (this is a real password policy that many would consider secure). Trying to break into John Quincy Smith's account? Know he was born March 6th? There's a good chance his password is something like jqs0306! (or maybe jqs0306~).
Now, I'm not saying that letting your users have the password password is a good idea either, just don't kid yourself thinking that your forced "secure" passwords are secure.
To elaborate on the best practice:
What krosenvold said: log num_failed_logins and last_failed_time in the user table (except when the user is suspended), and once the number of failed logins reaches a threshold, suspend the user for 30 seconds or a minute. It is the best practice.
That method effectively eliminates single-account brute-force and dictionary attacks. However, it does not prevent an attacker from switching between user names - i.e. keeping the password fixed and trying it with a large number of usernames. If your site has enough users, that kind of attack can be kept going for a long time before it runs out of unsuspended accounts to hit. Hopefully, he will be running this attack from a single IP (not likely, though, as botnets are really becoming the tool of the trade these days) so you can detect that and block the IP, but if he is distributing the attack... well, that's another question (that I just posted here, so please check it out if you haven't).
One additional thing to remember about the original idea is that you should of course still try to let the legitimate user through, even while the account is being attacked and suspended -- that is, IF you can tell the real user and the bot apart.
And you CAN, in at least two ways.
If the user has a persistent login ("remember me") cookie, just let him pass through.
When you display the "I'm sorry, your account is suspended due to a large number of unsuccessful login attempts" message, include a link that says "secure backup login - HUMANS ONLY (bots: no lying)". Joke aside, when they click that link, give them a reCAPTCHA-authenticated login form that bypasses the account's suspend status. That way, IF they are human AND know the correct login+password (and are able to read CAPTCHAs), they will never be bothered by delays, and your site will be impervious to rapid-fire attacks.
Only drawback: some people (such as the vision-impaired) cannot read CAPTCHAs, and they MAY still be affected by annoying bot-produced delays IF they're not using the autologin feature.
What ISN'T a drawback: that the autologin cookie doesn't have a similar security measure built-in. Why isn't this a drawback, you ask? Because as long as you've implemented it wisely, the secure token (the password equivalent) in your login cookie is twice as many bits (heck, make that ten times as many bits!) as your password, so brute-forcing it is effectively a non-issue. But if you're really paranoid, set up a one-second delay on the autologin feature as well, just for good measure.
You should implement a cache in the application, not associated with your backend database, for this purpose.
First and foremost, delaying only legitimate usernames causes you to "give up" your valid customer base en masse, which can in itself be a problem even if usernames are not a closely guarded secret.
Second, depending on your application, you can be a little smarter with application-specific delay countermeasures than you might want to be when storing the data in a DB.
It's also resistant to high-speed attempts that would otherwise leak a DoS condition into your backend DB.
Finally, it is acceptable to make some decisions based on IP. If you see single attempts from one IP, chances are it's an honest mistake; with multiple IPs from who knows how many systems, you may want to take other precautions or notify the end user of shady activity.
It's true that large proxy federations can have massive numbers of IP addresses reserved for their use, but most make a reasonable effort to maintain your source address for a period of time for legacy purposes, as some sites have a habit of tying cookie data to IP.
Do like most banks do: lock out the username/account after X login failures. But I wouldn't be as strict as a bank in that you must call in to unlock your account. I would just make a temporary lockout of 1-5 minutes. Unless, of course, the web application is as data-sensitive as a bank. :)
This is an old post. However, I thought of putting my findings here so that it might help any future developer.
We need to prevent brute-force attacks so that the attacker cannot harvest the usernames and passwords of a website login. Many systems have some open-ended URLs which do not require an authentication token or API key for authorization. Most of these APIs are critical. For example, Signup, Login and Forgot Password APIs are often open (i.e. they do not require validation of an authentication token). We need to ensure that the services are not abused. As stated earlier, I am just putting my findings here from studying how we can prevent a brute-force attack efficiently.
Most of the common prevention techniques are already discussed in this post. I would like to add my concerns regarding account locking and IP address locking. I think locking accounts is a bad idea as a prevention technique. I am putting some points here to support my cause.
Account locking is bad
An attacker can cause a denial of service (DoS) by locking out large numbers of accounts.
Because you cannot lock out an account that does not exist, only valid account names will lock. An attacker could use this fact to harvest usernames from the site, depending on the error responses.
An attacker can cause a diversion by locking out many accounts and flooding the help desk with support calls.
An attacker can continuously lock out the same account, even seconds after an administrator unlocks it, effectively disabling the account.
Account lockout is ineffective against slow attacks that try only a few passwords every hour.
Account lockout is ineffective against attacks that try one password against a large list of usernames.
Account lockout is ineffective if the attacker is using a username/password combo list and guesses correctly on the first couple of attempts.
Powerful accounts such as administrator accounts often bypass lockout policy, but these are the most desirable accounts to attack. Some systems lock out administrator accounts only on network-based logins.
Even once you lock out an account, the attack may continue, consuming valuable human and computer resources.
Consider, for example, an auction site on which several bidders are fighting over the same item. If the auction web site enforced account lockouts, one bidder could simply lock the others' accounts in the last minute of the auction, preventing them from submitting any winning bids. An attacker could use the same technique to block critical financial transactions or e-mail communications.
IP address locking for an account is a bad idea too
Another solution is to lock out an IP address with multiple failed logins. The problem with this solution is that you could inadvertently block large groups of users by blocking a proxy server used by an ISP or large company. Another problem is that many tools utilize proxy lists and send only a few requests from each IP address before moving on to the next. Using widely available open proxy lists at websites such as http://tools.rosinstrument.com/proxy/, an attacker could easily circumvent any IP blocking mechanism. Because most sites do not block after just one failed password, an attacker can use two or three attempts per proxy. An attacker with a list of 1,000 proxies can attempt 2,000 or 3,000 passwords without being blocked. Nevertheless, despite this method's weaknesses, websites that experience high numbers of attacks, adult Web sites in particular, do choose to block proxy IP addresses.
My proposition
Do not lock the account. Instead, we might consider adding an intentional server-side delay to login/signup responses after consecutive wrong attempts.
Tracking user location based on the IP address of login attempts, which is a common technique used by Google and Facebook. Google sends an OTP, while Facebook presents other security challenges, like having the user pick out their friends from photos.
Google reCAPTCHA for web applications, SafetyNet for Android, and a proper mobile application attestation technique for iOS - in login or signup requests.
Device cookie
Building an API call monitoring system to detect unusual calls to a certain API endpoint.
Propositions Explained
Intentional delay in response
The password authentication delay significantly slows down the attacker, since the success of the attack is dependent on time. An easy solution is to inject random pauses when checking a password. Adding even a few seconds' pause will not bother most legitimate users as they log in to their accounts.
Note that although adding a delay could slow a single-threaded attack, it is less effective if the attacker sends multiple simultaneous authentication requests.
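For instance, a minimal version of that pause in PHP:

<?php
// After a failed password check: pause 1-3 seconds before responding.
// Cheap for a single legitimate user, expensive for a serial brute-forcer;
// note the caveat above about attackers sending requests in parallel.
usleep(random_int(1000000, 3000000));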
Security challenges
This technique can be described as adaptive security challenges based on the actions performed by the user in using the system earlier. In case of a new user, this technique might throw default security challenges.
When should we throw security challenges? There are several points where we can:
When the user is trying to log in from a location they have not logged in from before.
Wrong attempts on login.
What kind of security challenges might the user face?
If the user has set up security questions, we might consider asking for the answers to those.
For applications like WhatsApp, Viber, etc., we might consider taking some random contact names from the phonebook and asking the user to supply their numbers, or vice versa.
For transactional systems, we might consider asking the user about latest transactions and payments.
API monitoring panel
To build a monitoring panel for API calls.
Look for the conditions that could indicate a brute-force attack or other account abuse in the API monitoring panel.
Many failed logins from the same IP address.
Logins with multiple usernames from the same IP address.
Logins for a single account coming from many different IP addresses.
Excessive usage and bandwidth consumption from a single user.
Failed login attempts from alphabetically sequential usernames or passwords.
Logins with suspicious passwords hackers commonly use, such as ownsyou (ownzyou), washere (wazhere), zealots, hacksyou etc.
For internal system accounts we might consider allowing login only from certain IP addresses. If the account locking needs to be in place, instead of completely locking out an account, place it in a lockdown mode with limited capabilities.
Here are some good reads.
https://en.wikipedia.org/wiki/Brute-force_attack#Reverse_brute-force_attack
https://www.owasp.org/index.php/Blocking_Brute_Force_Attacks
http://www.computerweekly.com/answer/Techniques-for-preventing-a-brute-force-login-attack
I think you should log against the username. This is the only constant (anything else can be spoofed). And yes, it could lock out a legitimate user for a day. But if I must choose between a hacked account and a closed account (for a day), I definitely choose the lock.
By the way, after a third failed attempt (within a certain time) you can lock the account and send a release mail to the owner. The mail contains a link to unlock the account. This is a slight burden on the user, but the cracker is blocked. And in case even the mail account is hacked, you could set a limit on the number of unlocks per day.
A lot of message boards that I log into give me 5 attempts at logging into an account; after those 5 attempts the account is locked for an hour or fifteen minutes. It may not be pretty, but this would certainly slow down a dictionary attack on one account. Nothing stops a dictionary attack against multiple accounts at the same time (i.e. try 5 times, switch to a different account, try another 5 times, then circle back), but it sure does slow down the attack.
The best defense against a dictionary attack is to make sure the passwords are not in a dictionary! Basically, set up some sort of password policy that checks candidate passwords against a dictionary and requires a number or symbol in the password. This is probably the best defense against a dictionary attack.
You could add some form of CAPTCHA test. But beware that most of them make access more difficult for vision- or hearing-impaired people. An interesting form of CAPTCHA is asking a question:
What is the sum of 2 and 2?
And if you record the last login failure, you can skip the CAPTCHA when the failure is old enough. Only do the CAPTCHA test if the last failure was within the last 10 minutes.
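A tiny sketch of that idea, assuming the last failure time is kept in the session (a real system would track it server-side per account):

<?php
session_start();

// Ask the question only if the last failure is recent.
$askCaptcha = isset($_SESSION['last_failed_at'])
    && (time() - $_SESSION['last_failed_at']) < 600; // 10 minutes

// Verify the answer before even checking credentials.
if ($askCaptcha && (int) ($_POST['captcha'] ?? -1) !== 4) { // "sum of 2 and 2"
    exit('Wrong answer to the security question.');
}

// ... and whenever a password check fails, remember when:
// $_SESSION['last_failed_at'] = time();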
For .NET Environment
Dynamic IP Restrictions
The Dynamic IP Restrictions Extension for IIS provides IT Professionals and Hosters a configurable module that helps mitigate or block Denial of Service Attacks or cracking of passwords through Brute-force by temporarily blocking Internet Protocol (IP) addresses of HTTP clients who follow a pattern that could be conducive to one of such attacks. This module can be configured such that the analysis and blocking could be done at the Web Server or the Web Site level.
Reduce the chances of a Denial of Service attack by dynamically blocking requests from malicious IP addresses
Dynamic IP Restrictions for IIS allows you to reduce the probabilities of your Web Server being subject to a Denial of Service attack by inspecting the source IP of the requests and identifying patterns that could signal an attack. When an attack pattern is detected, the module will place the offending IP in a temporary deny list and will avoid responding to the requests for a predetermined amount of time.
Minimize the possibilities of Brute-force-cracking of the passwords of your Web Server
Dynamic IP Restrictions for IIS is able to detect request patterns that indicate attempts to crack the Web Server's passwords. The module will place the offending IP on a list of clients that are denied access for a predetermined amount of time. In situations where the authentication is done against Active Directory Services (ADS), the module is able to maintain the availability of the Web Server by avoiding having to issue authentication challenges to ADS.
Features
Seamless integration into IIS 7.0 Manager.
Dynamic blocking of requests from an IP address based on either of the following criteria:
The number of concurrent requests.
The number of requests over a period of time.
Support for list of IPs that are allowed to bypass Dynamic IP Restriction filtering.
Blocking of requests can be configurable at the Web Site or Web Server level.
Configurable deny actions allow IT Administrators to specify what response is returned to the client. The module supports returning status codes 403 and 404, or closing the connection.
Support for IPv6 addresses.
Support for web servers behind a proxy or firewall that may modify the client IP address.
http://www.iis.net/download/DynamicIPRestrictions
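For reference, the successor to this module ships built into IIS 8+ as the dynamicIpSecurity section; a rough, hedged illustration of its web.config shape (threshold values are illustrative; check the IIS documentation for your version):

<system.webServer>
  <security>
    <dynamicIpSecurity denyAction="Forbidden">
      <denyByConcurrentRequests enabled="true" maxConcurrentRequests="10" />
      <denyByRequestRate enabled="true" maxRequests="30"
                         requestIntervalInMilliseconds="5000" />
    </dynamicIpSecurity>
  </security>
</system.webServer>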
Old post, but let me share what I have as of the end of 2016. Hope it can still help.
It's a simple method, but I think it's powerful for preventing login attacks. At least, I always use it on every website of mine. We don't need a CAPTCHA or any other third-party plugins.
When the user first attempts to log in, we create a session variable:
$_SESSION['loginFail'] = 10; // any number you prefer
If the login succeeds, we destroy it and let the user in.
unset($_SESSION['loginFail']); // put it after create login session
But if the login fails, then along with the usual error message, we reduce the counter by 1:
$_SESSION['loginFail']-- ; // reduce 1 for every error
and once the user has failed 10 times, we redirect them to another website or page:
if (isset($_SESSION['loginFail']) && $_SESSION['loginFail'] < 1) {
    header('Location: https://google.com/'); // or any web page
    exit();
}
This way, the user cannot open or reach our login page anymore, because it redirects to the other website.
The user has to close the browser (destroying the loginFail session we created) and open it again to see our login page again.
Is it helpful?
There are several aspects to consider when preventing brute force:
Password Strength
Force users to create a password that meets specific criteria:
The password should contain at least one uppercase letter, one lowercase letter, one digit and one symbol (special character).
The password should have a minimum length, defined according to your criteria.
The password should not contain the username or the public user ID.
With a minimum password-strength policy in place, brute force will take longer to guess the password; meanwhile, your app can identify such activity and mitigate it.
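A hedged sketch of such a policy check in PHP (the rules mirror the list above; the 8-character minimum is illustrative):

<?php
function passwordMeetsPolicy(string $password, string $username): bool {
    return strlen($password) >= 8                   // minimum length
        && preg_match('/[A-Z]/', $password)         // an uppercase letter
        && preg_match('/[a-z]/', $password)         // a lowercase letter
        && preg_match('/\d/', $password)            // a digit
        && preg_match('/[^A-Za-z0-9]/', $password)  // a symbol
        && stripos($password, $username) === false; // no username inside
}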
reCaptcha
You can use reCAPTCHA to stop bot scripts that have brute-force functionality. It's fairly easy to implement reCAPTCHA in a web application. You can use Google reCAPTCHA; it has several flavors, like Invisible reCAPTCHA and reCAPTCHA v3.
Dynamic IP filtering Policy
You can dynamically identify request patterns and block the IP if the pattern matches attack-vector criteria. One of the most popular techniques for filtering login attempts is throttling; read up on throttling techniques in PHP to learn more. A good dynamic IP filtering policy also protects you from attacks like DoS and DDoS. However, it doesn't help to prevent DRDoS.
CSRF Prevention Mechanism
CSRF stands for cross-site request forgery: other sites submitting forms to your PHP script/controller. Laravel has a pretty well-defined approach to preventing CSRF. However, if you are not using such a framework, you have to design your own token-based (e.g. JWT) CSRF prevention mechanism. If your site is CSRF-protected, it becomes much harder to launch brute force against any form on your website. It's like closing the main gate.
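Outside a framework, a minimal session-based CSRF token in PHP looks something like this (Laravel's middleware does the equivalent for you; a JWT-based variant signs the token instead of storing it in the session):

<?php
session_start();

// When rendering any form: mint (or reuse) a per-session token, and emit it
// in a hidden "csrf_token" input inside the form.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// When handling any POST: reject submissions without a matching token.
if ($_SERVER['REQUEST_METHOD'] === 'POST'
    && !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
    http_response_code(403);
    exit('Invalid CSRF token.');
}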
