Mitigating botnet brute-force attacks with JavaScript detection? - security

Hi, this is my first post here. I am developing a website project with public user registration and log-in functions. Right now, I am conceptualizing the logic flow of the whole project and am currently stuck on the authentication and security part.
I have googled but was unable to find an answer. I also searched Stack Overflow, but that effort was futile as well. What I want to ask is quite specific.
From what I read in other Stack Overflow posts, it seems that to mitigate a brute-force attack, IP banning or a time delay after n attempted log-ins is one of the best solutions, not forgetting whitelisting, blacklisting, and CAPTCHA. I already plan to implement the above-mentioned techniques (maybe not CAPTCHA).
My question is: is it possible to detect whether JavaScript is enabled or disabled in order to block illegitimate log-in attempts, based on the assumption that botnets or 'hackers' do not use a JavaScript-enabled browser to do their 'work'?
For example, "if js disabled, stop the 'rendering' of the log-in
form. //brute force attempt detected"
Finally, this assumption rests on the fact that all of my users will need a JavaScript-enabled browser in order to use my website.
This approach would run concurrently with the other mitigation methods.
Please enlighten.
Thanks!

Not reliably or efficiently. You can't really trust anything that comes from the client side.
Anything you ask the client to do in JavaScript to prove that JavaScript is present can be impersonated. Making them impersonate it forces them to be more intelligent and to expend more resources, which might be a win for you.

While u2702 makes a good point, I think it's good to keep in mind that just because a solution doesn't dissuade all attackers doesn't mean it should be abandoned. In fact, I think this is a quite good method to deter the vast majority of bots.
To answer the question at hand, you could have all prospective users calculate the MD5 of a value you've generated using a cryptographically secure random function. Faking this without JavaScript would be more or less impossible, as far as I can see (a sketch of the idea follows below).
Disclaimer: Implementation of this method will ban Richard Stallman and people who use NoScript from your site.
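A minimal sketch of that challenge/response idea, with the server side in Python for illustration; the field names and the client-side snippet are hypothetical, and any JavaScript MD5 implementation would do:

    # Server side: issue a nonce with the login form, then check that the
    # client echoed back MD5(nonce), which only executing JS can produce.
    import hashlib
    import secrets

    def make_challenge() -> str:
        """Cryptographically secure random nonce; store it in the session
        and embed it in the login page."""
        return secrets.token_hex(16)

    def verify_response(challenge: str, response: str) -> bool:
        """True only if the client actually computed MD5(challenge)."""
        expected = hashlib.md5(challenge.encode("ascii")).hexdigest()
        return secrets.compare_digest(expected, response)

    # The login page would carry something like (names made up):
    #   <input type="hidden" id="js_challenge" value="{challenge}">
    #   <input type="hidden" name="js_response" id="js_response">
    #   <script>document.getElementById("js_response").value =
    #       md5(document.getElementById("js_challenge").value);</script>
    # and the server rejects the POST when verify_response() fails.

Of course, as u2702 notes, a bot can script this whole exchange; it only raises the attacker's cost.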

Related

What is the best login design for web applications?

First, let's define "best" here. The "best" login design/flow/algorithm/technique for the purposes of this question should be:
Simple. I don't think I need to explain why a simple system is better than a complicated one. OAuth 2, for example, is a very complicated system in my view. It defines, if I recall correctly, no less than nine different flows for granting an application access to a person's data. I find this superfluous, but that's not up for debate here.
Language-agnostic. Please do not answer by giving an implementation. I don't want an implementation. I want a design (or several...). You can give examples, but the solution you propose should be easy to code in any (read: most) language without requiring significant workarounds for features that aren't there.
Secure. The design should cover most common security problems on the net. XSS, CSRF, etc etc etc. I think a good set would be obtainable by going to Coding Horror and searching for "security"...
Now for some smaller details:
JavaScript is allowed. If a design can fall back to noscript environments, cool. But it's not a requirement.
Flash, Java applets are not allowed. This goes against the language-agnostic clause above: if your design requires something that is only available through Flash or Java, it's a flaw.
Password storage. There's a whole class of problems related to password storage. I don't want to hear about it.
Password transmission. This is important. Transmitting a password in plain is just plain evil. Over SSL, it might be acceptable, but if you can have a system that is (relatively) secure without relying on end-to-end encryption, it would be awesome.
Given all this, propose the "best" user/login/logout design/flow/algorithm/technique you think fits the conditions outlined above. Or tell me if you think it's a fool's errand! ;)
I think you have already given some thought to this question. The easiest way to look at the solution is to break it into different layers.
1) Database
Protect against SQL injection. Just use prepared statements; they're the best and most secure option (a minimal sketch is below).
Always, and I mean always, make sure the DB user has only the access privileges it needs.
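To illustrate the prepared-statements point, here is a minimal sketch using Python's sqlite3; the same idea applies to JDBC's PreparedStatement, PDO, ActiveRecord, and so on (the table and column names are made up):

    import sqlite3

    conn = sqlite3.connect("app.db")

    def find_user(username: str):
        # The "?" placeholder keeps user input out of the SQL text entirely,
        # so a value like "alice' OR '1'='1" is matched literally, not executed.
        cur = conn.execute(
            "SELECT id, username FROM users WHERE username = ?",
            (username,))
        return cur.fetchone()

    # NEVER do this -- string interpolation is exactly what injection exploits:
    # conn.execute(f"SELECT id FROM users WHERE username = '{username}'")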
2) Application
Use HTTPS. Don't even try to use anything else.
Don't store the user ID in the cookie or anything like that. Use sessions if you must.
If you don't have a session, generate a random ID and use that to look up the user. It's important that the cookie ID not be predictable (sketch below).
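A sketch of that unpredictable-ID point, assuming a Python back end; the in-memory dict is a stand-in for whatever session store you actually use:

    # `secrets` draws from the OS CSPRNG, unlike the `random` module.
    import secrets

    sessions: dict[str, int] = {}  # session_id -> user_id

    def create_session(user_id: int) -> str:
        session_id = secrets.token_urlsafe(32)  # ~256 bits of entropy
        sessions[session_id] = user_id
        return session_id  # set this as the cookie value, NOT the user ID

    def user_for_cookie(session_id: str):
        # Unknown or expired IDs simply miss; nothing to guess or enumerate.
        return sessions.get(session_id)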
3) HTML/Javascript
Protect against CSRF with a token system; this is the only legitimate way (sketch below).
Escape all your user input and sanitize it before writing it to the stream. In JSP, for example, <c:out/> should be used.
Don't do anything security-critical in JavaScript. This is an obvious answer, but sometimes it's good to be reminded.
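A sketch of such a token system in Python; the session dict and the hidden-field name are placeholders for whatever your framework provides:

    import secrets

    def issue_csrf_token(session: dict) -> str:
        # Tie a fresh random token to the session and render it as
        # <input type="hidden" name="csrf" value="...">
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token

    def check_csrf(session: dict, submitted: str) -> bool:
        # Reject any POST that doesn't echo the token back;
        # compare_digest avoids timing side channels on the comparison.
        expected = session.get("csrf_token", "")
        return bool(expected) and secrets.compare_digest(expected, submitted)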
4) Etc
Keep patches up to date
Don't reinvent the wheel. In Rails, there are already some excellent authorization gems. Use them!
I think, out of all of these, using SSL is the most important. You can create the most complicated system to do a double submit with an awesome encryption algorithm, but at the end of the day you still won't have a system that is more secure and better tested than SSL.
Interesting question.
I would consider:
When required: I think web content should be public, so, as a user, I think it's best to have login only when it is actually required.
SSO: we should have a mechanism to cross-connect web applications more easily. I know that applications do not implement permissions the same way and we can't go wild there; that's the gap OAuth is filling.
Do not use CAPTCHA (it is considered inaccessible), unless you use something similar, like these.
Use a CSRF hidden field to make sure that the form being submitted is a valid one and not a random POST to an endpoint.
Always use SSL: the big guys are doing it, and it's unsafe to let our users send their passwords in clear text. Some have proved it.
Always plan for a no-JavaScript fallback, just in case; besides, just because we can do something doesn't mean it's good to do.
That's my timeout for tonight. :)

How to defend against TabNabbing?

I got very concerned reading this genius post by Aza Raskin.
What are the non-browsers solutions to defend against TabNabbing? Are there any?
"Tab Nabbing" is not a new attack, Mr Raskin is ripping off other researchers work. PDP from GnuCitizen discovered this back in 2008.
The biggest threat as I see it is phishing. To be honest, I don't think there is a good solution to stop phishing. This particular issue, I think, should be fixed by the browser; eventually Firefox and Chrome will get around to fixing it. To be honest, SSLStrip is a bigger threat that all browsers face, and it can be used alongside this redirection attack. Currently Chrome has a fix in the form of STS, and Firefox in the form of HTTPS Everywhere. Using NoScript will also help mitigate this redirection attack.
One thing that will prevent this sort of thing from happening is two-factor authentication using something like an RSA token (unfortunately only one bank in this country provides this method).
The RSA token is a little USB-stick-sized gadget that has a continuously changing serial/sequence number on it, and it is issued to you (each stick has a different sequence of numbers). When you log on to your bank's website, you have to supply your login/password and also the current number on the RSA token; that number changes every two minutes. That means that if the bad guys capture your login details, they have less than two minutes to log in to your account before the current RSA sequence number changes and the captured login details become impossible to reuse. (A sketch of the general time-based-code idea is below.)
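The SecurID algorithm itself is proprietary, but the general idea is similar to TOTP (RFC 6238): token and server share a secret and derive a short code from the current time window, so a captured code dies with the window. A rough Python sketch, using the two-minute window the answer describes:

    import hashlib
    import hmac
    import struct
    import time

    def time_code(secret: bytes, window_seconds: int = 120, digits: int = 6) -> str:
        # The counter changes every `window_seconds`, so the code does too.
        counter = int(time.time()) // window_seconds
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
        code = (struct.unpack_from(">I", mac, offset)[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # The server computes the same value (usually allowing one window of
    # clock skew) and compares it with what the user typed alongside
    # their password.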
This two-factor authentication is not a silver bullet, though. I don't see Google rolling this out for your random Gmail account, and neither will Facebook, but it should be mandatory for financial institutions and online government departments; that would cut down the scope of this type of attack. It is a commonly used protection mechanism for remote access to company web portals and remote network logins, and it is quite successful there.
This still hasn't answered your question, though: how can you, as a website author or owner, prevent this? You can't, unless you run no third-party scripts and regularly check your pages to make sure you haven't been compromised and had a script inserted. Don't bother trying to scan third-party scripts: they can be obfuscated to an incredible degree, which you can't possibly scan for. If you do run third-party scripts and feel strongly enough about this, you might want to set up a machine that does nothing but automated UI tests against your web site. It's easy enough to set up with some basic tests; just leave it testing your live site every 30 or 60 minutes, looking for unexpected results.
Like he suggests, use the password manager. There are quite a few other problems that can happen if you type your password every time. For sites where the password manager doesn't work, you're screwed. Client certificates ftw.
I just visited the page you mention, and my free virus checker (AVG) immediately detected a threat (I presume he has an example on the page) and warned me of a tabnabbing exploit.
So that's one easy possibility.

Is it immoral to put a captcha on a login form?

In a recent project I put a captcha test on a login form, in order to stop possible brute force attacks.
The immediate reaction of other coworkers was a request to remove it, saying that it was inappropriate for that purpose and that it was quite exotic to see a CAPTCHA in that place.
I've seen CAPTCHA images on signup, contact, and password recovery forms, etc., so I personally don't see it as inappropriate to put a CAPTCHA in a place like that as well. It obviously hurts usability a little bit, but it's a matter of time and getting used to it.
Without a CAPTCHA test, one would have to use some sort of blacklist / account-locking mechanism, which also has its drawbacks.
Does it seem like a good choice to you? Am I getting somewhat CAPTCHA-aholic and in need of some group therapy?
Thanks in advance.
Just add a CAPTCHA test for cases when there have been failed login attempts for a given user. This is what lots of websites currently do (all popular email services for instance) and is much less invasive.
Yet it completely thwarts brute force attacks, as long as the attacker cannot break your CAPTCHA.
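A minimal Python sketch of that policy; the threshold and the in-memory counter are illustrative only (a real site would persist this, and probably key it per account and per IP):

    FAILED_THRESHOLD = 3
    failed_attempts: dict[str, int] = {}  # username -> consecutive failures

    def needs_captcha(username: str) -> bool:
        # Only demand the CAPTCHA once the threshold is crossed, so
        # legitimate users almost never see it.
        return failed_attempts.get(username, 0) >= FAILED_THRESHOLD

    def record_login(username: str, success: bool) -> None:
        if success:
            failed_attempts.pop(username, None)  # reset on success
        else:
            failed_attempts[username] = failed_attempts.get(username, 0) + 1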
It's not immoral per se. It's bad usability.
Consider security implications: the users will consider logging in to be time consuming and will:
be less likely to use your system at all
never log out of your system and leave open sessions unattended.
Consider other forms of brute-force attack detection and prevention.
A CAPTCHA isn't a very traditional choice for login forms. The traditional protection against brute-force attacks is account locking. As you said, it has its drawbacks: for example, if your application is vulnerable to account enumeration, an attacker could easily perform a denial-of-service attack.
I would tend to agree with your co-workers. A CAPTCHA can be necessary on forms where you do not have to be authorized to submit data, because otherwise spambots will bomb them, but I fail to see what kind of abuse you are preventing by adding a CAPTCHA to a login form.
A CAPTCHA does not provide any form of security the way your other options, like the blacklist, would. It just verifies that the user is a human being, and hopefully the username/password fields already verify that.
If you want to prevent brute-force attacks, almost any other form of protection would be more useful: throttling the requests if there are too many, or banning IPs if they enter wrong passwords too many times, for instance (a throttling sketch is below).
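For what it's worth, a rough Python sketch of that throttling idea; the in-memory dict and the exponential back-off schedule are just illustrative choices:

    import time

    last_failure: dict[str, tuple[float, int]] = {}  # ip -> (timestamp, failures)

    def allowed_to_try(ip: str) -> bool:
        if ip not in last_failure:
            return True
        when, failures = last_failure[ip]
        # 2s, 4s, 8s... between attempts, capped at 5 minutes.
        delay = min(2 ** failures, 300)
        return time.time() - when >= delay

    def note_failure(ip: str) -> None:
        _, failures = last_failure.get(ip, (0.0, 0))
        last_failure[ip] = (time.time(), failures + 1)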
Also, I think you are underestimating the impact on usability. A lot of browsers provide utilities to deal with username/password forms, and all of these utilities are rendered useless if you add a CAPTCHA.
I would like to address the question in the title: the question of morality.
I would consider a captcha immoral under the following circumstances:
It excludes participation in the application to those with physical or mental challenges, when the main portion and purpose of the application would otherwise not make such an exclusion.
The mechanism of the captcha exposes users to distressing language or images beyond what would normally be expected in the application.
The captcha mechanism as presented to the user is deceptive or misleading in some way.
A captcha may also be considered immoral if its intent is to exclude genuinely sentient machine intelligences from participation for reasons of prejudice against non-humans. Of course, technology has not yet advanced to the level at which this is an issue, and, further, when it does become an issue, I expect human-excluding gates will be more feasible and common.
Many popular (most-used) mail servers don't have it?!

Paranoid attitude: What's your degree about web security concerns?

This question could be mistaken for a subjective one, but it really isn't.
When you develop a website, there are several things you must know about: XSS attacks, SQL injection, etc.
It can be very, very difficult (and take a long time to code) to secure against all potential attacks.
I always try to secure my applications, but I don't know when to stop.
Let's take an example: a social network like Facebook (because a bank website must secure all of its data anyway).
I see some approaches:
Don't secure against XSS, SQL injection, etc. at all. This can really only be done when you trust your users, e.g. a back end for a private enterprise. But would you secure this type of application?
Secure only against attempts to access data the user doesn't own. This is, for me, the best approach.
Secure everything, everything, everything. You secure all data (owned or not): the user can't break their own data or other users' data. This takes a very long time to do, and is it really that useful?
Secure against common attacks, but don't secure against very hard attacks (because the coding time is too long compared to the chance of being hacked).
Well, I don't really know what to do. I try to do 1, 2 and 4, but I don't know if that's the right choice.
Is it an acceptable risk not to secure all your data? Should I secure all data even if it takes me double the time to code a feature? What's the enterprise trade-off between risk and "time is money"?
Thank you for sharing your views, because I think a lot of developers don't know where the right limit is.
EDIT: I see a lot of replies talking about XSS and SQL injection, but these are not the only things to take care of.
Let's take a forum. A thread can be written in a forum where we are a moderator, so when you send data to the client view, you add or remove the "add" button for this forum. But when a user tries to save a thread on the server side, you must check that the user has the right to do it (you can't rely on client-view security).
This is a very simple example, but in some of my apps I've got a hierarchy of rights which can be very, very difficult to check (it needs a lot of SQL queries...). On the other hand, it's really hard to find the hack (the data is pseudo-encrypted in the client view, there is a lot of data to modify to make the hack run, and the hacker needs a good understanding of my app's rules to pull it off). In this case, should I check only surface security holes (really easy hacks), or should I also check for very hard-to-exploit security holes (which will decrease performance for all users and take me a long time to develop)?
The second question is: can we "trust" the client view for very hard hacks (so as not to develop hard, long code which decreases performance)?
Here is another post talking about this sort of hack (Hibernate and collection checking): Security question: how to secure Hibernate collections coming back from client to server?
I think you should try and secure everything you can, the time spent doing this is nothing compared to the time needed to fix the mess done by someone exploiting a vulnerability you left somewhere.
Most things anyway are quite easy to fix:
SQL injection really has nothing to do with SQL; it's just string manipulation. If you don't feel comfortable with that, just use prepared statements with bound parameters and forget about the problem.
Cross-site exploits are easily negated by escaping (with htmlentities or the like) every piece of untrusted data before sending it out as output. Of course this should be coupled with extensive data filtering, but it's a good start (a sketch follows below).
Credentials theft: never store data which could provide permanent access to protected areas. Instead, save a hashed version of the username in the cookie and set a time limit on sessions; this way an attacker who happens to steal this data will have limited access instead of permanent access.
Never suppose that just because a user is logged in he can be trusted; apply security rules to everybody.
Treat everything you get from outside as potentially dangerous: even a trusted site you get data from might be compromised, and you don't want to go down with it. Even your own database could be compromised (especially if you're on a shared environment), so don't trust its data either.
Of course there is more, like session hijacking attacks, but those are the first things you should look at.
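As promised, a small Python sketch of the escaping point above; html.escape plays the role of PHP's htmlentities here, and the render_comment helper is hypothetical:

    import html

    def render_comment(author: str, body: str) -> str:
        # Any <script> in the user-supplied values is rendered as inert text.
        return ("<div class='comment'>"
                f"<b>{html.escape(author)}</b>: {html.escape(body)}"
                "</div>")

    print(render_comment("mallory", "<script>steal(document.cookie)</script>"))
    # -> ...<b>mallory</b>: &lt;script&gt;steal(document.cookie)&lt;/script&gt;...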
EDIT regarding your edit:
I'm not sure I fully understand your examples, especially what you mean by "trust on client security". Of course all pages with restricted access must start with a check to see whether the user has the right to see the content, and optionally whether he (or she) has the correct level of privilege: there can be some actions available to all users and others only available to a more restricted group (like moderators in a forum). All these controls have to be done on the server side, because you can never trust what the client sends you, be it data through GET, POST, or even cookies. None of these are optional.
"Breaking data" is not something that should ever be possible, by the authorized user or anybody else. I'd file this under "validation and sanitation of user input", and it's something you must always do. If there's just the possibility of a user "breaking your data", it'll happen sooner or later, so you need to validate any and all input into your app. Escaping SQL queries goes into this category as well, which is both a security and data sanitation concern.
The general security in your app should be sound regardless. If you have a user management system, it should do its job properly. I.e. users that aren't supposed to access something should not be able to access it.
The other problem, straight up XSS attacks, has not much to do with "breaking data" but with unauthorized access to data. This is something that depends on the application, but if you're aware of how XSS attacks work and how you can avoid them, is there any reason not to?
In summary:
SQL injection, input validation and sanitation go hand in hand and are a must anyway
XSS attacks can be avoided with best practices and a bit of awareness; you shouldn't need to do much extra work for them
anything beyond that, like "proactive" brute-force attack filters and such things that do cause additional work, depends on the application
Simply making it a habit to stick to best practices goes a long way in making a secure app, and why wouldn't you? :)
You need to see web apps as the server-client architecture they are. The client can ask a question, the server gives answers. The question is just a URL, sometimes with a bit of attached POST data.
Can I have /forum/view_thread/12345/ please?
Can I POST this $data to /forum/new_thread/ please?
Can I have /forum/admin/delete_all_users/ please?
Your security can't rely only on the client not asking the right question. Never.
The server always needs to evaluate the question and answer No when necessary.
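A tiny Python sketch of that rule; the User type and the moderator check are hypothetical stand-ins for your real authorization model:

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        is_moderator: bool

    def delete_thread(current_user: User, thread_id: int) -> str:
        # The check lives on the server; it doesn't matter whether the UI
        # showed a delete button or the client hand-crafted the URL.
        if not current_user.is_moderator:
            return "403 Forbidden"  # answer "No" when necessary
        # ... actually delete thread_id here ...
        return "200 OK"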
All applications should have some degree of security. You generally don't ask for SSL on intranet websites, but you need to take care of SQL/XSS attacks.
All data your users enter into your application is your responsibility. You must make sure nobody unauthorized gets access to it. Sometimes "non-critical" information can pose a serious security problem, because we're all lazy people.
Some time ago, a friend of mine used to run a games website. Users created their profiles, used the forum, all that stuff. Then, one day, someone found an open SQL injection door somewhere, and that attacker got all the user and password information.
Not a big deal, huh? I mean, who cares about a player account on a games website? But most users used the same username/password for MSN, Counter-Strike, etc. So it became a big problem very fast.
The bottom line is: all applications should have some security concern. You should take a look at STRIDE to understand your attack vectors and take the best action.
I personally prefer to secure everything at all times. It might be a paranoid approach, but when I see tons of websites across the internet that are vulnerable to SQL injection or even much simpler attacks, and whose owners don't bother to fix them until someone "hacks" them and steals their precious data, it makes me pretty afraid. I don't really want to be the one responsible for leaked passwords or other user info.
Just ask someone with hacking experience to check your application / website. It should give you a fair idea of what's wrong and what should be updated.
You want to have strong API-side ACLs. A few days ago I saw a problem where a guy had secured every single UI, but the website was vulnerable through AJAX, because his API (where he was sending requests) trusted every single request to have already been checked. I could basically pull the whole database through this bug.
I think it's helpful to distinguish between preventing code injection and plain data authorization.
In my opinion, all it takes is a few good coding habits to completely eliminate SQL injection. There is simply no excuse for it.
XSS injection is a little bit different - I think it can always be prevented, but it may not be trivial if your application features user-generated content. By that I simply mean that it may not be as trivial to secure your app against XSS as it is against SQL injection. So I do not mean that it is OK to allow XSS - I still think there is no excuse for allowing it; it's just harder to prevent than SQL injection if your app revolves around user-generated content.
So SQL injection and XSS are purely technical matters. The next level is authorization: how thoroughly should one shield off access to data that is none of the current user's business? Here I think it really does depend on the application, and I can imagine that it makes sense to distinguish between "user X may not see anything of user Y" and "not bothering user X with user Y's data would improve usability and make the application more convenient to use".

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat someone can find out the URL? Don't take the employees of the hosting company and eavesdroppers on the line into account, because it is a toy site, not something important and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way for someone to find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly while it doesn't matter if you get it wrong? Then hopefully, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoff's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working security solution compared to implementing some form of proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when loading external images or linking to somewhere else, your referrer is going to publicize your URL.
If you change http into https, then at least the path and query string will not be visible to anyone sniffing the network (the hostname still will be).
(The caveat here is that even a very obscure login system can leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege elevation, or on the system you use to log in, if that one is no longer secure. For that reason, some prefer the admin login to look no different from a standard user login.)
You could require that some action be taken a number of times, with some number of seconds of delay between the times. Once this action, delay, action, delay, action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that would be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".
