I asked this question about reasons to use Drupal 7's Forms API as opposed to just processing form submission requests myself and eventually calling a function like node_save() or comment_save(). While a variety of reasons were given for using the Forms API, only one possible security vulnerability was raised: by not using it, I'd be missing out on the CSRF prevention techniques it employs. From what I've read, this basically involves the use of a token for validating requests.
My question is twofold:
Is it possible to leverage Drupal's token method of CSRF prevention in the script I write to process the Ajax request, thereby entirely eliminating the added risk I'm assuming by not using the Forms API? If so, how?
Does the Forms API employ techniques beyond the use of tokens that I should also implement?
Please note that I do not want this question to become a discussion of whether I should use the Forms API or not.
The token is generated by drupal_get_token() and validated using drupal_valid_token().
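Conceptually (a sketch in Node rather than Drupal's PHP, with helper names invented for illustration), the token is an HMAC tied to the user's session and a per-form value, so a custom Ajax handler can regenerate and compare it without storing any extra state:

    const crypto = require('crypto');

    // Hypothetical helpers mirroring what drupal_get_token() and
    // drupal_valid_token() do: derive an HMAC from the session ID,
    // a site-wide secret, and a per-form value.
    function getToken(sessionId, formValue, siteSecret) {
        return crypto.createHmac('sha256', siteSecret)
            .update(sessionId + ':' + formValue)
            .digest('hex');
    }

    function validToken(token, sessionId, formValue, siteSecret) {
        const expected = getToken(sessionId, formValue, siteSecret);
        // Length check first: timingSafeEqual throws on unequal lengths.
        return token.length === expected.length &&
            crypto.timingSafeEqual(Buffer.from(token), Buffer.from(expected));
    }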
For a project I’m working on currently I am developing an API using Node/Express/Mongo and separately developing a website using the same tools. Ideally I want to host these on separate servers so they can be scaled as required.
For authentication I am using jsonwebtoken which I’ve set up and I’m generally pleased with how it’s working.
BUT…
On the website I want to be able to restrict (using Express) certain routes to authenticated users and I’m struggling a little with the best way to implement this. The token is currently being saved in LocalStorage.
I think I could pass the token through a GET parameter to any routes I want to protect and then check the token on the website server (obviously this means including the JWT secret there too, but I don't see a huge problem with that).
So my questions are:
Would this work?
Would it mean (no pun intended) I end up with ugly URLs?
Would I just be better off hosting both on the same server, since I could then save the generated token on the server side?
Is there a better solution?
I should say I don’t want to use Angular - I’m aware this would solve some of my problems but it would create more for me!
First off, I'll answer your questions directly:
Will this work? Yes, it will work. But there are many downsides (see below for more discussion).
Not necessarily. I don't consider a URL ugly just because it has a query string. But regardless, all authentication information (tokens, etc.) should be included in the HTTP Authorization header itself -- and never in the URL (or query string).
This doesn't matter so much in your case, because as long as your JWT-generating code has the same secret key that your web server does, you can verify the token's authenticity.
Yes! Read below.
So, now that we got those questions out of the way, let me explain why the approach you're taking isn't the best idea currently (you're not too far off from a good solution though!):
Firstly, storing authentication tokens in Local Storage is currently a bad idea because of XSS (cross-site scripting) attacks. Local Storage offers no protection against scripts running on your own page, so a single injected script can read your users' tokens and hand them to an attacker.
Here's a good article which explains more about why this is a bad idea in easy-to-understand form: http://michael-coates.blogspot.com/2010/07/html5-local-storage-and-xss.html
What you should be doing instead: storing your JWT in a client-side cookie that is signed and encrypted. In the Node world, there's an excellent mozilla session library which handles this for you automatically: https://github.com/mozilla/node-client-sessions
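A minimal sketch of wiring that library up (option names per its README; the route and the JWT value are placeholders):

    const express = require('express');
    const sessions = require('client-sessions');

    const app = express();

    app.use(sessions({
        cookieName: 'session',          // exposed as req.session
        secret: 'a-long-random-secret', // keep this out of source control
        duration: 24 * 60 * 60 * 1000,  // cookie lifetime in milliseconds
    }));

    app.post('/login', (req, res) => {
        // ...verify credentials first; the JWT below is a placeholder.
        req.session.token = 'signed.jwt.goes.here';
        res.redirect('/dashboard');
    });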
Next up, you never want to pass authentication tokens (JWTs) via querystrings. There are several reasons why:
Most web servers will log all URL requests (including querystrings), meaning that if anyone gets a hold of these logs they can authenticate as your users.
Users see this information in the querystring, and it looks ugly.
Instead, you should be using the HTTP Authorization header (it's a standard) to supply your credentials to the server. This has numerous benefits:
This information is not typically logged by web servers (no messy audit trail).
This information can be parsed by lots of standard libraries.
This information is not seen by end-users casually browsing a site, and doesn't affect your URL patterns.
Assuming you're using OAuth 2 bearer tokens, you might craft your HTTP Authorization header as follows (here represented as a JSON blob):
{"Authorization": "Bearer myaccesstokenhere"}
Now, lastly, if you're looking for a good implementation of the above practices, I actually wrote and maintain one of the more popular auth libraries in Node: stormpath-express.
It handles all of the use cases above in a clean, well audited way, and also provides some convenient middlewares for handling authentication automatically.
Here's a link to the middleware implementations (you might find these concepts useful): https://github.com/stormpath/stormpath-express/blob/master/lib/authentication.js
The apiAuthenticationRequired middleware, itself, is pretty nice, as it will reject a user's request if they're not authenticating properly via API authentication (either HTTP Basic Auth or OAuth2 with Bearer tokens + JWTs).
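Usage looks roughly like this (a sketch from memory of the library's docs; the package name and option shape changed across versions, so treat the details as assumptions):

    const express = require('express');
    const stormpath = require('express-stormpath'); // name varies by version

    const app = express();
    app.use(stormpath.init(app, { /* API key and application settings */ }));

    // Rejects requests lacking valid API credentials (HTTP Basic or
    // an OAuth2 Bearer token backed by a JWT).
    app.get('/api/secret', stormpath.apiAuthenticationRequired, (req, res) => {
        res.json({ user: req.user.email });
    });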
Hopefully this helps!
All the examples of CSRF exploits tend to be against pages which process the incoming request.
If the page doesn't have a form processing aspect do I need to worry about CSRF ?
The situation I'm looking at:
the page in question contains sensitive data
as such users need to establish a session to view the page
... my understanding is that a malicious page can send a client to this page by embedding a link to it; however, since there's no action on the target page to perform, no harm can result, right?
There's no way said malicious site can view the sensitive page, correct?
Why I ask: I want the URL of the page with sensitive data to be 'simple', so people can email the link to others (who will in turn need a session to view the page). The token-based approach I've seen in most CSRF solutions removes this possibility, so I'd like to avoid it if possible.
There's no way said malicious site can view the sensitive page, correct?
Correct in terms of CSRF.
The blog you linked is talking about Cross-Origin Script Inclusion, which is a different animal. To be vulnerable to XOSI, your sensitive page would have to be interpretable as JavaScript, and either you'd have to be serving it without a proper HTML MIME type or the browser would have to be an old one that didn't enforce type checking on scripts.
You might also potentially worry about clickjacking, where another site includes yours in a frame and overlays misleading UI elements. There are some sneaky ways this has been used to extract sensitive data (see the Next Generation Clickjacking paper and this amusing info leak in Firefox), so you may wish to disallow framing with the X-Frame-Options header.
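Setting that header is a one-liner in most stacks; for example, a sketch in Express (middlewares like helmet will do the same for you):

    // Refuse to be framed at all
    // (use 'SAMEORIGIN' instead if you frame your own pages).
    app.use((req, res, next) => {
        res.setHeader('X-Frame-Options', 'DENY');
        next();
    });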
Why I ask: I want the URL of the page with sensitive data to be 'simple', so people can email the link to others (who will in turn need a session to view the page). The token-based approach I've seen in most CSRF solutions removes this possibility
You definitely shouldn't be putting a CSRF token in a GET URL. Apart from the ugliness, and breakage of navigation, URLs are easy to leak from the browser or other infrastructure, potentially compromising the confidentiality of the token.
Normal practice is not to put CSRF protection on side-effect-free actions.
In general, CSRF is independent of whether the request causes any side effects. The CWE describes CSRF (CWE-352) as follows:
The web application does not, or can not, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request.
So CSRF is a general request intention authenticity problem.
However, CSRF is not really worthwhile against requests whose only effect is data retrieval, because the same-origin policy keeps the attacker from reading the response. That said, an attacker could combine a retrieval-only request with another vulnerability and still gain access to sensitive data.
I'm developing a web application in which all dynamic content is retrieved as JSON with Ajax requests. I'm wondering whether I should protect the GET API calls from being invoked from different origins.
GET requests do not modify state, and the common wisdom is that they do not require CSRF protection. But I wonder whether there are corner cases in which the browser leaks the result of such requests to a different-origin site.
For example, if a different-origin site GETs /users/emails as a script, CSS, or image, is it possible that the browser would leak the resulting JSON to the calling site (for example, via a JavaScript onerror handler)?
Do browsers give strong enough guarantees that the content of a cross-origin JSON response won't be leaked? Do you think protecting GET requests against cross-origin calls makes sense, or is it overkill?
You have nailed a corner case, and yet a highly relevant issue. Indeed, this possibility exists, and it's called JSON Inclusion, Cross-Site Script Inclusion (XSSI), or JavaScript Inclusion, depending on whom you ask. The attack is, basically, including your JSON endpoint via a script tag on an evil site, and then accessing the results via JavaScript once the JS engine has parsed it.
The short story is that ALL your JSON responses have to be contained in an Object, not an Array or JSONP (so: {...}), and for good measure you should start all responses with a parser breaker (while(1); or for(;;);). Look at Facebook's or Google's JSON responses for a live example.
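Concretely, the prefix pattern looks like this (a sketch; the endpoint, payload, and prefix choice are illustrative):

    const PREFIX = 'while(1);'; // parser breaker; the legitimate client strips it

    // Server (Express): wrap the payload in an Object and prepend the breaker.
    app.get('/users/emails', (req, res) => {
        const payload = { emails: ['a@example.com'] }; // an Object, never a bare Array
        res.type('application/json').send(PREFIX + JSON.stringify(payload));
    });

    // Client: strip the known prefix before parsing.
    fetch('/users/emails', { credentials: 'same-origin' })
        .then((r) => r.text())
        .then((text) => JSON.parse(text.slice(PREFIX.length)))
        .then((data) => console.log(data.emails));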
Or, you can make your URLs unguessable by using a CSRF protection - both approach works.
No:
This is not a CSRF issue. As long as you're returning pure JSON and your GETs are side-effect free, they DO NOT have to be CSRF protected.
What Paradoxengine mentioned is another vulnerability: if you are using JSONP, it is possible for an attacker to read the JSON sent to an authenticated user. Users of very old browsers (IE 5.5) can also be attacked this way even with regular JSON.
You can send requests to a different domain (which is what CSRF attacks do), but you can't read the responses.
I learned this from another Stack Overflow question: It seems like I understand CSRF incorrectly?
Hope this helps you understand the question.
I'm implementing a voting system like Stackoverflow's. How can I implement this so it is hack proof?
I've got some PHP that does database work according to the ajax request sent after the javascript parses it. Would doing a query to check the current vote state of a user be enough to avoid unauthorised votes?
It is definitely possible to implement a pretty reliable solution, but it must be done server-side.
Basic rule of security: you don't trust client data.
Move all your checks to PHP and make your JavaScript as dumb as:

    $(".vote").click(function (e) {
        e.preventDefault();
        // vote_data: whatever identifies the item being voted on
        var vote_data = { item_id: $(this).data('item-id') };
        $.post('/vote.php', vote_data, function (result) {
            // update UI according to returned result
        });
    });
It's common, however, to still do checks on the client, but as a way to improve usability (marking required form fields that weren't filled) or to reduce server load (by not sending obviously incomplete data). These client checks are for the user's comfort, not for your security.
Answering your updated question:
If you store a full log of when each user voted on which item, then yes, it's pretty easy to prevent multiple voting (a user voting for the same thing several times). This assumes, of course, that anonymous votes are not allowed.
But if you have a popular site, this log can get pretty big and become a problem. Some systems get around this by disabling voting on old articles (and removing the corresponding log entries).
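One way to keep that log honest is to let the database enforce uniqueness and treat a constraint violation as "already voted". A sketch in Node (the table, driver, and error code are assumptions; 23505 is Postgres's unique-violation code):

    // Assumes: CREATE UNIQUE INDEX votes_once ON votes (user_id, item_id);
    async function castVote(db, userId, itemId, value) {
        try {
            await db.query(
                'INSERT INTO votes (user_id, item_id, value) VALUES ($1, $2, $3)',
                [userId, itemId, value]
            );
            return true; // first vote by this user on this item
        } catch (err) {
            if (err.code === '23505') return false; // already voted
            throw err;
        }
    }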
What if someone intentionally tries to hack me?
There are different types of attacks a malicious user can perform.
CSRF (cross-site request forgery)
The article lists some methods for preventing the attack. Modern Ruby on Rails has built-in protection, enabled by default. I don't know how things stand in the PHP world.
Clickjacking
This attack tricks users into clicking on something that isn't what they think it is. For example, they may click "Play video", but the site will intercept this click and post on the user's wall instead.
There are some articles on the Web as well.
Wiki on clickjacking
5 ways to prevent clickjacking
Javascript to prevent clickjacking
NOTE: THIS IS AN ANSWER TO THE ORIGINAL QUESTION. Don't downvote it just because the OP radically changed his question.
It's a huge error to even think of relying on browser-side components to enforce application logic. In untrusted environments, JavaScript should be used exclusively for presentation purposes.
All application logic should be implemented, validated and enforced server-side.
Please let me know if the following approach to protecting against CSRF is effective.
Generate token and save on server
Send token to client via cookie
Javascript on client reads cookie and adds token to form before POSTing
Server compares token in form to saved token.
Can anyone see any vulnerabilities with sending the token via a cookie and reading it with JavaScript instead of putting it in the HTML?
The synchroniser token pattern relies on comparing random data known to the client with that posted in the form. Whilst you'd normally get the latter from a hidden form field populated with the token at page render time, I can't see any obvious attack vectors introduced by using JavaScript to populate it instead. The attacking site would need to be able to read the cookie to reconstruct the POST request, which it obviously can't do due to cross-domain cookie limitations.
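For concreteness, the client-side step the question describes might look like this (the cookie name, form id, and field name are invented for illustration):

    // Copy the CSRF token from the cookie into the form before POSTing.
    function readCookie(name) {
        const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        return match ? decodeURIComponent(match[1]) : null;
    }

    document.querySelector('#myForm').addEventListener('submit', function () {
        this.querySelector('input[name="csrf_token"]').value = readCookie('csrf_token');
    });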
You might find OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF) useful (lots of general CSRF info), particularly the section on cross-origin resource sharing.
If a person's traffic is being monitored, the attacker will likely get the token as well. But it sounds like a great plan. I would try to add a honeypot: disguise the token as something else so it's not obvious, and if it's triggered, send the bad user into the honeypot so they don't know they've been caught.
My philosophy with security is simple and best illustrated with a story.
Two men are walking through the woods. They see a bear, freak out, and start running. As the bear catches up and starts gaining on them, one says to the other, "We'll never outrun this bear." The other replies, "I don't have to outrun the bear, I only have to outrun you!"
The more you can add to your site to make it more secure, the better off you'll be. Use a framework, validate all inputs (including those in any public method), and you should be OK.
If you're storing sensitive data, I would set up a second SQL server with no internet access. Have your back-end server constantly access your front-end server, pulling the sensitive data and replacing it with bogus data. If your front-end server needs that sensitive data, which is likely, use a special method with a different database user (one that has access) to pull it from the back-end server. Someone would have to completely own your machine to figure this out... and it would still take enough time that you should be able to pull the plug. Most likely, they'll pull all your data before realizing it's bogus... ha ha.
I wish I had a good solution for protecting your customers better against CSRF, but what you have looks like a pretty good deterrent.
This question over on Security Stack Exchange has some useful discussion on the subject.
I especially like @AviD's answer:
Don't.
Most common frameworks have this protection already built in (ASP.NET, Struts, Ruby I think), or there are existing libraries that have already been vetted (e.g. OWASP's CSRFGuard).