What would be the security loophole if a logoff request is not validated with XSRF/CSRF token?
Don't think of Anti-CSRF tokens as a mechanism implemented on individual endpoints/requests. Ideally, such a mechanism is baked in as a critical part of the framework you're developing in.
An Anti-CSRF token may seem redundant on a logout link, but that is not what worries me here. What worries me is designing a system which allows, or rather does not enforce, Anti-CSRF mechanisms.
In this context, the CSRF may seem benign. What happens, however, when the logout link is vulnerable to, say, XSS? Suddenly the Anti-CSRF token is no longer there to protect you.
Always practice Defence in Depth: your security should be wrapped in layers, Anti-CSRF being one of them.
This could be combined with an OWASP A10 (Unvalidated Redirects and Forwards) attack: the attacker also provides a return URL that points somewhere bad, e.g. a fake "sign on again" page where he can capture your password.
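As a sketch of enforcing the token even on logout, a handler might look like this (the `issue_csrf_token`/`logout` names and the dict-based session are illustrative, not from any particular framework):

```python
import hmac
import secrets

def issue_csrf_token(session):
    # One unguessable token per session, stored server-side and
    # echoed back in a hidden form field.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def logout(session, submitted_token):
    # Reject the logout unless the submitted token matches the
    # session's, using a constant-time comparison.
    expected = session.get("csrf_token", "")
    if not hmac.compare_digest(expected, submitted_token):
        raise PermissionError("CSRF token mismatch")
    session.clear()
```

The point is not that a forced logout is dangerous by itself, but that the check is uniform: no endpoint gets to opt out.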
I would like not to have to implement things like Redis or store refresh tokens in a database. I would like to utilize the "full power" of JWTs: scalability, statelessness, no need to store anything related to sessions or tokens, etc.
Say I have a cookie which is set as secure, httpOnly, samesite=lax.
The cookie will expire when the user closes the browser, unless the user has selected the "remember me" option. Inside this cookie a JWT will be stored which will never expire.
Of course some CSRF protection would also be implemented!
Is this secure enough? How would an attacker ever retrieve the JWT from a client given the requirements (cookie requirements) mentioned above?
If the JWT is completely inaccessible to an attacker, then there is no need to revoke the JWT or have it expire at some point, right?
To the question "Is this secure enough?", only you can answer and decide whether you want to go with this solution.
From an outsider's point of view, a JWT with no expiration is already bad practice, because by doing so you are effectively giving away endless access.
Then you write "How would an attacker ever retrieve ....". That is one more unsafe assumption. A "successful attack" usually happens exactly when an attacker does something in a way you did not expect.
To sum it up: what you are thinking of is possible, but it is not something I would recommend.
For more in-depth opinions you could consult further documentation or experts. To start with, you may have a look at a few JWT tutorials.
IMHO, if you are protecting sensitive information, having long-lived auth tokens in cookies is dangerous. A user could be tricked via social engineering into opening the dev tools and sharing the cookie contents. Security is a pain, but it pays off in the long run.
Quote:
The web has changed a lot since CSRF was a "big thing", and tactics
used with CSRF attacks are becoming outdated.
I personally don't protect against CSRF, mainly because I don't use
blatantly insecure methods of authentication.
Does he make any sense?
If I am correct, please provide me with some arguments for why this guy is being stubborn or not thinking clearly. I am trying to make a point, but I am not a hundred percent sure how to express it...
There is no sense in the quoted argument as-is, but presumably there is some other context we are missing.
It is unclear what form of authentication your colleague is proposing that would be 'not blatantly insecure', free from CSRF issues.
There are some possibilities, for example in a fully AJAX-driven app you might be passing in an auth token as an input parameter instead of relying on a session. In that case you wouldn't need an additional anti-CSRF measure as the auth token would already be a secret unavailable to attackers.
But CSRF in general has not gone away; browsers have not grown magic features to stop it happening. For the typical model of webapp that uses a browser-persistent authentication model (cookies, HTTP Authentication), you definitely still need to address it in some way.
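To illustrate the "auth token as an explicit input" idea: a cross-site form or image tag cannot set a custom `Authorization` header, so a check like the following is not exposed to classic CSRF (the function name and token store are hypothetical):

```python
import hmac

# Illustrative only: a real system would look tokens up in a
# proper store, not a module-level dict.
VALID_TOKENS = {"user42": "s3cr3t-bearer-token"}

def is_authenticated(headers):
    # A forged cross-site request carries cookies automatically,
    # but it cannot attach this header; only first-party script
    # holding the secret token can.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    return any(hmac.compare_digest(token, t)
               for t in VALID_TOKENS.values())
```

This is exactly the contrast the answer draws: the cookie-based, browser-persistent model still needs an anti-CSRF measure; an explicit secret parameter already is one.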
We are making an app for Android and iPhone. One method is to save the password hash on the local device and log in to the remote server every time (obtaining a token). The other method is to log in once and then keep the token for communicating with the server. The app saves the token on the device, so if the user doesn't log out manually, the token won't expire.
Some teammates think the latter method is better than saving a password hash on the local device, but I think keeping the token is also unsafe. Could anyone please give us some suggestions?
We probably need a little more detail to evaluate what you're considering. Either could in theory be built well. There are several things to consider.
First, it is best to have your authentication token expire periodically. This closes the window on stolen tokens.
Authentication should always be challenge/response in order to avoid replay attacks. You should generally not send the token itself. You send the response to a challenge that proves you have it.
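A minimal challenge/response sketch, assuming a token shared between client and server (function names are illustrative): the server issues a random challenge, the client returns an HMAC over it keyed with the token, and the token itself never crosses the wire.

```python
import hashlib
import hmac
import secrets

def make_challenge():
    # Fresh random nonce per authentication attempt, so a captured
    # response cannot be replayed against a later challenge.
    return secrets.token_hex(16)

def client_response(challenge, token):
    # Prove possession of the token without transmitting it.
    return hmac.new(token.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

def server_verify(challenge, response, token):
    expected = client_response(challenge, token)
    return hmac.compare_digest(expected, response)
```

An eavesdropper who records `(challenge, response)` learns nothing usable for the next login, because the next challenge is different.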
Of course you start with TLS as a transport layer. Ideally you should validate your certs. Together, this alone can protect against a wide variety of attacks. Not all attacks; TLS is not magic security dust, but it does provide a very nice "belt and suspenders" defense in depth.
It's interesting that you're saving the "password hash." How are you using this and how are you salting it? In particular, if many people have the password "password1", will all of them have the same hash? Without TLS, this can open you up to significant problems if you're sending the hash itself across the wire.
On iPhone, you should store sensitive credentials in the keychain. SFHFKeychainUtils makes a decent wrapper around the keychain (I've got my beef with it, but it's OK). Unfortunately, I don't believe Android has a similar OS-provided credential store. (No, the iPhone's keychain does not protect against all kinds of attacks, but it does provide useful protections against certain kinds of attacks and is worth using.)
You want your protocol to make it possible to deauthenticate a device that has been stolen. That could take the form of the user changing the password, or revoking a token, but the user needs a way to achieve this.
Again, it's hard to evaluate a broad, hypothetical security approach. Tokens or passwords in the protocol can each be fine. What matters is the rest of the protocol.
The way to analyze this is to assume that nothing on the device is safe. The question then becomes, what's the worst that can happen if (when) the device is compromised. If you save a token, then the user's credentials are safe and you can implement a method on the server of revoking a token. If you save a password hash, then (if I understand what you mean by this) the user will need to change passwords (and possibly a token needs to be revoked).
Also, if you tie the token to some sort of device identifier, then it would be harder to use the token from other devices. A password hash (unless it also included data about the device) would not be as hard to use on other devices. (These attacks would be available between the time the device was compromised and when corrective action was taken at the server.)
As you might guess, I agree with your colleagues about which of these two approaches is better. (I also should make clear that I don't think either of these is the most robust approach. You might want to do a little research -- search for mobile application security to find a lot of information about different approaches.)
In an effort to increase performance, I was thinking of trying to eliminate a plain 'session cookie', but encrypt all the information in the cookie itself.
A very simple example:
userid    = 12345
time      = now()
signature = hmac('SHA1', userid + ':' + time, secret)
cookie    = userid + ':' + time + ':' + signature
The time would be used for a maximum expirytime, so cookies won't live on forever.
Now for the big question: is this a bad idea?
Am I better off using AES256 instead? In my case the data is not confidential, but it must not be changed under any circumstances.
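For context, a fleshed-out version of the scheme above might look like the following Python sketch: it signs `userid:timestamp` with a server-side secret, verifies with a constant-time comparison, and enforces a maximum age. The secret value and field layout are illustrative only.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # must be strong and kept private

def make_cookie(userid, now=None):
    ts = str(int(now if now is not None else time.time()))
    payload = f"{userid}:{ts}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_cookie(cookie, max_age=3600, now=None):
    # Returns the userid on success, None on any failure.
    try:
        userid, ts, sig = cookie.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{userid}:{ts}"
    expected = hmac.new(SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # tampered or forged
    current = now if now is not None else time.time()
    if current - int(ts) > max_age:
        return None  # expired
    return userid
```

Note this gives integrity, not confidentiality: anyone holding the cookie can read the userid and timestamp, they just cannot alter them.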
EDIT
After some good critique and comments, I'd like to add this:
The 'secret' would be unique per-user and unpredictable (random string + user id ?)
The cookie will expire automatically (this is done based on the time value + a certain amount of seconds).
If a user changes their password, (or perhaps even logs out?) the secret should change.
A last note: I'm trying to come up with solutions to decrease database load. This is only one of the solutions I'm investigating, but it's kind of my favourite. The main reason is that I don't have to look into other storage mechanisms better suited for this kind of data (memcache, NoSQL), and it makes the web application a bit more 'stateless'.
10 years later edit
JWT is now a thing.
A signed token is a good method for anything where you want to issue a token and then, when it is returned, be able to verify that you issued the token, without having to store any data on the server side. This is good for features like:
time-limited-account-login;
password-resetting;
anti-XSRF forms;
time-limited-form-submission (anti-spam).
It's not in itself a replacement for a session cookie, but if it can eliminate the need for any session storage at all that's probably a good thing, even if the performance difference isn't going to be huge.
HMAC is one reasonable way of generating a signed token. It's not going to be the fastest; you may be able to get away with a simple hash if you know about and can avoid extension attacks. I'll leave you to decide whether that's worth the risk for you.
I'm assuming that hmac() in whatever language it is you're using has been set up to use a suitable server-side secret key, without which you can't have a secure signed token. This secret must be strong and well-protected if you are to base your whole authentication system around it. If you have to change it, everyone gets logged out.
For login and password-resetting purposes you may want to add an extra factor to the token, a password generation number. You can re-use the salt of the hashed password in the database for this if you like. The idea is that when the user changes passwords it should invalidate any issued tokens (except for the cookie on the browser doing the password change, which gets replaced with a re-issued one). Otherwise, a user discovering their account has been compromised cannot lock other parties out.
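A sketch of that generation-number idea (the names and the lookup callback are hypothetical): the generation is baked into the signed payload, so bumping it on password change invalidates every previously issued token at verification time.

```python
import hashlib
import hmac

SECRET = b"server-signing-key"  # illustrative

def issue_token(userid, pwd_generation):
    # Bind the token to the user's current password generation.
    payload = f"{userid}:{pwd_generation}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token, current_generation_for):
    userid, gen, sig = token.rsplit(":", 2)
    payload = f"{userid}:{gen}"
    expected = hmac.new(SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None
    # Reject tokens minted before the latest password change.
    if int(gen) != current_generation_for(userid):
        return None
    return userid
```

The one lookup of the current generation is the only state this reintroduces, and it can piggyback on the password-hash row you already fetch.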
I know this question is very old now but I thought it might be a good idea to update the answers with a more current response. For anyone like myself who may stumble across it.
In an effort to increase performance, I was thinking of trying to
eliminate a plain 'session cookie', but encrypt all the information in
the cookie itself.
Now for the big question: is this a bad idea?
The short answer is: no, it's not a bad idea; in fact, it is a really good idea and has become an industry standard.
The long answer is: it depends on your implementation. Sessions are great: they are fast, simple, and easily secured. A stateless system works well too, but it is a bit more involved to deploy and may be outside the scope of smaller projects.
Implementing an authentication system based on tokens (cookies) is very common now and works exceedingly well for stateless systems/APIs. It makes it possible to authenticate to many different applications with a single account, e.g. logging in to {unaffiliated site} with Facebook or Google.
Implementing an OAuth system like this is a BIG subject in and of itself, so I'll leave you with some documentation: the OAuth 2.0 docs. I also recommend looking into JSON Web Tokens (JWT).
Extra
A last note: I'm trying come up with solutions to decrease database
load. This is only one of the solutions I'm investigating
Redis would work well for offloading database queries. Redis is a simple in-memory storage system: very fast, (semi-)temporary storage that can help reduce DB hits.
Update: This answer pertains to the question that was actually asked, not to an imagined history where this question was really about JWT.
The most important deviations from today's signed tokens are:
The question as originally posed didn't evince any understanding of the need for a secret in token generation. Key management is vital for JWT.
The questioner stated that they could not use HTTPS, and so they lacked confidentiality for the token and binding between the token and the request. In the same way, even full-fledged JWT can't secure a plain HTTP request.
When the question was revised to explain how a secret could be incorporated, the secret chosen required server-side state, and so fell short of the statelessness provided by something like JWT.
Even today, this homebrew approach would be a bad idea. Follow a standard like JWT, where both the scheme and its implementations have been carefully scrutinized and refined.
Yes, this is a bad idea.
For starters, it's not secure. With this scheme, an attacker can generate their own cookie and impersonate any user.
Session identifiers should be chosen from a large (128-bit) space by a cryptographic random number generator.
They should be kept private, so that attackers cannot steal them and impersonate an authenticated user. Any request that performs an action that requires authorization should be tamper-proof. That is, the entire request must have some kind of integrity protection such as an HMAC so that its contents can't be altered. For web applications, these requirements lead inexorably to HTTPS.
What performance concerns do you have? I've never seen a web application where proper security created any sort of hotspot.
If the channel doesn't have privacy and integrity, you open yourself up to man-in-the-middle attacks. For example, without privacy, Alice sends her password to Bob. Eve snoops it and can log in later as Alice. Or, with partial integrity, Alice attaches her signed cookie to a purchase request and sends them to Bob. Eve intercepts the request and modifies the shipping address. Bob validates the MAC on the cookie, but can't detect that the address has been altered.
I don't have any numbers, but it seems to me that the opportunities for man-in-the-middle attacks are constantly growing. I notice restaurants using the wi-fi network they make available to customers for their credit-card processing. People at libraries and in work-places are often susceptible to sniffing if their traffic isn't over HTTPS.
You should not reinvent the wheel. The session handler that comes with your development platform is far more secure and certainly easier to implement. Cookies should always be very large random numbers that link to server-side data. A cookie that contains a user ID and timestamp doesn't help harden the session against attack.
This proposed session handler is more vulnerable to attack than using a Cryptographic nonce for each session. An attack scenario is as follows.
It is likely that you are using the same secret for your HMAC calculation across all sessions. That secret could then be brute-forced by an attacker logging in with his own account: by looking at his own session cookie, he can obtain everything except the secret, and then try candidate secrets until the HMAC value is reproduced. Using this secret he can rebuild an administrative cookie with user_id=1, which will probably grant him administrative access.
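To illustrate why a weak secret is fatal, the following sketch brute-forces a deliberately tiny 3-lowercase-letter HMAC secret almost instantly; a real secret must be long and random precisely to make this search infeasible.

```python
import hashlib
import hmac
import itertools
import string

def sign(payload, secret):
    # Same shape as the question's scheme: HMAC-SHA1 over the payload.
    return hmac.new(secret.encode(), payload.encode(),
                    hashlib.sha1).hexdigest()

# The attacker knows his own userid and timestamp, and sees the
# signature in his own cookie.
payload = "1:1700000000"
target = sign(payload, "abc")  # toy secret, for demonstration

def brute_force(payload, target, alphabet=string.ascii_lowercase,
                length=3):
    # Exhaustively try every secret of the given length.
    for candidate in itertools.product(alphabet, repeat=length):
        secret = "".join(candidate)
        if hmac.compare_digest(sign(payload, secret), target):
            return secret
    return None
```

Here the search space is 26^3 = 17,576 candidates; with a 128-bit random secret it is 2^128, which is the entire point.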
What makes you think this will improve performance vs. secure session IDs and retrieving the userid and time information from the server-side component of the session?
If something must be tamper-proof, don't put it in the toddlers' hands. As in, don't give it to the client at all, even with the tamper-proof locking.
Ignoring the ideological issues, this looks pretty decent. You don't have a nonce. You should add that. Just some random garbage that you store along with the userid and time, to prevent replay or prediction.
I've just seen Doctype's episode on CSRF.
In it they say that the best prevention for CSRF is to create a token from some user unique data (e.g. hash a session ID) and then POST that along with your request.
Would it be less secure to generate a difficult to guess value (e.g. GUID) and store that as a session variable and put it into the page as a hidden field?
Each time the page is loaded the value would change, but the test of the POSTed data would come before that.
This seems to me to be just as secure. Am I wrong?
Where the token comes from is probably not that interesting, as long as it is not guessable or determinable in any way. But watch out for generating a new token on each request, as this will mean your site will not work for a user who opens two or more browser tabs to it. By sticking to one token value for the duration of a user's session, you can circumvent this problem.
Changing the token on every request is arguably more secure, but the penalty could well be considered too high. Like almost anything in security, you often have to make trade-offs against the ease of the user's experience (find me one user who enjoys CAPTCHAs!). Finding the right balance for your application and your users is important to both your security and your usability.
There's some good reading on CSRF (and much more) over at the Open Web Application Security Project
Also bear in mind that if you have just one cross-site scripting vulnerability on a token-protected page, then your CSRF token is now useless. See also the OWASP XSS (Cross Site Scripting) Prevention Cheat Sheet.