Signed session cookies. A good idea?

In an effort to increase performance, I was thinking of trying to eliminate a plain 'session cookie', but encrypt all the information in the cookie itself.
A very simple example:
userid = 12345
time = now()
signature = hmac('SHA1', userid + ":" + time, secret)
cookie = userid + ':' + time + ':' + signature
The time would be used for a maximum expiry time, so cookies won't live on forever.
Now for the big question: is this a bad idea?
Am I better off using AES256 instead? In my case the data is not confidential, but it must not be changed under any circumstances.
EDIT
After some good critique and comments, I'd like to add this:
The 'secret' would be unique per-user and unpredictable (random string + user id ?)
The cookie will expire automatically (this is done based on the time value + a certain amount of seconds).
If a user changes their password (or perhaps even logs out?), the secret should change.
A last note: I'm trying to come up with solutions to decrease database load. This is only one of the solutions I'm investigating, but it's kind of my favourite. The main reason is that I don't have to look into other storage mechanisms better suited for this kind of data (memcache, NoSQL), and it makes the web application a bit more 'stateless'.
10 years later edit
JWT is now a thing.

A signed token is a good method for anything where you want to issue a token and then, when it is returned, be able to verify that you issued the token, without having to store any data on the server side. This is good for features like:
time-limited account login;
password resetting;
anti-XSRF forms;
time-limited form submission (anti-spam).
It's not in itself a replacement for a session cookie, but if it can eliminate the need for any session storage at all that's probably a good thing, even if the performance difference isn't going to be huge.
HMAC is one reasonable way of generating a signed token. It's not going to be the fastest; you may be able to get away with a simple hash if you know about and can avoid extension attacks. I'll leave you to decide whether that's worth the risk for you.
I'm assuming that hmac() in whatever language it is you're using has been set up to use a suitable server-side secret key, without which you can't have a secure signed token. This secret must be strong and well-protected if you are to base your whole authentication system around it. If you have to change it, everyone gets logged out.
For login and password-resetting purposes you may want to add an extra factor to the token, a password generation number. You can re-use the salt of the hashed password in the database for this if you like. The idea is that when the user changes passwords it should invalidate any issued tokens (except for the cookie on the browser doing the password change, which gets replaced with a re-issued one). Otherwise, a user discovering their account has been compromised cannot lock other parties out.
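As a rough sketch of how such a token could be generated and checked (assuming PHP; the field layout, function names and the password-generation lookup are illustrative, not part of any standard):
<?php
// Build a signed token: user id, expiry time, and a password "generation"
// number that changes whenever the user changes their password.
function issue_token(int $userId, int $pwGeneration, string $secret, int $ttl = 86400): string {
    $expires = time() + $ttl;
    $payload = $userId . ':' . $expires . ':' . $pwGeneration;
    return $payload . ':' . hash_hmac('sha256', $payload, $secret);
}

// Verify a token: recompute the HMAC over the payload, compare in constant
// time, then check the expiry and the current password generation.
function verify_token(string $token, string $secret, callable $lookupPwGeneration): ?int {
    $parts = explode(':', $token);
    if (count($parts) !== 4) { return null; }
    [$userId, $expires, $pwGeneration, $sig] = $parts;
    $payload = $userId . ':' . $expires . ':' . $pwGeneration;
    if (!hash_equals(hash_hmac('sha256', $payload, $secret), $sig)) { return null; }
    if ((int)$expires < time()) { return null; }
    if ((int)$pwGeneration !== $lookupPwGeneration((int)$userId)) { return null; }
    return (int)$userId;   // authenticated user id
}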

I know this question is very old now, but I thought it might be a good idea to update the answers with a more current response, for anyone like myself who may stumble across it.
In an effort to increase performance, I was thinking of trying to eliminate a plain 'session cookie', but encrypt all the information in the cookie itself.
Now for the big question: is this a bad idea?
The short answer is: no, it's not a bad idea. In fact, this is a really good idea and has become an industry standard.
The long answer is: it depends on your implementation. Sessions are great: they are fast, simple, and easily secured. A stateless system works well too, but it is a bit more involved to deploy and may be outside the scope of smaller projects.
Implementing an authentication system based on tokens (cookies) is very common now and works exceedingly well for stateless systems/APIs. It makes it possible to authenticate with many different applications using a single account, e.g. logging in to {unaffiliated site} with Facebook / Google.
Implementing an OAuth system like this is a BIG subject in and of itself, so I'll leave you with the OAuth2 docs. I also recommend looking into JSON Web Tokens (JWT).
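For illustration, a minimal sketch of issuing and checking a JWT, assuming the widely used firebase/php-jwt library (the API shown matches its recent major versions; older versions differ slightly):
<?php
require 'vendor/autoload.php';

use Firebase\JWT\JWT;
use Firebase\JWT\Key;

$secret = 'replace-with-a-strong-random-secret';   // load from secure config in practice

// Issue a token signed with HMAC-SHA256.
$jwt = JWT::encode([
    'sub' => 12345,            // user id
    'iat' => time(),           // issued at
    'exp' => time() + 3600,    // expires in one hour
], $secret, 'HS256');

// Later: verify the signature and expiry (throws an exception on failure).
$claims = JWT::decode($jwt, new Key($secret, 'HS256'));
echo $claims->sub;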
extra
A last note: I'm trying to come up with solutions to decrease database load. This is only one of the solutions I'm investigating
Redis would work well for offloading database queries. Redis is a simple in-memory storage system: very fast, (semi-)temporary storage that can help reduce DB hits.
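A rough sketch of that pattern (assuming the phpredis extension and PDO; connection details, key names and the TTL are placeholders):
<?php
// Placeholder connections for illustration only.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Cache a user's profile for a few minutes so repeated requests
// don't hit the database every time.
function get_user_profile(Redis $redis, PDO $db, int $userId)
{
    $key = "user:profile:$userId";
    $cached = $redis->get($key);
    if ($cached !== false) {
        return json_decode($cached, true);            // cache hit
    }
    $stmt = $db->prepare('SELECT id, name, email FROM users WHERE id = ?');
    $stmt->execute([$userId]);
    $profile = $stmt->fetch(PDO::FETCH_ASSOC);
    $redis->setex($key, 300, json_encode($profile));  // cache for 5 minutes
    return $profile;
}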

Update: This answer pertains to the question that was actually asked, not to an imagined history where this question was really about JWT.
The most important deviations from today's signed tokens are:
The question as originally posed didn't evince any understanding of the need for a secret in token generation. Key management is vital for JWT.
The questioner stated that they could not use HTTPS, and so they lacked confidentiality for the token and binding between the token and the request. In the same way, even full-fledged JWT can't secure a plain HTTP request.
When the question was revised to explain how a secret could be incorporated, the secret chosen required server-side state, and so fell short of the statelessness provided by something like JWT.
Even today, this homebrew approach would be a bad idea. Follow a standard like JWT, where both the scheme and its implementations have been carefully scrutinized and refined.
Yes, this is a bad idea.
For starters, it's not secure. With this scheme, an attacker can generate their own cookie and impersonate any user.
Session identifiers should be chosen from a large (128-bit) space by a cryptographic random number generator.
They should be kept private, so that attackers cannot steal them and impersonate an authenticated user. Any request that performs an action that requires authorization should be tamper-proof. That is, the entire request must have some kind of integrity protection such as an HMAC so that its contents can't be altered. For web applications, these requirements lead inexorably to HTTPS.
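For example, a minimal sketch (assuming PHP 7.3+ for the options form of setcookie()):
<?php
// A session identifier should be unguessable: 128 bits from a CSPRNG,
// sent only over HTTPS and kept out of reach of scripts.
$sessionId = bin2hex(random_bytes(16));   // 16 bytes = 128 bits of entropy

setcookie('session', $sessionId, [
    'expires'  => time() + 3600,
    'path'     => '/',
    'secure'   => true,      // HTTPS only
    'httponly' => true,      // not readable from JavaScript
    'samesite' => 'Lax',
]);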
What performance concerns do you have? I've never seen a web application where proper security created any sort of hotspot.
If the channel doesn't have privacy and integrity, you open yourself up to man-in-the-middle attacks. For example, without privacy, Alice sends her password to Bob; Eve snoops it and can log in later as Alice. Or, with only partial integrity, Alice attaches her signed cookie to a purchase request and sends it to Bob; Eve intercepts the request and modifies the shipping address. Bob validates the MAC on the cookie, but can't detect that the address has been altered.
I don't have any numbers, but it seems to me that the opportunities for man-in-the-middle attacks are constantly growing. I notice restaurants using the wi-fi network they make available to customers for their credit-card processing. People at libraries and in workplaces are often susceptible to sniffing if their traffic isn't over HTTPS.

You should not reinvent the wheel. The session handler that comes with your development platform is far more secure and certainly easier to implement. Cookies should always be very large random numbers that link to server-side data. A cookie that contains a user ID and timestamp doesn't help harden the session against attack.
This proposed session handler is more vulnerable to attack than using a cryptographic nonce for each session. An attack scenario is as follows.
It is likely that you are using the same secret for your HMAC calculation across all sessions. That secret could therefore be brute-forced by an attacker logging in with his own account: by looking at his own session cookie he can obtain everything except the secret, and can then brute-force the secret until the HMAC value is reproduced. Using this secret he can build an administrative cookie and change his user_id to 1, which will probably grant him administrative access.

What makes you think this will improve performance vs. secure session IDs and retrieving the userid and time information from the server-side component of the session?
If something must be tamper-proof, don't put it in the toddlers' hands. As in, don't give it to the client at all, even with the tamper-proof locking.
Ignoring the ideological issues, this looks pretty decent. You don't have a nonce. You should add that. Just some random garbage that you store along with the userid and time, to prevent replay or prediction.

Related

Is it secure if I store a sha1 password and a userID in secure flag & https Cookies?

I'm building a login system for my users, so I decided to use cookies to store the user ID and the password (as a SHA-1 hash). But I have one question: if a random user gets the value of both cookies and their names, can he create them with, for example, a JS function and get into the account?
Is it secure if I store a sha1 password and a userID in secure flag & https Cookies?
No.
I suppose you want to know why? First, define "safe." What threat are you trying to mitigate?
Once the credentials are hashed, there's no way to get the plaintext back. Since you can't turn the hashed string back into plaintext, we can assume the intent is to compare it to the same hashed string held at the server, yes? That's fine if the threat you want to mitigate is somebody discovering the password and user ID, and you use something like SHA-256 instead of SHA-1.
But if the threats you want to mitigate include replay attacks or session hijacking, then these values are no better than any other fixed string. In fact, they are worse. If the user is obliged to provide their password for each HTTPS request, it sucks for them, but at least the app can throttle login attempts and foil a brute-force attack. If the credentials are hashed and exchanged in cookies, then they are exposed to adversaries, and if obtained they can be subjected to brute-force cracking or looked up in a rainbow table. So, on net, sending the credentials back out, even encrypted or hashed, kinda sucks.
The question doesn't mention salting or session keying. An adversary looking at the cookies will see that identical values are returned over multiple sessions. To prevent a replay attack you'd need to append a nonce before hashing, to act as a salt so the hashed string changes each time. But that doesn't solve the problem of sending a transformed credential pair outside the control of your own server, or the fact that this is far worse than just using a long random string for the same purpose.
Furthermore, the hash of the credentials doesn't time out until and unless the user changes their password - at which point it tells an adversary that the user just changed their password, which is a great piece of info with which to social-engineer the IT support person who does password recovery. "Hi, I just changed my password and locked the account. Can you reset it? Employee ID? Well, if I had access I could look it up. Can you just reset it? I'm really me. How else would anyone know I just changed it?"
The support person would never guess the answer to that question is "because Victor's app design told me it was just changed" and might just reset it for the adversary. But if the session is kept alive by a session cookie or a tripartite login token, then the unique string representing that user's session mitigates all of the threats discussed so far:
An attacker can't reverse it or crack it to discover credentials because they aren't in there.
It can't be used for session replay since it is generated to be unique for each session.
It expires within a short period of time so it can't be resurrected from browser cache or history.
But rather than answer the question as asked, I'd like to answer the question "Is there an authoritative source for comprehensive web application security best practices?" That's a much easier question to answer and it potentially answers your initial question if you follow through with the required study.
Please see: Open Web Application Security Project (OWASP).
In particular, please see the Session Management Cheat Sheet and the Authentication Management Cheat Sheet as these cover much of what you are trying to do here.
OWASP periodically analyzes all reported breaches for a recent period and then publishes the Top 10 Vulnerability List based on the root causes that showed up most often during the sample period. When I QA new web sites on behalf of clients they almost always have several of the defects in OWASP's Top 10 list. If as a developer or web site development company you want to stand head and shoulders above the crowd and get a lot of repeat business, all you need to do is make sure the site you deliver doesn't have any defects in OWASP's list. The question suggests any application built as proposed would have at least 4 or 5 defects from the OWASP Top 10 so that's an aspirational goal for now. Aim high.

Are breaches of JWT-based servers more damaging?

UPDATE: I have concluded my research on this problem and posted a lengthy blog entry explaining my findings: The Unspoken Vulnerability of JWTs. I explain how the big push to use JWTs for local authentication is leaving out one crucial detail: that the signing key must be protected. I also explain that unless you're willing to go to great lengths to protect the keys, you're better off either delegating authentication via OAuth or using traditional session IDs.
I have seen much discussion of the security of JSON Web Tokens -- replay, revocation, data transparency, token-specified alg, token encryption, XSS, CSRF -- but I've not seen any assessment of the risk imposed by relying on a signing key.
If someone breaches a server and acquires a JWT signing key, it seems to me that this person could thereafter use the key to forge unexpired JWTs and secretly gain access. Of course, a server could look up each JWT on each request to confirm its validity, but servers use JWTs exactly so they don't have to do this. The server could confirm the IP address, but that also involves a lookup if the JWT is not to be trusted, and apparently doing this precludes reliable mobile access anyway.
Contrast this with a breach of a server based on session IDs. If this server is hashing passwords, the attacker would have to snag and use a session ID separately for each user before it expires. If the server were only storing hashes of the session IDs, the attacker would have to write to the server to ensure access. Regardless, it seems that the attacker is less advantaged.
I have found one architecture that uses JWTs without this disadvantage. A reverse proxy sits between untrusted clients externally and a backend collection of microservices internally, described here by Nordic APIs. A client acquires an opaque token from an authorization server and uses that token to communicate with the server app for all requests. For each request, the proxy translates the opaque token into a JWT and caches their association. The external world never provides JWTs, limiting the damage wrought by stealing keys (because the proxy goes to the authentication server to confirm the opaque tokens). However, this approach requires dereferencing each client token just as session IDs require per-request dereferencing, eliminating the benefit of JWTs for client requests. In this case, JWTs just allow services to pass user data among themselves without having to fully trust one another -- but I'm still trying to understand the value of the approach.
My concern appears to apply only to the use of JWTs as authentication tokens by untrusted clients. Yet JWTs are used by a number of high-profile APIs, including Google APIs. What am I missing? Maybe server breaches are rarely read-only? Are there ways to mitigate the risk?
I believe you're thinking about this the wrong way. Don't get me wrong, it's great that you're considering security; however, the way you're approaching it - double-checking things server-side, adding extra checks that defeat the objective of stateless sessions, and so on - appears to be a one-way street towards the end of your own sanity.
To sum up the two standard approaches:
JWTs are sessionless state objects, MAC'd by a secret key held server side.
Traditional Session Identifiers are stored either in memory or in a database server-side, and as you say are often hashed to prevent sessions from being hijacked should this data be leaked.
You are also right that write access is often harder for an attacker to achieve. The reason is that database data is often extracted from a target system via a SQL injection exploit. This almost always provides read access to data, but it is harder to insert data using this technique, although not impossible (some exploits actually result in full root access of the target machine being achieved).
If you have a vulnerability that allows access to the key when using JWTs or one that allows database tables to be written to when using session identifiers, then it's game over - you are compromised because your user sessions can be hijacked.
So it is not necessarily more damaging; it all depends on the depth of the vulnerability.
Double-check that the security of your JWT keys aligns with your risk appetite:
Where are they stored?
Who has access?
Where are backups stored?
Are different keys used in pre-production and production deployments of your app?
The ways to mitigate are, as good practice dictates for any web app:
Regular security assessments and penetration testing.
Security code reviews.
Intrusion detection and prevention (IDS/IPS).
WAF.
These will help you evaluate where your real risks lie. It is pointless to concentrate so heavily on one particular aspect of your application, because this will lead to the neglect of others, which may well pose a higher risk to your business model. JWTs aren't dangerous and don't necessarily carry more risk than other components of your system; however, if you've chosen to use them, you should make sure you're using them appropriately. Whether you are or not comes down to the particular context of your application, and that is difficult to assess in a general sense, so I hope my answer guides you in the right direction.
When an attacker is able to get hold of the signing key in a JWT-based system, it means he has access to the server backend itself, and in that case all hope is lost. By comparison, when the same attack succeeds against a session-based system, the attacker would be able to intercept username/password authentication requests to the backend, and/or generate session IDs himself, and/or change the validation routines required to validate the session IDs, and/or modify the data the session ID points to. Any security mechanism used to mitigate this works as well for session systems as for JWT systems.

Need cookie to remember two-factor authentication success (not persistent login)

I've read a lot here and in other places about using a cookie for a "remember me" option, but what I'm looking for is a way to design a cookie to record the success of a two-factor authentication. This is what, for example, Google does: if the second step succeeds (e.g., you entered the code that you received via SMS), then it sets a cookie good for a period of time (e.g., 30 days) that means the second step can be bypassed. Call this the "verification cookie." My understanding is that if in that time you log out and log in again, it won't do the second step, but only the first step. (I tested this and that seemed to be the case.)
My question is how to design this cookie. One idea is to put the user ID and a 128-bit random number in the cookie, and then to store that number in the database along with the user ID. This is what Charles Miller recommends (http://fishbowl.pastiche.org/2004/01/19/persistent_login_cookie_best_practice/) for persistent-login cookies.
However, that's not good enough, I think. The problem is that, since the user is using two-factor authentication, whatever cookie is used to record that the second step was successful should be safer than would be the case with one-factor authentication.
What I want to avoid is this: The cracker has the hashed/salted passwords from the database, and has somehow gotten the password. If he/she has that much, I assume that the 128-bit random number that was in the verification cookie is available as well. (If the cracker has gotten the password some other way, and doesn't have the database, then the verification cookie is safe unless he/she has physical access to the computer. I'm only worried about the compromised database case.)
Maybe an idea is to encrypt the 128-bit random number? (Needs to be 2-way -- not a hash.) The encryption key would be accessible to the application, maybe stored however the database credentials are.
Has anyone implemented what I'm calling a verification cookie (not a persistent login cookie) and can tell me (us) how it was done?
UPDATE: Thinking about this, what I think would be secure enough is this: the cookie consists of a userID and a 128-bit random number - call it R.
The database contains the password and R, each hashed and salted (e.g., using PhPass). R is then considered to be a second password. Benefit: even if the first password is bad (e.g., "password1"), R is a very good password. The stored hash of R really can't be cracked, so it should not be a worry. (I was unnecessarily worried about it, I think.)
I think you have a pretty good plan here. Generally speaking the cookie should be completely random and should not contain any data that is used by the server. The idea is that anything that is client controlled can be tampered with. Even when the value is encrypted, I've seen attackers twiddle bits and get the tampered data to decrypt to a different user's ID (yeah that one scared me a bit). That being said I think Charlie Miller's suggestion is fine, because 128-bits is a good amount of entropy. Me personally, I would go with completely random bytes for a cookie such that no pattern emerges whatsoever.
Our last implementation of a verification cookie was a completely random 256-bit value printed in ASCII hex and mapped to a user ID and session information in our DB. We kept the session information encrypted with a secret key, so if an attacker SQL-injected our DB it would all be useless encrypted info. Of course a total compromise of the DB machine would provide access to the key, but that is a lot harder to do because it involves multiple exploits and pivots.
Some good advice is not to over-think it too much. We ran into implementation problems because we "over-engineered", and in the end we didn't get much security advantage anyway. A simple random number is the best you can do (as long as it is long enough to provide sufficient entropy).
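A minimal sketch of that approach (assuming PHP; the table, column and cookie names are made up for illustration):
<?php
// Issue a verification cookie after the second factor succeeds:
// a 256-bit random value, stored server-side only as a hash.
function issue_verification_cookie(PDO $db, int $userId): void {
    $token = bin2hex(random_bytes(32));     // 256 bits, ASCII hex
    $stmt = $db->prepare('INSERT INTO twofactor_tokens (user_id, token_hash, expires_at) VALUES (?, ?, ?)');
    $stmt->execute([$userId, password_hash($token, PASSWORD_BCRYPT), time() + 30 * 86400]);
    setcookie('2fa_verified', $userId . ':' . $token, time() + 30 * 86400, '/', '', true, true);
}

// On login, check whether the cookie lets the user skip the second step.
function second_factor_already_verified(PDO $db, string $cookie): bool {
    $parts = explode(':', $cookie, 2);
    if (count($parts) !== 2) { return false; }
    [$userId, $token] = $parts;
    $stmt = $db->prepare('SELECT token_hash FROM twofactor_tokens WHERE user_id = ? AND expires_at > ?');
    $stmt->execute([(int)$userId, time()]);
    foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $hash) {
        if (password_verify($token, $hash)) { return true; }   // one bcrypt check per stored device token
    }
    return false;
}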
There is a good answer to this problem on the security stackexchange site (which is maybe where this question belongs, anyway):
https://security.stackexchange.com/questions/31327/how-to-remember-a-trusted-machine-using-two-factor-authentication-like-googles

Which method is safer: saving the password on the device or using a token?

We are making an app for Android and iPhone. One method is to save the password hash on the local device and log in to the remote server every time (with a token). The other method is to log in once and then keep the token for communicating with the server. The app saves the token on the device, so if the user doesn't log out manually, the token won't expire.
Some teammates think the latter method is better than saving the password hash on the local device, but I think keeping the token is also unsafe. Could anyone please give us some suggestions?
We probably need a little more detail to evaluate what you're considering. Either could in theory be built well. There are several things to consider.
First, it is best to have your authentication token expire periodically. This closes the window on stolen tokens.
Authentication should always be challenge/response in order to avoid replay attacks. You should generally not send the token itself. You send the response to a challenge that proves you have it.
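Conceptually, that exchange can look like the sketch below (both sides shown in PHP purely for illustration; a real client would do the equivalent in its own language, and the challenge would be single-use and short-lived):
<?php
// Shared token the client obtained at its one-time login (placeholder values).
$deviceToken = 'token-issued-at-login';
$storedToken = 'token-issued-at-login';   // the server's copy, looked up by user/device

// Server: issue a one-time random challenge and remember it briefly.
$challenge = bin2hex(random_bytes(16));

// Client: prove possession of the token without transmitting the token itself.
$response = hash_hmac('sha256', $challenge, $deviceToken);

// Server: recompute with its own copy and compare in constant time.
$authenticated = hash_equals(hash_hmac('sha256', $challenge, $storedToken), $response);
var_dump($authenticated);   // bool(true)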
Of course you start with TLS as a transport layer. Ideally you should validate your certs. Together, this alone can protect against a wide variety of attacks. Not all attacks; TLS is not magic security dust, but it does provide a very nice "belt and suspenders" defense in depth.
It's interesting that you're saving the "password hash." How are you using this and how are you salting it? In particular, if many people have the password "password1", will all of them have the same hash? Without TLS, this can open you up to significant problems if you're sending the hash itself across the wire.
On iPhone, you should store sensitive credentials in the keychain. SFHFKeychainUtils makes a decent wrapper around the keychain (I've got my beef with it, but it's OK). Unfortunately, I don't believe Android has a similar OS-provided credential store. (No, iPhone's keychain does not protect against all kinds of attacks, but it does provide useful protections against certain kinds of attacks and is worth using.)
You want your protocol to make it possible to deauthenticate a device that has been stolen. That could take the form of the user changing the password, or revoking a token, but the user needs a way to achieve this.
Again, it's hard to evaluate a broad, hypothetical security approach. Tokens or passwords in the protocol can each be fine. What matters is the rest of the protocol.
The way to analyze this is to assume that nothing on the device is safe. The question then becomes, what's the worst that can happen if (when) the device is compromised. If you save a token, then the user's credentials are safe and you can implement a method on the server of revoking a token. If you save a password hash, then (if I understand what you mean by this) the user will need to change passwords (and possibly a token needs to be revoked).
Also, if you tie the token to some sort of device identifier, then it would be harder to use the token from other devices. A password hash (unless it also included data about the device) would not be as hard to use on other devices. (These attacks would be available between the time the device was compromised and when corrective action was taken at the server.)
As you might guess, I agree with your colleagues about which of these two approaches is better. (I also should make clear that I don't think either of these is the most robust approach. You might want to do a little research -- search for mobile application security to find a lot of information about different approaches.)

Login system, security

I need to make a login system and, having basically no previous knowledge of how it's done (with security in mind), I studied it on the internet. The way I would do it now is something like this:
The server has login information in a database table: a username and a password hash per user (encrypted with SHA-224, for example).
When the client wants to authenticate, the password is encrypted with SHA-224 (client-side) and sent with the username to the server to verify a match in the database.
If the user ticked the "Remember me" option, an authentication key is generated on the server and inserted into a database along with the IP of the client.
The authentication key is sent to the client and stored in cookies.
Now, when the client returns, the authentication key from the cookies is sent to the server; the server finds it in the database and checks whether the IPs match as well. If they do, the user is authenticated and a new authentication key is generated and sent to the user (and stored in cookies) for the next visit.
My questions are:
How does encrypting the password make this any safer? The hash can still be captured on the way from client to server and misused just as well as if it were plaintext. I know that this is an elementary question, but I somehow couldn't find an answer to it.
Is this security system secure enough? (or better yet - Did I get it right?)
Why does hashing a password make the system more secure
Hashing is not equal to encryption. Encrypted data can be decrypted back into plain text. Hashed data cannot be decrypted.
By hashing your users' passwords, nobody can see what passwords are used. So if your data gets stolen, the hashes cannot be decrypted by the hacker. The same goes for the system administrator: he/she cannot 'look up' a password. This can be an all too common scenario in shared hosting environments.
Storing passwords
The easiest way to get your password storage scheme secure is by using a standard library.
Because security tends to be a lot more complicated, with more invisible ways to screw up, than most programmers can tackle alone, using a standard library is almost always the easiest and most secure (if not the only) available option.
The good thing is that you do not need to worry about the details, those details have been programmed by people with experience and reviewed by many folks on the internet.
For more information on password storage schemes, read Jeff's blog post: You're Probably Storing Passwords Incorrectly
Whatever you do, if you go for the 'I'll do it myself, thank you' approach, do not use MD5 anymore. It is a nice hashing algorithm, but broken for security purposes.
Currently, using crypt with CRYPT_BLOWFISH is the best practice.
From my answer to: Help me make my password storage safe
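A minimal sketch of that advice, assuming PHP 5.5+ (the built-in password API wraps crypt() with CRYPT_BLOWFISH and handles salting for you):
<?php
// Registration: store only the bcrypt hash; the salt is generated
// and embedded in the hash string automatically.
$hash = password_hash($_POST['password'], PASSWORD_BCRYPT);
// ... save $hash in the users table ...

// Login: verify the submitted password against the stored hash.
if (password_verify($_POST['password'], $hash)) {
    // password is correct; start the session
}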
As for the infamous 'remember me' option:
Create a random token and give it to the user in the form of a cookie.
If the user presents a cookie with this token, you give them access. Key is to only accept each token once. So after it is used, replace it with a new random token.
This token is, in essence, just another password. So in order to keep it safe, you do not store the token, but a hash of it. (just as you did with the password)
Your suggestion of binding the cookie to an IP address will unfortunately not work. Many people have dynamic IP addresses, and some even change from request to request during a single session (this is, for example, caused by load-balancing proxies).
Sending passwords to the server
The only method currently usable for sending a password from a web browser to a server is over an SSL-secured connection. Anything else will not be safe, as you cannot guarantee the integrity of the solution on the client side.
Some points I want to add:
the hashing of the password is not done on the client. You cannot do it reliably: the necessary technique for computing the hash (JavaScript in your case) might not be available, and you cannot trust the result. Besides, if somebody can retrieve the hashes of the passwords in your database, he could just log in without knowing the actual passwords.
make sure to use SSL or another secure transport for transmitting the given passwords from the client to the server. SSL is a good idea for everything after all.
you should not use a single plain hash algorithm for storing the passwords in the database. Have a look at HMAC; that is far better. Additionally, read about salts in cryptography.
Never ever invent your own crypto mechanisms. Use someone else's. Crypto is beyond tricky, and unless you're Bruce Schneier, you have an extremely slim chance of improving it, while having a huge chance of screwing it up royally.
Do not encrypt passwords, hash them.
If you're using hashes, salt them.
If you don't have to use straight hashes, use HMAC; they're much more resistant to precalculated attacks.
If you're sending stuff across an insecure link, add a NONCE to the transmission to prevent replay attacks. This goes for both client->server and server->client.
If you're using salts and nonces, make sure they have high entropy. 'secret' is not a good one. Make it random, make it long, make it use large character sets. The extra computation cost is minimal, but the security you gain from it is enormous. If you're not sure how, use a random password generator, and then use ent to measure entropy.
Do NOT use a timestamp as a nonce, unless you have a very specific need and really know what you're doing.
Use session protection. SSL isn't perfect, but it's a helluva lot better than nothing.
If you're using SSL, make sure to disable weak protocols. An SSL session starts with 'offerings' of the lists of ciphers each side can do. If you let clients use a weak one, an attacker will definitely use that.
