Whether it is cookie signing, the database superuser's username and password, or JWT signing, the server side holds secret keys that secure these operations.
However, this key is often just a plain string in the server code. I have been wondering: is there any further procedure to protect these key strings on the server side? For example, if an attacker managed to get SSH access to the server, they could easily read the key. Or should I assume that I have already lost everything once an attacker can access the server?
(This leads me to assume that putting secrets directly in the code is what everyone does and is the common practice.)
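For reference, this is the kind of thing I mean by "just a plain string", versus the obvious alternative of loading the key from the environment at startup. A minimal sketch in Python; APP_SIGNING_KEY is a made-up variable name, and a secrets manager or a file outside the repository would work the same way:

```python
import os

# Read the signing key from the environment instead of hard-coding it in source.
# APP_SIGNING_KEY is a hypothetical variable name.
SIGNING_KEY = os.environ.get("APP_SIGNING_KEY")
if SIGNING_KEY is None:
    raise RuntimeError("APP_SIGNING_KEY is not set; refusing to start")
```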
Related
I am working on an application where scalability is a big concern. In the past I've used session-based authentication, but decided to go with a stateless server this time around in order to facilitate horizontal scaling.
I am not a security expert, but in researching JWTs, it began to seem like they are very insecure. The whole reason we hash passwords is so that if our database is compromised, the attacker cannot impersonate a user. With JWTs, we store a secret on the server. If an attacker gains access to that secret, can't they impersonate any user they want? Doesn't this mean that using JWTs has the same level of security as storing plaintext passwords?
I have read that people will sometimes use Redis to cross-reference JWTs, but then the server isn't stateless, and I fail to see the benefit of using JWTs at all.
Could someone help clarify this issue for me?
Session-based authentication systems, at least any that are worth using, also store a secret on the server. Just as with a JWT, that secret is used to sign the data stored in the session cookie. So this is no different from a JWT.
All of this is totally unrelated to password storage, as the password is only used when you don't have a cookie/JWT.
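To make that parallel concrete, here is a rough sketch (plain Python, illustrative names only) showing that a signed session cookie and an HS256 JWT both boil down to "HMAC the payload with a server-side secret":

```python
import hashlib
import hmac

SERVER_SECRET = b"keep-this-out-of-source-control"  # same idea for sessions and JWTs

def sign_cookie_value(value: str) -> str:
    # Classic signed-cookie scheme: value + HMAC of the value.
    sig = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{sig}"

def verify_cookie_value(signed: str) -> str | None:
    value, _, sig = signed.rpartition(".")
    expected = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None

# An HS256 JWT is the same trick with a standardized envelope:
# base64url(header) + "." + base64url(payload) + "." + HMAC-SHA256(secret, header.payload)
```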
EDIT:
Not sure what to say about using Redis in conjunction with a JWT... What is being stored in Redis, the token? That seems pointless, as all the server needs is the secret to verify the token.
Here are some of the benefits of using a JWT:
It's stateless, as you've already mentioned
It's not subject to CSRF/XSRF attacks. These attacks work by tricking your browser into sending a cookie to a server that didn't generate the cookie. This can't happen with a JWT, because the browser doesn't send the JWT automatically the way it does with cookies (see the sketch after this list).
JWTs are standardized. There is a well-defined way to generate them, which means JWTs are more portable and the process has been vetted by the security community.
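To illustrate the CSRF point, a minimal client sketch (Python with the requests library; the endpoint is hypothetical). The token only travels because the code explicitly attaches it, unlike a cookie, which the browser adds to every matching request:

```python
import requests

token = "eyJhbGciOiJIUzI1NiJ9..."  # JWT obtained at login (truncated placeholder)

# The JWT is sent only because we explicitly put it in the Authorization header;
# a forged cross-site request has no way to make this happen.
resp = requests.get(
    "https://api.example.com/profile",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())
```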
The server consuming a JWT (the resource server) does not need access to any secret. All it needs is the public key that corresponds to the private key with which the token was digitally signed.
The authorization server that issues the token obviously needs to keep its signing key secret. But the nice thing about token-based authentication is that this server can be run by an external party with far more resources and expertise to keep such things secret (Google, Facebook, Microsoft, etc.).
The resource server does not need to check the database to validate the token, as it would with a username and password. This helps the scalability of the system and takes away a single point of failure.
If a client/user's JWT is stolen, an attacker can impersonate the client/user until the token expires. That is a good reason to keep token lifetimes short.
I don't see the point of storing JWTs in a Redis cache. There's no need to share tokens between servers, as each call comes with a token in the Authorization HTTP header. Storing them in a cache only increases the risk of tokens being stolen.
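As a sketch of that split, assuming Python with the PyJWT and cryptography libraries (key handling is simplified for illustration): the authorization server signs with its private key, and the resource server verifies with nothing but the public key.

```python
import datetime

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# --- Authorization server: holds the private key and issues tokens ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
token = jwt.encode(
    {"sub": "user-42", "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15)},
    private_pem,
    algorithm="RS256",
)

# --- Resource server: only ever sees the public key ---
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
claims = jwt.decode(token, public_pem, algorithms=["RS256"])  # raises if invalid or expired
print(claims["sub"])
```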
So I want to have a play at making a tokenised login system, but I wanted to get my head around some of the nitty-gritty first. Please don't tell me to use OAuth etc., as that isn't what I'm trying to achieve. I want to understand how these systems best work.
So this is the basic understanding of how I'd see the system working:
User registers on their phone application, which sends the username and password to a server via HTTPS. The server then generates two tokens, a public token and a private token which are both returned to the client.
The client then stores both of these tokens locally using localStorage.
So for subsequent page requests that need authentication, the client will send the public token and a hashed version of the private token to the server.
The server looks up the public token in the database, recomputes the hash from its stored private token using the same inputs (something like a timestamp), and compares the two. If they match, the user is authenticated.
Now, this is fine. I understand that this should in theory be pretty secure, from the point of view that the private token is only ever transmitted once, over HTTPS, so the chances of someone getting hold of it and authenticating as the user are minimal.
Now comes my real question on security: how can you protect user authentication if someone were to gain access to your database only (assuming the database isn't on the same server as the server-side code)? If I were to get hold of the database, I'd have the username, the encrypted password, and the public and private tokens. I could then use those two tokens to authenticate myself as the user. What's the way to avoid this?
Hope that made sense!
Update
Okay, so is this process secure enough:
User registers by sending username and password over HTTPS.
The password is hashed with bcrypt and stored in the database
User comes to login and enters their username and password
The password is checked with bcrypt against the hash stored in the database
If there is a match, a JWT is generated using a secret key and sent back to the client
All future requests containing this token are verified against the secret key, and if the signature is valid, the user is authenticated.
All of the above would be over HTTPS.
Is that secure enough? It cuts out the issue of having a token stored on the server, as it would only be stored on the client's system, and the passwords are hashed in the database in case it were ever leaked.
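For concreteness, here is roughly how I picture that flow, as a minimal sketch assuming Python with the bcrypt and PyJWT libraries (names, storage, and secret handling are purely illustrative, and everything would run behind HTTPS):

```python
import datetime
import os

import bcrypt
import jwt  # PyJWT

JWT_SECRET = os.environ["JWT_SECRET"]  # server-side secret, never sent to clients

def register(username: str, password: str, db: dict) -> None:
    # Store only the bcrypt hash, never the password itself.
    db[username] = bcrypt.hashpw(password.encode(), bcrypt.gensalt())

def login(username: str, password: str, db: dict) -> str | None:
    stored_hash = db.get(username)
    if stored_hash is None or not bcrypt.checkpw(password.encode(), stored_hash):
        return None
    # Issue a short-lived signed token; nothing about it is stored server-side.
    return jwt.encode(
        {"sub": username, "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
        JWT_SECRET,
        algorithm="HS256",
    )

def authenticate(token: str) -> str | None:
    try:
        return jwt.decode(token, JWT_SECRET, algorithms=["HS256"])["sub"]
    except jwt.InvalidTokenError:
        return None
```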
Imagine this situation: your users give you their credentials (username/password) to access a third-party service. You have to produce those credentials when connecting to that service, so you cannot just store a salted hash.
The environment is Grails, with PostgreSQL as the DB. From the programmer's point of view, ideally the username/password would still be part of the domain objects (so they are easy to use).
What would be the best practice to securely store them?
*(I'm not a security or crypto expert; this is my understanding based on my reading and research, but is very far from authoritative advice. Get the advice of web-app security professionals and get a proper security audit.)*
The best you can really do is have your app unable to decrypt them when the user isn't actively logged in.
Encrypt the credentials with a key based partially on the user's raw, unhashed password. You never store the password the user uses to log into your service, of course, so you only have access to it for a brief moment during authentication (and only then because the web hasn't caught up with the mid-'90s and adopted sane challenge-response authentication schemes). At the moment of user log-in, you can decrypt the saved credentials for the 3rd-party services and store them in the volatile server-side session for the user.
For the encryption key, you might hash the username and the user's raw password with a large-ish salt value that you generate randomly for each (user, 3rd-party-credential) pair and store alongside the encrypted credentials. This salt should be different from the salt used for their stored password hash.
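A rough sketch of that idea, assuming Python with the cryptography library (the iteration count, names, and credential format are illustrative, not a vetted design):

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(username: str, raw_password: str, salt: bytes) -> bytes:
    # Key derived from the user's raw login password; never stored anywhere.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive((username + raw_password).encode()))

# When the user saves their 3rd-party credentials:
salt = os.urandom(16)  # per (user, credential) salt, stored next to the ciphertext
key = derive_key("alice", "raw-login-password", salt)
blob = Fernet(key).encrypt(b"3rdparty-user:3rdparty-pass")

# At login time only, with the raw password briefly in hand:
creds = Fernet(derive_key("alice", "raw-login-password", salt)).decrypt(blob)
```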
This is far from ideal and has all sorts of problems, but the credentials won't be accessible after the user's session expires or they log out and you purge their session.
It also means your app cannot act on their behalf when they aren't actively logged in, a limitation that may be a showstopper for you depending on your requirements.
A weaker option is to have a key for all user credentials that's manually entered by the sysadmin when the app is re-started. This key has to be stored in memory, but it's at least not sitting on the disk or in the database, so someone stealing a dump of your database will have a much harder time decrypting the stored credentials.
Neither option will help you if the attacker finds a way to trick your app into revealing those domain objects after decryption - or getting it to let them impersonate that user, getting it to perform actions on the 3rd party service on behalf of another user, etc. It'll at least protect against theft of database dumps and similar attacks, though.
One further recommendation: rather than using pgcrypto to do the crypto in the DB, do it on the application side. That way the DB never sees the key material required to decrypt the data; it can never leak into database logs, be sniffed out of pg_stat_activity, etc.
I'm implementing a REST web service using C# which will be hosted on Azure as a cloud service. Since it is a REST service, it is stateless, so there are no cookies or session state.
The web service can only be accessed over HTTPS (Certificate provided by StartSSL.com).
Upon a user successfully logging into the service they will get a security token. This token will provide authentication in future communications.
The token will contain a timestamp, userid and ip address of the client.
All communication will only happen over HTTPS so I'm not concerned about the token being intercepted and used in replay attacks; the token will have an expiry anyway.
Since this is a public-facing service, I am however concerned that someone could register with the service, log in, and then modify the token that they receive to access the accounts of other users.
I'm wondering how best to secure the content of the token and also verify that it hasn't been tampered with.
I plan on doing the following to secure the token:
The client successfully logs into the service and the service does:
Generate a random value and hash it with SHA256 1000 times.
Generate a one-time session key from the private key + the hashed random value.
Hash the session key with SHA256 1000 times and then use it to encrypt the token
Use private key to sign the encrypted token using RSA.
Send the encrypted token + the signature + the hashed random value to the client in an unencrypted JSON package.
When the client calls a service it sends the encrypted token and signature in an unencrypted JSON package to the service. The service will
Recreate the session key from the private key + the hashed random value
Use the private key to verify the signature
Use the hashed session key to decrypt the token
Check that the token hasn't expired
Continue with the requested operation...
I don't really know anything about encryption so I have some questions:
Is this sufficient or is it overkill?
I read that to detect tampering I should include an HMAC with the token. Since I am signing with the private key, do I still need an HMAC?
Should I be using Rijndael instead of RSA?
If Rijndael is preferred, is the generated IV required for decryption? I.e. can I throw it away, or do I need to send it with the encrypted token? E.g. encrypted token + HMAC + IV + hashed random value.
Since all communication happens over HTTPS the unencrypted JSON package isn't really unencrypted until it reaches the client.
Also I may want to re-implement the service in PHP later so this all needs to be doable in PHP as well.
Thanks for your help
You are really over-thinking the token. Truthfully, the best token security relies on randomness, or more accurately unpredictability. The best tokens are completely random. You are right that a concern is that a user will modify his/her token and use it to access the accounts of others. This is a common attack known as "session stealing." This attack is nearly impossible when the tokens are randomly generated and expired on the server side. Using the user's information such as IP and/or a time stamp is bad practice because it improves predictability. I did an attack in college that successfully guessed active tokens that were based on server time stamps in microseconds. The author of the application thought microseconds would change fast enough that they'd be unpredictable, but that was not the case.
You should be aware that when users are behind proxy servers, the proxy will sometimes view their SSL requests in plain text (for security reasons, many proxies perform deep packet inspection). For this reason it is good that you expire the sessions; if you didn't, your users would be vulnerable to an attack such as this, and possibly also to XSS and CSRF.
RSA or Rijndael should be plenty sufficient, provided a reasonable key length. Also, you should use an HMAC with the token to prevent tampering, even if you're signing it. In theory that would be redundant, since you're signing with a private key, but HMAC is very well tested, and your implementation of the signing mechanism could be flawed. For that reason it is better to use HMAC. You'd be surprised how many "roll your own" security implementations have flaws that lead to their compromise.
You sound pretty savvy on security. Keep up the good work! We need more security conscious devs in this world.
EDIT:
It is considered safe to include timestamps/user IDs in the token as long as they are encrypted with a strong symmetric cipher (AES, Blowfish, etc.) under a secret key that only the server has, and as long as the token includes a tamper-evident hash such as an HMAC, which is encrypted with the secret key along with the user ID/timestamp. The hash guarantees integrity, and the encryption guarantees confidentiality.
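As a concrete sketch (Python with the cryptography library; the claim format is made up): Fernet bundles exactly this combination, AES-CBC for confidentiality plus HMAC-SHA256 for integrity, so a modified byte is rejected instead of decrypting to a different user ID.

```python
import json
import time

from cryptography.fernet import Fernet, InvalidToken

SECRET = Fernet.generate_key()  # server-side only
f = Fernet(SECRET)

# Encrypt-and-authenticate the claims in one step.
claims = {"uid": 58762, "issued": int(time.time())}
token = f.encrypt(json.dumps(claims).encode())

# Change one character: the HMAC check fails and the token is rejected,
# instead of decrypting to some other (valid-looking) user ID.
pos = 30
tampered = token[:pos] + (b"A" if token[pos:pos + 1] != b"A" else b"B") + token[pos + 1:]
try:
    f.decrypt(tampered)
except InvalidToken:
    print("tampered token rejected")

print(json.loads(f.decrypt(token)))  # the untouched token still verifies
```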
If you don't include the HMAC (or other hash) in the encryption, then it is possible for users to tamper with the encrypted token and have it decrypt to something valid. I did an attack on a server in which the User ID and time stamp were encrypted and used as a token without a hash. By changing one random character in the string, I was able to change my user ID from something like 58762 to 58531. While I couldn't pick the "new" user ID, I was able to access someone else's account (this was in academia, as part of a course).
An alternative to this is to use a completely random token value and map it on the server side to the stored user ID/timestamp (which stays on the server side and is thus outside the client's control). This takes a little more memory and processing power, but is more secure. This is a decision you'll have to make on a case-by-case basis.
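A sketch of that alternative using only the Python standard library (the in-memory dict stands in for whatever server-side store you actually use):

```python
import secrets
import time

SESSIONS: dict[str, dict] = {}  # stand-in for a server-side store (DB, cache, ...)
TOKEN_TTL = 15 * 60  # seconds

def issue_token(user_id: int) -> str:
    # Completely random, unpredictable token; nothing about the user is encoded in it.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id, "expires": time.time() + TOKEN_TTL}
    return token

def resolve_token(token: str) -> int | None:
    entry = SESSIONS.get(token)
    if entry is None or entry["expires"] < time.time():
        SESSIONS.pop(token, None)  # expire server-side
        return None
    return entry["user_id"]
```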
As for reusing/deriving keys from the IV and other keys, this is usually OK, provided that the keys are only valid for a short period of time. Mathematically it is unlikely that someone can break them; it is possible, however. If you want to go the paranoid route (which I usually do), generate all new keys randomly.
Am I correct that OAuth 1.0a credentials need to be stored in plaintext (or in a way that can be retrieved as plaintext) on the server, at least when doing 2-legged authentication? Isn't this much less secure than using a username and salted+hashed password, assuming you're using HTTPS or other TLS? Is there a way to store those credentials in such a way that a security breach doesn't require every single one to be revoked?
In more detail: I'm implementing an API and want to secure it with OAuth 1.0a. There will possibly be many different API clients in the future, but the only one so far has no need for sensitive user data, so I'm planning to use "2-legged" OAuth.
As I understand it, this means I generate a consumer key and a shared secret for each API client. On every API request, the client provides both the consumer key, and a signature generated with the shared secret. The secret itself is not sent over the wire, and I definitely understand why this is better than sending a username and password directly.
However, as I understand it, both the consumer and the provider must explicitly store both the consumer key and the shared secret (correct me if I'm wrong), and this seems like a major security risk. If an attacker breached the provider's data store containing the consumer keys and shared secrets, every single API client would be compromised and the only way to re-secure the system would be to revoke every single key. This is in contrast to passwords, which are (ideally) never stored in a reversible fashion on the server. If you're salting and hashing your passwords, then an attacker would not be able to break into any accounts just by compromising your database.
All the research I've done seems to just gloss over this problem by saying "secure the credentials as you would with any sensitive data", but that seems ridiculous. Breaches do occur, and while they may expose sensitive data they shouldn't allow the attacker to impersonate the user, right?
You are correct. OAuth, however, allows you to log in on behalf of a user, so the target server (the one you access data from) needs to trust the token you present.
Password hashes are good when you are the receiver of the secret as keyed in by the user (which, by the way, is effectively what happens when OAuth presents the login/acceptance window to the user in order to generate the token afterwards). This is where the "plaintext" part happens (the user inputs their password in plaintext).
You need an equivalent mechanism so that the server recognizes you; what OAuth offers is the capacity to present something other than a password: a limited authorization from the user to act on their behalf. If this leaks, then you need to invalidate it.
You could store these secrets in more or less elaborate ways, but at the end of the day you still need to present the "plaintext" version to the server. (That server may store only a hash for checking purposes when the secret itself is transmitted, as it then just needs to verify that what you present in plaintext, when hashed, corresponds to the hash it stores. With signature-based methods such as HMAC-SHA1, where only a signature crosses the wire, the provider must keep the secret in recoverable form so it can recompute the signature.)
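To see why the provider needs the secret itself when signatures are used, here is a heavily simplified sketch in Python. Real OAuth 1.0a base strings require strict parameter normalization and percent-encoding; this only shows the shape of the check, and all names are illustrative.

```python
import base64
import hashlib
import hmac

# Provider-side store: consumer key -> shared secret. For HMAC signatures the
# secret must be recoverable, because the provider has to recompute the MAC.
CONSUMER_SECRETS = {"client-abc": b"shared-secret-for-client-abc"}

def sign(base_string: str, secret: bytes) -> str:
    digest = hmac.new(secret, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def verify_request(consumer_key: str, base_string: str, presented_signature: str) -> bool:
    secret = CONSUMER_SECRETS.get(consumer_key)
    if secret is None:
        return False
    return hmac.compare_digest(sign(base_string, secret), presented_signature)

# Client side (holds the same shared secret):
BASE = "GET&https%3A%2F%2Fapi.example.com%2Fwidgets&oauth_consumer_key%3Dclient-abc"
sig = sign(BASE, CONSUMER_SECRETS["client-abc"])
assert verify_request("client-abc", BASE, sig)
```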