Sessions vs. Token based authentication - security

Which is safer to implement for authentication, and why: session-based authentication or token-based authentication?
I know sessions can be used for other things as well, but right now I am only interested in authentication.
Is it true that nothing is stored on the server side when using tokens (not even in memory)? If so, how does the server recognize expired tokens, given that those were also signed with the same secret?

The question on Information Security linked in the comment above has a lot of relevant information. That being said, a few additional concerns raised in this question should be addressed:
Safety
Knowing nothing about the server implementation, both methods can be equally secure. Session-based authentication mostly relies on how hard the session identifier is to guess (the identifier, as described in the Information Security answer, is in itself a very simple token). If the session identifier is a monotonically incrementing numeric ID, then it is not very secure; on the other hand, it could be an opaque, cryptographically strong unique ID with a huge keyspace, making it very safe. You are probably going to use the session implementation offered by your server framework of choice, so you need to check that.
Beyond that, when using session authentication, your server implementation needs to verify that the server-stored session contains the relevant authorization (i.e. user account data, role, etc.). Since a lot of server session frameworks will by default auto-generate empty sessions as needed, the mere existence of a session must not be relied upon as sufficient proof of valid authentication and authorization.
For example, PHP's internal session ID generation uses a completely random 288-bit number (with default settings), so it is considered safe. On the other hand, PHP by default creates sessions automatically, so the previous caveat applies (or disable automatic session creation and make sure the server only creates sessions when needed).
Also, if the session ID is passed using an HTTP URL query string, as was the default in days of old, then the session can be stolen quite easily and that makes the whole process insecure.
Token safety is mostly a matter of secure token generation: the server must generate tokens in a secure manner, i.e. non-guessable and verifiable, as demonstrated in the Information Security answer. A naive implementation (one I have actually seen) might be to MD5-hash a known value, such as the user name, which makes it very unsafe, even when salted. When using cryptographic tokens, safety is closely related to the strength of the cryptography, which is determined by the algorithm used, the length of the key, and, most importantly, how well the server key is secured: if the server key is hard-coded into the server implementation and that code is then open-sourced...
Storage
Whether the server needs to store anything generally depends on the implementation of the tokens.
A lot of implementations use the concept of an "API key" as "token authentication" and so often tokens are just some cryptographically secure ID to a database that records which "API keys" have been generated. This requires storage but has the advantage of a simpler implementation and - more importantly - the ability to revoke tokens.
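A minimal sketch of this pattern in Python (the storage is an in-memory dict standing in for a real database table, and the function names are illustrative):

    import hashlib
    import secrets
    from datetime import datetime, timedelta, timezone

    _tokens = {}  # stand-in for a database table of issued API keys

    def issue_api_key(user_id, ttl_hours=24):
        key = secrets.token_urlsafe(32)                    # unguessable opaque value
        digest = hashlib.sha256(key.encode()).hexdigest()  # store only a hash server-side
        _tokens[digest] = {
            "user_id": user_id,
            "expires_at": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        }
        return key                                         # the client keeps the raw key

    def check_api_key(key):
        record = _tokens.get(hashlib.sha256(key.encode()).hexdigest())
        if record is None or record["expires_at"] < datetime.now(timezone.utc):
            return None                                    # unknown, revoked or expired
        return record["user_id"]

    def revoke_api_key(key):
        _tokens.pop(hashlib.sha256(key.encode()).hexdigest(), None)

Revocation is just a delete, which is the main operational advantage of this approach.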
The other approach is to have the token carry its own authenticity - this allows the server to essentially offload the storage of tokens to the client and use the client as the database - very much like how HTTP Cookies allow servers to offload some storage requirements to the client (often used for client's specific settings, like whether the user wants a light interface or a dark interface).
The two patterns used for this are demonstrated well in the Information Security answer: signing and encrypting.
Signing: The token is some simple encoding (like JSON or CSV) of the authenticated user's credentials (such as the username) and possibly the token's expiry time (having tokens expire is generally a good idea if you can't revoke them), and the server then signs the generated text using a server secret and appends the signature to the token. When the client submits the token, the server can extract the clear text from the token, re-sign it and compare the new signature to the signature part of the submitted token: if they are identical, the token is valid. After validation, you probably want to check the validated expiry date against the current time. The main caveat here is that the clear-text authentication details must not be sufficient by themselves for an attacker to re-authenticate, otherwise it harms the safety requirement: don't send the password as part of the token, or any other internal detail.
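A minimal sketch of this signing pattern, assuming an HMAC-SHA256 signature over a small JSON payload (the field names and token layout are illustrative, not a standard format):

    import base64, hmac, hashlib, json, time

    SERVER_SECRET = b"replace-with-a-long-random-secret"  # assumption: loaded from config, never hard-coded

    def make_token(username, ttl_seconds=3600):
        payload = json.dumps({"sub": username, "exp": int(time.time()) + ttl_seconds}).encode()
        sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
        return base64.urlsafe_b64encode(payload).decode() + "." + base64.urlsafe_b64encode(sig).decode()

    def verify_token(token):
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):  # constant-time compare
            return None
        claims = json.loads(payload)
        if claims["exp"] < time.time():  # check expiry only after the signature checks out
            return None
        return claims["sub"]

Note that the payload is only encoded, not encrypted, which is exactly why it must not contain secrets.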
Encrypting: the token is generated by again encoding all the relevant authentication details, then encrypting the clear text with a server secret and sending only the encrypted result to the client. If the encryption scheme can be trusted, the authentication details can include internal data, but care should still be taken: a large ciphertext available offline to an attacker is a larger attack surface than a small signature, and weak encryption algorithms will be less resilient in this usage than in plain signing.
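A sketch of the encrypting pattern using the Python cryptography library's Fernet construction, which combines AES encryption with an HMAC so the token is both confidential and tamper-evident; the claim layout is illustrative:

    import json, time
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()  # assumption: generated once and stored securely on the server
    fernet = Fernet(key)

    def make_token(user_id, role, ttl_seconds=3600):
        claims = {"uid": user_id, "role": role, "exp": int(time.time()) + ttl_seconds}
        return fernet.encrypt(json.dumps(claims).encode())

    def read_token(token):
        try:
            claims = json.loads(fernet.decrypt(token))
        except InvalidToken:
            return None  # tampered with, or encrypted under a different key
        if claims["exp"] < time.time():
            return None
        return claims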
In both methods, safety is closely related to the strength of the encryption/signing algorithm - a weak algorithm will allow an attacker to reverse engineer the server secret and generate new valid tokens without authentication.
A personal note
In my opinion, cryptographic token-based authentication tends to be less safe than session-based, as it relies on the (often single) developer doing everything right from design to implementation to deployment, while session-based authentication can leverage existing implementations to do most of the heavy lifting, where it is very easy to find high-quality, secure and massively used and tested session storage implementations. I would need a very compelling reason as to why session storage is unwanted before recommending using cryptographic tokens.
Always remember the no.1 rule of cryptographic security: never design your own single-use cryptographic measures.

Related

JWT advantages over simple randomly-generated tokens in database?

Let's say I have a typical CRUD API for a web application. I need to authorize users by a token, check user roles, etc.
Is there any reason why I should consider JWT over storing a randomly-generated token in a table like tokens(token, refresh_token, expiration_date)?
In my opinion, JWT is adding more complexity here:
Additional code to handle encoding/decoding
Need to store JWT secrets and keys
Token revocation problem
I have to hit the database to check user roles (although I could include them in the payload, there is other stuff I need to check in my application as well), so no advantage there. The only benefit I can see is that I can check the token's expiration date without hitting the database.
At the same time storing a randomly-generated token in a database is a dead-simple solution.
Am I missing something?
JWTs are often misunderstood. The main benefit they provide is statelessness. If you go to your database to query privileges on each request anyway, that benefit is pretty much lost, if not from a theoretical then from a practical point of view.
They are typically not stored in http-only cookies, which makes them vulnerable to XSS, but at the same time allows JavaScript clients to read the payload (e.g. who is logged in, what privileges they have, and so on). Not being stored in cookies also allows them to be sent to different origins, which is pretty much the only good reason not to store them in an http-only cookie (if and only if you understand and accept the risks of this).
JWTs are in no way better or magically more secure than plain old random session tokens; quite the opposite in most cases, especially since it is often overlooked that, as opposed to server-side sessions, the JWT payload is plaintext. It is protected against tampering by message authentication, but not protected against the user having a look, which can sometimes become an issue.
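To illustrate that last point, a short sketch assuming the PyJWT library: anyone holding a token can read its payload without knowing the signing secret; the secret is only needed to verify (or forge) signatures.

    import base64, json
    import jwt  # PyJWT

    token = jwt.encode({"sub": "alice", "role": "admin"}, "server-secret", algorithm="HS256")

    # The payload is only base64url-encoded, not encrypted: no secret needed to read it.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore the stripped padding
    print(json.loads(base64.urlsafe_b64decode(payload_b64)))  # {'sub': 'alice', 'role': 'admin'}

    # Verifying integrity, on the other hand, does require the secret.
    claims = jwt.decode(token, "server-secret", algorithms=["HS256"])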
If you don't need the features above (statelessness, access from JavaScript), don't take on the additional complexity of a JWT; a plain old session is all you need.
The first thing you need to consider is who will generate the token.
In the case of JWT, a valid OAuth provider will generate the token. The benefits are as follows:
you can validate the legitimacy of the token, which includes checks for the following:
a. audience
b. expiry
c. issued at
d. not before date
e. issuer
you can verify the token using just the issuer's public key, which avoids a network trip to another remote server such as a database.
you can include any payload you need, such as user info, email, role, or custom attributes.
for a standard OAuth JWT, you can test using common providers such as Okta, as opposed to a custom generator, where you would have to provide users with a mechanism for retrieving such a token.
as far as the verification code goes, a JWT is quite straightforward: three parts separated by periods. There are libraries in Java and other languages that do the parsing and verification for you (for example com.auth0's jwt and rsa libraries).
your code will be compliant and easily portable to another provider.
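A sketch of that verification flow in Python with the PyJWT library (the audience, issuer and key-loading details are assumptions; a real OAuth provider would normally publish its signing keys via a JWKS endpoint):

    import jwt  # PyJWT, with the "cryptography" extra installed for RS256

    def verify_provider_token(token, public_key_pem):
        # Verifies the signature plus the exp, nbf, aud and iss claims in one call;
        # raises jwt.InvalidTokenError if anything fails.
        return jwt.decode(
            token,
            public_key_pem,
            algorithms=["RS256"],  # pin the algorithm; never trust the token's own header
            audience="my-api",     # assumption: the audience your API expects
            issuer="https://example.okta.com/oauth2/default",  # assumption: your provider's issuer URL
        )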

Best practices for server-side handling of JWT tokens [closed]

(spawned from this thread since this is really a question of its own and not specific to NodeJS etc)
I'm implementing a REST API server with authentication, and I have successfully implemented JWT token handling so that a user can login through a /login endpoint with username/password, upon which a JWT token is generated from a server secret and returned to the client. The token is then passed from the client to the server in each authenticated API request, upon which the server secret is used to verify the token.
However, I am trying to understand the best practices for exactly how and to what extent the token should be validated, to make a truly secure system. Exactly what should be involved in "validating" the token? Is it enough that the signature can be verified using the server-secret, or should I also cross-check the token and/or token payload against some data stored in the server?
A token-based authentication system will only be as safe as passing the username/password in each request, provided that it is equally hard or harder to obtain a token than to obtain a user's password. However, in the examples I've seen, the only information required to produce a token is the username and the server-side secret. Doesn't this mean that, assuming for a minute that a malicious user gains knowledge of the server secret, he can now produce tokens on behalf of any user, thereby gaining access not just to one given user's account (as would be the case if a password were obtained), but in fact to all user accounts?
This brings me to the questions:
1) Should JWT token validation be limited to verifying the signature of the token itself, relying on the integrity of the server secret alone, or accompanied by a separate validation mechanism?
In some cases I've seen the combined use of tokens and server sessions where upon successful login through the /login endpoint a session is established. API requests validate the token, and also compare the decoded data found in the token with some data stored in the session. However, using sessions means using cookies, and in some sense it defeats the purpose of using a token based approach. It also may cause problems for certain clients.
One could imagine the server keeping all tokens currently in use in a memcache or similar, to ensure that even if the server secret is compromised so that an attacker may produce "valid" tokens, only the exact tokens that were generated through the /login endpoint would be accepted. Is this reasonable or just redundant/overkill?
2) If JWT signature verification is the only means of validating tokens, meaning the integrity of the server secret is the breaking point, how should server secrets be managed? Read from an environment variable and created (randomized?) once per deployed stack? Renewed or rotated periodically (and if so, how do you handle existing valid tokens that were created before the rotation but need to be validated after it; perhaps it's enough if the server holds on to both the current and the previous secret at any given time)? Something else?
Maybe I'm simply being overly paranoid when it comes to the risk of the server secret being compromised, which is of course a more general problem that needs to be addressed in all cryptographic situations...
I've been playing with tokens for my application as well. While I'm not an expert by any means, I can share some of my experiences and thoughts on the matter.
The point of JWTs is essentially integrity. They provide a mechanism for your server to verify that the token that was provided to it is genuine and was supplied by your server. The signature generated via your secret is what provides this. So yes, if your secret is leaked somehow, that individual can generate tokens that your server would think are its own. A token-based system would still be more secure than passing the username/password on every request, simply because of the signature verification. And in this case, if someone has your secret anyway, your system has bigger security issues to deal with than someone making fake tokens (and even then, just changing the secret ensures that any tokens made with the old secret are now invalid).
As for the payload, the signature will only tell you that the token provided to you is exactly as it was when your server sent it out. Verifying that the payload's contents are valid or appropriate for your application is obviously up to you.
For your questions:
1.) In my limited experience, it's definitely better to verify your tokens with a second system. Simply validating the signature just means that the token was generated with your secret. Storing any created tokens in some sort of DB (redis, memcache, SQL, Mongo, or some other storage) is a great way of ensuring that you only accept tokens that your server has created. In this scenario, even if your secret is leaked, it won't matter too much, as any tokens an attacker generates won't be in the store and so won't be accepted. This is the approach I'm taking with my system: all generated tokens are stored in a DB (redis), and on each request I verify that the token is in my DB before I accept it. This way tokens can be revoked for any reason: tokens that were released into the wild somehow, user logout, password changes, secret changes, etc. (A sketch of this approach follows after point 2.)
2.) This is something I don't have much experience in and is something I'm still actively researching as I'm not a security professional. If you find any resources, feel free to post them here! Currently, I'm just using a private key that I load from disk, but obviously that is far from the best or most secure solution.
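A minimal sketch of the allow-list approach from point 1, using redis-py and PyJWT (key naming, TTLs, and the secret source are illustrative):

    import redis
    import jwt  # PyJWT

    r = redis.Redis()
    SECRET = "server-secret"  # assumption: loaded from configuration, not hard-coded

    def issue(user_id, ttl=3600):
        token = jwt.encode({"sub": user_id}, SECRET, algorithm="HS256")
        r.setex(f"token:{token}", ttl, user_id)  # allow-list the token with an expiry
        return token

    def authenticate(token):
        if not r.exists(f"token:{token}"):       # unknown, revoked, or expired
            return None
        return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]

    def revoke(token):                           # logout, password change, secret leak, etc.
        r.delete(f"token:{token}")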
Here are some things to consider when implementing JWTs in your application:
Keep your JWT lifetime relatively short, and have its lifetime managed at the server. If you don't, and later on need to require more information in your JWTs, you'll have to either support two versions or wait until your older JWTs have expired before you can roll out the change. You can easily manage it on the server if you only look at the iat field in the JWT and ignore the exp field.
Consider including the url of the request in your JWT. For example, if you want your JWT to be used at endpoint /my/test/path, include a field like 'url':'/my/test/path' in your JWT, to ensure it's only ever used at this path. If you don't, you may find that people start using your JWTs at other endpoints, even ones they weren't created for. You could also consider including an md5(url) instead, as having a big url in the JWT will end up making the JWT that much bigger, and they can get quite big.
JWT expiry should be configurable per use case if JWTs are being implemented in an API. For example, if you have 10 endpoints for 10 different JWT use cases, make sure you can make each endpoint accept JWTs that expire at different times. This allows you to lock down some endpoints more than others, if, for example, the data served by one endpoint is very sensitive.
Instead of simply expiring JWTs after a certain time, consider implementing JWTs that support both:
N usages - can only be used N times before they expire and
expire after a certain amount of time (if you have a one-use-only token, you don't want it living forever if unused, do you?)
All JWT authentication failures should generate an "error" response header that states why the JWT authentication failed. e.g. "expired", "no usages left", "revoked", etc. This helps implementers know why their JWT is failing.
Consider ignoring the "header" of your JWTs as they leak information and give a measure of control to hackers. This is mostly concerning the alg field in the header - ignore this and just assume that the header is what you want to support, as this avoids hackers trying to use the None algorithm, which removes the signature security check.
JWTs should include an identifier detailing which app generated the token. For example, if your JWTs are being created by two different clients, mychat and myclassifiedsapp, then each should include its project name or something similar in the "iss" field of the JWT, e.g. "iss":"mychat".
JWTs should not be logged in log files. The contents of a JWT can be logged, but not the JWT itself. This ensures devs or others can't grab JWTs from log files and do things to other users' accounts.
Ensure your JWT implementation doesn't allow the "None" algorithm, to avoid hackers creating tokens without signing them. This class of errors can be avoided entirely by ignoring the "header" of your JWT.
Strongly consider using iat (issued at) instead of exp (expiry) in your JWTs. Why? Since iat basically means when the JWT was created, this allows you to adjust on the server when the JWT expires, based on the creation date. If someone passes in an exp that's 20 years in the future, the JWT basically lives forever! Note that you should automatically reject JWTs whose iat is in the future, but allow for a little bit of wiggle room (e.g. 10 seconds) in case the client's clock is slightly out of sync with the server's.
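A small sketch of that server-controlled expiry based on iat (the lifetime and skew values are assumptions):

    import time

    MAX_AGE_SECONDS = 15 * 60  # server-side lifetime, adjustable without reissuing tokens
    CLOCK_SKEW = 10            # wiggle room for client/server clock drift

    def is_token_fresh(claims):
        now = time.time()
        iat = claims["iat"]
        if iat > now + CLOCK_SKEW:           # "issued in the future": reject
            return False
        return now - iat <= MAX_AGE_SECONDS  # expire based on age, ignoring any exp claim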
Consider implementing an endpoint for creating JWTs from a json payload, and force all your implementing clients to use this endpoint to create their JWTs. This ensures that you can address any security issues you want with how JWTs are created in one place, easily. We didn't do this straight off in our app, and now have to slowly trickle out JWT server side security updates because our 5 different clients need time to implement. Also, make your create endpoint accept an array of json payloads for JWTs to create, and this will decrease the # of http requests coming in to this endpoint for your clients.
If your JWTs will be used at endpoints that also support sessions, ensure you don't put anything in your JWT that's required to satisfy the request. You can easily do this if you ensure your endpoint works with a session when no JWT is supplied.
JWTs generally end up containing a userId or groupId of some sort, and allow access to part of your system based on this information. Make sure you're not allowing users in one area of your app to impersonate other users, especially if this provides access to sensitive data. Why? Even if your JWT generation process is only accessible to "internal" services, devs or other internal teams could generate JWTs to access data for any user, e.g. the CEO of some random client's company. For example, if your app provides access to financial records for clients, then by generating a JWT a dev could grab the financial records of any company at all! And if a hacker gets into your internal network in any way, they could do the same.
If you are going to allow any URL that contains a JWT to be cached in any way, ensure that the permissions for different users are included in the URL, and not the JWT. Why? Because users may end up getting data they shouldn't. For example, say a super user logs into your app and requests the following URL: /mysite/userInfo?jwt=XXX, and this URL gets cached. They log out, and a couple of minutes later a regular user logs into your app. They'll get the cached content, with info about a super user! This tends to happen less on the client and more on the server, especially in cases where you're using a CDN like Akamai and you're letting some files live longer. This can be fixed by including the relevant user info in the URL and validating it on the server, even for cached requests, for example /mysite/userInfo?id=52&jwt=XXX
If your JWT is intended to be used like a session cookie, and should only work on the same machine the JWT was created for, you should consider adding a jti field to your JWT. This is basically a CSRF token that ensures your JWT can't be passed from one user's browser to another's.
I don't think I'm an expert, but I'd like to share some thoughts about JWT.
1: As Akshay said, it's better to have a second system to validate your token.
a. The way I handle it: I store the generated hash in session storage along with the expiry time. To be valid, a token needs to have been issued by the server.
b. There is at least one thing that must be checked: the signature method used. For example, this header:
    {
      "alg": "none",
      "typ": "JWT"
    }
Some libraries that validate JWTs would accept this one without checking the signature. That means that, without knowing the secret used to sign the token, an attacker could grant himself extra rights. Always make sure this can't happen.
https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/
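A sketch of defending against this with PyJWT, where the accepted algorithm is pinned on the server side rather than read from the token's header:

    import jwt  # PyJWT

    def safe_decode(token, secret):
        try:
            # "algorithms" acts as an allow-list: a token whose header claims
            # "alg": "none" (or RS256, or anything else) is rejected outright.
            return jwt.decode(token, secret, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return None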
c. Using a cookie with a session ID would not help validate your token. If someone wants to hijack the session of an ordinary user, they would just have to use a sniffer (e.g. Wireshark) and would get both pieces of information at the same time.
2: It is the same for every secret: there is always a way to learn it.
The way I handle it is linked to point 1.a: I mix the secret with a random value, so the effective secret is unique for every token.
However, I am trying to understand the best practices for exactly how and to what extent the token should be validated, to make a truly secure system.
If you want the best security possible, you should not blindly follow best practices. The best way is to understand what you're doing (which, judging by your question, you do), and then evaluate the security you actually need. And if the Mossad want access to your confidential data, they'll always find a way. (I like this blog post: https://www.schneier.com/blog/archives/2015/08/mickens_on_secu.html )
Lots of good answers here. I'll integrate some of the answers I think are most relevant and add some more suggestions.
1) Should JWT token validation be limited to verifying the signature of the token itself, relying on the integrity of the server secret alone, or accompanied by a separate validation mechanism?
No, for reasons unrelated to the compromise of a token secret. Each time a user logs in via a username and password, the authorization server should store either the token that was generated or metadata about it. Think of this metadata as an authorization record. A given user and application pair should only have one valid token, or authorization, at any given time. Useful metadata includes the user ID associated with the access token, the app ID, and the time when the access token was issued (which allows for the revocation of existing access tokens and the issuing of a new one). On every API request, validate that the token matches the stored metadata.
You need to persist when each access token was issued, so that a user whose account credentials are compromised can revoke existing access tokens, log in again and start using a new access token. That updates the database with the time the new authorization was created. On every API request, check that the issue time of the access token is no earlier than the stored authorization creation time.
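A sketch of that check, assuming a per-user/per-app authorization record stored in a database (the db interface here is hypothetical):

    import jwt  # PyJWT

    def validate_request_token(token, secret, app_id, db):
        claims = jwt.decode(token, secret, algorithms=["HS256"])
        # Hypothetical lookup of the stored authorization record for this user/app pair.
        auth = db.get_authorization(user_id=claims["sub"], app_id=app_id)
        if auth is None:
            return None                            # never authorized, or authorization revoked
        if claims["iat"] < auth.created_at_epoch:  # token predates the latest (re)authorization
            return None
        return claims["sub"]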
Other security measures include not logging JWTs and requiring a strong signing algorithm such as HMAC-SHA256.
2) If JWT signature verification is the only means of validating tokens, meaning the integrity of the server secret is the breaking point, how should server secrets be managed?
The compromise of server secrets would allow an attacker to issue access tokens for any user, and storing access token data in step 1 would not necessarily prevent the server from accepting those access tokens. For example, say that a user has been issued an access token, and then later on, an attacker generates an access token for that user. The authorization time of the access token would be valid.
Like Akshay Dhalwala says, if your server-side secret is compromised, then you have bigger problems to deal with because that means that an attacker has compromised your internal network, your source code repository, or both.
However, a system to mitigate the damage of a compromised server secret and avoid storing secrets in source code involves token secret rotation using a coordination service like https://zookeeper.apache.org. Use a cron job to generate an app secret every few hours or so (however long your access tokens are valid for), and push the updated secret to Zookeeper. In each application server that needs to know the token secret, configure a ZK client that is updated whenever the ZK node value changes. Store a primary and a secondary secret, and each time the token secret is changed, set the new token secret to the primary and the old token secret to the secondary. That way, existing valid tokens will still be valid because they will be validated against the secondary secret. By the time the secondary secret is replaced with the old primary secret, all of the access tokens issued with the secondary secret would be expired anyways.
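A sketch of verifying against a rotating primary/secondary secret pair (where the secrets come from is an assumption; they could be pulled from ZooKeeper, a config service, or environment variables):

    import jwt  # PyJWT

    def decode_with_rotation(token, primary_secret, secondary_secret):
        # Try the current secret first, then fall back to the previous one so that
        # tokens issued just before a rotation remain valid until they expire.
        for secret in (primary_secret, secondary_secret):
            try:
                return jwt.decode(token, secret, algorithms=["HS256"])
            except jwt.InvalidSignatureError:
                continue
        return None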
The IETF has an RFC in progress in the OAuth working group; see https://tools.ietf.org/id/draft-ietf-oauth-jwt-bcp-05.html

REST Web Service authentication token implementation

I'm implementing a REST web service using C# which will be hosted on Azure as a cloud service. Since it is a REST service, it is stateless, and therefore uses no cookies or session state.
The web service can only be accessed over HTTPS (Certificate provided by StartSSL.com).
Upon a user successfully logging into the service they will get a security token. This token will provide authentication in future communications.
The token will contain a timestamp, userid and ip address of the client.
All communication will only happen over HTTPS so I'm not concerned about the token being intercepted and used in replay attacks; the token will have an expiry anyway.
Since this is a public facing service I am however concerned that someone could register with the service, login and then modifying the token that they receive to access the accounts of other users.
I'm wondering how best to secure the content of the token and also verify that it hasn't been tampered with.
I plan on doing the following to secure the token:
The client successfully logs into the service and the service does:
Generate a random value and hash it with SHA256 1000 times.
Generate a one-time session key from private key + hashed random value.
Hash the session key with SHA256 1000 times and then use it to encrypt the token
Use private key to sign the encrypted token using RSA.
Send the encrypted token + the signature + the hashed random value to the client in an unencrypted JSON package.
When the client calls a service it sends the encrypted token and signature in an unencrypted JSON package to the service. The service will
Recreate the session key from the private key + the hashed random value
Use the private key to verify the signature
Use the hashed session key to decrypt the token
Check that the token hasn't expired
Continue with the requested operation...
I don't really know anything about encryption so I have some questions:
Is this sufficient or is it overkill?
I read that to detect tampering I should include an HMAC with the token. Since I am signing with the private key, do I still need an HMAC?
Should I be using Rijndael instead of RSA?
If Rijndael is preferred, is the generated IV required for decryption? i.e. can I throw it away, or do I need to send it with the encrypted token? e.g. Encrypted Token + HMAC + IV + hashed random value.
Since all communication happens over HTTPS the unencrypted JSON package isn't really unencrypted until it reaches the client.
Also I may want to re-implement the service in PHP later so this all needs to be doable in PHP as well.
Thanks for your help
You are really over-thinking the token. Truthfully, the best token security relies on randomness, or more accurately unpredictability. The best tokens are completely random. You are right that a concern is that a user will modify his/her token and use it to access the accounts of others. This is a common attack known as "session stealing." This attack is nearly impossible when the tokens are randomly generated and expired on the server side. Using the user's information such as IP and/or a time stamp is bad practice because it improves predictability. I did an attack in college that successfully guessed active tokens that were based on server time stamps in microseconds. The author of the application thought microseconds would change fast enough that they'd be unpredictable, but that was not the case.
You should be aware that when users are behind proxy servers, the proxy will sometimes view their SSL requests in plain text (for security reasons, many proxies will perform deep packet inspection). For this reason it is good that you expire the sessions. If you didn't, your users would be vulnerable to an attack such as this, and also possibly to XSS and CSRF.
RSA or Rijndael should be plenty sufficient, provided a reasonable key length. Also, you should use an HMAC with the token to prevent tampering, even if you're signing it. In theory it would be redundant, since you're signing with a private key. However, HMAC is very well tested, and your implementation of the signing mechanism could be flawed. For that reason it is better to use HMAC. You'd be surprised how many "roll your own" security implementations have flaws that lead to compromise.
You sound pretty savvy on security. Keep up the good work! We need more security conscious devs in this world.
EDIT:
It is considered safe to include timestamps/user IDs in the token as long as they are encrypted with a strong symmetric cipher (AES, Blowfish, etc.) using a secret key that only the server has, and as long as the token includes a tamper-proof hash such as an HMAC, which is encrypted with the secret key along with the user ID/timestamp. The hash guarantees integrity, and the encryption guarantees confidentiality.
If you don't include the HMAC (or other hash) in the encryption, then it is possible for users to tamper with the encrypted token and have it decrypt to something valid. I did an attack on a server in which the User ID and time stamp were encrypted and used as a token without a hash. By changing one random character in the string, I was able to change my user ID from something like 58762 to 58531. While I couldn't pick the "new" user ID, I was able to access someone else's account (this was in academia, as part of a course).
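A sketch of that encrypt-plus-integrity idea using authenticated encryption (AES-GCM via the Python cryptography library), which provides the confidentiality and the tamper detection in a single primitive; the claim layout is illustrative:

    import json, os, time
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # assumption: generated once, kept only on the server
    aead = AESGCM(key)

    def make_token(user_id, ttl_seconds=900):
        nonce = os.urandom(12)  # unique per token; travels alongside the ciphertext
        claims = json.dumps({"uid": user_id, "exp": int(time.time()) + ttl_seconds}).encode()
        return nonce + aead.encrypt(nonce, claims, None)

    def read_token(token):
        nonce, ciphertext = token[:12], token[12:]
        try:
            claims = json.loads(aead.decrypt(nonce, ciphertext, None))  # raises if tampered with
        except InvalidTag:
            return None
        return claims if claims["exp"] > time.time() else None

Flipping a single byte of the ciphertext makes decryption fail outright, closing the tampering attack described above.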
An alternative to this is to use a completely random token value, and map it on the server side to the stored User ID/time stamp (which stays on the server side and is thus outside of the clients control). This takes a little more memory and processing power, but is more secure. This is a decision you'll have to make on a case by case basis.
As for reusing/deriving keys from the IV and other keys, this is usually ok, provided that the keys are only valid for a short period of time. Mathematically it is unlikely someone can break them. It is possible however. If you want to go the paranoid route (which I usually do), generate all new keys randomly.

OAuth 1.0a, 2-legged: how do you securely store clients' credentials (keys/secrets)?

Am I correct that OAuth 1.0a credentials need to be stored in plaintext (or in a way that can be retrieved as plaintext) on the server, at least when doing 2-legged authentication? Isn't this much less secure than using a username and salted+hashed password, assuming you're using HTTPS or other TLS? Is there a way to store those credentials in such a way that a security breach doesn't require every single one to be revoked?
In more detail: I'm implementing an API and want to secure it with OAuth 1.0a. There will possibly be many different API clients in the future, but the only one so far has no need for sensitive user data, so I'm planning to use "2-legged" OAuth.
As I understand it, this means I generate a consumer key and a shared secret for each API client. On every API request, the client provides both the consumer key, and a signature generated with the shared secret. The secret itself is not sent over the wire, and I definitely understand why this is better than sending a username and password directly.
However, as I understand it, both the consumer and the provider must explicitly store both the consumer key and the shared secret (correct me if I'm wrong), and this seems like a major security risk. If an attacker breached the provider's data store containing the consumer keys and shared secrets, every single API client would be compromised and the only way to re-secure the system would be to revoke every single key. This is in contrast to passwords, which are (ideally) never stored in a reversible fashion on the server. If you're salting and hashing your passwords, then an attacker would not be able to break into any accounts just by compromising your database.
All the research I've done seems to just gloss over this problem by saying "secure the credentials as you would with any sensitive data", but that seems ridiculous. Breaches do occur, and while they may expose sensitive data they shouldn't allow the attacker to impersonate the user, right?
You are correct. OAuth, however, allows you to log in on behalf of a user, so the target server (the one you access data from) needs to trust the token you present.
Password hashes are good when you are the receiver of the secret as keyed in by the user (which, by the way, is effectively what happens when OAuth presents the login/acceptance window to the user in order to generate the token afterwards). This is where the "plaintext" part happens (the user inputs his password in plain text).
You need an equivalent mechanism so that the server recognizes you; what OAuth offers is the capacity to present something other than a password: a limited authorization from the user to act on their behalf. If this leaks, you need to invalidate it.
You could store these secrets in more or less elaborate ways; at the end of the day you still need to present the "plaintext" version to the server (that server, however, may store a hash of it for checking purposes, as it just needs to verify that what you present in plain text, when hashed, corresponds to the hash it stores).
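For reference, a sketch of the signature mechanism the question describes (OAuth 1.0a HMAC-SHA1; the signature base string construction is heavily simplified here): the provider recomputes the signature from its stored copy of the shared secret and compares it to what the client sent.

    import base64, hashlib, hmac

    def sign_request(base_string, consumer_secret, token_secret=""):
        # OAuth 1.0a HMAC-SHA1: the signing key is "<consumer_secret>&<token_secret>".
        key = f"{consumer_secret}&{token_secret}".encode()
        digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    def provider_verifies(base_string, presented_signature, stored_consumer_secret):
        # The provider must be able to recover the shared secret to recompute the HMAC;
        # a one-way hash of the secret would not suffice to recompute this signature,
        # which is the concern the question raises.
        expected = sign_request(base_string, stored_consumer_secret)
        return hmac.compare_digest(expected, presented_signature)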

Can you spot a vulnerability in my authentication protocol?

Some time ago we needed a solution for single sign-on authentication between multiple web services. At least at that time we considered the OpenID protocol too complicated, and we were not convinced by the Ruby on Rails plugins for it. Therefore we designed a protocol of our own instead of implementing an OpenID provider and OpenID consumers.
I have two questions:
Was it a bad thing not to create our own OpenID provider and set up our OpenID consumers to accept only it? Public login and registration are not allowed, and we wanted to keep authentication simple.
Can you spot a crucial error or a vulnerability in the following design?
If you as a community can approve this design, I will consider extracting this code into a Ruby on Rails plugin.
Please look at the flowchart and sequence diagram.
Details:
Authentication Provider ("AP"):
Central service which holds all data about the users.
Only one "AP" exists in this setup.
It could be possible to have multiple "AP"s, but that should not be relevant in this context.
"AP" knows each "S" beforehand.
Authentication Client (Service "S"):
There exists several internal and external web services.
Each service knows "AP" and its public key beforehand.
Actor ("A"):
The end user who authenticates herself with AP by a username and password
May request directly any URI of "S" or "AP" prior to her login
Connections between "A", "S" and "AP" are secured by HTTPS.
Authentication logic described briefly:
This is a brief description of the graphical flowchart and sequence diagram which were linked at the top of this post.
1) Auth Provider "AP"
"AP" makes a server-to-server HTTP POST request to "S" to get a nonce.
"AP" generates an authentication token.
Authentication token is an XML entity which includes:
an expiration date (2 minutes from now),
the previously requested nonce (to prevent replay),
identifying name of "S" (token for Service_1 is not good for Service_2),
information about the end user.
Authentication token is encrypted with AES256 and the encryption key and initialization vector are signed by AP's private RSA key.
The resulting strings ("data", "key" and "iv") are first Base64 encoded and then URL encoded to allow them to be delivered in the URL query string.
End user "A" is HTTP-redirected to service "S" (HTTPS GET request).
2) Service "S"
Receives authentication token in URL parameters from user agent.
Decrypts authentication token with AP's pre-shared public key.
Accepts one authentication token only once (token includes a nonce which is valid only once).
Checks that identifying name in authentication token corresponds to service's name.
Checks that authentication token is not expired.
Remarks:
It is not a problem if somebody else can also decrypt the authentication token, because it contains no confidential information about the user. However, it is crucial that nobody other than AP is able to generate a valid authentication token. That is why the RSA key pair is involved.
RSA private key is used only for signing the token, because it cannot encrypt data which is longer than the actual key length. Therefore AES is used for encryption.
Since the authentication token is delivered as an HTTP GET request, it will be stored e.g. in Apache's log file. Using a disposable nonce and an expiration date should minimize the possibility of a replay attack. POST request would need an HTML page with a form which is submitted automatically by Javascript, which is why GET is used.
Service "S" generates a nonce only in a server-to-server API request. Therefore unauthenticated generation requests should not pose a DoS-vulnerability.
You're confusing authentication ("I am who I say I am") and authorization/access control ("I am allowed to access this"). You can just implement OAuth, and then query a server over HTTPS with "is this OAuth identity allowed to access me?". You don't have to worry about replay attacks, since you're using HTTPS.
"Security is hard, so I'll design my own."
Authentication token is encrypted with AES256 and the encryption key and initialization vector are signed by AP's private RSA key.
AES-256 and AES-192 have weak key schedules. But you're not using it for confidentiality; you're using it as some sort of "integrity" check. It doesn't work: Attacker gets a "signed" authentication token. Attacker recovers the key and IV. Attacker encrypts a different authentication token with the same key and IV, and uses the same "signature".
What's wrong with hashing it and signing the hash? Also note that if you're going to use custom signing, you need to be careful about padding (IIRC PKCS-whatever adds at least 11 bytes).
EDIT: And if you're using a cipher where you should be using a hash/MAC, you really shouldn't be designing a security protocol!
Here are a few quick thoughts about question 1:
Designing a working security protocol is very hard, so on general principle I would favor using an existing one.
However, I appreciate that OpenID might not have been very established at the time. Also OpenID is still relatively new and might not have all of its limitations figured out yet.
Still, you'd be using OpenID in a restricted scenario where the big issue of OpenID (involvement of multiple actors) doesn't come into play. You'd only be using the “technical core” of OpenID, which is easier to understand.
Your requirements and the overview of your protocol remind me of Kerberos. I'm also tempted to push towards LDAP + single sign on, but I don't know what concrete solutions exist for that.
A point in favor of your protocol is that you've taken the time to describe it in detail. Just that puts you above most self-made security protocol designers!
In short I find this protocol to be over engineered in the wrong places and ultimately vulnerable to attack.
So what is the vulnerability?
End user "A" is HTTP-redirected to service "S" (HTTPS GET request).
This is likely a violation of OWASP A9. At no point can a user's session ID be passed over an insecure channel such as HTTP. Even if the session ID hasn't been authenticated yet, a patient attacker can sniff the wire looking for session IDs, periodically check whether they have been authenticated, and then use them to access your system.
"Complexity is the worst enemy of security."
--Bruce Schneier
Authentication token is encrypted with AES256 and the encryption key and initialization vector are signed by AP's private RSA key.
First of all, RSA can be used to encrypt a message directly, so AES is unnecessary. HTTPS, on the other hand, is going to be more efficient and is proven to be secure. I don't understand why you have to pass an authentication token to the client at all, when you already have a secure server-to-server communication channel. A server could just say: "Hey, someone has been redirected to me with this session ID, what is his state information?" It's a matter of the weakest link in the chain, and your session ID should be strong enough. This would require the session ID to be sent as a GET or POST request by the client when the hand-off occurs, which could open the door to session fixation. You could check the IP address before and after the hand-off; sometimes the IP address of the client can change legitimately, but the hand-off is a very narrow window in which this can happen, and most importantly it stops session fixation altogether.
In general you should avoid re-inventing the wheel, especially when it comes to security problems like this which have already been solved. Kerberos is beautiful, especially if you need to tie in non-HTTP authentication. Using LDAP for session management is another possibility.
Do you really just sign the AES key, then send the encrypted token, the RSA signature of the key, and the key/IV in PLAINTEXT?
This is a fail. An attacker can use that key to decrypt the token, change it in any way needed, and encrypt it back. Your server will never notice the difference.
If you want to check token integrity, just take a hash of it and sign that hash. This is what hashes are for. There is no need to use encryption here.
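A sketch of the hash-and-sign approach with the Python cryptography library (RSA with PKCS#1 v1.5 padding over SHA-256; key handling is simplified and illustrative):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # AP's key pair
    public_key = private_key.public_key()

    def sign_token(token_bytes):
        # The library hashes the token with SHA-256 and signs the digest with AP's private key.
        return private_key.sign(token_bytes, padding.PKCS1v15(), hashes.SHA256())

    def verify_token(token_bytes, signature):
        # Any service "S" verifies with AP's pre-shared public key; no symmetric encryption involved.
        try:
            public_key.verify(signature, token_bytes, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False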

Resources