I am trying to implement a simple cache that holds just the latest token returned by the authentication server to my application. Several worker threads try to log in to the authentication server simultaneously. This increases the load on the authentication server, and my application also becomes slow, because each authentication involves a round trip to the server. With a simple token cache, the token is cached on the client side and is refreshed only when one of the worker threads fails to log in with it; whichever thread fails goes and fetches a new token from the authentication server.
The problem I am running into is that when one thread fails and updates the token cache, some threads may already have read the old token; they will eventually fail too, and they will also try to update the cache. How can I stop these threads from updating the cache once it has already been refreshed?
One possible solution is to assign a timestamp to the retrieved token. The timestamp does not have to be "real"; a sequence number suffices.
When a thread retrieves the token from the token cache, it also retrieves the sequence number associated with the token. When a thread fails to login, it first has to check the local cache again to see if the cache has been refreshed since it retrieved its token. If the sequence number of the current token is different from the one it currently has, it can try again with the token from the cache.
Updating the token cache is necessary only when login fails and the cache has not been refreshed either since the retrieval of the failing token.
There is a slight chance that the sequence number wraps around after a full cycle and a thread is misled into thinking that the cache has not been refreshed when in fact it has been, a great number of times, but this is more of a theoretical possibility than a practical concern.
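A minimal sketch of this idea, written in JavaScript to match the code later in this document (TokenCache and fetchNewToken are illustrative names, not from the question; in Node the single event loop makes the plain checks safe, while in a multi-threaded language the same checks would sit behind a lock or an atomic reference):

// Hypothetical token cache with a sequence number that tracks refreshes.
class TokenCache {
  constructor() {
    this.token = null;
    this.seq = 0;           // incremented on every successful refresh
    this.refreshing = null; // in-flight refresh promise, if any
  }

  get() {
    return { token: this.token, seq: this.seq };
  }

  // Called by a worker whose login failed using the token it read at `failedSeq`.
  async refreshIfStale(failedSeq) {
    if (failedSeq !== this.seq) {
      // The cache was already refreshed after this worker read its token:
      // reuse the newer token instead of fetching yet another one.
      return this.get();
    }
    if (!this.refreshing) {
      // The first failing worker triggers the actual round trip.
      this.refreshing = fetchNewToken()   // assumed helper, not shown
        .then((t) => { this.token = t; this.seq++; })
        .finally(() => { this.refreshing = null; });
    }
    await this.refreshing;
    return this.get();
  }
}

A worker reads the token and its sequence number from the cache, attempts the login, and on failure calls refreshIfStale with the sequence number it originally read.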
When a worker thread fails to log in, it should check the token cache to see if the token has changed from the one it was already issued. If the cached token has changed, it should use the new cached token instead of seeking a new token.
When a worker thread does seek a new token, it can also keep the old, no longer valid, token. Then before putting its new token in the cache, it can check if the currently cached token is the same as its old token.
The token cache of course needs to be thread safe and ensure that updates from one thread are immediately visible to other threads, for example by using a volatile reference to the cached token. This is simpler than keeping a separate timestamp with each token.
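A sketch of this compare-before-store approach, again in JavaScript for consistency with the rest of this document (tryLogin and fetchNewToken are assumed helpers; in Java the cached token would typically live in an AtomicReference and the final check-and-store would be a compareAndSet):

// Hypothetical sketch of the compare-before-store idea.
const tokenCache = { token: null };

async function loginRefreshingIfNeeded() {
  let token = tokenCache.token;
  if (await tryLogin(token)) return token;   // cached token still works

  const oldToken = token;
  if (tokenCache.token !== oldToken) {
    // Another worker refreshed the cache in the meantime: reuse its token.
    token = tokenCache.token;
  } else {
    const fresh = await fetchNewToken();
    if (tokenCache.token === oldToken) {
      tokenCache.token = fresh;              // we won the race: publish our token
      token = fresh;
    } else {
      token = tokenCache.token;              // we lost the race: keep the winner's token
    }
  }
  await tryLogin(token);                     // retry with the (possibly new) token
  return token;
}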
Assuming your clients act quickly and often, and that fetching a new token can take several times longer than a few client requests, you need to introduce some sort of locking.
If you assume that a token is received within 2 seconds, you could set up the following rules:
use a cache with the keys "token" and "lock".
If there is no content for "token" and "lock" is empty, set "lock" to True and try to get a new token. When you are done, store the new token and remove the lock.
If there is a token, try to use it.
If the token you have is failing, get the current token from the cache and, if it differs from yours, try the new value. If there is no token and "lock" is True, wait 2 seconds and try again. If no new token has appeared, try to get the token yourself (again, setting "lock" to True).
In short (a sketch follows the list):
if "token" is found, try using it
if "token" fails, delete it from the cache, set "lock", get a new token, store it, remove the lock.
if there is no "token", check whether "lock" is set; if it is, wait 2 seconds and use the new token if available; if it is not available, try getting it on your own (setting "lock" as well).
Related
I'm in the process of rebuilding an existing web app, that uses JWTs to manage authentication. I'm still new to JWTs, so I'm learning about how they should work, while, at the same time, trying to understand why the web app's current implementation is the way it is.
The current version's flow is as follows:
When a user successfully logs in or registers, a user object is returned along with a JWT property. This JWT is included in subsequent API calls as an Authorization header.
Every ten minutes, a GET request is made to the API endpoint /refresh-token.
If successful, the response body contains a success message, and the response header contains an updated Authorization header.
All subsequent ten-minute timed GET requests to /refresh-token use the original JWT that was returned in step 1, and so on.
From what I've read so far, this doesn't correlate with any recommended approaches.
Is it safe enough to replicate this flow in the newer version, or is this something I'm better off not replicating?
Edit: I'm working solely on the front-end - the API isn't being updated for a while, so I'm limited to what it currently returns.
I believe this article summarizes the current state of the art: https://auth0.com/blog/refresh-tokens-what-are-they-and-when-to-use-them/. You usually have two tokens: an access token, which is short-lived, and a refresh token, which lives longer. This way you don't need to call the auth server every x minutes; you can do it on demand.
I don't know if you need to deal with blacklisting too? I believe blacklisting is easier when you have a separation between access token and refresh token (only the refresh token needs to be blacklisted). But I believe you could deal with this problem in your setup too, probably in a somewhat more sophisticated manner.
Having said that, what you have is not wrong. It's hard for me to point out any flaws in the way you are doing it beyond what has been pointed out above.
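A minimal sketch of on-demand refresh with the two-token approach, assuming a hypothetical API that returns 401 when the access token expires and exposes a /refresh endpoint (the endpoint and field names are assumptions, not from the question):

let accessToken = null;   // short-lived
let refreshToken = null;  // long-lived, obtained at login

async function apiFetch(url, options = {}) {
  const doCall = () =>
    fetch(url, {
      ...options,
      headers: { ...(options.headers || {}), Authorization: `Bearer ${accessToken}` },
    });

  let res = await doCall();
  if (res.status === 401 && refreshToken) {
    // Access token expired: trade the refresh token for a new access token.
    const r = await fetch('/refresh', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ refreshToken }),
    });
    if (r.ok) {
      ({ accessToken } = await r.json());
      res = await doCall(); // retry the original request once
    }
  }
  return res;
}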
I am currently running into an issue with concurrent requests in NodeJS that all access a cookie holding information I obtain from a server. The requests being made are asynchronous and need to remain that way, but I am in charge of asking for new data when the cookie is about to become stale. How do I keep updating the cookie without bogging the server down with requests for a new cookie, when multiple concurrent requests all assume that they are the ones in charge of refreshing the cookie's value?
I.e. Req1->Req30 are fired off. While handling Req17, the cookie's time-to-live check triggers, so it sends out the refresh command. The problem is that Req18->Req30 all assume that they should be the ones to refresh the cookie's value, because they also run the staleness check and fail it.
I have limited ability to change the server-side code, and due to the sensitive nature of the data I cannot simply put it in a DB, because at that point I become responsible for securing that data as well.
Should I just store multiple key/value pairs in the cookie and iterate through them? That could become an expensive operation. It could also overwrite the cookie with invalid data on some request, since updating the cookie and appending the new key/value pairs requires creating a new one, because cookies themselves are immutable.
To handle concurrent access to the cookie:
use a timestamp, and only apply a change if the data is more recent.
To handle cookie data renewal:
instead of having all workers check for new data concurrently, have one specific worker handle the data update while the other workers use the data in read-only mode (see the sketch below).
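One way to approximate that single-updater idea in Node.js is a shared in-flight promise, so only one refresh call goes to the server even if many concurrent requests notice the staleness at the same time (refreshCookieFromServer and the 5-second margin are assumptions):

let cookieValue = null;
let cookieExpiresAt = 0;     // epoch ms
let refreshInFlight = null;  // promise shared by all concurrent callers

async function getCookie() {
  const aboutToExpire = Date.now() > cookieExpiresAt - 5000; // safety margin
  if (cookieValue && !aboutToExpire) return cookieValue;

  if (!refreshInFlight) {
    // Only the first caller that notices staleness issues the refresh;
    // everyone else awaits the same promise instead of hitting the server.
    refreshInFlight = refreshCookieFromServer()   // assumed helper
      .then(({ value, ttlMs }) => {
        cookieValue = value;
        cookieExpiresAt = Date.now() + ttlMs;
      })
      .finally(() => { refreshInFlight = null; });
  }
  await refreshInFlight;
  return cookieValue;
}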
I'm developing RESTful APIs for a mobile application (Android app). I'm using 2-step auth with an OTP and a remember-me token. For the remember-me token I'm currently using Remember Me (any other similar npm package is welcome). The package basically sets a unique token in a cookie which the app can use to verify itself. According to the documentation of the above npm package, it is recommended to regenerate the token after every request.
However, when the mobile app makes multiple parallel requests, all the parallel requests use the same token. This, unsurprisingly, gives an auth error. I guess this is a common situation. I wanted to know if there is a standard way to handle this?
Current Workflow
The mobile app requests authentication with a given OTP.
Upon successful verification, the app is given a token, which is passed back in a cookie.
For calls to protected APIs, the app calls the API with the cookie passed back in the previous step.
The server resets the token in the cookie and sends the response back to the app.
Issue with the workflow
The app is successfully logged in and has a valid cookie.
The app makes a call to a protected API, /protected_api_1.
The server has reset the token in the cookie for the above call but has not yet completed the response.
The app makes a second call, /protected_api_2, with the old cookie, as the app does not yet have the new cookie.
Auth fails for the second call, since it carried the old token.
OK, after checking your update I can think of 3 workarounds for this. Let's say we have 3 actions, (a), (b) and (c), that require the token to consume the API.
Token Store
By this I just mean a class, file, cookie or object where you save your current token and update it with the new token after an action is completed.
The problem with this solution is that if you run (a), (b) and (c) at the same time with the same token, the first one to finish will update the store and the other 2 will fail. You would have to run them sequentially rather than concurrently.
If you want to do it this way, maybe it would be a better idea to have one of the following (a sketch follows the list):
Lock: a boolean flag that indicates the token is currently in use, so the current request must wait until that token has been used and updated before executing.
Queue: just a linked list where you push the requests and they are consumed asynchronously when the lock isn't set. You implement a service in another thread that handles the queue, maybe in a similar fashion to a reactor pattern.
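A sketch of such a token store where the lock is a promise chain, so requests that need the single-use token run one after another and each picks up the token left by the previous one (TokenStore and callApi are illustrative names, not from the original answer):

// Hypothetical token store that serializes access to a single-use token.
class TokenStore {
  constructor(initialToken) {
    this.token = initialToken;
    this.chain = Promise.resolve(); // acts as the lock / queue
  }

  // Enqueue a request; fn receives the current token and must resolve with
  // the regenerated token returned by the server.
  run(fn) {
    const result = this.chain.then(async () => {
      const newToken = await fn(this.token);
      this.token = newToken; // keep the regenerated token for the next request
      return newToken;
    });
    this.chain = result.catch(() => {}); // keep the queue alive if one request fails
    return result;
  }
}

// Usage: actions (a), (b) and (c) no longer race on the single-use token.
// callApi (assumed) performs the request and resolves with the token the
// server sent back in the response cookie.
// store.run((token) => callApi('/protected_api_1', token));
// store.run((token) => callApi('/protected_api_2', token));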
Grouping The Requests
Let's suppose that your application executes (a), (b) and (c) very often. In that case, it would be a good idea to group them into just one action and execute it on the server with just one callback. This could be complicated in your case because it would require creating new resources or rethinking how you model the problem.
Managing token expiration
I've seen this in some projects. You set a soft expiration for the token, let's say 15 minutes (or even less). Before that time has passed you keep giving the client the same token; only after it has passed do you issue a new one. (a), (b) and (c) can then run at the same time with the same token. Problems can still occur when you run requests close to the expiration time, depending on how long they take to complete.
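A server-side sketch of that soft-expiration rule, assuming an in-memory record of when each user's token was issued (all names here are hypothetical):

const SOFT_TTL_MS = 15 * 60 * 1000; // the "15 minutes" from above
const issuedTokens = new Map();     // userId -> { token, issuedAt }

function tokenForRequest(userId) {
  const entry = issuedTokens.get(userId);
  if (entry && Date.now() - entry.issuedAt < SOFT_TTL_MS) {
    return entry.token;             // still within the window: reuse it
  }
  const fresh = generateToken();    // assumed helper, e.g. a crypto-random string
  issuedTokens.set(userId, { token: fresh, issuedAt: Date.now() });
  return fresh;                     // requests made after this point get the new token
}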
I can't give you more details about the implementation because I don't know what language or framework you are using for the client, and I've never built an Android application, but I think it would be a good idea to try one of these ideas or a mix of them. Best wishes.
Original
I don't understand what you mean by parallel in this context.
Try making a Token Store resource in your app which every parallel request consumes and updates once its request is done.
If all requests are sent at the same time, maybe it would be a good idea to group them into just one operation, but that might require API endpoint changes.
I have built an API that generates an authentication token for users who log in. I also have a client application written in Node.js.
When I make the request with the user credentials from the client application to the API, I get the authentication token: how should I store it in the client application? I'm not supposed to request a token every time I want to make a request to the API, correct?
I thought about putting the token in a Cookie, but I don't think that's the best solution.
What would you recommend?
Upon successful login, a unique, one-use token should be created server-side and stored in the database against a user id and a timestamp. You store the token in a cookie client-side. You then pass the token up with every subsequent API call. The server should then check that the token is valid (i.e. not expired; say, issued or updated less than 30 minutes ago). If it is valid, you can retrieve the user details stored against that token and perform whatever backend functionality you need (as the user is authenticated). You then update the timestamp for that token (refreshing the session, since you want the login to time out after, say, 30 minutes of no user interaction). If the token is expired or non-existent when you get the API call, redirect to the login page.
Also, you probably know this already, but make sure the token is unique and non-guessable. I tend to generate new random GUIDs and encrypt them; do not use sequential ids or anything like that.
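For example, in Node.js a non-guessable token can be generated from a cryptographically secure random source (a sketch; the length is an arbitrary choice):

const crypto = require('crypto');

// 32 random bytes -> 64-character hex string; unguessable, unlike sequential ids.
function generateSessionToken() {
  return crypto.randomBytes(32).toString('hex');
}

// Store the generated token against the user id and a timestamp server-side,
// and send it to the client in a cookie.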
I think that this link could help you:
Implementing authentication with tokens for restful applications - https://templth.wordpress.com/2015/01/05/implementing-authentication-with-tokens-for-restful-applications/
In fact, you should have a token with an expiration date, so you don't have to get a new token before every request. When the token expires, you simply get a new one from a "refresh token" service.
Regarding the question of how to store the token in the client application, I think you could keep it in memory (a map or an embedded database).
Otherwise, to finish, I don't think it's a good idea to use cookies for such a use case.
Hope it will help you.
Thierry
We're working on an application that uses a very similar approach. The client application is a static HTML5/JS single-page application (with no server-side generation whatsoever) and communicates with an API server.
The best approach is to store the session token in memory: that is, inside a variable in the JS code. If your client application is a single page, it shouldn't be a problem.
In addition to that, we also keep the session token in sessionStorage to preserve it in case the user refreshes the page. To preserve the token when new tabs are created (sessionStorage is specific to a browser window), we also store it in localStorage when the page is being closed, together with a counter of open tabs (when all tabs of the application are closed, we remove the token).
// Handle page reloads using sessionStorage
var sess = sessionStorage.getItem('session-token')
if (sess && sess !== 'null') { // Sometimes empty values are a string "null"
    localStorage.setItem('session-token', sess)
}

// Set a counter to check when all pages/tabs of the application are closed
var counter = parseInt(localStorage.getItem('session-counter') || 0, 10)
counter++
localStorage.setItem('session-counter', counter)

// Event fired when the page/tab is closing
window.onbeforeunload = function() {
    var counter = parseInt(localStorage.getItem('session-counter') || 0, 10)
    counter--
    localStorage.setItem('session-counter', counter)

    // All pages are closed: remove the session token
    if (counter <= 0) {
        // Handle page reloads using sessionStorage
        sessionStorage.setItem('session-token', localStorage.getItem('session-token'))
        localStorage.removeItem('session-token')
    }
}
For more information about localStorage and sessionStorage: https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API
Why not cookies? Cookies are bad for two reasons:
1. They are generally more persistent, being shared across browser windows and tabs, and they can persist even after the browser is closed.
2. Most importantly, however, by the HTTP specifications they must be sent to the web server on every request. If you're designing an application where the client is completely separated from the API server, you don't want the server that hosts the client to see (or log!) the session token in any case.
Some extra advice:
Session tokens must expire. You can achieve that by storing session tokens in the database on the server and verifying them on every request, and/or by "signing" them (adding a timestamp to the token in plain text, then adding a signed part, for example an HMAC hash of the timestamp computed with a secret key only you know).
Tokens can be reused as many times as needed during their lifetime. However, after a certain number of seconds you may want your server to refresh the token, invalidating the old one and sending a new token to the client.
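A sketch of that timestamp-plus-HMAC signing using Node's crypto module (the token format and the environment variable are assumptions for illustration):

const crypto = require('crypto');
const SECRET = process.env.TOKEN_SECRET; // known only to the server (assumption)

// Token format: "<random>.<timestamp>.<hmac(random + "." + timestamp)>"
function issueToken() {
  const random = crypto.randomBytes(16).toString('hex');
  const ts = Date.now().toString();
  const sig = crypto.createHmac('sha256', SECRET).update(random + '.' + ts).digest('hex');
  return random + '.' + ts + '.' + sig;
}

function verifyToken(token, maxAgeMs) {
  const [random, ts, sig] = token.split('.');
  if (!random || !ts || !sig) return false;
  const expected = crypto.createHmac('sha256', SECRET).update(random + '.' + ts).digest('hex');
  if (sig.length !== expected.length) return false;
  const fresh = Date.now() - Number(ts) < maxAgeMs;
  // timingSafeEqual avoids leaking the signature through timing differences
  return fresh && crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}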
As far as I understand, there are two approaches to protecting against CSRF attacks: 1) a token per session, and 2) a token per request.
1) In the first case the CSRF token is generated only once, when the user's session is initialized. So there is only one valid token for the user at any time.
2) In the second case a new CSRF token is generated on each request, and after that the old one becomes invalid.
This makes the vulnerability harder to exploit, because even if an attacker steals a token (via XSS) it expires when the user goes to the next page.
But on the other hand this approach makes the webapp less usable. Here is a good quotation from security.stackexchange.com:
For example if they hit the 'back' button and submit the form with new values, the submission will fail, and likely greet them with some hostile error message. If they try to open a resource in a second tab, they'll find the session randomly breaks in one or both tabs
When analyzing the Node.js Express framework (which is based on Connect) I noticed that a new CSRF token is generated on each request, but the old one does not become invalid.
My question is: what is the reason to provide new CSRF token on each request and not to make invalid an old one?
Why not just generate a single token per session?
Thank you and sorry for my English!
CSRF tokens are nonces. They are supposed to be used only once (or safely after a long time). They are used to identify and authorize requests. Let us consider the two approaches to preventing CSRF:
Single token, fixed per session: the drawback is that the client can pass its token to others. This need not be due to sniffing, a man-in-the-middle attack or some security lapse; it can simply be betrayal on the user's part. Multiple clients can then use the same token, and sadly nothing can be done about it.
Dynamic token: the token is updated every time any interaction happens between server and client, or whenever a timeout occurs. It prevents the use of older tokens and simultaneous use by multiple clients.
The drawback of the dynamic token is that it restricts going back and continuing from there. In some cases that restriction is desirable, for example in a shopping cart, where a reload is a must to check whether items are still in stock, and CSRF protection will prevent resending an already-sent form or repeating a buy/sell.
Fine-grained control would be better. For the scenario you mention you can do without CSRF validation, so simply don't apply CSRF protection to that particular page. In other words, handle the CSRF protection (or its exceptions) per route (see the sketch below).
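For instance, with Express and the csurf middleware you can enable the protection on specific routes only; this is just a sketch of that per-route idea (the route names are made up):

var express = require('express')
var cookieParser = require('cookie-parser')
var bodyParser = require('body-parser')
var csrf = require('csurf')

var app = express()
app.use(cookieParser())

var csrfProtection = csrf({ cookie: true })                // token secret kept in a cookie
var parseForm = bodyParser.urlencoded({ extended: false })

// Routes WITH CSRF protection: the client fetches a token and posts it back.
app.get('/form', csrfProtection, function (req, res) {
  res.json({ csrfToken: req.csrfToken() })
})
app.post('/process', parseForm, csrfProtection, function (req, res) {
  res.send('data is being processed')
})

// Route WITHOUT CSRF protection: e.g. a page where going "back" and
// resubmitting should keep working.
app.post('/reload-cart', parseForm, function (req, res) {
  res.send('cart refreshed')
})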
Update
I can only think of two reasons why a single dynamic token is preferred over multiple tokens:
Multiple tokens are indeed better, but you should have at least one dynamic token like the one above. This means designing a detailed workflow, which may become complex. For example, see here:
https://developers.google.com/accounts/docs/OAuth2
https://dev.twitter.com/docs/auth/implementing-sign-twitter
https://developers.facebook.com/docs/facebook-login/access-tokens/
These are tokens for accessing their APIs (form submission etc.), not just for login. Each provider implements them differently. It is not worth doing unless you have a good use case, because your web pages will use the tokens heavily, and form submission is no longer simple.
A single dynamic token is the easiest option and is readily available in libraries, so you can use it right away.
Advantages of multiple tokens:
You can implement transactions and enforce an ordering between requests.
You can recover from timeouts and authentication errors (which you now have to handle).
They are more secure and more robust than single tokens: you can detect token misuse and blacklist the user.
By the way, if you want to use multiple tokens, there are OAuth2 libraries available now.