How long do IBM-Graph authorization tokens last for?

In IBM-Graph, to avoid an excessively long authorization step on each request, we request a session token first and send it along in the headers of any subsequent requests, exactly as explained in the documentation.
To share this single token across our application cluster, we currently store the active IBM-Graph session token in memcached; each node of the cluster pulls the token from there before every request to our graph.
Having monitored this key, we've seen that it hasn't changed or expired since we made our first request a couple of days ago. This raises a few questions:
How long do these session tokens last for?
Is our current method of distributing this single key even required?
Is there a better method?
It would be nice to remove the need to hit memcached on every request altogether. Knowing how long the tokens last could help us devise a more elegant solution than constantly hammering a single small memcached instance.

How long do these session tokens last for?
IBM Graph tokens are intended to last for a long while - you should expect somewhere around a day, though it's subject to change. It shouldn't ever be shorter than an hour.
Is our current method of distributing this single key even required?
No, not really. I'd write some code to automatically acquire new tokens on HTTP 403 (i.e., at boot time and when they expire) and use them locally. There's no limit to the number of tokens you can have active at one time.
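For example, here's a minimal sketch of that pattern in Node.js. It assumes Node 18+ (global fetch); the session-endpoint path, the response field name, and the Authorization scheme are placeholders for whatever your IBM Graph instance's documentation specifies, not the documented API itself:

```js
// In-process token cache with refresh-on-403 (sketch; endpoint, field and scheme are placeholders).
const BASE = process.env.GRAPH_API_URL;
const BASIC = 'Basic ' + Buffer.from(
  `${process.env.GRAPH_USER}:${process.env.GRAPH_PASS}`).toString('base64');

let cachedToken = null;

async function acquireSessionToken() {
  // Placeholder path: use the session-token call described in your IBM Graph docs.
  const res = await fetch(`${BASE}/_session`, { headers: { Authorization: BASIC } });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return (await res.json()).token; // field name is an assumption
}

async function graphRequest(path, options = {}) {
  if (!cachedToken) cachedToken = await acquireSessionToken();
  const call = () => fetch(`${BASE}${path}`, {
    ...options,
    headers: { ...options.headers, Authorization: `gds-token ${cachedToken}` } // scheme: assumption
  });
  let res = await call();
  if (res.status === 403) {             // token expired: refresh once and retry
    cachedToken = await acquireSessionToken();
    res = await call();
  }
  return res;
}
```

Each node keeps its own token in memory, so memcached drops out of the request path entirely.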

Related

Is this an acceptable approach to refreshing JWTs?

I'm in the process of rebuilding an existing web app, that uses JWTs to manage authentication. I'm still new to JWTs, so I'm learning about how they should work, while, at the same time, trying to understand why the web app's current implementation is the way it is.
The current version's flow is as follows:
When a user successfully logs in or registers, a user object is returned along with a JWT property. This JWT is included in subsequent API calls as an Authorization header.
Every ten minutes, a get request is made to API endpoint /refresh-token.
If successful, the response body contains a success message, and the response header contains an updated Authorization header.
All subsequent ten-minute timed get requests to /refresh-token use the original JWT that was returned in step 1, and so on.
From what I've read so far, this doesn't correlate with any recommended approaches.
Is it safe enough to replicate this flow in the newer version, or is this something I'm better off not replicating?
Edit: I'm working solely on the front-end - the API isn't being updated for a while, so I'm limited to what it currently returns.
I believe this article summarizes the current state of the art: https://auth0.com/blog/refresh-tokens-what-are-they-and-when-to-use-them/. You usually have two tokens: an access token, which is short-lived, and a refresh token, which lives longer. This way you don't need to call the auth server every x minutes; you can do it on demand.
I don't know if you need to deal with blacklisting too. I believe blacklisting is easier when you have a separation between access token and refresh token (only the refresh token needs to be blacklisted). But I believe you could deal with this problem as well, probably in a slightly more sophisticated manner.
Having said that, what you have is not wrong. It's hard for me to point out any flaws in the way you are doing it beyond what has been pointed out above.
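As a rough front-end sketch of the on-demand idea, assuming (as described in the question) that the API returns the updated token in the Authorization response header, and assuming it rejects expired tokens with a 401; both of those are assumptions about your API:

```js
// Sketch: refresh the JWT only when a request is rejected, instead of polling every ten minutes.
let jwt = null;                                    // set this from the login/register response
function setJwt(token) { jwt = token; }

async function apiFetch(url, options = {}) {
  const call = () => fetch(url, {
    ...options,
    headers: { ...options.headers, Authorization: jwt }
  });

  let res = await call();
  if (res.status === 401) {                        // token rejected: try one refresh, then retry
    const refreshRes = await fetch('/refresh-token', { headers: { Authorization: jwt } });
    if (!refreshRes.ok) throw new Error('Session expired, please log in again');
    jwt = refreshRes.headers.get('Authorization'); // updated header, as the current API returns it
    res = await call();
  }
  return res;
}
```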

Should I store JWT tokens in redis?

I'm building an application with ExpressJS and MongoDB (Mongoose). The application contains routes where the user has to be authenticated before accessing them.
Currently I have written an Express middleware to do this. With the help of the JWT, it makes a MongoDB query to check whether the user is authenticated or not, but I feel this might put unnecessary request load on my database.
Should I integrate Redis for this specific task? Will it improve API performance, or should I go ahead with the existing MongoDB approach? It would be helpful to get more insight on this.
TLDR: If you want the capability to revoke a JWT at some point, you'll need to look it up. So yes, something fast like Redis can be useful for that.
One of the well documented drawbacks of using JWTs is that there's no simple way to revoke a token if, for example, a user needs to be logged out or the token has been compromised. Revoking a token means looking it up in some storage and then deciding what to do next. Since one of the points of JWTs is to avoid round trips to the database, a good compromise is to store them in something less taxing than an RDBMS. That's a perfect job for Redis.
Note however that having to look up tokens in storage for validity reintroduces statefulness and negates some of the main benefits of JWTs. To mitigate this drawback, make the list a blacklist (or blocklist, i.e. a list of invalid tokens): to validate a token, you look it up on the list and verify that it is not present.
You can further improve on space and performance by staggering the lookup steps. For instance, you could have a tiny in-app store that only tracks the first 2 or 3 bytes of your blacklisted tokens. The Redis cache would then track a slightly larger version of the same tokens (e.g. the first 4 or 5 bytes), and a full version of the blacklisted tokens would live in a more persistent store (filesystem, RDBMS, etc). This is an optimistic lookup strategy that quickly confirms that a token is valid (the common case). If a token happens to match an item in the in-app blacklist (because its first few bytes match), you move on to an extra lookup in the Redis store, then the persistent store if need be. Some (or all) of the stores may be implemented as tries or hash tables. Another efficient and relatively simple-to-implement data structure to consider is a Bloom filter.
As your revoked tokens expire (of old age), a periodic routine can remove them from the stores. Keep your blacklist short and manageable by also shortening the lifespan of your tokens.
Remember that JWTs shine in scenarios where revoking them is the exception. If you routinely blacklist millions of long-lasting tokens, it may indicate that you have a different problem.
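Here's a minimal Express middleware sketch of the blocklist idea, using the jsonwebtoken and ioredis packages. The revoked:* key naming and the assumption that your tokens are issued with a jti claim are mine, not part of the question:

```js
const jwt = require('jsonwebtoken');
const Redis = require('ioredis');

const redis = new Redis();                         // assumes a local Redis instance

async function authenticate(req, res, next) {
  const token = (req.headers.authorization || '').replace(/^Bearer /, '');
  try {
    // Signature + expiry check happens locally, with no database hit.
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    // Blocklist check: only revoked tokens are stored, keyed by the jti claim (assumption).
    const revoked = await redis.exists(`revoked:${payload.jti}`);
    if (revoked) return res.status(401).json({ error: 'Token revoked' });
    req.user = payload;
    next();
  } catch (err) {
    return res.status(401).json({ error: 'Invalid token' });
  }
}

// Revoke on logout/compromise: keep the entry only as long as the token could still be valid.
async function revoke(payload) {
  const ttl = payload.exp - Math.floor(Date.now() / 1000);
  if (ttl > 0) await redis.set(`revoked:${payload.jti}`, 1, 'EX', ttl);
}
```

The EX expiry keeps the blocklist self-pruning, which is the "keep your blacklist short" point above.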
You can use Redis for storing JWT labels. Redis is much faster and more convenient for storing such data, and the request to Redis should not greatly affect performance. You can try the library jwt-redis.
A JWT contains claims. You can store a claim such as
session: guid
and maintain a set in Redis of all blacklisted keys. A key should stay in the set for as long as the JWT is valid.
When your API is hit (sketched below):
verify the JWT signature; if it has been tampered with, stop
extract the claims into a list of key-value pairs
get the session key and check it against the blacklisted set in Redis
if found, stop; else continue
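Since members of a plain Redis set can't expire individually, one way to honor "as long as the JWT is valid" is a sorted set scored by the token's expiry time. This is my own variation on the steps above; the key name and claim name are assumptions:

```js
const Redis = require('ioredis');
const redis = new Redis();
const KEY = 'jwt:blacklist';                        // hypothetical key name

// Blacklist a session id until the token's exp (unix seconds).
async function blacklist(sessionId, exp) {
  await redis.zadd(KEY, exp, sessionId);
}

// Steps 3-4 above: check the session claim against the blacklist.
async function isBlacklisted(sessionId) {
  // Purge entries whose tokens have already expired on their own.
  await redis.zremrangebyscore(KEY, '-inf', Math.floor(Date.now() / 1000));
  const score = await redis.zscore(KEY, sessionId);
  return score !== null;
}
```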

Why does Express/Connect generate new CSRF token on each request?

As far as I understand, there are two approaches to protecting against CSRF attacks: 1) token per session, and 2) token per request.
1) In the first case, the CSRF token is generated only once, when the user's session is initialized, so there is only one valid token for the user at a time.
2) In the second case, a new CSRF token is generated on each request, and the old one becomes invalid afterwards.
This makes it harder to exploit the vulnerability, because even if an attacker steals a token (via XSS), it expires when the user goes to the next page.
But on the other hand this approach makes webapp less usable. Here is a good quotation from security.stackexchange.com:
For example if they hit the 'back' button and submit the form with new values, the submission will fail, and likely greet them with some hostile error message. If they try to open a resource in a second tab, they'll find the session randomly breaks in one or both tabs
When analyzing the Node.js Express framework (which is based on Connect), I noticed that a new CSRF token is generated on each request,
but the old one doesn't become invalid.
My question is: what is the reason to provide a new CSRF token on each request without invalidating the old one?
Why not just generate a single token per session?
Thank you and sorry for my English!
CSRF tokens are nonces. They are supposed to be used only once (or safely after a long time). They are used to identify and authorize requests. Let us consider the two approaches to prevent CSRF:
Single token fixed per session: The drawback here is that the client can pass its token to others. This need not be due to sniffing, man-in-the-middle, or some security lapse; it can simply be betrayal on the user's part. Multiple clients can then use the same token, and sadly nothing can be done about it.
Dynamic token: the token is updated on every interaction between server and client, or whenever a timeout occurs. It prevents use of older tokens and simultaneous use from multiple clients.
The drawback of the dynamic token is that it restricts going back and continuing from there. In some cases that could be desirable, like when implementing a shopping cart, where a reload is needed to check whether items are still in stock; the CSRF token then prevents resending an already-sent form or repeating a buy/sell.
Fine-grained control would be better. For the scenario you mention, you can do without CSRF validation, so don't use CSRF for that particular page. In other words, handle CSRF (or its exceptions) per route.
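For example, with the csurf middleware used with Express/Connect you can opt routes in or out individually. A sketch, with made-up routes:

```js
const express = require('express');
const cookieParser = require('cookie-parser');
const csurf = require('csurf');

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));

const csrfProtection = csurf({ cookie: true });     // per-route, not app-wide

// Protected: a state-changing form submission.
app.get('/transfer', csrfProtection, (req, res) => {
  res.send(`<form method="POST" action="/transfer">
    <input type="hidden" name="_csrf" value="${req.csrfToken()}">
    <button>Send</button></form>`);
});
app.post('/transfer', csrfProtection, (req, res) => res.send('ok'));

// Unprotected: an idempotent stock check where re-requesting is harmless.
app.get('/stock/:id', (req, res) => res.json({ inStock: true }));
```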
Update
I can only think of two reasons why a single dynamic token is better than multiple tokens:
Multiple tokens are indeed better, but you need at least one dynamic token like the one above. This means designing a detailed workflow, which may become complex. For example, see here:
https://developers.google.com/accounts/docs/OAuth2
https://dev.twitter.com/docs/auth/implementing-sign-twitter
https://developers.facebook.com/docs/facebook-login/access-tokens/
These are tokens to access their APIs (form submission etc.), not just login. Each one implements them differently. It's not worth doing unless you have a good use case, since your webpages will use it heavily; not to mention that form submission is no longer simple.
A single dynamic token is the easiest and is readily available in libraries, so you can use it out of the box.
Advantages of multiple tokens:
Can implement transactions. You can have ordering between requests.
Can fall back from timeouts and authentication errors (which you must now handle).
More secure and robust than single tokens: you can detect token misuse and blacklist the user.
By the way, if you want to use multiple tokens, there are OAuth2 libraries available now.

Make an HTTP request non-replayable

I have an application that runs on for example Google TV or Apple TV, which sends HTTP requests to a service of mine.
Now if someone listens in on this request, they can replay it and in that way execute a Denial of Service (DoS) attack on our service.
Is there any way to make each request unique, so it cannot be replayed?
I thought of sending the time, encrypted, in the request and checking the difference between the server time and the time the request was sent, but the time differences I'm seeing are too large to compare reliably.
Does anyone have a better idea?
You are in a good situation, as you have control over both the server side and the client side (your application is doing the talking). Include in your message:
The current time in milliseconds plus a random number
A hash of these values combined with (as a third input) some key only your application knows. Use a good one-way hashing algorithm.
Only code that knows the mentioned key will be able to compute a correct hash. Records of used requests (hash and timestamp) can be stored until they expire, which can be a long time; very old records can easily be expired because they contain the timestamp.
The positive feature of the proposed approach is that it does not require connecting in advance to receive a token, needs no authentication, needs no registration, and can use an open protocol. Using a token by itself does not help much against DoS, as an attacker can quickly write a script to connect and obtain a token in advance as well.
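A sketch of that message format in Node.js, using an HMAC rather than a bare hash of value-plus-key. The field names and the in-memory replay cache are assumptions; a clustered service would keep the cache in Redis or memcached instead:

```js
const crypto = require('crypto');

const SHARED_KEY = process.env.APP_SHARED_KEY;     // known only to the app and the server

// Client side: sign the request with timestamp + nonce + shared key.
function signRequest(body) {
  const timestamp = Date.now();
  const nonce = crypto.randomBytes(16).toString('hex');
  const hmac = crypto.createHmac('sha256', SHARED_KEY)
    .update(`${timestamp}.${nonce}.${body}`)
    .digest('hex');
  return { timestamp, nonce, hmac };               // sent e.g. as custom headers
}

// Server side: verify the signature and reject replays.
const seen = new Map();                            // nonce -> timestamp
const MAX_SKEW_MS = 5 * 60 * 1000;                 // generous window to tolerate clock drift

function verifyRequest(body, { timestamp, nonce, hmac }) {
  if (Math.abs(Date.now() - Number(timestamp)) > MAX_SKEW_MS) return false; // too old or too new
  if (seen.has(nonce)) return false;               // nonce already used: replay
  const expected = crypto.createHmac('sha256', SHARED_KEY)
    .update(`${timestamp}.${nonce}.${body}`)
    .digest('hex');
  if (hmac.length !== expected.length) return false;
  if (!crypto.timingSafeEqual(Buffer.from(hmac, 'hex'), Buffer.from(expected, 'hex'))) return false;
  seen.set(nonce, Number(timestamp));              // remember until the window passes
  return true;
}
```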

Secure Token URL - How secure is it? Proxy authentication as alternative?

I know it as a secure-token URL; maybe there is another name out there, but I think you all know it.
It's a technique mostly applied when you want to restrict content delivery to a certain client to whom you have handed a specific URL in advance.
You take a secret token, concatenate it with the resource you want to protect, and hash it. When the client requests this URL on one of your servers, the hash is reconstructed from the information in the request and compared; if it's the same, the content is delivered, otherwise the user gets redirected to your website or something else.
You can also put a timestamp in the hash to give the URL a time to live, or include the user's IP address to restrict delivery to their connection.
This technique is used by Amazon (S3 and CloudFront), Level 3 CDN, Rapidshare and many others. It's also a basic part of HTTP digest authentication, although there it is taken a step further with link invalidation and other things.
Here is a link to the Amazon docs if you want to know more.
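To make it concrete, here is roughly what such a signing scheme looks like in code. This is a generic sketch of the idea, not Amazon's or Level 3's exact format:

```js
const crypto = require('crypto');

const SECRET = process.env.URL_SIGNING_SECRET;

// Issue a URL that is valid until `expires` (unix seconds) for one client IP.
function signUrl(path, expires, clientIp) {
  const sig = crypto.createHmac('sha256', SECRET)
    .update(`${path}\n${expires}\n${clientIp}`)
    .digest('hex');
  return `${path}?expires=${expires}&sig=${sig}`;
}

// On the delivery server: rebuild the signature from the request and compare.
function verifyUrl(path, expires, clientIp, sig) {
  if (Number(expires) < Math.floor(Date.now() / 1000)) return false; // link expired
  const expected = crypto.createHmac('sha256', SECRET)
    .update(`${path}\n${expires}\n${clientIp}`)
    .digest('hex');
  return sig.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```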
Now my concern with this method is that if one person cracks one of your links, the attacker gets your secret token in plain text and can sign any URL in your name.
Or worse, in the case of Amazon, access your services with administrative scope.
Granted, the string hashed here is usually pretty long, and you can include a lot of stuff, or even force the data to have a minimum length by adding some unnecessary data to the request, maybe some pseudo-variable in the URL that is not used, filled up with random data.
Therefore brute-force attacks to crack the SHA-1/MD5 (or whatever hash you use) are pretty hard. But the protocol is open, so you only have to fill in the gap where the secret token is and fill up the rest with the data known from the request. And today's hardware is awesome and can calculate MD5s at a rate of multiple tens of megabytes per second. This sort of attack can be distributed to a computing cloud, and you are not limited to something like "10 tries per minute by a login server", which is what usually makes hashing approaches quite secure. And now with Amazon EC2 you can even rent the hardware for a short time (beat them with their own weapons, haha!)
So what do you think? Do my concerns have a basis, or am I being paranoid?
However, I am currently designing an object storage cloud for special needs (integrated media transcoding and special delivery methods like streaming and so on).
Now Level 3 has introduced an alternative approach to secure-token URLs. It's currently in beta and only open to clients who specifically request it. They call it "proxy authentication".
What happens is that the content-delivery server makes a HEAD request to a server specified in your (the client's) settings and mimics the user's request, so the same GET path and IP address (as an X-Forwarded-For-style header) are passed. You respond with an HTTP status code that tells the server whether or not to go ahead with the content delivery.
You can also introduce some secure-token process into this, and you can put more restrictions on it, like letting a URL be requested only 10 times or so.
It obviously comes with a lot of overhead, because additional requests and calculations take place, but I think it's reasonable and I don't see any caveats with it. Do you?
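To illustrate the flow, here is a sketch of what the authorization endpoint on my side could look like. The path, header names and policy function are placeholders; the actual beta may differ:

```js
const express = require('express');
const app = express();

// The CDN mirrors the user's request as a HEAD request to this endpoint.
app.head('/cdn-auth/*', (req, res) => {
  const requestedPath = req.path.replace(/^\/cdn-auth/, '');
  const clientIp = req.headers['x-forwarded-for'];  // assumed header carrying the viewer's IP

  // Your own rules: signed token, download counters, per-IP limits, etc.
  if (isAllowed(requestedPath, clientIp)) {
    res.sendStatus(200);   // tell the CDN to go ahead with delivery
  } else {
    res.sendStatus(403);   // tell the CDN to refuse
  }
});

function isAllowed(path, ip) {
  // Placeholder for whatever policy applies (e.g. look up a token store, limit to 10 requests).
  return true;
}

app.listen(8080);
```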
You could basically reformulate your question as: how long does a secret token need to be in order to be safe?
To answer this, consider the number of possible characters (lowercase alphanumeric plus uppercase already gives 62 options per character). Secondly, ensure that the secret token is random and not in a dictionary or something. Then, for instance, if you take a secret token 10 characters long, it would take 62^10 (= 839,299,365,868,340,224) attempts to brute-force in the worst case; the average case would be half of that, of course. I wouldn't really be scared of that, but if you are, you could always make the secret token at least 100 characters long, in which case it takes 62^100 attempts to brute-force (a number that spans three lines in my terminal).
In conclusion: just take a token big enough, and it should suffice.
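For instance, generating such a token is trivial (a sketch; 32 random bytes already give 2^256 possibilities, far beyond the 62^10 example above):

```js
const crypto = require('crypto');

// 32 random bytes of entropy, encoded as 64 hex characters.
const secretToken = crypto.randomBytes(32).toString('hex');
console.log(secretToken);
```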
Of course, proxy authentication does offer your clients extra control, since they can control much more directly who can look and who can't, and this would for instance defeat email sniffing as well. But I don't think brute-forcing needs to be a concern, given a long enough token.
It's called a MAC (message authentication code), as far as I understand.
I don't understand what's wrong with hashes. Simple calculations show that a SHA-1 hash, at 160 bits, gives us very good protection. E.g. even if you had a super-duper cloud doing a billion billion (10^18) attempts per second, you would still need on the order of 10^22 years to brute-force it.
There are many ways to secure a token:
Block the IP after X failed token decodings
Add a timestamp to your token (hashed or encrypted) to revoke it after X days or X hours
My favorite: use a fast data store such as Memcached, or better, Redis, to store your tokens (sketched below)
Like Facebook: generate a token with a timestamp, IP, etc., and encrypt it!
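For example, a sketch of the Redis option with a built-in expiry (the key naming and the two-hour TTL are just illustrative):

```js
const Redis = require('ioredis');
const redis = new Redis();

// Store a token and let Redis revoke it automatically after e.g. 2 hours.
async function storeToken(userId, token) {
  await redis.set(`token:${userId}`, token, 'EX', 2 * 60 * 60);
}

async function isTokenValid(userId, token) {
  const stored = await redis.get(`token:${userId}`);
  return stored !== null && stored === token;
}
```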
