Why do session stores have static timeouts/maxAge - node.js

I am using node.js + redis for session persistence; however, I'm noticing that nearly every example of a redis store or other session persistence mechanism has a static maxAge or timeout that you can configure.
It makes sense to me that the session length should be based on the last interaction, which would mean updating the timeout on each request. Redis's EXPIRE documentation has a section on refreshing the timeout.
Is refreshing the session timeout bad by design? Should static timeouts always be used?
Edit
My original question was very general since I couldn't find documentation for my specific case and I assumed perhaps it was bad practice! I finally discovered how to do this with Connect + Node after looking at the source code:
Connect listens to the header end event (to know when to update the session)
When the event fires, it asks the session store to save the session
Specifically as part of connect-redis, the save method updates the maxAge
In short, I was looking in the wrong place for documentation. Connect#session documents that if maxAge is assigned a new value, session stores (like connect-redis) should honor it.
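For anyone who lands here, this is roughly what that looks like at the Redis level. A minimal sketch, assuming the node-redis v4 client and connect-redis's default "sess:" key prefix (both are assumptions about your setup):

    const { createClient } = require('redis');

    async function main() {
      const client = createClient();
      await client.connect();

      // Sliding timeout: calling EXPIRE again resets the countdown,
      // which mirrors what connect-redis's save method does with maxAge.
      async function touchSession(sessionId, ttlSeconds) {
        await client.expire(`sess:${sessionId}`, ttlSeconds);
      }

      // Create a session with a 30-minute TTL, then refresh it.
      await client.set('sess:abc123', JSON.stringify({ user: 'nik' }), { EX: 1800 });
      await touchSession('abc123', 1800); // countdown starts over

      await client.quit();
    }

    main().catch(console.error);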

There is no such thing as bad design, only bad choices.
Static Max Timeout
A good choice where security is of the utmost importance. Using a tight session timeout, especially with authentication, ensures that the end user is the intended user and not someone who dropped in while the principal user was away from his/her PC or device. The major downside to this approach is the negative impact on user experience. The last thing you want is the session going stale just before the user was about to check out or do something important; with a static timeout, this is inevitable and will happen often enough to piss off users.
Reset Timeout Based on Last Visit
It's safe to say most websites use this approach, since it offers a good balance between security and user experience. Resetting the session timeout based on the last visit eliminates the issue with static max timeouts, and most e-commerce and banking websites do it this way, so it's certainly an accepted approach.
Not knowing what you're actually building, I'd say going with the reset approach is probably a good choice nonetheless. The examples you mentioned likely omitted resetting the timeout for brevity reasons, not because it's a bad design.
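If you end up on the Connect/Express stack, the reset behaviour is a one-flag affair in express-session (the successor to Connect's session middleware). A minimal sketch; the secret and timeout values are placeholders:

    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(session({
      secret: 'change-me',                // placeholder secret
      resave: false,
      saveUninitialized: false,
      rolling: true,                      // re-send the cookie on every response,
                                          // resetting maxAge (last-visit behaviour)
      cookie: { maxAge: 30 * 60 * 1000 }, // 30-minute sliding window
    }));

    app.listen(3000);

A compatible store such as connect-redis will then refresh the Redis TTL whenever the session is saved.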

Related

Remove session entries in redis upon cookie deletion on the user side

I have the following scenario:
A user logs in, and a session entry is created via connect-redis that is valid for 2 weeks. The user can now access certain parts of the app using the session id that is stored in the app.
Now, if 1) the user deletes that cookie in the browser (with the session) and 2) logs in again, there are two session entries in Redis associated with the same user, the older one being obsolete.
What is the best way to deal with such old/obsolete sessions? Should I use a client library for redis, search through all sessions to find the ones that match the info of the currently logging-in user (after she potentially manually removed the cookie), and purge these obsolete sessions; or is there a better way?
Thanks,
nik
That depends on whether this (the user deleting the cookie) is a common scenario and, if it is, whether the obsolete session entries actually cause a problem on the server.
Two potential "problems" that I can think of are:
Security - could the stale session be exploited for malicious intent? I do not see how that's possible, but I may be wrong(tm).
Storage - are the stale sessions taking up too much memory (RAM)? If there are a lot of stale sessions and each one is large enough, this could become a problem.
Unless 1 or 2 applies to your use case, I don't see why you'd want to go through the trouble of "manually" cleansing old sessions. Assuming that you're giving a ttl value to each session (2 weeks?), outdated entries will be purged automatically after that period, so no extra action is needed to handle them.
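If you did decide manual cleanup was worth it, a SCAN-based sweep is the usual shape. A sketch only, assuming the node-redis v4 client, connect-redis's default "sess:" prefix, and a userId field stored inside each session (all assumptions about your setup):

    // Delete every session belonging to a given user except the current one.
    async function purgeSessionsForUser(client, userId, keepSessionId) {
      for await (const key of client.scanIterator({ MATCH: 'sess:*' })) {
        if (key === `sess:${keepSessionId}`) continue; // spare the live session
        const raw = await client.get(key);
        if (!raw) continue; // expired between SCAN and GET
        const sess = JSON.parse(raw);
        if (sess.userId === userId) {
          await client.del(key);
        }
      }
    }

Note that SCAN walks the whole keyspace, so this is something to run at login time, not on every request.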

HTTPS or other clever authentication methods

A little background: I am going to be constructing a webserver, likely with the most up-to-date version of Apache when I get around to it. It is going to be updated with sensor information from a makeshift security system I have.
As a counterpart, I am designing an app to go along with it that will automatically contact the webserver and pull the sensor information about once every 1.5 minutes.
I want to have an authentication method so that the average Bob can't see this information, mostly because there will be some command and control as part of the server as well.
The question: I feel like a simple username and password is the wrong way to go about this since it isn't dynamic, and theoretically seeing the same credentials sent that frequently could be dangerous, so is there any other authentication method that could mitigate this?
The question pt. 2: Obviously I want an encrypted channel, will https stumble over itself if it tries to renegotiate every minute and a half?
I haven't begun this project yet, much less chosen any language to write it in, meaning I am super open-minded to suggestions; any help is greatly appreciated.
The question: I feel like a simple username and password is the wrong way to go about this since it isn't dynamic, and theoretically seeing the same credentials sent that frequently could be dangerous, so is there any other authentication method that could mitigate this?
You could use Google Sign-In to allow login via a Google account.
Or you could implement two factor authentication with say Google Authenticator or via SMS to prove that the user logging in has more than one factor of authentication. These factors could be:
Something you know (e.g. password)
Something you have (e.g. phone that provides a One Time Password)
Edit: Having re-read your question - yes, you are fine to authenticate with username and password (over HTTPS); however, you should then store a session identifier client-side and simply send this in future requests rather than the username/password each time. This is more secure, as the identifier can be stored safely client-side and, if exposed, can be easily revoked.
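In your polling app, that flow might look like the following. A sketch under stated assumptions: Node 18+ (for the global fetch), with /api/login and /api/sensors as stand-in endpoint names:

    // Log in once over HTTPS, then reuse an opaque token for every poll.
    async function pollSensors() {
      const login = await fetch('https://example.com/api/login', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ username: 'bob', password: 'secret' }),
      });
      const { token } = await login.json();

      // Credentials never travel again; only the revocable token does.
      setInterval(async () => {
        const res = await fetch('https://example.com/api/sensors', {
          headers: { Authorization: `Bearer ${token}` },
        });
        console.log(await res.json());
      }, 90 * 1000); // once every 1.5 minutes, as in the question
    }

    pollSensors().catch(console.error);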
The question pt. 2: Obviously I want an encrypted channel, will https stumble over itself if it tries to renegotiate every minute and a half?
Nope, this is what it is designed for. Browsers will keep an HTTPS connection open for a length of time. Additionally, they will use session resumption rather than executing a full TLS handshake when a new connection needs to be established. Session resumption is much quicker than establishing a completely new session. See this article on the CloudFlare blog for more info.

Should Domain Entities always be loaded in their entirety?

I have a custom ASP.NET Membership Provider that I am trying to add password history functionality to. User's passwords expire after X days. Then they have to change their password to one that has not been used in their past X changes.
I already had the User entity, which has a password attribute for their current password. This maps to the User table in the db. Since I needed a list of previous passwords I created a UserPassword table to store this information with a FK reference to the UserId.
Since passwords are value objects, and have no meaning outside of the user, they belong inside the User aggregate, with the User as the root. But herein lies my dilemma. When I retrieve a User from the repository, do I always have to get all of their previously used passwords? 99% of the time I don't care about their old passwords, so retrieving them each time I need a User entity seems like a dumb thing to do for db performance. I can't use lazy loading because the User entity is disconnected from the context.
I was thinking of creating a PasswordHistory entity but for the reason stated above, passwords aren't really entities.
How would you DDD experts out there handle this situation?
Thanks.
Edit 1: After considering this some more, I realized this is essentially a question about Lazy Loading. More specifically, how do you handle lazy-loading in a disconnected entity?
Edit 2: I am using LINQ to SQL. The entities are completely detached from the context using this from CodePlex.
It is hard to fully answer this question because you do not specify a platform, so I cannot be exactly sure what you even mean by "disconnected". With Hibernate, "disconnected" means you have an object in a valid session but the database connection is not currently open. That is trivial: you simply reconnect and lazy load. The more complicated situation is where you have an object which is "detached", i.e. no longer associated with an active session at all, and in that case you cannot simply reconnect; you have to either get a new object or attach the one you have to an active session.
Either way, even in the more complicated scenarios, there is still not a whole lot to lazy loading strategies because the requirements are so inflexible: You have to be "connected" to load anything, lazy or otherwise. Period. I will assume "disconnected" means the same thing as detached. Your strategy comes down to two basic scenarios: is this a situation where you probably need to just reconnect/attach on the fly to lazy load, or is it a scenario where you want to make a decision to sometimes conditionally load additional objects before you disconnect in the first place?
Sometimes you may in fact need to code for both possibilities.
In your case you also have to be connected not only to lazy load the old passwords but to update the User object in the first place. Also, since this is ASP.NET, you might be using session-per-request, in which case your options are basically down to one: conditionally load before you disconnect, and that is about it.
The most common scenario would be that a person logs in, the system determines they are required to change their password, and it asks them to do so before proceeding. In that case you might as well just take care of it immediately after login and keep the User connected. But you are probably using session-per-request, so what you could do is process the time limit in the first request; if it has expired, you are still connected at that point, so go ahead and return a fully loaded User (assuming you are using the historic passwords in some kind of client-side script validation). Then on the submit trip you could reattach, or just get a new User instance and update that.
Then there is always the possibility you also have to provide them with the option to change their password at any time. They are already logged in. It does not matter much here: you have a User, but the request ended long ago and it does not have passwords loaded. Here, I would probably just write a service method where, when they invoke a change-password function, the service gets a second copy of the User object with the full history for update purposes only, updates the password, and then discards that object without ever using it for session or authentication purposes. Or, if you are using session-per-request, you have to do the equivalent: get a fully initialized object for client-side validation purposes; then, when the data is submitted, you can either reattach the one you already have or just get yet a third instance to actually do the update.
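To make that "second, fully loaded copy" idea concrete, here is a rough sketch. The repository method and field names are invented for illustration (the original is ASP.NET/LINQ to SQL, but the shape is language-agnostic):

    const crypto = require('crypto');

    // Illustrative only - use a proper KDF (bcrypt, PBKDF2) in real code.
    const hash = (pw) => crypto.createHash('sha256').update(pw).digest('hex');

    async function changePassword(userRepo, userId, newPassword) {
      // Load a throwaway copy of the aggregate WITH its password history,
      // used only for this update and never for the authenticated session.
      const user = await userRepo.findWithPasswordHistory(userId);

      if (user.passwordHistory.includes(hash(newPassword))) {
        throw new Error('Password was used recently; choose another.');
      }

      user.passwordHistory.push(user.password);
      user.password = hash(newPassword);
      await userRepo.save(user); // discard the instance afterwards
    }

The authenticated User the rest of the app holds stays slim; only this one code path ever pays for the history.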
If the password is needed after beginning an authenticated session, you could still do the same things and either replace the local User or update the local User's in memory password version as well.
If you have too much stuff going on with multiple levels of authentication most likely you are going to have to require them to logoff and do a full log back in after a password change anyway, so the state of the User does not matter much once they request a password change.
In any case, if you are using session-per-request and your objects become fully detached after every request, then in the first scenario you can still lazy load while you are on the server during the original request to return data for client-side validation. In the second scenario you have to make another trip (there really is no such thing as lazy loading here). In both cases, though, you have to weigh your two update options, because you are always disconnected before an update. You can either get a second instance from the database on the submit trip to update, or you can reattach the one you already have. It depends on what is optimal/easiest: does saving a db round trip for an uncommon event really matter? Does reattaching using your ORM of choice possibly hit the database again anyway? I would probably not bother to reattach and instead just get a new instance for the actual update as I needed it.

What are the implications of using 'low' security in cakephp?

I had an authentication problem in cakephp: when posting credentials from an external site the authentication would work, and then get immediately lost, with the site prompting for login information again.
This guy determined that the cakephp session cookie was changing. His solution was to set security to low.
Seems like in medium or high security Cake makes a double check for referer... but with low security it works fine when clicking auth-protected links from external sites like hotmail or yahoo
This solution also worked for me, but what am I losing by setting cakephp to 'low' security?
When security is high, a new session ID gets generated on every request. It is practically impossible to create a single-sign-on solution between two applications by sharing a session cookie in this case, since Cake will constantly change the session ID without notifying the other application.
When security is medium (or higher), session.referer_check is enabled.
When security is low, you don't have either of the above features, but it is still just as secure as any average PHP website/CMS out there.
The main thing that I know of is the session timeout: as per the app/config/core.php comments, your Session.timeout value is multiplied by a larger number at lower security levels, so sessions live longer. For example, a Session.timeout of 120 yields 120 x 10 = 1,200 seconds at 'high' but 120 x 300 = 36,000 seconds at 'low'.
The book backs this up,
The level of CakePHP security. The session timeout time defined in 'Session.timeout' is multiplied according to the settings here.
Valid values:
'high' = x 10
'medium' = x 100
'low' = x 300
'high' and 'medium' also enable session.referer_check
CakePHP session IDs are also regenerated between requests if 'Security.level' is set to 'high'.
Ref: http://book.cakephp.org/view/44/CakePHP-Core-Configuration-Variables
So the other thing looks to be the referrer checking.
session.referer_check contains the substring you want to check each HTTP Referer for. If the Referer was sent by the client and the substring was not found, the embedded session id will be marked as invalid. Defaults to the empty string.
So by the looks of it, the things you lose are the ability to accurately determine who and which sessions you are dealing with.
I ran into a similar problem with losing sessions, and many answers pointed to $this->requestAction(), as it will basically curl a request out of the app, so it can look like another session being created under high security.
The other thing that many Google answers threw up was turning off Session.checkAgent in your app/config/core.php, as that means the user agent is no longer validated against the session. This at least prevented me from losing the session information between page requests.
:)
Two things happen when setting to 'low':
1) the timeout is bigger
2) if session hijacking is easy, then it will be even easier, since the session doesn't regenerate between requests as it does when set to 'high'!
And nothing else.
By the way, you can change the security level, the session timeout, or both for a specific page... so it is not an irreversible choice.
I believe the only ramifications of setting this to low are that the session time is multiplied by 300, rather than by 10 or 100 for high and medium respectively, and the session referer check that you are having the issue with.
Update:
If you previously had this set to high, you would also lose out on the session ID regeneration between requests.

Best way to limit (and record) login attempts

Obviously some sort of mechanism for limiting login attempts is a security requisite. While I like the concept of an exponentially increasing time between attempts, what I'm not sure about is where to store the information. I'm also interested in alternative solutions, preferably not including captchas.
I'm guessing a cookie wouldn't work due to cookies being blocked or cleared automatically, but would sessions work? Or does it have to be stored in a database? Being unaware of what methods can be or are being used, I simply don't know what's practical.
Use some columns in your users table: 'failed_login_attempts' and 'failed_login_time'. The first one increments per failed login and resets on successful login. The second one allows you to compare the current time with the time of the last failure.
Your code can use this data from the db to determine how long to lock out users, the time between allowed logins, etc. Something like the sketch below.
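A minimal sketch of that logic, assuming the two columns have already been read into a user object (persistence omitted; the threshold and backoff values are arbitrary):

    // Exponential lockout driven by failed_login_attempts / failed_login_time.
    function isLockedOut(user, now = Date.now()) {
      if (user.failed_login_attempts < 3) return false; // 3 free tries
      // Wait 2^(attempts - 3) minutes after the most recent failure.
      const waitMs = 2 ** (user.failed_login_attempts - 3) * 60 * 1000;
      return now - user.failed_login_time < waitMs;
    }

    function recordLogin(user, succeeded, now = Date.now()) {
      if (succeeded) {
        user.failed_login_attempts = 0;   // reset on successful login
      } else {
        user.failed_login_attempts += 1;  // increment per failed login
        user.failed_login_time = now;
      }
      return user; // write these two columns back to the users table
    }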
Assuming Google has done the necessary usability testing (not an unfair assumption) and decided to use captchas, I'd suggest going along with them.
Increasing timeouts is frustrating when I'm a genuine user and have forgotten my password (with so many websites and their associated passwords, that happens a lot, especially to me).
Storing attempts in the database is the best solution IMHO, since it gives you audit records of the attempted security breaches. Depending on your application, this may or may not be a legal requirement.
By recording all bad attempts you can also gather higher-level information, such as whether the requests are coming from one IP address (i.e. someone/something is attempting a brute force attack), so you can block the IP address. This can be VERY useful information.
Once you have determined a threshold, why not force them to request an email be sent to their email address (i.e. similar to 'I have forgotten my password'), or you can go for the CAPTCHA approach.
Answers in this post prioritize database-centered solutions because they provide a structure of records that makes auditing and lockout logic convenient.
While the answers here address guessing attacks on individual users, a major concern with this approach is that it leaves the system open to denial-of-service attacks. Any and every request from the world should not trigger database work.
An alternative (or additional) layer of security should be implemented earlier in the req/res cycle to protect the application and database from performing lockout operations that can be expensive and are unnecessary.
Express-Brute is an excellent example that utilizes Redis caching to filter out malicious requests while allowing honest ones.
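For reference, wiring express-brute in front of a login route looks roughly like this. A sketch assuming the express-brute and express-brute-redis packages; the retry and wait values are illustrative:

    const express = require('express');
    const ExpressBrute = require('express-brute');
    const RedisStore = require('express-brute-redis');

    const app = express();
    const store = new RedisStore({ host: '127.0.0.1', port: 6379 });
    const bruteforce = new ExpressBrute(store, {
      freeRetries: 3,           // attempts allowed before delays kick in
      minWait: 5 * 1000,        // first enforced delay
      maxWait: 60 * 60 * 1000,  // cap the exponential backoff at an hour
    });

    // The middleware rejects over-limit requests before any db work happens.
    app.post('/login', bruteforce.prevent, (req, res) => {
      res.send('credentials are checked only if the request got this far');
    });

    app.listen(3000);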
You know which userid is being hit; keep a counter, and when it reaches a threshold value, simply stop accepting anything for that user. But that means you store an extra data value for every user.
I like the concept of an exponentially increasing time between attempts, [...]
Instead of using exponentially increasing time, you could actually have a randomized lag between successive attempts.
Maybe if you explain what technology you are using people here will be able to help with more specific examples.
A lockout policy is all well and good, but there is a balance.
One consideration is to think about the construction of usernames - are they guessable? Can they be enumerated at all?
I was on an external app pen test for a dotcom with an employee portal that served Outlook Web Access/intranet services and certain apps. It was easy to enumerate users (the exec/management team on the web site itself, and through the likes of Google, Facebook, LinkedIn etc.). Once I got the format of the logon username (first name then surname, entered as a single string), I had the capability to shut hundreds of users out due to their three-strikes-and-out policy.
Store the information server-side. This would allow you to also defend against distributed attacks (coming from multiple machines).
You may like to block the login for some time, say 10 minutes after 3 failed attempts. Exponentially increasing time sounds good to me. And yes, store the information on the server side, in the session or database. Database is better. No cookies business, as they are easy for the user to manipulate.
You may also want to map such attempts against the client IP address, as it is quite possible that a valid user might get a blocked message while someone else is trying to guess that valid user's password with failed attempts.
