Node.js custom session store: clearing expired sessions

I am currently developing a session store for ArangoDB (connect-arango). It works almost identically to the MongoDB session store (connect-mongo, hence 'connect-arango'), but the problem is that ArangoDB does not have a built-in TTL for its entries.
MongoDB has this, so it is not a problem there. In ArangoDB, however, I have to handle expiry somewhere in the session store myself.
Would checking for expired sessions every 60 seconds (using setTimeout) be sufficient, or should I use something else, such as checking every time the "get" function is called?
I would use an AQL query to clear them, similar to this:
FOR s IN sessions
FILTER s.expires < DATE_NOW()
REMOVE s IN sessions
If the user were to clear their cookies, the session would never be accessed through the "get" function again, which means I can't check there whether it has expired.
What I could do, however, is run the above query every time the "get" function is called, but I think that would be unnecessary and put extra load on the server.
Edit: Just so no one misunderstands, I know how to clear the expired sessions; I just don't know how often to run the cleanup (in this case, the AQL query above).
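For concreteness, the interval-based variant I have in mind would look roughly like this. This is only a sketch using the arangojs driver (connect-arango may use a different client), and the collection name is a placeholder:

var Database = require('arangojs').Database;
var db = new Database('http://localhost:8529');

// Remove every session whose "expires" timestamp lies in the past.
// The collection name "sessions" is an assumption; adjust as needed.
function clearExpiredSessions() {
  db.query(
    'FOR s IN sessions FILTER s.expires < DATE_NOW() REMOVE s IN sessions'
  ).catch(function (err) {
    console.error('session cleanup failed:', err);
  });
}

// Run the cleanup every 60 seconds for the lifetime of the store.
setInterval(clearExpiredSessions, 60 * 1000);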

If you put a skiplist index on the expires attribute, running the above query every 60 seconds should not cause any problems. You can also create a periodic job within ArangoDB that runs this query every minute.
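For illustration, creating such an index from arangosh might look like this (assuming ArangoDB 2.x and the sessions collection from the query above):

// arangosh: skiplist index on "expires" so the range filter can use it
db.sessions.ensureSkiplist("expires");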
Alan Plum has added a sessions Foxx app to ArangoDB that basically implements all of the above. I'm not sure whether he has released documentation for it yet. The API documentation is visible at
localhost:8529/_db/_system/_admin/aardvark/standalone.html#!/sessions
If you have any questions about this Foxx application, please feel free to contact Alan at hackers (at) arangodb.org

As of ArangoDB 2.3 Foxx comes with a built-in sessions app you can use in your Foxx apps. You can re-use the session app even if you don't want to use Foxx.
You can simply mount a copy of the sessions app at a mount point of your choice. This allows you to configure the session TTL as well as other details (e.g. length of the session IDs). The app exposes an HTTP API that lets you create new sessions, update sessions, retrieve existing sessions and delete them. It automagically enforces the TTL (i.e. deletes expired sessions) when you try to retrieve or update a session.
Currently the TTL is only enforced when a session is accessed. Depending on your use case, this may still clutter up the collection with expired sessions. It's not yet possible to schedule recurring tasks directly inside ArangoDB; there is a job queue, but it is not a good fit for this kind of problem. This will likely be addressed in a future version of ArangoDB.
I would recommend monitoring over time how many expired sessions pile up in the collection of your mounted copy of the sessions app. It's probably sufficient to prune the expired sessions once a week (or even less often). Since the sessions app automatically deletes expired sessions when they are accessed via its API, the only problem is abandoned sessions (e.g. from private browsing mode or one-time users).
Disclosure: I wrote the new sessions/auth apps introduced in ArangoDB 2.3.
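If abandoned sessions do pile up, a simple option is a small standalone script run from cron once a week. This is only a sketch: it assumes the arangojs driver and that your mounted copy keeps its data in a collection called sessions with an expires timestamp, both of which you would need to adjust to your actual mount.

// prune-sessions.js - run from cron, e.g. once a week
var Database = require('arangojs').Database;
var db = new Database('http://localhost:8529');

db.query(
  'FOR s IN sessions FILTER s.expires < DATE_NOW() REMOVE s IN sessions'
).then(function () {
  console.log('expired sessions pruned');
  process.exit(0);
}).catch(function (err) {
  console.error('prune failed:', err);
  process.exit(1);
});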

Related

Node.js in-memory vs external session

This is all the information that I have about session stores:
"When you use memory-based storage, all session information is stored in memory and is lost when you stop and restart. Better to use external persistent storage."
But isn't it normal that sessions are lost when we stop or restart? Otherwise there could be security issues.
For example, on Amazon, if I add some products to my cart and then quit or log out, the next time I log in I find the same items in my cart. But that has nothing to do with persistent sessions.
Can anyone cite an example with a practical use?
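For context, the quoted advice is only about which store express-session uses. A minimal sketch of the difference, assuming express-session and an older connect-redis API (the secret and options are placeholders):

var express = require('express');
var session = require('express-session');
var RedisStore = require('connect-redis')(session);

var app = express();

// Default MemoryStore: sessions live inside the Node process and are
// lost on every restart (and it is not meant for production use).
// app.use(session({ secret: 'placeholder', resave: false, saveUninitialized: false }));

// External store: sessions survive restarts and can be shared between
// several Node processes behind a load balancer.
app.use(session({
  store: new RedisStore({ host: 'localhost', port: 6379 }),
  secret: 'placeholder',
  resave: false,
  saveUninitialized: false
}));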

Caching user permissions in Redis: good idea?

For the last few days I have been working on improving app performance. What do you think about caching user data and permissions in Redis? In my case, every time a user creates a post or tries to upload a file, the app checks in the database whether the user exists and fetches the user's permissions and role. My first idea was to put the permissions and role in the session, but a user can have multiple sessions on different devices, so every time a user gets banned or their permissions change, the app would need to update every one of that user's sessions, and as far as I know express-session does not support this kind of feature.
Unfortunately it's a very open question with no strict answer. But as advice, I'd say Redis is perfect for storing the user session altogether. Moving only parts of it into Redis would still require you to query the database (you get the session, then you must query for user information, and also ping Redis for permissions and roles). So I think you should put all session data in one place, and the fastest would be Redis. It would also let you persist that data so it's not held only in memory. There are also many ways to optimize it, such as deciding when to write the data (e.g. every second) and so forth.
Querying Redis is extremely fast and efficient since you don't have any user-to-user relations, and most of the time you won't search for anything other than "get me that user's session by id".
Putting user sessions in Redis is a very standard solution, if not the most commonly used one :) Good luck!
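A rough sketch of that setup, with express-session backed by connect-redis; findUser() and the permission names are hypothetical placeholders, and the store options depend on your connect-redis version:

var express = require('express');
var session = require('express-session');
var bodyParser = require('body-parser');
var RedisStore = require('connect-redis')(session);

var app = express();
app.use(bodyParser.json());
app.use(session({
  store: new RedisStore({ host: 'localhost', port: 6379 }),
  secret: 'placeholder',
  resave: false,
  saveUninitialized: false
}));

// On login, put the role and permissions into the Redis-backed session
// so later requests need no database lookup. findUser() is a
// hypothetical helper that verifies credentials against the database.
app.post('/login', function (req, res) {
  findUser(req.body.username, req.body.password, function (err, user) {
    if (err || !user) return res.sendStatus(401);
    req.session.user = { id: user.id, role: user.role, permissions: user.permissions };
    res.sendStatus(204);
  });
});

// Permission check straight from the session, no database round trip.
app.post('/upload', function (req, res) {
  var u = req.session.user;
  if (!u || u.permissions.indexOf('upload') === -1) return res.sendStatus(403);
  // ... handle the upload ...
  res.sendStatus(204);
});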

Remove session entries in Redis upon cookie deletion on the user side

I have the following scenario:
A user logs in and a session entry is created via connect-redis that is valid for 2 weeks. The user can now access certain parts of the app using the session id stored in the cookie.
Now, if (1) the user deletes that cookie in the browser (along with the session id) and (2) logs in again, there are now two session entries in Redis associated with the same user, the older one being obsolete.
What is the best way to deal with such old/obsolete sessions? Should I use a Redis client library, search through all sessions to find the ones that match the currently logging-in user (after they have potentially removed the cookie manually), and purge those obsolete sessions, or is there a better way?
Thanks,
nik
That depends on whether this (the user deleting the cookie) is a common scenario and, if it is, whether the obsolete sessions actually cause a problem on the server.
Two potential "problems" that I can think of are:
Security - could a stale session be exploited for malicious intent? I do not see how that's possible, but I may be wrong(tm).
Storage - are the stale sessions taking up too many (RAM) resources? If there are a lot of stale sessions and each one is large enough, this could become a problem.
Unless 1 or 2 applies to your use case, I don't see why you'd want to go through the trouble of "manually" cleansing old sessions. Assuming that you give a ttl value to each session (2 weeks?), outdated entries will be purged automatically after that period, so no extra action is needed to handle them.
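As a sketch of that TTL approach (option names vary between connect-redis versions; the numbers are placeholders):

var express = require('express');
var session = require('express-session');
var RedisStore = require('connect-redis')(session);

var TWO_WEEKS_IN_SECONDS = 14 * 24 * 60 * 60;

var app = express();
app.use(session({
  store: new RedisStore({
    host: 'localhost',
    port: 6379,
    ttl: TWO_WEEKS_IN_SECONDS              // Redis drops the entry after 2 weeks
  }),
  secret: 'placeholder',
  resave: false,
  saveUninitialized: false,
  cookie: { maxAge: TWO_WEEKS_IN_SECONDS * 1000 }  // browser drops the cookie too
}));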

How to add a global variable to the Redis cache

I need to add a global variable to the Redis cache.
For example:
Consider an application with student, employee and staff roles.
Every role has a unique object. When a student logs in to the application, we need to get the student information from Redis. The same goes for the other roles when they log in.
If we store all the details at application initialization time, we don't need to send a request to fetch the role-related details. If we store them in the session instead, that data has to be looked up for every user's login, and the session id also varies from user to user.
Is this possible?
If yes, how can we store the values at application initialization time?
First of all, since Redis is a cache, you are storing objects that may be evicted over time. When Redis becomes full, it will start evicting objects according to your eviction policy configuration.
Caching everything upon initialization is probably not the best course of action. I'd go with caching the objects when they are first requested: if they don't exist in Redis, store them for future retrievals. This way, even if your Redis instance evicts an object, your application logic will always find it again (from the cache or from the underlying database). That's called the cache-aside pattern.
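A cache-aside sketch, shown here in Node with the callback-style node_redis client since that is what the rest of this page uses; fetchRoleFromDatabase() is a hypothetical helper for the primary data store, and the pattern is the same on ASP.NET:

var redis = require('redis');
var client = redis.createClient();

// Cache-aside: try the cache first, fall back to the database on a miss,
// then populate the cache for subsequent requests.
function getRoleDetails(role, callback) {
  client.get('role:' + role, function (err, cached) {
    if (!err && cached) return callback(null, JSON.parse(cached));
    // fetchRoleFromDatabase() is a hypothetical async DB helper.
    fetchRoleFromDatabase(role, function (dbErr, details) {
      if (dbErr) return callback(dbErr);
      // Cache for an hour; if Redis evicts it earlier, the next miss refills it.
      client.setex('role:' + role, 3600, JSON.stringify(details));
      callback(null, details);
    });
  });
}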
Your initialization logic varies depending on which technology/platform you are using.
ASP.NET MVC 5 or lower has the Global.asax file; ASP.NET 5 / MVC 6 has the Startup.cs file.

Should Domain Entities always be loaded in their entirety?

I have a custom ASP.NET Membership Provider that I am trying to add password-history functionality to. Users' passwords expire after X days. Then they have to change their password to one that has not been used in their last X changes.
I already had the User entity, which has a password attribute for the current password. This maps to the User table in the db. Since I needed a list of previous passwords, I created a UserPassword table to store this information, with a FK reference to the UserId.
Since passwords are value objects and have no meaning outside of the user, they belong inside the User aggregate, with the User as the root. But herein lies my dilemma: when I retrieve a User from the repository, do I always have to get all of their previously used passwords? 99% of the time I don't care about the old passwords, so retrieving them every time I need a User entity seems like a bad idea for db performance. I can't use lazy loading because the User entity is disconnected from the context.
I was thinking of creating a PasswordHistory entity, but for the reason stated above, passwords aren't really entities.
How would you DDD experts out there handle this situation?
Thanks.
Edit 1: After considering this some more, I realized this is essentially a question about lazy loading. More specifically, how do you handle lazy loading with a disconnected entity?
Edit 2: I am using LINQ to SQL. The entities are completely detached from the context using this approach from CodePlex.
It is hard to fully answer this question because you do not specify a platform, so I cannot be sure what you even mean by "disconnected". With Hibernate, "disconnected" means you have an object in a valid session but the database connection is not currently open. That case is trivial: you simply reconnect and lazy load. The more complicated situation is where you have an object which is "detached", i.e. no longer associated with an active session at all, and in that case you cannot simply reconnect; you have to either get a new object or attach the one you have to an active session.
Either way, even in the more complicated scenarios, there is not a whole lot to lazy-loading strategies, because the requirements are so inflexible: you have to be "connected" to load anything, lazy or otherwise. Period. I will assume "disconnected" means the same thing as detached. Your strategy then comes down to two basic scenarios: is this a situation where you can simply reconnect/attach on the fly to lazy load, or one where you want to decide to conditionally load additional objects before you disconnect in the first place?
Sometimes you may in fact need to code for both possibilities.
In your case you also have to be connected not only to lazy load the old passwords but to update the User object in the first place. And since this is ASP.NET, you might be using session-per-request, in which case you are basically down to one option: conditionally load before you disconnect, and that is about it.
The most common scenario would be that a person logs in, the system determines they are required to change their password, and it asks them to do so before proceeding. In that case you might as well take care of it immediately after login and keep the User connected. But you are probably using session-per-request, so what you could do is process the time limit in the first request; if the password is expired, you are still connected at that point, so go ahead and return a fully loaded User (assuming you use the historic passwords in some kind of client-side validation). Then on the submit trip you could reattach, or just get a new User instance and update that.
There is also the possibility that you have to let them change their password at any time. They are already logged in. It does not matter much here: you have a User, but the request ended long ago and it does not have the passwords loaded. In that case I would probably write a service method so that, when they invoke the change-password function, the service gets a second copy of the User object with the full history for update purposes only, updates the password, and then discards that object without ever using it for session or authentication purposes. Or, if you are using session-per-request, you do the equivalent: get a fully initialized object for client-side validation purposes, and when the data is submitted either reattach the one you already have or get yet a third instance to actually do the update.
If the password is needed after an authenticated session has begun, you could still do the same things and either replace the local User or update the local User's in-memory password as well.
If you have too much going on with multiple levels of authentication, most likely you will have to require them to log off and log back in after a password change anyway, so the state of the User does not matter much once they request a password change.
In any case, if you are using session-per-request and your objects become fully detached after every request, then in the first scenario you can still lazy load while you are on the server during the original request, to return data for client-side validation. In the second scenario you have to make another trip (there really is no such thing as lazy loading here). In both cases, though, you have to weigh your two update options, because you are always disconnected before an update: you can either get a second instance from the database on the submit trip, or you can reattach the one you already have. It depends on what is optimal/easiest: does saving a db round trip for an uncommon event really matter? Does reattaching with your ORM of choice possibly hit the database again anyway? I would probably not bother to reattach and instead just get a new instance for the actual update when I needed it.
