I'm using Redis as my session store for a Node.js + Express app. Will it automatically delete old sessions after they expire, or do I need to do some cleanup on the server side so the database doesn't grow too large?
// Use Redis as the session store (Express 3-style setup)
var RedisStore = require('connect-redis')(express);
app.use(express.session({
  store: new RedisStore({
    host: cfg.redis.host,
    db: cfg.redis.db
  }),
  secret: 'foobar'
}));
Yes, connect-redis will make Redis clean out your sessions when they expire.
If I remember correctly, the default session timeout is 24 hours, which to me is quite a long time to keep something idle in memory, but you can pass a ttl parameter (in seconds) to configure how long sessions are kept before Redis expires them.
If you want to verify that Redis really cleans things up, set the ttl to 30 seconds and have a look in Redis after the timeout has expired:
app.use(express.session({
  store: new RedisStore({
    host: cfg.redis.host,
    db: cfg.redis.db,
    ttl: 30 // expire sessions after 30 seconds
  }),
  secret: 'foobar'
}));
The ttl option is mentioned here, and there is some minor extra detail on how it interacts with other options here.
It's working as expected. If I use a browser-only session (the cookie expires when the user agent closes), the session lives in Redis for 24 hours (I did not set a ttl option in connect-redis).
If I set the cookie to expire in 2 weeks, it lives in Redis for 14 days.
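For reference, the 2-week case above would come from a cookie configured roughly like this (same placeholder host/db values as the earlier snippets); when no ttl option is given, connect-redis takes the expiry from the cookie's maxAge:
app.use(express.session({
  store: new RedisStore({
    host: cfg.redis.host,
    db: cfg.redis.db
  }),
  secret: 'foobar',
  // maxAge is in milliseconds; 14 days here, so the Redis key
  // should expire after roughly the same period
  cookie: { maxAge: 14 * 24 * 60 * 60 * 1000 }
}));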
You can check with these commands after starting redis-cli:
> keys *
> ttl <key>
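The output will look something like this (the key name will be your own session ID, typically prefixed with sess: by connect-redis; the values below are just illustrative):
> keys *
1) "sess:aBcD1234..."
> ttl sess:aBcD1234...
(integer) 27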
I'm hosting a Node.js server on the free tier of Heroku (for now), and I'm using Redis to store the user login session (with connect-redis in Express). The Redis database is on the free tier of Redis Cloud (on Redis Labs, so NOT on Heroku).
The issue is that each time the Heroku dyno sleeps and wakes up, my users are all logged out, despite the cookie age being months long.
Here is the only place I touch Redis in the code:
// ... other imports
// Connect to Redis
const redis = require("redis");
const RedisStore = require("connect-redis")(session);
const redisClient = redis.createClient({
host: process.env.REDIS_HOST,
port: process.env.REDIS_PORT,
password: process.env.REDIS_PASSWORD,
legacyMode: true,
});
redisClient.on("error", (err) => {
console.log("Error: " + err);
});
// ...
// Set up session middleware
app.use(
session({
store: new RedisStore({ client: redisClient }),
saveUninitialized: false,
secret: process.env.SESSION_SECRET,
resave: false,
maxAge: 1000 * 60 * 60 * 24 * 120, // 120 days
})
);
Is it possible that somewhere when I initialize Redis or connect-redis it resets everything? I'm considering switching to JWTs; however, they aren't a perfect fit for this exact use case. What may be the cause of this issue, and how can it be resolved?
On Heroku, the free tier of the Redis database does not have persistence and will clear itself on shutdowns, reboots, and so on.
Quoting from the documentation,
The hobby tier for Heroku Redis doesn’t persist instance data. If the instance must reboot or a failure occurs, the data on instance is lost.
The solution to this problem is to upgrade your Heroku plan, or, if you want to stay on a free tier, you may try other services, such as Redis Cloud (Redis Labs), which provides persistence for 30 MB of data on its free tier.
References:
Heroku Redis Documentation
Heroku Redis Add-On
For authentication I'm trying to understand how sessions work. With the help of the express-session documentation and Sessions in Node JS, I got it working.
Now I'm figuring out what to do so that users can log out. The express-session documentation states: "The default server-side session storage, MemoryStore, is purposely not designed for a production environment." They recommend a compatible session store.
I have chosen connect-redis. They call it an "in-memory data structure store". Now I'm wondering what the difference is between Redis and the database that I would like to use (back4app).
If I implement connect-redis
const RedisStore = require('connect-redis')(session);
const redis = require("redis").createClient();
let sess = {
store: new RedisStore({ host: 'localhost', port: 6379, client: redis }),
secret: cryptoString,
resave: true,
saveUninitialized: true,
cookie: {
maxAge: 1 * 60 * 1000,
},
}
server.use(session(sess));
the user object from back4app is still undefined. (Without Redis the user object exists.)
As mentioned, I have tried Parse.User.logOut(). It doesn't work; the console says Parse.User is null.
Please explain:
What is the difference between back4app and Redis? Do I need both?
How do I enable log out?
For everyone with the same problem: this is my other question in this context; it will help you see the whole picture.
After a bit of research, I came to the conclusion that I can run a separate Redis instance on my CentOS server for each Node.js server I run (I use Redis to store sessions).
I have followed these instructions and both instances are running properly on two different ports.
On my Node.js servers, I configured Redis as follows:
import * as session from "express-session";
var RedisStore = require('connect-redis')(session);
var redis = require("redis").createClient();
app.use(session(
{
secret: secret,
store: new RedisStore({ host: 'localhost', port: 6379, client: redis }),
cookie: { maxAge: 12 * 3600000 },
resave: true, saveUninitialized: true
}
));
One uses port 6379 and the other port 6380.
I use req.session.regenerate to register a session.
Both login systems work perfectly individually. However, when I load anything on one application, the sessions of the other application are destroyed (and its users need to log in again).
What am I missing here?
The problem looks like it is in Express's session configuration, not your usage of Redis.
From the express-session documentation:
NOTE be careful to generate unique IDs so your sessions do not conflict.
app.use(session({
genid: function(req) {
return genuuid() // use UUIDs for session IDs
},
secret: 'keyboard cat'
}))
name: The name of the session ID cookie to set in the response (and read from in the request).
The default value is 'connect.sid'.
Specifically, this warning explains your problem:
Note if you have multiple apps running on the same hostname (this is
just the name, i.e. localhost or 127.0.0.1; different schemes and
ports do not name a different hostname), then you need to separate the
session cookies from each other. The simplest method is to simply set
different names per app.
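For illustration, a minimal sketch of how each app could get its own cookie name; the names appA.sid and appB.sid are arbitrary examples, and the rest mirrors the session setup from the question:
// App A, backed by the Redis instance on port 6379
app.use(session(
  {
    name: 'appA.sid', // distinct cookie name so the two apps don't overwrite each other's cookie
    secret: secret,
    store: new RedisStore({ host: 'localhost', port: 6379, client: redis }),
    cookie: { maxAge: 12 * 3600000 },
    resave: true, saveUninitialized: true
  }
));
// App B would do the same with name: 'appB.sid' and its Redis instance on port 6380.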
Basically I have a Node.js web application which uses the express-session module to handle sessions.
It works perfectly but with one exception, which ruins this option for me: if the server crashes or I deploy a new release, the sessions get wiped out completely, and that's unacceptable for me. It's also bad that I can't share the sessions between my main and my backup server.
So my goal is to handle the sessions via an external cloud database; just think of it as an ordinary MySQL database.
But here is the point where I just get confused about how to do that. I can assign unique IDs to the sessions and load the resources from the database based on those, but how can I re-recognize users if these sessions get wiped away?
I am lacking a lot of knowledge about sessions, but since this is quite a critical topic for me, I'm posting a question here.
You can use any of those stores (or write your own) :
https://www.npmjs.com/package/express-session#compatible-session-stores
I'm using connect-mongo to store sessions in my MongoDB; the code looks like this (app.js):
import session from 'express-session';
const MongoStore = require('connect-mongo')(session);
...
...
api.use(session({
secret: global.config.secrets.session,
saveUninitialized: false, // don't create session until something stored
resave: false, //don't save session if unmodified
store: new MongoStore({
mongooseConnection: mongoose.connection,
touchAfter: 24 * 3600 // time period in seconds
})
}));
I am developing an application using Kraken.js, and to manage the sessions I decided to use connect-mongo.
I have a setup like this:
'use strict';
var session = require('express-session');
var MongoStore = require('connect-mongo')(session);
module.exports = function SessionLib(opts) {
return session({
secret: opts.secret,
resave: opts.resave,
saveUninitialized: opts.saveUninitialized,
store: new MongoStore({
url: opts.url,
ttl: opts.ttl
})
});
};
I deployed the app on OpenShift with the auto-scaling option. OpenShift uses HAProxy for auto-scaling, but this is causing a problem: too many sessions are being generated in my MongoDB (about 250,000 over the last weekend).
Is there a possibility of not keeping the HAProxy sessions?
One workaround:
Why don't you add a TTL index on your database for those session documents? You can create such an index on a date field that each session document has, and every document in the collection with that field will be removed automatically after its TTL expires.
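As a rough sketch (assuming the sessions live in a collection named sessions and each document carries a date field such as expires, which connect-mongo typically stores), the index could be created from the mongo shell like this:
// Expire each document as soon as the date stored in `expires` has passed
// (expireAfterSeconds: 0 means "expire exactly at that date").
db.sessions.createIndex({ "expires": 1 }, { expireAfterSeconds: 0 });

// Alternative: expire documents a fixed time after a creation timestamp.
// db.sessions.createIndex({ "createdAt": 1 }, { expireAfterSeconds: 24 * 3600 });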