Redis publish memory leak? - node.js

I know there are already many questions like this, but I couldn't find one that fits my implementation.
I'm using Redis in a Node.js environment, and it feels like redis.publish is leaking memory. I expect it to be some kind of "backpressure" issue, like the one described here:
Node redis publisher consuming too much memory
But to my understanding, that kind of pressure would only build up in a synchronous context: otherwise the Node event loop gets a chance to run, and the GC runs along with it.
My program looks like this:
const websocketApi = new WebsocketApi()
const currentState = {}

websocketApi.connect()

websocketApi.on('open', () => {
  channels.map((channel) => websocketApi.subscribeChannel(channel))
})

websocketApi.on('message', (message) => {
  const ob = JSON.parse(message)
  if (currentState[ob.id]) {
    currentState[ob.id] = update(currentState[ob.id], ob.data)
  } else {
    currentState[ob.id] = ob.data
  }
  const payload = {
    channel: ob.id,
    info: currentState[ob.id],
    timestamp: Date.now(),
    type: 'newData'
  }
  // when I remove this part, the memory is stable
  redisClient.publish(payload.channel, JSON.stringify(payload))
})

// to reconnect in case of error
websocketApi.on('close', () => websocketApi.connect())
It seems that the messages arrive too close to each other, so the strings held by redis.publish never get a chance to be released.
Do you have any idea what is wrong with this code?
EDIT: More specifically, here is what I observe when I take memory dumps of my application:
The memory is saturated with strings that are my stringified JSON payloads, and with "chunks" of messages that are sent via Redis itself. Their references are held inside the redis client, mainly in variables called chunk.
Some string payloads are still released, but I create them far faster than they are freed.
When I don't publish the messages via Redis, the currentState variable grows up to a point and then stops growing. It obviously has a big RAM impact, but that's expected. The rest is fine and the application is stable around 400 MB, whereas it explodes with the Redis publisher (PM2 restarts it because it reaches max RAM capacity).
My feeling here is that I'm asking Redis to publish far more than it can handle, so it never has time to finish publishing the messages. It still holds all the context, so nothing gets released. I may need some kind of "queue" to let Redis release some context and finish publishing the messages. Is that a real possibility, or am I going crazy?
Basically, every loop in my program is "independent". Is it possible to have as many redis clients as I have loops? Is that a better idea? (IMHO, Node is single-threaded, so it won't help, but it may help V8 track down memory references and release memory.)

The redis client buffers commands while it is not connected, whether because it has not connected yet or because its connection has failed.
Make sure that you can reach the redis server, and that your program actually connects to it. I would suggest adding a listener for redisClient.on('connect'); if that event is never emitted, the client never connected.
If you are connected, the client shouldn't be buffering, but to make the problem surface sooner you can disable the offline queue: pass the option enable_offline_queue: false to createClient, which makes attempts to send commands while disconnected fail immediately.
You should also attach an error listener to the redisClient: redisClient.on('error', console.error.bind(console)). It might yield a message explaining why the client is buffering.
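Putting those suggestions together, a minimal diagnostic setup might look like the sketch below (assuming the classic node_redis v3 API, where the enable_offline_queue option and these events exist):

const redis = require('redis')

// Disable the offline queue so publishes fail fast instead of
// piling up in memory while the client is disconnected.
const redisClient = redis.createClient({
  enable_offline_queue: false
})

// If 'connect' never fires, the client never reached the server.
redisClient.on('connect', () => console.log('redis connected'))

// Connection failures and dropped commands surface here.
redisClient.on('error', (err) => console.error('redis error:', err))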

Related

Is there a point where Node.js becomes the bottleneck for a Discord bot?

I recently coded a Discord bot which can track user engagement on a server once the user has opted in. I coded it primarily using the Discord.js framework. It accepts events emitted from the Discord API when users of a server are active under certain conditions (message sent, message reaction, etc.)
How would I be able to tell when, if ever, the single-threaded nature of Node.js becomes the bottleneck for this application? Alternatively, how could I test my application's response time to see if my actual code is the inefficient section? While the event-based nature of the application seems like a perfect fit for Node, I want to understand a bit more about the theory behind it and when it'd be a good idea to use another framework.
As you have said, nodejs scales up really well for this kind of application, so as long as your code runs on a machine with a decent amount of CPU power it will be a long time until you hit a bottleneck. It will run fine with one or two vCPUs, but if your cloud provider throttles the CPU allocated to the virtual machine you're renting, your bot might not keep up with a heavy workload.
If your event handlers use a database, that is just as likely to become a bottleneck as discord.js. Nodejs itself can handle hundreds of concurrent event processors without breaking a sweat (as long as your vendor doesn't throttle your CPU).
How do you know whether you're keeping up with events? Try this. Each event is processed by a function looking something like this (your code style will certainly be different; this is just the general idea):
async execute(client) {
  try {
    await someOperation(client)
    await someOtherOperation(client)
  }
  catch (err) {
    /* handle the error */
  }
}
You can time these event handlers like so
async execute(client) {
  const startTime = Date.now()
  try {
    await someOperation(client)
    await someOtherOperation(client)
  }
  catch (err) {
    /* handle the error */
  }
  finally {
    const elapsedTime = Date.now() - startTime
    if (elapsedTime > 1000) { /* one second, you may want another value */
      /* report it somehow */
    }
  }
}
This will keep track of event handlers taking more than a second.
If your bot does get busy enough to require refactoring, I suspect your best approach will be to shard your application: direct some clients to one nodejs instance and others to another, as sketched below.
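For what it's worth, discord.js ships a sharding helper; here is a minimal sketch (assuming discord.js v12+, with a hypothetical bot entry file ./bot.js and a placeholder token):

const { ShardingManager } = require('discord.js')

// Spawns one child process per shard; Discord routes a subset of
// guilds to each shard, spreading the event load across processes.
const manager = new ShardingManager('./bot.js', { token: 'your-bot-token' })

manager.on('shardCreate', (shard) => console.log(`Launched shard ${shard.id}`))
manager.spawn()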

Node.js Cluster Shared Cache

I'm using node-cache to create a local cache; however, the problem I have is that when running the application with PM2, which creates an application cluster, the cache is created multiple times, once per process. This isn't too much of a problem, as the cached data is small, so memory isn't the issue.
The real problem is that I have an API call to my application to flush the cache; however, when calling this API it will only flush the cache for the particular process that handles that call.
Is there a way to signal all workers to perform a function?
I did think about using Redis to cache instead, as that would make it simpler to have just one cache. The problem I have with Redis is that I'm not sure of the best way to scale it: I've currently got 50 applications and wouldn't want to set up a new Redis database for each one. The alternative was to use ioredis and its transparent key prefixing for each application, but this could cause security vulnerabilities if one application were to accidentally read data from another client's application. And I don't believe there is a way to delete all keys just for a particular prefix (i.e. one app/client), as FLUSHALL will remove all keys.
What are best practices for sharing a cache between clustered node instances when there are many instances of the application too? Think SaaS application.
Currently, my workaround for this issue is using node-cron to clear the cache every 15 minutes. However, there are items in the cache that don't really ever change, and there are other items that should be updated as soon as an external tool signals the application to flush the cache via an API call.
For anyone looking at this: for my use case, the best method was to use IPC.
I implemented an IPC messenger to pass messages to all processes. I read the process name from the pm2 config file (app.json) to ensure we send the message to the correct application.
// Sender
// The sender can run inside or outside of pm2
var pm2 = require('pm2');
var cfg = require('../app.json');

exports.IPCSend = function (topic, message) {
  pm2.connect(function () {
    // Find the IDs of who you want to send to
    pm2.list(function (err, processes) {
      for (var i in processes) {
        if (processes[i].name == cfg.apps[0].name) {
          console.log('Sending Message To Id:', processes[i].pm_id, 'Name:', processes[i].name)
          pm2.sendDataToProcessId(processes[i].pm_id, {
            data: {
              message: message
            },
            topic: topic
          }, function (err, res) {
            console.log(err, res);
          });
        }
      }
    });
  });
}
// Receiver
// No need to require('pm2'); however, the receiver must be running inside of pm2
process.on('message', function (packet) {
  console.log(packet);
});
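In the receiver you would typically branch on the packet's topic; for the cache-flush case from the question, a sketch might look like this (the 'flushCache' topic name and the node-cache instance are hypothetical):

const NodeCache = require('node-cache')
const myCache = new NodeCache()

process.on('message', function (packet) {
  // pm2.sendDataToProcessId delivers { data: { message: ... }, topic: ... }
  if (packet.topic === 'flushCache') {
    myCache.flushAll()
    console.log('cache flushed in worker', process.pid)
  }
})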

How to properly use a database when scaling a NodeJS app?

I am wondering how I would properly use MySQL when scaling my Node.JS app using the cluster module. Currently, I've only come up with two solutions:
Solution 1:
Create a database connection on every "worker".
Solution 2:
Have the database connection on a master process: whenever one of the workers requests some data, the master process returns that data. However, using this solution, I do not know how I would get the worker to retrieve the data from the master process.
I (think I) made a "hacky" workaround: emitting with a unique number and then waiting for the master process to send the message back to the worker, with the event name being that unique number.
If you don't understand what I mean by this, here's some code:
// Worker process
return new Promise(function (resolve, reject) {
  process.send({
    // Other data here
    identifier: <unique number>
  })
  // having a custom event emitter on the worker
  worker.once(<unique number>, function (data) {
    // data being the data for the request with the unique number
    // resolving the promise with returned data
    resolve(data)
  })
})

//////////////////////////

// Master process
// Custom event emitter on the master process
master.on(<eventName>, function (data) {
  // logic
  // Sending data back to worker
  master.send(<other args>, data.identifier)
})
What would be the best approach to this problem?
Thank you for reading.
When you cluster in NodeJS, you should assume each process is completely independent; you really shouldn't be relaying messages like this to and from the master process. If you need multiple threads to access the same data, NodeJS may not be what you should be using. However, if you're just doing basic CRUD operations with your database, clustering (solution 1) is certainly the way to go.
For example, if you're trying to scale write ops to your database (assuming your database is properly scaled), each write op is independent from the others. When you cluster, a single write request will be load-balanced to one of your workers. The worker then delegates the write op to your database asynchronously. In this scenario, there is no need for a master process.
If you haven't planned on a proper microservice architecture where each process would actually have its own database (or perhaps just an in-memory store), your best bet IMO is to use a connection pool. Since a pool object can't actually be shared across cluster processes, each worker creates its own small pool, which still keeps the total number of open connections bounded. That's probably the safest approach to avoid issues in the neighborhood of thread-safety errors; a sketch follows below.
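For solution 1, here is a rough sketch of a per-worker pool (assuming the mysql2 driver and placeholder credentials):

const cluster = require('cluster')
const os = require('os')

if (cluster.isMaster) {
  // Fork one worker per CPU; each worker owns its own pool.
  os.cpus().forEach(() => cluster.fork())
} else {
  const mysql = require('mysql2')

  // A small pool per worker keeps the total connection count bounded:
  // at most workers * connectionLimit connections to the database.
  const pool = mysql.createPool({
    host: 'localhost',
    user: 'app',
    database: 'app',
    connectionLimit: 10
  })

  pool.query('SELECT 1', (err, rows) => {
    if (err) console.error(err)
    else console.log('worker', process.pid, 'got', rows)
  })
}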

Node JS Socket.IO Emitter (and redis)

I'll give a small premise of what I'm trying to do: I have a game concept in mind which requires multiple players sitting around a table, somewhat like poker.
The normal interaction between different players is easy to handle via socket.io in conjunction with node js.
What I'm having a hard time figuring out is this: I have a cron job running in another process which fetches new information every minute; that information then needs to be sent to each of those players. Since this is a different process, I'm not sure how I send certain clients this information.
socket.io's documentation covers this, and I'm quoting it below:
In some cases, you might want to emit events to sockets in Socket.IO namespaces / rooms from outside the context of your Socket.IO processes.
There’s several ways to tackle this problem, like implementing your own channel to send messages into the process.
To facilitate this use case, we created two modules:
socket.io-redis
socket.io-emitter
From what I understand, I need these two modules to do what I mentioned earlier. What I do not understand, however, is why Redis is in the equation when I just need to send some messages.
Is it used to just store the messages temporarily?
Any help will be appreciated.
There are several ways to achieve this if you just need to emit after an external event. It depends on what you're using to get the new data you send:
/* if the other process sends an incoming http post you can use for example
express and use your io object in a custom middleware: */

// pass the io in the req object
app.use('/incoming', (req, res, next) => {
  req.io = io;
  next(); // without this the request would hang in the middleware
})

// then you can do:
app.post('/incoming', (req, res, next) => {
  req.io.emit('incoming', req.body);
  res.send('data received from http post request, then sent on the socket');
})

// if you fetch data every minute, why don't you just emit after your job:
// (scheduleJob is from the node-schedule package; io is captured from scope)
var job = scheduleJob('* */1 * * * *', () => {
  axios.get('/myApi/someRessource').then(data => io.emit('newData', data.data));
})
In the case of socket.io providing those two modules, I read that as you actually needing both. However, this isn't necessarily what you want. But yes, redis is probably just used to store data temporarily, where it also does a really good job, being close to what a message queue does.
Your cron then wouldn't need a message queue or similar behaviour.
My suggestion, though, would be to run the cron job from within your process as a child_process, hook onto its readable stream, and push directly to your sockets, as sketched below.
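A sketch of that child_process approach (the job script ./fetch-job.js is hypothetical; assume it prints one update per line on stdout, and io is your socket.io server):

const { spawn } = require('child_process')

// run the job as a child process of the socket.io server
const job = spawn('node', ['./fetch-job.js'])

job.stdout.setEncoding('utf8')
job.stdout.on('data', (chunk) => {
  // push each update straight to the connected sockets
  io.emit('newData', chunk)
})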
If the cron job process is also a nodejs process, you can exchange data through redis's pub/sub client mechanism.
Let me know what your cron job process is written in, and whether further help is required with the pub/sub mechanism.
redis is one of the memory stores used by socket.io (in case you configure it).
You must employ redis only if you have a multi-server configuration (cluster), to establish a connection and room/namespace sync between those node.js instances. It has nothing to do with storing data in this case; it works as a pub/sub machine.
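For reference, a minimal sketch of how the two quoted modules are typically wired together (assuming socket.io v2-era APIs):

// In each Socket.IO server process: plug in the redis adapter so
// rooms/namespaces stay in sync across instances.
const io = require('socket.io')(3000)
const redisAdapter = require('socket.io-redis')
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }))

// In the external process (e.g. the cron job): emit through redis
// without running a Socket.IO server at all.
const emitter = require('socket.io-emitter')({ host: '127.0.0.1', port: 6379 })
emitter.emit('newData', { updatedAt: Date.now() })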

why is performance of redis+socket.io better than just socket.io?

I earlier had all my code in a socket.io + node.js server. I recently converted all the code to redis + socket.io + node.js after noticing slow performance when too many users sent messages across the server.
So, why was socket.io alone slow? Because it is not multi-threaded, it handles one request or emit at a time.
What redis does is distribute these requests or emits across channels. Clients subscribe to different channels, and when a message is published on a channel, all the clients subscribed to it receive the message. It does this via this piece of code:
sub.on("message", function (channel, message) {
client.emit("message",message);
});
The client.on("message", function () {}) handler takes it from here to publish messages to different channels.
Here is a brief piece of code explaining what I am doing with redis:
io.sockets.on('connection', function (client) {
  var pub = redis.createClient();
  var sub = redis.createClient();

  sub.on("message", function (channel, message) {
    client.emit('message', message);
  });

  client.on("message", function (msg) {
    if (msg.type == "chat") {
      pub.publish("channel." + msg.tousername, msg.message);
      pub.publish("channel." + msg.user, msg.message);
    }
    else if (msg.type == "setUsername") {
      sub.subscribe("channel." + msg.user);
    }
  });
});
As redis stores the channel information, we can have different servers publish to the same channel.
So, what I don't understand is: if sub.on("message") is getting called every time a request or emit is sent, why is redis supposed to give better performance? I suppose even the sub.on("message") method is not multi-threaded.
As you might know, Redis allows you to scale with multiple node instances, so the performance gain actually comes after the fact. Utilizing the Pub/Sub method is not faster; it's technically slower, because you have to round-trip through Redis for every Pub/Sub signal. The "better performance" is only really true when you start to scale out horizontally.
For example, say you have one node instance (a simple chat room) that can handle a maximum of 200 active users. You are not using Redis yet because there is no need. Now, what if you want 400 active users? Using your example above, you can now reach this 400-user mark, which is a "performance increase" in the sense that you can handle more users, though not really a speed increase, if that makes sense. Hope this helps!
