Writing a custom winston-transport: why emit the 'logged' event? - node.js

I'm working on a custom winston transport; the documentation (cut & paste follows) is crystal clear...
const Transport = require('winston-transport');

class CustomTransport extends Transport {
  log(info, callback) {
    setImmediate(() => {
      this.emit('logged', info);
    });
    // Perform the writing to the remote service
    callback();
  }
}
... but what is the meaning of this.emit('logged', info);, and why is it wrapped in a setImmediate?
I would have said that calling the callback was enough to let the caller know that the writing operation has been performed.
One could argue that setImmediate is needed to fire the event after the I/O handlers in the Node.js event loop, but there is absolutely no guarantee that the next loop iteration is enough for my custom write to finish, so
why fire something called 'logged' before the write operation has actually happened, rather than firing something called 'logging'?
I asked the maintainers the same thing, but the result was... tumbleweeds.
Can somebody reveal to me the secrets behind that mysterious event?

Tired of the silence, I ran a test with a custom winston transport that does not fire the logged event: I wrote 3 GB of logs with 30,000,000 logger.info calls and had no problems, nor did the application's memory usage grow by a single byte.
My conclusion is: firing that event is completely useless.

Transports emit the logged event, and anyone holding a reference to the transport can listen for it:
const transport = new CustomTransport();
transport.on('logged', (info) => {
  // Verification that log was called on your transport
  console.log(`Logging! It's happening!`, info);
});
If nothing is listening for that event, then yes, it's useless.
I would still emit the event, just in case anyone is listening.
I checked winston's roadmap, and in version 3.3.0 the logged event will be emitted automatically by winston-transport.
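If you share the asker's concern about logged firing before the write has actually completed, nothing stops a transport from emitting it only once the remote call has finished. A minimal sketch, where sendToRemoteService is a hypothetical helper returning a promise for the actual remote write:
const Transport = require('winston-transport');

class RemoteTransport extends Transport {
  log(info, callback) {
    // sendToRemoteService is a placeholder for your real remote write
    sendToRemoteService(info)
      .then(() => {
        this.emit('logged', info); // fired only after the write has finished
        callback();
      })
      .catch(callback); // signal the failure instead of success
  }
}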

Related

socket.on("connection") with local eventEmitter listeners creating too many eventEmitter listeners

I am trying to use socket.io alongside a local event listener within the socket.on("connection", client => {...} ) event. The problem is every time a new socket.io connection is created it's creating a new event listener. This eventually leads to a max listeners error in node.js.
I need that event listener there so it can await data from other parts of the application and then use the returned socket.io client object to emit that data to the connected socket.io client.
Should I simply increase setMaxListeners per the documentation? Or is there something I should be doing differently to prevent a new event listener from being created each time a client connects (e.g. is there a way to register the event listener globally, but pass each new client connection into it)?
io.on('connection', client => {
  //console.log("Websockets client connected")
  events.on("initializePage", data => {
    client.emit("initializePage", data)
  })
  client.on('disconnect', () => {
    console.log("Socket.io client disconnected")
  })
})
In the snippet of code you present, the global events eventEmitter will have a new listener attached for each new socket, so indeed, maxListeners will quickly be exhausted if many clients connect.
A first step to avoid piling up listeners indefinitely is to do some cleanup each time a client disconnects, tearing down every listener it registered with the off method, as sketched below.
In case clients only need to be notified once, you can register the listener with the once method instead of on, which does the cleanup automatically after one trigger.
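A minimal sketch of that cleanup, reusing the io, events and initializePage names from your snippet:
io.on('connection', client => {
  // Use a named handler so this exact listener can be detached later.
  const forward = data => client.emit('initializePage', data)
  events.on('initializePage', forward)
  client.on('disconnect', () => {
    events.off('initializePage', forward) // tear down this client's listener
  })
})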
But, to get back to socket.io specifically, I feel you're in a situation where you want to perform some kind of broadcast/multicast. Therefore, what about using the namespace system and calling:
events.on("initializePage", data => {
io.sockets.emit("initializePage", data);
})
This code has to be written at the top level of your file, not in the connection handler.

Handle message before event ws nodejs

I'm using ws version 7.4.0 and I want to display a console log or perform operations between the moment the client sends a message to the server and the moment the server fires the message event.
To represent it:
webserver.on('example', function callback(msg){console.log(msg);}); //act before the call of callback
client------server---[here]---callback
The only way I see right now would be to use a "root" function before the callback of all my events like this:
function callback(msg){console.log(msg);}
webserver.on('example', function root(msg) {console.log('example msg'); callback(msg);});
I don't know if this is a real and/or good solution; I really want to write a clean and organized application.
Could someone give me some advice or a real solution? Thank you.
You could make a wrapper for all of your callbacks like so:
function makeCallback(fn) {
  return function(msg) {
    if (!environment.prod) console.log(msg);
    fn(msg);
  };
}

var myCallback = makeCallback(function (msg) {
  // something
});

webserver.on('example', myCallback);
Or, I think the better solution is to stream the requests into your stdout, although I don't know the implications of using this method.
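For what it's worth, a sketch of that streaming idea, assuming the createWebSocketStream helper that recent versions of ws (including 7.4.0) export:
const WebSocket = require('ws');
const { createWebSocketStream } = WebSocket;

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  // Wrap the socket in a duplex stream and pipe incoming messages to stdout;
  // any 'message' handlers you attach still fire as usual.
  createWebSocketStream(ws).pipe(process.stdout);
  ws.on('message', msg => {
    // normal message handling
  });
});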
I also want to address the naming of your websocket server. Even though a websocket server is technically a web server, it only speaks the websocket protocol, so naming it webserver could be misleading; I would recommend the naming used in their documentation, wss.

How to handle connection timeout in ZeroMQ.js properly?

Consider a Node.js application with a few processes:
single main process sitting in the memory and working like a web server;
system user's commands that can be run through CLI and exit when they are done.
I want to implement something like IPC between the main and CLI processes, and it seems that the ZeroMQ bindings for Node.js are quite a good candidate for doing that. I've chosen the 6.0.0-beta.4 version:
Version 6.0.0 (in beta) features a brand new API that solves many fundamental issues and is recommended for new projects.
Using Request/Reply I was able to achieve what I wanted: the CLI process notifies the main process about some event (and optionally receives some data as a response) and continues its execution. The problem I have right now is that my CLI process hangs if the main process is down (not available). The command still has to execute and exit without notifying the main process if it's unable to establish a connection to the socket.
Here is a simplified code snippet of my CLI, running in an asynchronous method:
const { Request } = require('zeromq');

async function notify() {
  let parsedResponse;
  try {
    const message = { event: 'hello world' };
    const socket = new Request({ connectTimeout: 500 });
    socket.connect('tcp://127.0.0.1:33332');
    await socket.send(JSON.stringify(message));
    const response = await socket.receive();
    parsedResponse = JSON.parse(response.toString());
  }
  catch (e) {
    console.error(e);
  }
  return parsedResponse;
}

(async () => {
  const response = await notify();
  if (response) {
    console.log(response);
  }
  else {
    console.log('Nothing is received.');
  }
})();
I set the connectTimeout option but wonder how to use it. The docs state:
Sets how long to wait before timing-out a connect() system call. The connect() system call normally takes a long time before it returns a time out error. Setting this option allows the library to time out the call at an earlier interval.
Looking at connect, one sees that it's not asynchronous:
Connects to the socket at the given remote address and returns immediately. The connection will be made asynchronously in the background.
Ok, probably the send method of the socket will wait for the connection to be established and reject its promise on connection timeout... but nothing happens there. The send method executes, and the code gets stuck resolving receive, waiting for a reply from the main process that will never come. So the main question is: "How to use the connectTimeout option to handle the socket's connection timeout?" I found an answer to a similar question related to C++, but it doesn't actually answer the question (or I can't understand it). I can't believe that this option is useless and was added to the API just so that nobody can use it.
I would also be happy with some kind of workaround, and I found the receiveTimeout option. Changing the socket creation to
const socket = new Request({ receiveTimeout: 500 });
leads to a rejection in the receive method and the following output:
{ [Error: Socket temporarily unavailable] errno: 11, code: 'EAGAIN' }
Nothing is received.
The source code executes, but the process doesn't exit in this case. It seems that some resources are busy and are not freed. When the main process is up, everything works fine: the process exits and I get the following reply in the output:
{ status: 'success' }
So another question is: "How to exit the process gracefully when the receive method rejects with receiveTimeout?" Calling process.exit() is not an option here!
P.S. My environment is:
Kubuntu 18.04.1;
Node 10.15.0;
ZeroMQ bindings are installed this way:
$ yarn add zeromq#6.0.0-beta.4 --zmq-shared
ZeroMQ decouples the socket connection mechanics from message delivery. As the documentation states, connectTimeout only influences the timeout of the connect() system call and does not affect the timeouts of sending/receiving messages.
For example:
const zmq = require("zeromq")

async function run() {
  const socket = new zmq.Dealer({connectTimeout: 2000})
  socket.events.on("connect:retry", event => {
    console.log(new Date(), event.type)
  })
  socket.connect("tcp://example.com:12345")
}

run()
The connect:retry event occurs every ~2 seconds:
> node test.js
2019-11-25T13:35:53.375Z connect:retry
2019-11-25T13:35:55.536Z connect:retry
2019-11-25T13:35:57.719Z connect:retry
If we change connectTimeout to 200 then you can see the event will occur much more frequently. The timeout is not the only thing influencing the delay between the events, but it should be clear that it happens much quicker.
> node test.js
2019-11-25T13:36:05.271Z connect:retry
2019-11-25T13:36:05.531Z connect:retry
2019-11-25T13:36:05.810Z connect:retry
Hope this clarifies the effect of connectTimeout.
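As for the exit problem: a possible workaround, sketched under the assumption that pairing receiveTimeout (and sendTimeout) with an explicit socket.close() is acceptable for your CLI, since close() releases the ZeroMQ resources that otherwise keep the Node.js process alive:
const { Request } = require('zeromq');

async function notify() {
  const socket = new Request({ sendTimeout: 500, receiveTimeout: 500 });
  try {
    socket.connect('tcp://127.0.0.1:33332');
    await socket.send(JSON.stringify({ event: 'hello world' }));
    const response = await socket.receive();
    return JSON.parse(response.toString());
  }
  catch (e) {
    console.error(e); // EAGAIN when the main process is unavailable
    return undefined;
  }
  finally {
    socket.close(); // free the underlying resources so the process can exit
  }
}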

Detecting Socket.IO message delivery error on client side

We need to update the client side UI to indicate that a message fails to deliver. How do I have Socket.IO JS client call a custom callback directly when the message fails to deliver? For example, something like:
socket.emit("event", data).onError(myCallback);
I know Socket.IO provides the Ack mechanism to confirm delivery success. Therefore, one can set up a timer whose handler calls the failure callback if the ack has not been called after a certain amount of time. But this doesn't seem to be the best way to do it.
Also, there is the error event provided by Socket.IO, but it doesn't come with info about which emit caused the error.
Unfortunately there's no way to get errors from callbacks; the only way is indeed to create your own timeout:
var timeoutId = setTimeout(timeoutErrorFn, 500);
var acknCallbackFn = function(err, userData) {
  clearTimeout(timeoutId);
  //manage UserData
};
socket.emit('getUserData', acknCallbackFn);
Source of the code
There's also an open issue about this.
So for the time being you have to stick with your manual setTimeout.
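If you need this in several places, the manual timer can be wrapped once. A minimal sketch of a hypothetical emitWithTimeout helper built on the same ack mechanism:
function emitWithTimeout(socket, event, data, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var timeoutId = setTimeout(function () {
      reject(new Error('ack timeout for ' + event));
    }, timeoutMs);
    // Socket.IO calls the last argument (the ack) when the server confirms.
    socket.emit(event, data, function ack(response) {
      clearTimeout(timeoutId);
      resolve(response);
    });
  });
}

// usage: mark the message as failed in the UI if no ack arrives in time
emitWithTimeout(socket, 'getUserData', null, 500)
  .then(function (userData) { /* update UI with userData */ })
  .catch(function (err) { /* show a delivery error next to the message */ });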

node.js + socket.io broadcast from server, rather than from a specific client?

I'm building a simple system like a realtime news feed, using node.js + socket.io.
Since this is a "read-only" system, clients connect and receive data, but clients never actually send any data of their own. The server generates the messages that needs to be sent to all clients, no client generates any messages; yet I do need to broadcast.
The documentation for socket.io's broadcast (end of page) says
To broadcast, simply add a broadcast flag to emit and send method calls. Broadcasting means sending a message to everyone else except for the socket that starts it.
So I currently capture the most recent client to connect into a variable, then emit() to that socket and broadcast.emit() from that socket, so that this new client and all the other clients get the new data. But it feels like the client's role here is nothing more than a workaround for something I thought socket.io already supported.
Is there a way to send data to all clients based on an event initiated by the server?
My current approach is roughly:
var socket;

io.sockets.on("connection", function (s) {
  socket = s;
});

/* bunch of real logic, yadda yadda ... */

myServerSideNewsFeed.onNewEntry(function (msg) {
  socket.emit("msg", { "msg" : msg });
  socket.broadcast.emit("msg", { "msg" : msg });
});
Basically the events that cause data to require sending to the client are all server-side, not client-side.
Why not just do it like below?
io.sockets.emit('hello',{msg:'abc'});
Since you are emitting events only server side, you should create a custom EventEmitter for your server.
var io = require('socket.io').listen(80),
    events = require('events'),
    serverEmitter = new events.EventEmitter();

io.sockets.on('connection', function (socket) {
  // here you handle what happens on the 'newFeed' event
  // which will be triggered by the server later on
  serverEmitter.on('newFeed', function (data) {
    // this message will be sent to all connected users
    socket.emit('newFeed', data);
  });
});

// sometime in the future the server will emit one or more newFeed events
serverEmitter.emit('newFeed', data);
Note: newFeed is just an event example, you can have as many events as you like.
Important
The solution above is also better because in the future you might need to emit certain messages only to some clients, not all (thus needing conditions, as illustrated below). For something simpler (just emit a message to all clients no matter what), io.sockets.emit() is indeed a better fit.
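To illustrate the conditional case, a hypothetical variation of the code above; wantsNewsFeed is an assumed per-socket flag that you would set elsewhere (e.g. after a subscription or authentication message):
io.sockets.on('connection', function (socket) {
  serverEmitter.on('newFeed', function (data) {
    // only notify clients that opted in (wantsNewsFeed is hypothetical)
    if (socket.wantsNewsFeed) {
      socket.emit('newFeed', data);
    }
  });
});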
