Consider a Node.js application with a few kinds of processes:
- a single main process that sits in memory and works as a web server;
- CLI commands, run by a system user, that exit when they are done.
I want to implement something like IPC between the main and CLI processes, and the ZeroMQ bindings for Node.js seem like a good candidate for that. I've chosen version 6.0.0-beta.4:
Version 6.0.0 (in beta) features a brand new API that solves many fundamental issues and is recommended for new projects.
Using Request/Reply I was able to achieve what I wanted: the CLI process notifies the main process about some event (and optionally receives some data as a response) and continues its execution. The problem I have right now is that my CLI process hangs if the main process is down (not available). The command still has to execute and exit without notifying the main process if it is unable to establish a connection to the socket.
Here is a simplified snippet of my CLI code, running in an asynchronous function:
const { Request } = require('zeromq');

async function notify() {
  let parsedResponse;
  try {
    const message = { event: 'hello world' };
    const socket = new Request({ connectTimeout: 500 });
    socket.connect('tcp://127.0.0.1:33332');
    await socket.send(JSON.stringify(message));
    const response = await socket.receive();
    parsedResponse = JSON.parse(response.toString());
  }
  catch (e) {
    console.error(e);
  }
  return parsedResponse;
}

(async () => {
  const response = await notify();
  if (response) {
    console.log(response);
  }
  else {
    console.log('Nothing is received.');
  }
})();
I set the connectTimeout option but wonder how to use it. The docs state:
Sets how long to wait before timing-out a connect() system call. The connect() system call normally takes a long time before it returns a time out error. Setting this option allows the library to time out the call at an earlier interval.
Looking at connect, one sees that it is not asynchronous:
Connects to the socket at the given remote address and returns immediately. The connection will be made asynchronously in the background.
OK, so presumably the socket's send method will wait for the connection to be established and reject its promise on connection timeout... but nothing happens there. The send method completes, and the code gets stuck awaiting receive, waiting for a reply from the main process that will never come. So the main question is: "How do I use the connectTimeout option to handle a socket's connection timeout?" I found an answer to a similar question for C++, but it doesn't actually answer the question (or I can't understand it). I can't believe this option is useless and was added to the API just so that nobody could use it.
I would also be happy with some kind of workaround, and I found the receiveTimeout option. Changing the socket creation to
const socket = new Request({ receiveTimeout: 500 });
leads to a rejection in the receive method and the following output:
{ [Error: Socket temporarily unavailable] errno: 11, code: 'EAGAIN' }
Nothing is received.
The code runs to completion, but the process doesn't exit in this case. It seems some resources are still busy and not freed. When the main process is up, everything works fine: the process exits and I get the following reply in the output:
{ status: 'success' }
So another question is: "How do I exit the process gracefully when the receive method rejects due to receiveTimeout?" Calling process.exit() is not an option here!
P.S. My environment is:
- Kubuntu 18.04.1;
- Node 10.15.0;
- ZeroMQ bindings are installed this way:
$ yarn add zeromq#6.0.0-beta.4 --zmq-shared
ZeroMQ decouples the socket connection mechanics from message delivery. As the documentation states, connectTimeout only influences the timeout of the connect() system call; it does not affect the timeouts for sending or receiving messages.
For example:
const zmq = require("zeromq")

async function run() {
  const socket = new zmq.Dealer({ connectTimeout: 2000 })
  socket.events.on("connect:retry", event => {
    console.log(new Date(), event.type)
  })
  socket.connect("tcp://example.com:12345")
}

run()
The connect:retry event occurs every ~2 seconds:
> node test.js
2019-11-25T13:35:53.375Z connect:retry
2019-11-25T13:35:55.536Z connect:retry
2019-11-25T13:35:57.719Z connect:retry
If we change connectTimeout to 200, the event occurs much more frequently. The timeout is not the only thing influencing the delay between the events, but it should be clear that retries happen much sooner.
> node test.js
2019-11-25T13:36:05.271Z connect:retry
2019-11-25T13:36:05.531Z connect:retry
2019-11-25T13:36:05.810Z connect:retry
Hope this clarifies the effect of connectTimeout.
I'm using @google-cloud/logging to log some things from my Express app on Cloud Run.
Something like this:
routeHandler.ts
import { Logging } from "@google-cloud/logging";

const logging = new Logging({ projectId: process.env.PROJECT_ID });
const logName = LOG_NAME;
const log = logging.log(logName);

const resource = {
  type: "cloud_run_revision",
  labels: { ... }
};

export const routeHandler: RequestHandler = async (req, res, next) => {
  try {
    // EXAMPLE: LOG A WARNING
    const metadata = { resource, severity: "WARNING" };
    const entry = log.entry(metadata, "SOME WARNING MSG");
    await log.write(entry);
    return res.sendStatus(200);
  }
  catch (err) {
    // EXAMPLE: LOG AN ERROR
    const metadata = { resource, severity: "ERROR" };
    const entry = log.entry(metadata, "SOME ERROR MSG");
    await log.write(entry);
    return res.sendStatus(500);
  }
};
You can see that log.write(entry) is asynchronous, so in theory it would be recommended to await it. But here is what the documentation from @google-cloud/logging says:
Doc link
And I have no problem with that. In my real case, even if log.write() fails, it is inside a try-catch and any errors will be handled just fine.
My problem is that this somewhat conflicts with the Cloud Run documentation:
Doc link
Note: if I don't await the log.write() call, I'll end the request cycle by responding to the request.
And Cloud Run does behave like that. A couple of weeks back, I tried responding immediately to the request and firing off a long background job. The process kind of halted for a while, and I think it only resumed once it got another request. Completely unpredictable. And when I ran the test I'm mentioning here, I even had MIN_INSTANCE=1 set on my Cloud Run service container. Even that didn't let my background job run smoothly. Therefore, I don't think it's fine to leave the process doing background work after I've finished handling a request (the "fire and forget" approach).
So, what should I do here?
Posting this answer as a Community Wiki based on @Karl-JorhanSjögren's correct assumption in the comments.
For log calls in apps running on Cloud Run you are indeed encouraged to take a fire-and-forget approach, since you don't really need to force synchronicity there.
As mentioned in the comments in reply to your concern about the CPU being disabled after the request is fulfilled: the CPU is throttled first, so that the instance can be brought back up quickly, and only completely disabled after a longer period of inactivity. So firing off small logging calls that in most cases finish within milliseconds shouldn't be a problem.
What is mentioned in the documentation is aimed at processes that run for longer periods of time.
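To illustrate the pattern: fire-and-forget simply drops the await and attaches a .catch() so a late rejection can't crash the process. The log object below is a hypothetical stub standing in for the @google-cloud/logging client, not its real API:

```javascript
// Stub standing in for the logging client; swap in the real one in your app.
const log = {
  write(entry) {
    // Pretend this is a network call that may eventually fail.
    return Promise.resolve(`wrote: ${entry}`);
  }
};

function handleRequest(sendStatus) {
  // Fire and forget: no await, but always attach a .catch so an
  // unhandled rejection cannot crash the process later.
  log.write("SOME WARNING MSG").catch(err => console.error("log failed:", err));
  // Respond immediately; the write finishes (briefly) in the background.
  return sendStatus(200);
}

handleRequest(code => { console.log("responded with", code); return code; });
```

The design point is only that the response is not blocked on the write; the .catch is what keeps a failed background write from taking the process down.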
I am using Rascal.js (which uses amqplib) for my messaging logic with RabbitMQ in a Node.js app.
I use something similar to their example at project startup, which creates a permanent instance, "registers" all of my subscribers, and routes messages when they arrive on the queue (in the background).
My issue is with the publishers. HTTP requests from outside should trigger my publishers: a user clicks a create button of sorts, which leads to a certain flow of actions, and at some point it reaches the place where I need to use a publisher.
And here I am not sure about the right approach. Do I need to open a new connection every time I need to publish a message, and close it when done? Or should I keep the same connection open for all of the publishers? (I'm actually not sure how to build that in a way that it can be accessed from other parts of my app.)
At the moment I am using the following:
async publishMessage(publisherName, message) {
  const dynamicSettings = setupDynamicVariablesFromConfigFiles(minimalPublishSettings);
  const broker = await Rascal.BrokerAsPromised.create(Rascal.withDefaultConfig(dynamicSettings.rascal));
  broker.on('error', async function (err) {
    loggerUtil.writeToLog('error', 'publishMessage() broker_error_event: ' + publisherName + err + err.stack);
    await broker.shutdown();
  });
  const publication = await broker.publish(publisherName, message);
  try {
    publication.on('error', async function (err) {
      loggerUtil.writeToLog('error', 'publishMessage() publish_error_event: ' + err + err.stack);
      await broker.shutdown();
    }).on('success', async (messageId) => {
      await broker.shutdown();
    }).on('return', async (message) => {
      loggerUtil.writeToLog('error', 'publishMessage() publish_return_event: message returned');
      await broker.shutdown();
    });
  }
  catch (err) {
    loggerUtil.writeToLog('error', 'Something went wrong ' + err + err.stack);
    await broker.shutdown();
  }
}
I use this function from different parts of my app when I need to publish messages.
I thought to just add broker.shutdown() for all of the endpoints, but at some point, after an error, I got an exception about closing a connection that was already closed, and this got me worried about the shutdown approach (which is probably not a good one). I think it is related to this -
I tried doing that (the commented code), but I think it isn't working well in certain situations. If everything is OK, it goes to "success" and then I can close it.
But one time I got an error instead of success, and when I tried to use broker.shutdown() it threw another exception which crashed the app. I think it is related to this -
https://github.com/squaremo/amqp.node/issues/111
I am not sure what the safest way to approach this might be.
Edit:
Actually, now that I think about it, the exception might be related to me trying to shut down the broker in the catch {} block as well. I will continue to investigate.
Rascal is designed to be initiated once at application startup, rather than created per HTTP request. Your application will be extremely slow if you use it this way, and depending on how many concurrent requests you need to handle, you could easily exceed the maximum number of connections you can make to the broker. Furthermore, you will get none of the benefits Rascal provides, such as recovery from failed connections.
If you can pre-determine the queue or exchange you need to publish to, then configure Rascal at application startup (before your HTTP server) and share the publisher between requests. If you cannot determine the queue or exchange until you receive the HTTP request, then Rascal is not an appropriate choice. Instead you're better off using amqplib directly, but you should still establish a shared connection and channel. You will have to handle connection and channel errors manually though, otherwise they will crash your application.
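A minimal sketch of that share-one-instance pattern. initBroker, stubCreate, and the stub broker below are hypothetical placeholders; in a real app you would pass something like () => Rascal.BrokerAsPromised.create(Rascal.withDefaultConfig(config)) as the factory at startup:

```javascript
// One broker for the whole process: created once at startup, reused by
// every publisher, never shut down per publish.
let brokerPromise = null;

// Call once at application startup, before the HTTP server starts.
function initBroker(createBroker) {
  brokerPromise = createBroker();
  return brokerPromise;
}

// Publishers from anywhere in the app reuse the shared broker.
async function publishMessage(publisherName, message) {
  if (!brokerPromise) throw new Error('broker not initialised');
  const broker = await brokerPromise;
  return broker.publish(publisherName, message);
}

// Hypothetical stub standing in for a real Rascal broker, for illustration:
let created = 0;
const stubCreate = () => {
  created += 1;
  return Promise.resolve({
    publish: (name, msg) => Promise.resolve(`${name}:${msg}`)
  });
};

initBroker(stubCreate);
```

Because every caller awaits the same promise, concurrent requests never race to open duplicate connections, and error handling/shutdown stays in one place instead of being scattered across endpoints.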
I'm working on a custom winston transport; the documentation (cut-and-paste follows) is crystal clear...
class CustomTransport extends Transport {
  log(info, callback) {
    setImmediate(() => {
      this.emit('logged', info);
    });
    // Perform the writing to the remote service
    callback();
  }
};
... but what is the meaning of this.emit('logged', info); and why is it in a setImmediate?
I would have said that calling the callback was enough to let the caller know the write operation has been performed.
We could say that setImmediate is required to fire the event after the I/O handlers in the Node.js event loop, but there is absolutely no guarantee that the next loop iteration is enough for my custom write to have finished. So why fire something called 'logged', actually before the write operation, rather than something called 'logging'?
I asked the maintainers the same thing, but the result was... tumbleweeds.
Can somebody reveal to me the secrets behind that mysterious event?
Tired of the silence, I ran a test with a custom winston transport that does not fire the logged event: I wrote 3 GB of logs with 30,000,000 logger.info calls and had no problems, and the application's memory usage didn't grow by a single byte.
My conclusion is: firing that event is completely useless.
Transports can listen to the logged event.
const transport = new CustomTransport();

transport.on('logged', (info) => {
  // Verification that log was called on your transport
  console.log(`Logging! It's happening!`, info);
});
If no transports are listening to that event, then it's useless.
I would emit the event just in case anyone is listening.
I checked winston's roadmap and in version 3.3.0 the "logged" event will be emitted automatically by winston-transport.
I want to develop a process that only subscribes to a Redis channel and stays alive to handle received messages.
I wrote the following code:
var redis = require("redis");
var sub = redis.createClient({host: process.env.REDIS_HOST});
console.log('subscribing...')
sub.on('subscribe', () => console.log('subscribed'));
sub.on('message', (ch, msg) => console.log(`Received message on ${ch}:${msg}`));
console.log('done')
But obviously it does not work: when launched, it runs through all the lines and dies. I don't think I need a framework like Express, because my process does not use HTTP.
How can I write a server that stays alive "forever" without using HTTP frameworks?
You're not subscribing to a channel:
sub.subscribe('channel');
I used the exact code above and the process stays open. In the code above you aren't publishing any messages, so you will only see "subscribing..." and "done" printed to the terminal.
Also, as mentioned, you aren't subscribing to the channel.
I am trying to implement graceful shutdown in a Node.js Express 4.x HTTP server.
Closing the Express server is easy, but what worries me is that we have a lot of async jobs. An example of the flow:
1. Receive request
2. Do some stuff
3. Send response back to the client
4. In the background, continue doing async work related to that request, like making another request to some third-party service
5. Receive the response from the third-party service, save it to the database, etc.
6. Finish
So if I write my graceful shutdown code like this:
process.on('SIGTERM', function () {
  serverInstance.close(function () {
    closeConnectionToDatabases(function () {
      process.exit(0);
    });
  });

  // shutdown anyway after some time
  setTimeout(function () {
    process.exit(0);
  }, 8000);
});
How can I be sure that everything goes OK if SIGTERM arrives between the first and second steps of the flow explained above? What about the fourth, fifth, and sixth steps? Is there a nice way to handle this, or is it just a matter of manually tracking every in-flight async request leaving your service and waiting for them?
process.exit() terminates the node.js process immediately, without waiting for pending async tasks to finish.
If you want a truly graceful shutdown, you should close all resources (all open connections and file descriptors) manually instead of calling process.exit(). In that case node.js will terminate on its own once all pending async tasks have finished.
So, just close all database connections and other i/o resources:
process.on('SIGTERM', function () {
  serverInstance.close(function () {
    closeConnectionToDatabases(function () {
      // now node.js should close automatically
    });
  });

  // shutdown anyway after some time
  setTimeout(function () {
    process.exit(0);
  }, 8000);
});
Here's a better option: instead of using setTimeout to wait for all the async jobs to finish, we can create a new Promise and resolve it once the async jobs are complete. Then, after the promise resolves, we can exit the process.
process.on('SIGTERM', () => {
  new Promise((resolve) => {
    serverInstance.close(async () => {
      await closeConnectionToDatabases();
      resolve();
    });
  }).then(() => process.exit(0));
});
Hope it helps you :)