GCP Pub/Sub Node.js client promises hang, client freezes, no errors

Promises hang with no errors when using Google's Pub/Sub Node.js client library against your project.
Example:
const { PubSub } = require("@google-cloud/pubsub");

async function start() {
  const pubsubClient = new PubSub({ projectId: "my-project-id" });
  try {
    const [topics] = await pubsubClient.getTopics();
    console.log(topics);
  } catch (error) {
    console.error(error);
  }
}

start().catch(console.error);
This returns no error and shows no progress; the client eventually times out after 10 minutes and no topics are returned. The same happens when publishing to a topic, and so on.

If you used the Pub/Sub emulator for local development, you have probably set the PUBSUB_EMULATOR_HOST variable, and for some reason it causes this issue. Remove it from your environment with unset PUBSUB_EMULATOR_HOST, or remove it from your .env file, and restart the server.
You can check whether it is set with printenv in your shell (or inspect process.env from within the Node app).
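For instance, a minimal in-app guard could look like this (the warning message and the delete fallback are illustrative, not part of the client library):

// Warn if the emulator variable is set before creating the client, since the
// client will then silently target the emulator instead of the live API.
if (process.env.PUBSUB_EMULATOR_HOST) {
  console.warn("PUBSUB_EMULATOR_HOST is set to " + process.env.PUBSUB_EMULATOR_HOST);
  delete process.env.PUBSUB_EMULATOR_HOST; // or unset it in your shell instead
}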
This is a known issue with associated GitHub issues, so if you came across this answer and it helped, feel free to let the maintainers know here:
https://github.com/googleapis/nodejs-pubsub/issues/339
or here:
https://github.com/googleapis/gax-nodejs/issues/208
as it is currently a won't-fix since it seems not to affect many people.

Related

Is it fine to not await for a log.write() promise inside a cloud run container?

I'm using @google-cloud/logging to log some stuff out of my express app over on Cloud Run.
Something like this:
routeHandler.ts
import { Logging } from "@google-cloud/logging";

const logging = new Logging({ projectId: process.env.PROJECT_ID });
const logName = LOG_NAME;
const log = logging.log(logName);

const resource = {
  type: "cloud_run_revision",
  labels: { ... }
};
export const routeHandler: RequestHandler = async (req, res, next) => {
  try {
    // EXAMPLE: LOG A WARNING
    const metadata = { resource, severity: "WARNING" };
    const entry = log.entry(metadata, "SOME WARNING MSG");
    await log.write(entry);
    return res.sendStatus(200);
  }
  catch (err) {
    // EXAMPLE: LOG AN ERROR
    const metadata = { resource, severity: "ERROR" };
    const entry = log.entry(metadata, "SOME ERROR MSG");
    await log.write(entry);
    return res.sendStatus(500);
  }
};
You can see that log.write(entry) is asynchronous, so in theory it would be recommended to await it. But here is what the documentation from @google-cloud/logging says:
Doc link
And I have no problem with that. In my real case, even if log.write() fails, it is inside a try-catch and any errors will be handled just fine.
My problem is that this kind of conflicts with the Cloud Run documentation:
Doc link
Note: If I don't wait for the log.write() call, I'll end the request cycle by responding to the request
And Cloud Run does behave like that. A couple of weeks back, I tried to respond immediately to a request and fire off a long background job. The process halted for a while and, I think, only restarted once it got another request; it was completely unpredictable. When I ran the test I'm mentioning here, I even had MIN_INSTANCE=1 set on my Cloud Run service container, and even that didn't let my background job run smoothly. Therefore, I don't think it's fine to leave the process doing background work after I've finished handling a request (the "fire and forget" approach).
So, what should I do here?
Posting this answer as a Community Wiki based on @Karl-JorhanSjögren's correct assumption in the comments.
For log calls in apps running on Cloud Run you are indeed encouraged to take a fire-and-forget approach, since you don't really need to force synchronicity there.
As mentioned in the comments replying to your concern about the CPU being disabled after the request is fulfilled: the CPU is throttled first, so that the instance can be brought back up quickly, and only fully disabled after a longer period of inactivity. So firing off small logging calls that in most cases finish within milliseconds shouldn't be a problem.
What is mentioned in the documentation is targeted at processes that run for longer periods of time.
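Given that, a minimal fire-and-forget sketch could look like the following. It reuses the log and resource objects from the question's snippet; the handler name and messages are hypothetical:

export const fireAndForgetHandler: RequestHandler = (req, res) => {
  const metadata = { resource, severity: "WARNING" };
  const entry = log.entry(metadata, "SOME WARNING MSG");
  // Not awaited: attach a .catch() so a failed write cannot turn into an
  // unhandled promise rejection after the response has been sent.
  log.write(entry).catch((err) => console.error("log.write failed:", err));
  return res.sendStatus(200);
};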

The Things Network: Cannot publish/subscribe to my device up/down link topics

I am trying to make quick test for the pub/sub mechanism to my registered device on TTN so I can build my complete solution app on the data coming to the TTN broker.
At the moment I am waiting for my loRa module to arrive, that is why I want to use a simple nodeJS script to publish dummy data, and the other to subscribe and build an app using the dummy data. I use the following code for this:
var mqtt = require('mqtt')

var options = {
  port: 1883,
  host: 'mqtt://eu.thethings.network',
  username: 'xxxx', // here I wrote my app id
  password: 'xxxx'  // here I wrote the access key
};

var client = mqtt.connect('mqtt://eu.thethings.network', options)

client.on('connect', function () {
  client.subscribe('appID/devices/MyDeviceName/down', function (err) {
    if (!err) {
      client.publish('appID/devices/MyDeviceName/down', 'Hello mqtt')
    }
  })
})

client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  // client.end()
})
This however does nothing; I was watching the data on TTN and nothing came in.
I also tried using MQTT Explorer, but that did not work either.
Both methods worked fine when I hosted the broker on my own machine, and with Eclipse and Mosquitto brokers in the cloud.
Your help is greatly appreciated.
Thanks!
Ahmed
I have encountered a similar issue in the past. I believe the issue is with trying to use "mqtt" instead of "https". For me, it worked when I called
mqtt.connect('https://thethings.network:1883', {
  "username": username,
  "password": password
})
However, I wasn't using the community version of the website (The Things Stack V3), so there might be a slight difference. For example, instead of "My-App-Id" I had to use "My-App-Id@My-Company-Name".
Please, try the above and let me know if it works.
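Independently of the broker URL, it can also help to attach mqtt.js's error and offline listeners while debugging, so a bad URL or wrong credentials fail loudly instead of silently doing nothing (a general sketch using the client variable from the question):

client.on('error', function (err) {
  console.error('MQTT error:', err); // auth and connection failures surface here
});
client.on('offline', function () {
  console.log('MQTT client went offline');
});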

How to handle connection timeout in ZeroMQ.js properly?

Consider a Node.js application with a few processes:
a single main process that stays in memory and works as a web server;
CLI commands run by the system user, which do their work and exit when done.
I want to implement something like IPC between the main and CLI processes, and the ZeroMQ bindings for Node.js seem like a good candidate for doing that. I've chosen version 6.0.0-beta.4:
Version 6.0.0 (in beta) features a brand new API that solves many fundamental issues and is recommended for new projects.
Using Request/Reply I was able to achieve what I wanted: the CLI process notifies the main process about some event (and optionally receives some data as a response) and continues its execution. The problem I have right now is that the CLI process hangs if the main process is off (not available). The command should still execute and exit without notifying the main process if it is unable to establish a connection to the socket.
Here is a simplified code snippet of my CLI command running in an asynchronous function:
const { Request } = require('zeromq');

async function notify() {
  let parsedResponse;
  try {
    const message = { event: 'hello world' };
    const socket = new Request({ connectTimeout: 500 });
    socket.connect('tcp://127.0.0.1:33332');
    await socket.send(JSON.stringify(message));
    const response = await socket.receive();
    parsedResponse = JSON.parse(response.toString());
  }
  catch (e) {
    console.error(e);
  }
  return parsedResponse;
}

(async () => {
  const response = await notify();
  if (response) {
    console.log(response);
  }
  else {
    console.log('Nothing is received.');
  }
})();
I set the connectTimeout option but wonder how to use it. The docs state:
Sets how long to wait before timing-out a connect() system call. The connect() system call normally takes a long time before it returns a time out error. Setting this option allows the library to time out the call at an earlier interval.
Looking at connect, one sees that it's not asynchronous:
Connects to the socket at the given remote address and returns immediately. The connection will be made asynchronously in the background.
OK, so presumably the socket's send method will wait for the connection to be established and reject the promise on a connection timeout... but nothing happens there. The send method executes and the code gets stuck resolving receive, waiting for a reply from the main process that will never come. So the main question is: "How do I use the connectTimeout option to handle a socket's connection timeout?" I found an answer to a similar question for C++, but it doesn't actually answer the question (or I can't understand it). I can't believe this option is useless and was added to the API without any way to make use of it.
I would also be happy with some kind of workaround, and found the receiveTimeout option. Changing the socket creation to
const socket = new Request({ receiveTimeout: 500 });
leads to a rejection in the receive method and the following output:
{ [Error: Socket temporarily unavailable] errno: 11, code: 'EAGAIN' }
Nothing is received.
The code executes, but the process doesn't exit in this case. It seems some resources are still busy and not freed. When the main process is up, everything works fine: the process exits and I get the following reply in the output:
{ status: 'success' }
So another question is: "How do I exit the process gracefully when the receive method rejects due to receiveTimeout?" Calling process.exit() is not an option here!
P.S. My environment is:
Kubuntu 18.04.1;
Node 10.15.0;
ZeroMQ bindings are installed this way:
$ yarn add zeromq@6.0.0-beta.4 --zmq-shared
ZeroMQ decouples the socket connection mechanics from message delivery. As the documentation states, connectTimeout only influences the timeout of the connect() system call and does not affect the timeouts of sending or receiving messages.
For example:
const zmq = require("zeromq")

async function run() {
  const socket = new zmq.Dealer({ connectTimeout: 2000 })
  socket.events.on("connect:retry", event => {
    console.log(new Date(), event.type)
  })
  socket.connect("tcp://example.com:12345")
}

run()
The connect:retry event occurs every ~2 seconds:
> node test.js
2019-11-25T13:35:53.375Z connect:retry
2019-11-25T13:35:55.536Z connect:retry
2019-11-25T13:35:57.719Z connect:retry
If we change connectTimeout to 200, you can see that the event occurs much more frequently. The timeout is not the only thing influencing the delay between events, but it should be clear that retries happen much more quickly.
> node test.js
2019-11-25T13:36:05.271Z connect:retry
2019-11-25T13:36:05.531Z connect:retry
2019-11-25T13:36:05.810Z connect:retry
Hope this clarifies the effect of connectTimeout.
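As for the second question (exiting gracefully after a receiveTimeout rejection), one plausible workaround, assuming the 6.0.0-beta.4 API, is to close the socket in a finally block so its underlying resources are released and the event loop can drain:

const { Request } = require('zeromq');

async function notify() {
  const socket = new Request({ receiveTimeout: 500 });
  try {
    socket.connect('tcp://127.0.0.1:33332');
    await socket.send(JSON.stringify({ event: 'hello world' }));
    const response = await socket.receive();
    return JSON.parse(response.toString());
  } catch (e) {
    console.error(e);
  } finally {
    socket.close(); // release the handle so the process can exit on its own
  }
}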

Google Datastore Silently Failing in Production (node.js)

As part of a larger web app, I'm using a combination of Google Datastore and Firebase. On my local machine, all requests go through seamlessly; however, when I deploy my app to GAE (Node.js, flexible environment) everything works except the calls to Datastore. The requests do not throw an error, directly or via promise, and simply never return, hanging the process.
My current configuration uses a service account key file containing my private key. I've checked that it has the proper scope (and even added more than I should, just in case, up to Datastore Owner permissions).
I've distilled the app down to the bare bones, and still no luck. I'm stuck and looking for any suggestions.
const datastore = require('@google-cloud/datastore');
const config = require('yaml-config').readConfig('config.yaml');

module.exports = {
  get_test: function (query, callback) {
    var ds_ref = datastore({
      projectId: config.DATASTORE_PROJECT,
      keyFilename: __dirname + config.GOOGLE_CLOUD_KEY
    });
    var q = ds_ref.createQuery('comps')
      .filter('record', query.record);
    ds_ref.runQuery(q, function (err, entities) {
      if (!err) {
        if (entities.length > 0) {
          callback(err, entities[0]);
        } else {
          callback(err, []);
        }
      } else {
        callback(err, undefined);
      }
    });
  }
}
UPDATE:
Tried the manual_scaling setting found here, but it didn't seem to work. Also found this article that seems to describe a similar issue.
The problem seems to be in the grpc module. Use version 0.6.0 of the datastore module; it will automatically pull in an older version of grpc. This workaround works on Compute Engine, but you will still face problems in the flexible environment, because when the flexible environment is deployed it installs the new modules that have the problem.
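For example, pinning the older release could look like this (assuming npm; adjust for yarn if needed):

$ npm install @google-cloud/datastore@0.6.0 --save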
Also, please refer to the following links on GitHub:
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1955
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1946
Please keep an eye on these links for updates on a resolution.
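In the meantime, if you need to confirm that the hang happens inside grpc, the underlying gRPC core honors a couple of debugging environment variables. This is a diagnostic aid, not a fix, and app.js stands in for your entry point:

$ GRPC_VERBOSITY=DEBUG GRPC_TRACE=all node app.js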

Azure: how to trigger a nodejs webjob when there is a message in the queue?

I've created a webjob written in Node. I wonder if there is a way to trigger this webjob to run whenever a message arrives in the queue?
Thanks
Please check out the azure-webjobs-sdk-script repo where we're developing a solution to this very problem.
The repo is new, so docs and help are still coming online, but you can clone it and run the Host.Node sample project, which demonstrates various Node.js triggered functions, including a queue-triggered function :) This library has already been tested deployed to Azure and works.
Please log any issues/feedback on the issues list of the repo and we'll address them :)
Look at Mathew's post for a new thing we're working on with the SDK. https://github.com/Azure/azure-webjobs-sdk-script
Not yet with the WebJobs SDK. You can build a continuous job and keep fetching. If you want to build something reasonably sane, you could do something like this:
var azure = require('azure-storage');

var queueService = azure.createQueueService(),
    queueName = 'taskqueue';

// Poll every 5 seconds to avoid consuming too many resources
setInterval(function () {
  queueService.getMessages(queueName, {}, function (error, serverMessages) {
    if (!error) {
      // For each message
      serverMessages.forEach(function (message) {
        // Do something
        console.log(message.messageText);
        // Delete the message so it is not picked up again
        queueService.deleteMessage(queueName, message.messageId, message.popReceipt,
          function (error) {
            if (error) {
              console.log(error);
            }
          }); // end deleteMessage
      }); // end forEach
    } else {
      console.log(error);
    }
  });
}, 5000);
You'll want to look at the JSDocs they have on azure.github.io to learn how to do things like grab multiple messages and increase the "blocking" time, which defaults to 30 seconds.
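For instance, a sketch of those options (option names as documented for azure-storage; the exact values are illustrative):

// numOfMessages fetches up to 32 messages per call; visibilityTimeout controls
// how long (in seconds) retrieved messages stay hidden from other consumers.
queueService.getMessages(queueName, { numOfMessages: 32, visibilityTimeout: 60 },
  function (error, serverMessages) {
    // process the batch as above
  });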
Let me know if you have any other issues.
