Google Cloud Function running on nodejs10 test fails

Here are the relevant bits of my function:
  // Finally send the JSON data to the browser or requestor
  res.status(200).send(output);
} catch (err) {
  res.status(500).send(err.message);
} finally {
  await closeConnection(page, browser);
}
When I run this locally, it works flawlessly and returns my output to the web browser. When I upload it to Google Cloud Functions and test it, the res.status(200).send(output); line fails with this message:
Error: function execution failed. Details:
res.status is not a function
Has anyone else seen this behavior? I'm completely puzzled as to why it would work perfectly on my local machine, but fail when I run it as a cloud function.

After digging around a bunch, I found the answer. Google Cloud Functions with a 'background' trigger type are not passed Express-style (req, res) arguments, so res.status is not available; instead, the function receives the event data, a context object, and a callback:
https://cloud.google.com/functions/docs/writing/background#function_parameters
  // Finally hand the JSON data to the callback instead of a response object
  callback(null, output);
} catch (err) {
  callback(new Error('Failed'));
} finally {
  await closeConnection(page, browser);
}
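For context, here is a minimal sketch of what the full background function shape looks like per the linked docs (the pollenCount name comes from the start script below; openConnection and scrapePollenData are hypothetical helpers standing in for the scraping code):
// Sketch of a background-triggered function; helper names are assumptions.
exports.pollenCount = async (data, context, callback) => {
  const { page, browser } = await openConnection(); // hypothetical setup helper
  try {
    const output = await scrapePollenData(page); // hypothetical scraper
    callback(null, output);
  } catch (err) {
    callback(new Error('Failed'));
  } finally {
    await closeConnection(page, browser);
  }
};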
If you run your local development instance with the --signature-type flag, it starts up correctly, but you can no longer test by hitting the port in a web browser:
"start": "functions-framework --target=pollenCount --signature-type=cloudevent",
Documentation on how to send mock pub/sub data into your local instance is here:
https://cloud.google.com/functions/docs/running/calling#background_functions
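For reference, here is a hedged sketch of posting a mock Pub/Sub event to the locally running framework (default port 8080; the payload shape follows the linked docs and may vary by runtime version):
// Sketch: POST a mock Pub/Sub event to the local Functions Framework.
// Assumes Node 18+ for the global fetch; payload shape per the linked docs.
const mockEvent = {
  context: {
    eventId: '1',
    timestamp: new Date().toISOString(),
    eventType: 'google.pubsub.topic.publish',
    resource: { service: 'pubsub.googleapis.com', name: 'projects/my-project/topics/my-topic' },
  },
  data: {
    '@type': 'type.googleapis.com/google.pubsub.v1.PubsubMessage',
    data: Buffer.from('hello').toString('base64'), // Pub/Sub payloads are base64
  },
};

fetch('http://localhost:8080/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(mockEvent),
}).then((res) => console.log('status:', res.status));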

Related

How to use the transloadit addStream() function in the NodeJS SDK?

Trying out the transloadit api, the template works when I use the testing mode on the transloadit website, but when I try to use it in Node.js with the SDK, I'm getting an error:
INVALID_FORM_DATA - https://api2.transloadit.com/assemblies - INVALID_FORM_DATA: The form contained bad data, which cannot be parsed.
The relevant code (_asset.content is a Buffer object):
import { Readable } from 'stream';

async function getThumbnailUrl(_assetkey: string, _asset: I.FormFile): Promise<string> {
  const tOptions = {
    waitForCompletion: true,
    params: {
      template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
    },
  };
  // Wrap the Buffer in a Readable stream so the SDK can upload it
  const stream = new Readable({
    read() {
      this.push(_asset.content);
      this.push(null); // signal end of stream
    },
  });
  console.log(_asset.content);
  util.transloadit.addStream(_assetkey, stream);
  return new Promise((resolve, reject) => {
    util.transloadit.createAssembly(tOptions, (err, status) => {
      if (err) {
        return reject(err); // don't fall through to resolve on error
      }
      console.log(status);
      resolve(status);
    });
  });
}
I noticed that you also posted this question on the Transloadit forums, so in case anyone else runs into this problem, you can find more information on this topic here.
Here's a workaround that the OP found that may be useful:
Just to provide some closure to this topic, I just tested my workaround (upload to S3, then use the S3 import Robot to grab the file) and got it to work with the Node.js SDK, so I should be good using that.
I have a suspicion the error I was getting was not to do with the Transloadit API, but rather with the form-data library for Node.js (https://github.com/form-data/form-data), and that it's somehow not sending the form data in the way the Transloadit API expects.
But as there aren't alternatives to that library that I could find, I wasn't really able to test that hypothesis.
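A rough sketch of that workaround, assuming an S3 bucket that a Transloadit template imports from via an /s3/import step (the bucket, region, and fields wiring are placeholders, not the OP's actual setup):
// Hedged sketch: upload the Buffer to S3 first, then run a template
// whose /s3/import step fetches it by key, sidestepping form-data.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function getThumbnailViaS3(key, buffer) {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-upload-bucket', // placeholder bucket
    Key: key,
    Body: buffer,
  }));
  // The template's /s3/import step reads the same bucket/key, so no
  // multipart form upload (and no form-data library) is involved.
  return new Promise((resolve, reject) => {
    util.transloadit.createAssembly({
      waitForCompletion: true,
      params: {
        template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
        fields: { key }, // e.g. referenced as ${fields.key} in the template
      },
    }, (err, status) => (err ? reject(err) : resolve(status)));
  });
}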
The Transloadit core team also gave this response regarding the issue:
It may try to set his streams to be Tus streams, which would mean that they're not uploaded as multipart/form-data.
In either case, it seems like the error to his callback would be originating from the error out of _remoteJson.
These could be the problem areas:
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L146
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L606
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L642
It is also possible that the form-data library could be the source of the error.
To really test this further, we're going to need to try using the library he was using, make sure the output of it is good, and then debug the node-sdk to see where the logic failure is in it, or if the logic failure is on the API side.

Mocha tests fail when Wifi is disconnected

I have some mocha tests that complete without errors when connected to Wifi. I am running the tests using command:
./node_modules/.bin/mocha --recursive -R spec path/to/bootstrap.integration.js path/to/testfile.test.js
When I turn off the Wifi connection, the same tests fail in the bootstrap's before hook with this error:
Uncaught Error: ENOENT: no such file or directory, open '/etc/resolv.conf'
I actually connect to the internet via a VPN, and when the VPN is disconnected but the Wifi is still turned on and connected, the tests all pass. The firewall only allows outbound connections over the VPN.
Stepping through the code in the debugger, the error is thrown right after I clear out the database (mongodb).
Relevant code that gets called in the bootstrap file before:
function emptyCollection(collectionName, callback) {
  debug(`emptying collection: ${collectionName}`);
  models[collectionName].deleteMany({}, function(err, result) {
    if (err) { // <--- Never reached when Wifi is disconnected
      return callback(err);
    }
    debug(`emptied collection: ${collectionName}`);
    return callback(null, result);
  });
}

debug('emptying all collections...');
async.map(Object.keys(models), emptyCollection, function(err, results) {
  if (err) { // <--- Never reached when Wifi is disconnected
    return cb(err);
  }
  debug('emptied all collections');
  return cb();
});
None of the deleteMany callbacks are ever reached when Wifi is disconnected, and the async.map callback is never reached either.
I've been trying to find a way to set a breakpoint for when '/etc/resolv.conf' gets opened, so I can determine which part of the code is trying to read the file, but I haven't found a way to do that.
I've been looking at this for hours, I'm all out of ideas.
Does any body have any troubleshooting advice?
[Update: I am looking for troubleshooting advice specific to the problem described above]
If your tests depend on Wifi or any other network calls, then you're not writing your tests in the correct manner. Read up on the F.I.R.S.T principles of testing.
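That said, to address the specific troubleshooting ask (finding what opens /etc/resolv.conf), one hedged option is to patch fs at the very top of the test bootstrap and log a stack trace on matching opens. Note that Node's native DNS resolver can also read this file from C++ code, which a JS-level patch won't catch:
// Troubleshooting sketch (assumption: the open happens in JS, not in
// native code): wrap fs.open/fs.openSync to print a stack trace
// whenever something touches resolv.conf.
const fs = require('fs');

['open', 'openSync'].forEach(function (name) {
  const original = fs[name];
  fs[name] = function (path) {
    if (String(path).indexOf('resolv.conf') !== -1) {
      console.trace('fs.' + name + '(' + path + ')');
    }
    return original.apply(this, arguments);
  };
});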

Azure: how to trigger a nodejs webjob when there is a message in the queue?

I've created a WebJob written in Node. Is there a way to trigger this WebJob to run whenever a message arrives in the queue?
Thanks
Please check out the azure-webjobs-sdk-script repo where we're developing a solution to this very problem.
The repo is new, so docs and help are still coming online, but you can clone it and run the Host.Node sample project, which demonstrates various Node.js triggered functions, including a queue-triggered function :) This library has already been deployed to Azure and works.
Please log any issues/feedback on the issues list of the repo and we'll address them :)
Look at Mathew's post for a new thing we're working on with the SDK. https://github.com/Azure/azure-webjobs-sdk-script
Not yet with the WebJobs SDK. You can build a continuous job that keeps fetching. If you wanted to build something reasonably sane, you could probably do something like:
var azure = require('azure-storage');
var queueService = azure.createQueueService(),
    queueName = 'taskqueue';

// Poll every 5 seconds to avoid consuming too many resources
setInterval(function() {
  queueService.getMessages(queueName, {}, function(error, serverMessages) {
    if (!error) {
      // For each message
      serverMessages.forEach(function(i) {
        // Do something
        console.log(i.messageText);
        // Delete Message
        queueService.deleteMessage(queueName, i.messageId, i.popReceipt,
          function(error) {
            if (error) {
              console.log(error);
            }
          }); // end deleteMessage
      }); // end forEach
    } else {
      console.log(error);
    }
  });
}, 5000);
You'll want to look at the JSDocs they have on azure.github.io to learn how to do things like grab multiple messages and increase the "blocking" time, which defaults to 30 seconds.
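For example (option names as I understand them from those JSDocs; treat this as a sketch):
// Sketch: fetch up to 32 messages at once and keep them invisible for
// 60 seconds while they are processed.
queueService.getMessages(queueName, {
  numOfMessages: 32,
  visibilityTimeout: 60
}, function(error, serverMessages) {
  // same processing loop as above
});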
Let me know if you have any other issues.

How to bundle an express server and html script into same Node Webkit application?

I am having an extremely difficult time trying to turn my current Node.js application into a desktop app using node-webkit. Without node-webkit, my application works perfectly: I start my server using node app.js in the terminal, and when I connect to localhost:8080, the client connects, the index.html page is loaded, and the app works perfectly. I need all of this to happen from a desktop app and am thus using node-webkit.
I can't figure out how to make all this happen using node-webkit. I have searched online for hours; this seems to be a common problem, but no one has a decent and easy-to-follow solution. If someone could please help me, it would be greatly appreciated.
Also, on a side note, I have tried loading my Node.js file first by using the "node-main" property in my package.json file, but this does not work. For some reason, the "node-main" property crashes my node-webkit app every single time I try to use it.
If anyone can provide a clear walkthrough of how to implement all this in a single node-webkit app, that would be very helpful. Thanks!
You need to use node-main to do what you are trying to achieve. Your code probably crashes due to an exception.
First, add node-main to the config:
"node-main": "index.js",
To debug the code, attach an uncaught exception handler:
var report_error = function(err) {
  if (typeof console == 'object' && typeof console.log == 'function') {
    console.log('Exception: ' + err.message, err.stack);
  } else {
    setTimeout(function() { report_error(err); }, 200);
  }
};

process.on('uncaughtException', function(err) {
  report_error(err);
});
Note that I check whether the console is accessible; when this script runs, console is not yet accessible:
This symbol is not available at the time the script is loaded, because the script is executed before the DOM window load (source).
I experienced similar issues because I used console.log in index.js. After using the following function instead of console.log, I got it to work:
var console_log = function(err) {
  if (typeof console == 'object' && typeof console.log == 'function') {
    console.log(err);
  } else {
    setTimeout(function() { console_log(err); }, 200);
  }
};
It's a really weird case for a desktop app, but if you really need that, you could run your server as a child process, started from a window.onload method included in the index.html of your NW app, with some code like this:
var exec = require('child_process').exec;

exec('node app.js', function(error, stdout, stderr) {
  if (error !== null) {
    throw error;
  }
  console.log('Server listening');
});
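Alternatively, putting the two answers together, here is a minimal sketch of serving everything from one app via node-main, so no child process is needed (the file names, port, and asset folder are assumptions; the app window can then load http://localhost:8080):
// index.js - hedged sketch: start the express server from node-main;
// the window can then point at the local URL.
var express = require('express');
var app = express();

// Serve the client files; the public folder name is an assumption.
app.use(express.static(__dirname + '/public'));

app.listen(8080, function() {
  console.log('Server listening on http://localhost:8080');
});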

Catching Mocha timeouts

I'm writing a node.js web service which needs to communicate with another server. So its basically server to server communication. I don't have any previous experience of writing web services so I have very limited knowledge. For unit tests I'm using Mocha.
Now, I intend to test the behavior of my service in the particular scenario where the other server doesn't respond to my GET request and the request times out. For the tests, I've created a fake client and a fake server around my web service: the service takes a request from the fake client and then gets information from the fake server, which returns the response in the expected format. To simulate a timeout, I don't call response.end() from my route handler. The problem is that Mocha judges the test case to have failed.
Is there a way I could catch this intentional timeout in Mocha so that the test is a success?
As mido22 suggested, you should handle the timeout generated by whatever library you use to connect. For instance, with request:
var request = require("request");

it("test", function (done) {
  request("http://www.google.com:81", {
    timeout: 1000
  }, function (error, response, body) {
    if (error && error.code === 'ETIMEDOUT') {
      done(); // Got a timeout: that's what we wanted.
      return;
    }
    // Got another error or no error at all: that's bad!
    done(error || new Error("did not get a timeout"));
  });
});
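One caveat worth noting: the client timeout (1000 ms here) must stay below Mocha's own per-test timeout (2000 ms by default), or Mocha will still fail the test before the client gives up. If you need a longer client timeout, raise Mocha's limit inside the test:
it("test", function (done) {
  this.timeout(5000); // give the client time to hit its own timeout first
  // ... same request/assertion code as above ...
});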
