Google Datastore Silently Failing in Production (node.js)

As part of a larger web app, I'm using a combination of Google Datastore and Firebase. On my local machine, all requests go through seamlessly; however, when I deploy my app to GAE (Node.js, Flexible Environment), everything works except the calls to Datastore. The requests never throw an error, either directly or via a promise, and simply never return, hanging the process.
My current configuration uses a Service Account key file containing my private key. I've checked that it has the proper scope (and even added more than I should, just in case, including Datastore Owner permissions).
I've distilled the app down to the bare bones, and still no luck. I'm stuck and looking for any suggestions.
const datastore = require('@google-cloud/datastore');
const config = require('yaml-config').readConfig('config.yaml');

module.exports = {
  get_test: function(query, callback) {
    var ds_ref = datastore({
      projectId: config.DATASTORE_PROJECT,
      keyFilename: __dirname + config.GOOGLE_CLOUD_KEY
    });
    var q = ds_ref.createQuery('comps')
      .filter('record', query.record);
    ds_ref.runQuery(q, function(err, entities) {
      if (!err) {
        if (entities.length > 0) {
          callback(err, entities[0]);
        } else {
          callback(err, []);
        }
      } else {
        callback(err, undefined);
      }
    });
  }
};
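For completeness, a call site for this helper would look something like the sketch below (the module path and record value are placeholders, not from my actual app):

const db = require('./datastore_helper'); // hypothetical path to the module above
db.get_test({ record: 'some-record-id' }, function(err, entity) {
  if (err) return console.error('Datastore error:', err);
  console.log('Entity:', entity);
});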
UPDATE:
Tried manual_scaling as suggested, but it didn't seem to work. Also found an article that seems to describe a similar issue.

The problem seems to be in the grpc module. Pin the datastore package to version 0.6.0; this will automatically pull in an older version of grpc. This workaround works on Compute Engine. However, you will still face problems with the Flexible Environment, because when the flexible environment is deployed, it installs the newer modules which have the problem.
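For reference, a minimal sketch of what that pin could look like in package.json (the exact version comes from this answer; everything else belongs to your own project):

"dependencies": {
  "@google-cloud/datastore": "0.6.0"
}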
Also, please refer to the following links on GitHub:
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1955
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1946
Please keep an eye on these links for updates on a resolution.

Related

GCP Pubsub Nodejs client promises hang, client freezes, no errors

Promises hang with no errors when using Google's Pub/Sub Node client library against your project.
Example:
const { PubSub } = require("@google-cloud/pubsub");

async function start() {
  const pubsubClient = new PubSub({ projectId: "my-project-id" });
  try {
    const [topics] = await pubsubClient.getTopics();
    console.log(topics);
  } catch (error) {
    console.error(error);
  }
}

start().catch(console.error);
This would return no error and show no progress; eventually the client would time out after 10 minutes, and no topics would be returned. The same goes for publishing to a topic, etc.
If you used the Pub/Sub emulator for local development, you have set the PUBSUB_EMULATOR_HOST variable, and for some reason this leads to the issue. Remove it from your environment with unset PUBSUB_EMULATOR_HOST, or remove it from your .env file, and restart the server.
You can check whether it is set by running printenv in your shell (or by inspecting process.env from the Node app).
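A small sketch of guarding against this from the Node side (the variable name comes from this answer; clearing it at runtime is just one option, unsetting it in the shell works as well):

// Warn about and clear a leftover emulator setting before creating the client
if (process.env.PUBSUB_EMULATOR_HOST) {
  console.warn('PUBSUB_EMULATOR_HOST is set to ' + process.env.PUBSUB_EMULATOR_HOST +
    ' - the client will talk to the emulator instead of the real API.');
  delete process.env.PUBSUB_EMULATOR_HOST;
}
const { PubSub } = require("@google-cloud/pubsub");
const pubsubClient = new PubSub({ projectId: "my-project-id" });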
This is a known issue with associated GitHub issues, so if you came across this answer and it helped, feel free to let the maintainers know here:
https://github.com/googleapis/nodejs-pubsub/issues/339
or here:
https://github.com/googleapis/gax-nodejs/issues/208
as it is currently marked as a won't-fix since it seems not to affect many people.

How do you use the Transloadit addStream() function in the Node.js SDK?

I'm trying out the Transloadit API. The template works when I use the testing mode on the Transloadit website, but when I try to use it in Node.js with the SDK I get an error:
INVALID_FORM_DATA - https://api2.transloadit.com/assemblies - INVALID_FORM_DATA: The form contained bad data, which cannot be parsed.
The relevant code (_asset.content is a Buffer object):
import { Readable } from 'stream'; // Readable is used to wrap the Buffer below

async function getThumbnailUrl(_assetkey: string, _asset: I.FormFile): Promise<string> {
  let tOptions = {
    waitForCompletion: true,
    params: {
      template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
    },
  };
  // Wrap the Buffer in a readable stream for the SDK
  const stream = new Readable({
    read() {
      this.push(_asset.content);
      this.push(null);
    },
  });
  console.log(_asset.content);
  util.transloadit.addStream(_assetkey, stream);
  return new Promise((resolve, reject) => {
    util.transloadit.createAssembly(tOptions, (err, status) => {
      if (err) {
        return reject(err);
      }
      console.log(status);
      resolve(status);
    });
  });
}
I noticed that you also posted this question on the Transloadit forums, so in case anyone else runs into this problem, you can find more information on this topic in that thread.
Here's a workaround that the OP found, which may be useful:
Just to provide some closure to this topic, I just tested my
workaround (upload to s3, then use import s3 robot to grab the file)
and got it to work with the nodejs sdk so I should be good using that.
I have a suspicion the error I was getting was not to do with the
transloadit api, but rather the form-data library for node js
(https://github.com/form-data/form-data) and that’s somehow not
inputting the form data in the way that the transloadit api is
expecting.
But as there aren’t alternatives to that library that I could find, I
wasn’t really able to test that hypothesis.
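For anyone who wants to try that workaround, here is a rough sketch (not from the OP; the bucket name, the saved Transloadit credential name, and the step name 'import' are all assumptions you would replace with your own):

// Workaround sketch: push the buffer to S3 first, then have Transloadit pull it
// in with the /s3/import robot instead of streaming it via addStream().
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function getThumbnailViaS3(_assetkey, _asset) {
  // 1. Upload the buffer to S3 (bucket name is a placeholder)
  await s3.upload({
    Bucket: 'my-upload-bucket',
    Key: _assetkey,
    Body: _asset.content,
  }).promise();

  // 2. Create the assembly with an /s3/import step pointing at the uploaded key
  return new Promise((resolve, reject) => {
    util.transloadit.createAssembly({
      waitForCompletion: true,
      params: {
        template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
        steps: {
          import: {                           // step name is arbitrary
            robot: '/s3/import',
            credentials: 'my_s3_credentials', // Template Credentials saved in Transloadit
            path: _assetkey,
          },
        },
      },
    }, (err, status) => (err ? reject(err) : resolve(status)));
  });
}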
The Transloadit core team also gave this response regarding the issue:
It may try to set his streams to be Tus streams which would mean that
they’re not uploaded as multipart/form data.
In either case it seems like the error to his callback would be
originating from the error out of _remoteJson
These could be the problem areas
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L146
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L606
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L642
It is also possible that the form-data library could be the source of
the error
To really test this further we’re going to need to try using the
library he was using, make sure the output of it is good, and then
debug the node-sdk to see where the logic failure is in it, or if the
logic failure is on the API side.

Firebase test auth cloud functions locally

exports.sendWelcomeEmail = functions.auth.user().onCreate((user) => {
  console.log(user.uid);
  console.log(user.email);
  console.log(user.displayName);
});

exports.getUserInfo = functions.https.onCall(async (data, context) => {
  // takes an array of user IDs and returns their information (from the Users collection)
  const userIDs = data.userIDs;
  const result = [];
  const querySnapData = await admin.firestore().collection("Users").get();
  querySnapData.forEach((ele) => {
    if (userIDs.indexOf(ele.id) !== -1 && ele.id !== context.auth.uid) {
      result.push(ele.data());
    }
  });
  return { res: result };
});
I've got these two functions in my project: one is a callable function and the other is an auth-trigger function.
So in my client app, I run
firebase.functions().useFunctionsEmulator('http://localhost:5001');
let getUserInfo = functions.httpsCallable('getUserInfo');
getUserInfo({userIDs: data}).then(res => doSomething);
And to run the cloud functions locally
firebase emulators:start
But it says
functions[sendWelcomeEmail]: function ignored because the auth emulator does not exist or is not running.
So from the client app, getUserInfo works pretty well, but I can't trigger onCreate.
I was not able to find any documentation about an auth emulator.
Any link/article/video or answer is appreciated.
The Firebase Emulator Suite currently supports Cloud Firestore, Realtime Database, Cloud Functions, and Cloud Pub/Sub. It does not yet emulate Firebase Authentication APIs, so any auth calls you make will be executed against the real project that is associated with the emulators.
This also means that your functions.auth.user().onCreate() Cloud Function will not be triggered in the emulators at the moment. You'll have to deploy it to the servers to test this trigger.
To learn when an auth emulator is available, I recommend keeping an eye on Firebase's release notes, and on the main documentation page for the emulator suite that lists the supported products. You can also follow along more closely on Github, either in the commits, or in this feature request.
Firebase recently released the Authentication Emulator; you can check the release notes here: https://firebase.google.com/support/releases#october_26_2020, and a further guide here: https://firebase.google.com/docs/emulator-suite/connect_auth
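A minimal sketch of pointing the web client at the Auth emulator once it is enabled (9099 is the default port; check the guide above for the details of your SDK version):

// Firebase JS SDK v8-style client setup, assuming default emulator ports
firebase.auth().useEmulator('http://localhost:9099');
firebase.functions().useFunctionsEmulator('http://localhost:5001');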
So, as @Frank van Puffelen wrote above, it is not done yet.
I was able to test auth functions locally like below:
function sendWelcomeEmail(user) {
  console.log(user.uid);
  console.log(user.email);
  console.log(user.displayName);
}

exports.sendWelcomeEmail = functions.auth.user().onCreate((user) => sendWelcomeEmail(user));
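With the logic extracted like that, a plain Node script can exercise it without any emulator; the user object below is just a made-up stand-in:

// Quick local smoke test of the extracted handler
sendWelcomeEmail({
  uid: 'test-uid-123',
  email: 'test@example.com',
  displayName: 'Test User'
});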

Azure Functions: Nodejs, What are restrictions / limitations when using file system?

I have not been able to get an Azure Function working that uses the Node file system module.
I created a brand-new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    context.done();
}
This works fine. I see the full request in live streaming logs, and in the function invocation list.
However, as soon as I add code that actually uses the file system, it seems to crash the Azure Function. It neither completes nor throws an error. It also doesn't show up in the Azure Function invocation list, which is scary, since that is a loss of failure information and I might think my service was running fine when there were actually crashes.
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    fs.writeFile('message.txt', 'Hello Node.js', (err) => {
        if (err) throw err;
        console.log('It\'s saved!');
        context.done();
    });
}
The fs.writeFile code is taken directly from the Node.js documentation:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If there are no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing, there is still a VM / Azure Website App Service underneath, which has a file system.
I can use the Kudu console and navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure Function is written to write a file with a unique name and never perform cleanup; it would eventually take up all the disk space on the host VM and degrade performance. This could happen accidentally by a developer and possibly go unnoticed until it's too late.
This makes me wonder whether it is by design not to use the file system, or whether my function is just written wrong.
Yes, you can use the file system, with some restrictions as described here. That page describes some directories you can access, like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir, and it works:
var fs = require('fs');

module.exports = function (context, input) {
    fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('It\'s saved!');
        context.done();
    });
}
Your initial code was failing because it was trying to write to D:\Windows\system32 which is not allowed.
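A variation on the same idea (not part of the original answer) is to resolve the temp directory at runtime with os.tmpdir(), which on Azure Functions for Windows generally points at the local temp area, so the path is not hard-coded:

var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
    // Build the path from the platform's temp directory instead of hard-coding D:/local/Temp
    var tmpFile = path.join(os.tmpdir(), 'message.txt');
    fs.writeFile(tmpFile, input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('Saved to ' + tmpFile);
        context.done();
    });
}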

Debug Node.js & Express App - Intermittently using 100% CPU

I'm developing an app using NGinx + Node.js + Express + Firebase that simply takes input from a mobile app and stores it to Firebase, optionally uploading files to S3.
In its simplest terms, the "create" function does this:
Validates the input
Formats the input
Checks if there is a file uploaded (via the multer plugin) and stores it
If there was a file, uploads it to Amazon S3 and deletes the source file (it's important to note I was encountering this issue before the inclusion of S3)
Creates the item by pushing into the items reference on Firebase
Creates the item for the user by pushing into the user_items reference on Firebase
There are a few other functions that I have implemented as an API.
My trouble is coming from an intermittent spike in CPU usage, which is causing the nginx server to report a gateway timeout from the Node.js application.
Sometimes the server will fall over when performing authentication against a MongoDB instance; other times it will fall over when I'm receiving the input from the mobile app. There doesn't seem to be any consistency in when it falls over. Sometimes it works fine for 15+ various requests (upload/login/list, etc.), but sometimes it will fall over after just one request.
I have added error checking in the form of:
process.on('uncaughtException', function(err) {
  console.error(err.stack);
});
This will throw errors if I mistype a variable, for example, but when the server crashes there are no exceptions thrown. Similarly, checking my logs shows me nothing. I've tried profiling the application, but the output doesn't make any sense to me at all; it doesn't point to a particular function or plugin.
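(Side note, not from the original post: process.on('uncaughtException') does not fire for rejected promises, so if the lock-up starts in promise-based code, a handler like the sketch below, available in newer Node.js versions, may surface errors that the above misses.)

// Also log unhandled promise rejections, which 'uncaughtException' does not catch
process.on('unhandledRejection', function(reason) {
  console.error('Unhandled rejection:', reason);
});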
I appreciate this is a long-winded problem, but I'd really appreciate it if you could point me in a direction for debugging this issue; it's causing me such a headache!
This may be a bug in the Firebase library. What version are you using?
I've been having a very similar issue that has had me frustrated for days: Node.js + Express + Firebase on Heroku. The process will run for a seemingly random time, then I start getting timeout errors from Heroku without the process ever actually crashing or showing an error. Higher load doesn't seem to make it happen sooner.
I just updated from Firebase 1.0.14 to the latest 1.0.19 and I think it may have fixed the problem for me. The process has been up for 2 hours now, where it would only last 5-30 minutes previously. More testing to do, but thought I'd share my in-progress results in case they were helpful.
It seems the answer was to do with the fact that my Express app was reusing one Firebase connection for every request, and for some reason this was causing the server to lock up.
My solution was to create some basic middleware that provides a new reference to the Firebase on each API request, see below:
var Middleware = {
    /*
     * Initialise Firebase Refs per connection
     */
    initFireBase: function(req, res, next) {
        console.log('Initialising Firebase for user');
        // We need an authToken
        var authToken = req.param('authToken');
        // Validate the auth token
        if (!authToken || authToken.length === 0) {
            return res.send(500, {code: 'INVALID_TOKEN', message: 'You must supply an authToken to this method.'});
        }
        else {
            // Attempt to parse the auth token
            try {
                var decodedToken = JWTSimple.decode(authToken, serverToken);
            }
            catch(e) {
                return res.send(500, {code: 'INVALID_TOKEN', message: 'Supplied token was not recognised.'});
            }
            // Bail out if the token is invalid
            if (!decodedToken) {
                return res.send(500, {code: 'INVALID_TOKEN', message: 'Supplied token was not recognised.'});
            }
            // Otherwise send the decoded token with the request
            else {
                req.auth = decodedToken.d;
            }
        }
        // Create a root reference
        var rootRef = new Firebase('my firebase url');
        // Apply the references to each request
        req.refs = {
            root: rootRef,
            user: rootRef.child('users'),
            inbox: rootRef.child('inbox')
        };
        // Carry on to the calling function
        next();
    }
};
I then simply call this middleware on my routes:
/*
 * Create a post
 */
router.all('/createPost', Middleware.initFireBase, function(req, res) {
    var refs = req.refs;
    refs.inbox.push({}) // etc
    ....
This middleware will soon be extended to provide Firebase.auth() on the connection, to ensure that any API call made with a valid authToken is authenticated as that user on Firebase's side. However, for development this is acceptable.
Hopefully this helps someone.
