Function execution took 540029 ms, finished with status: 'timeout' - node.js

I have created a Cloud Function that connects to a third-party MQTT broker (Mosquitto) and sends data to the Firebase Realtime Database every time the broker receives data from the machine. I am using the GCP console to write and deploy the function. It deploys without any errors, but when I test it from the GCP console it starts sending data and then stops after the configured timeout. I have tried timeout values from 60 to 540 seconds, but it still stops after the specified time. I have also increased the allocated memory, but that hasn't resolved the issue and I keep getting the same timeout error.
This is my code:
const Admin = require("firebase-admin");
const functions = require("firebase-functions");
const mqtt = require('mqtt');

const clientId = 'mqtt_googleserver_********7';
const topic = '#';
const serviceAccount = require("./service.json");

Admin.initializeApp({
  credential: Admin.credential.cert(serviceAccount),
  databaseURL: "https://***************firebaseio.com/"
});

exports.rtdb_mains = functions.https.onRequest((_request, _response) => {
  const client = mqtt.connect('mqtt://**.**.**.****.***', {
    clientId,
    clean: true,
    connectTimeout: 4000,
    username: '******',
    password: '********',
    reconnectPeriod: 1000,
  });
  const db = Admin.database();

  client.addListener('connect', () => {
    console.log('Connected');
    client.subscribe([topic], { qos: 1 });
    console.log(`Subscribe to topic '${topic}'`);
  });

  client.on('message', async (topic, payload) => {
    console.log('Received Message:', topic, payload.toString());
    if (payload.toString() !== "" && topic !== "") {
      const ref = db.ref("All_machines");
      const childref = ref.child(topic.toString());
      await childref.set(payload.toString());
      const topicDetails = topic.split("/");
      const machineId = topicDetails[1];
      const machineParameter = topicDetails[2];
      if (machineParameter === "BoardID") {
        const ref = db.ref(machineParameter);
        await ref.set(machineId);
      }
    }
  });
});
Can anyone please help me with this problem?

You don't need to specify a service.json if you deploy the Cloud Function through Firebase. You can use the default configuration directly:
admin.initializeApp();
Secondly, the way you combine the MQTT client with a Cloud Function is not correct.
You are listening and waiting for messages inside a function that is only triggered by a POST or GET request, so the function is shut down as soon as its timeout is reached.
I suggest using the Pub/Sub API for this kind of messaging; it gives you a proper model for sending and receiving messages (a minimal sketch follows the links below).
If you really need to keep listening for MQTT messages, you will need a provider other than Cloud Functions, or a bridge that forwards the MQTT messages into Pub/Sub (see the links below):
https://cloud.google.com/functions/docs/calling/pubsub
https://www.googlecloudcommunity.com/gc/Serverless/Can-a-cloud-function-subscribe-to-an-MQTT-topic/m-p/402965
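For illustration, here is a minimal sketch of a Pub/Sub-triggered function that writes each message to the Realtime Database. The machine-data topic name and the mqttTopic attribute are assumptions for the example, not part of your setup; you would need a bridge that republishes the broker's MQTT messages to that Pub/Sub topic:
const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp(); // default credentials when deployed through Firebase

// "machine-data" is a hypothetical Pub/Sub topic that the MQTT bridge would publish to.
exports.rtdbMains = functions.pubsub.topic("machine-data").onPublish(async (message) => {
  // Pub/Sub payloads arrive base64-encoded.
  const payload = Buffer.from(message.data, "base64").toString();
  // Assumed attribute carrying the original MQTT topic, e.g. "factory/<machineId>/<parameter>".
  const mqttTopic = (message.attributes && message.attributes.mqttTopic) || "unknown";
  await admin.database().ref("All_machines").child(mqttTopic).set(payload);
});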

Related

How to use Google PubSub without Cloud Function

How can I use Google Pub/Sub to retrieve billing updates without using Cloud Functions? I am currently using the code below, but it says that onPublish does not exist:
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub('MyProjectID');

handleServerEvent = pubsub.topic(GOOGLE_PLAY_PUBSUB_BILLING_TOPIC)
  .onPublish(async (message) => {
  })
TypeError: pubsub.topic(...).onPublish is not a function
I am using Node.js and want to react to events published on a topic.
The onPublish() method is part of the Cloud Functions API. You need to use createSubscription() to get a Subscription object and then use it to listen for new messages. Try the following:
const listenToTopic = async (topicName: string) => {
  const [sub] = await pubsub
    .topic(topicName)
    .createSubscription("subscriptionName");

  sub.on("message", (message) => {
    message.ack();
    console.log(`Received message: ${message}`);
  });
};
// start listener
listenToTopic(GOOGLE_PLAY_PUBSUB_BILLING_TOPIC)
After the subscription has been created once, change createSubscription("subscriptionName") to subscription("subscriptionName") to keep listening for incoming messages, since the subscription already exists, as sketched below.
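For example, a minimal sketch of attaching to the already-created subscription (the project ID and subscription name are the placeholders from the question):
const { PubSub } = require("@google-cloud/pubsub");
const pubsub = new PubSub({ projectId: "MyProjectID" });

// The subscription already exists, so attach to it instead of creating it again.
const sub = pubsub.subscription("subscriptionName");

sub.on("message", (message) => {
  console.log(`Received message: ${message.data.toString()}`);
  message.ack();
});

sub.on("error", (err) => {
  console.error("Subscription error:", err);
});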

Able to connect to redis but set/get times out

I'm trying to do a get() from my AWS Lambda (Node.js) against ElastiCache Redis using the node_redis client. I believe that I'm able to connect to Redis, but I get a timeout (the Lambda's 60-second timeout) when I try to perform a get() operation.
I have also granted my AWS Lambda Administrator access just to be certain that it's not a permissions issue. I'm invoking the Lambda from the AWS console by clicking the Test button.
Here is my redisClient.js:
const util = require('util');
const redis = require('redis');

console.info('Start to connect to Redis Server');
const client = redis.createClient({
  host: process.env.ElastiCacheEndpoint,
  port: process.env.ElastiCachePort
});

client.get = util.promisify(client.get);
client.set = util.promisify(client.set);

client.on('ready', function () {
  console.log(" subs Redis is ready"); // Can see this output in logs
});

client.on('connect', function () {
  console.log('subs connected to redis'); // Can see this output in logs
});

exports.set = async function (key, value) {
  console.log("called set!");
  return await client.set(key, value);
};

exports.get = async function (key) {
  console.log("called get!"); // Can see this output in logs
  return await client.get(key);
};
Here's my index.js which calls the redisClient.js:
const redisclient = require("./redisClient");

exports.handler = async (event) => {
  const params = event.params;
  const operation = event.operation;
  try {
    console.log("Checking RedisCache by calling client get"); // Can see this output in logs
    const cachedVal = await redisclient.get('mykey');
    console.log("Checked RedisCache by calling client get"); // This doesn't show up in logs.
    console.log(cachedVal);
    if (cachedVal) {
      return {
        statusCode: 200,
        body: JSON.stringify(cachedVal)
      };
    } else {
      const setCache = await redisclient.set('myKey', 'myVal');
      console.log(setCache);
      console.log("*******");
      let response = await makeCERequest(operation, params, event.account);
      console.log("CE Request returned");
      return response;
    }
  }
  catch (err) {
    return {
      statusCode: 500,
      body: err,
    };
  }
};
This is the output (timeout error message) that I get:
{
  "errorMessage": "2020-07-05T19:04:28.695Z 9951942c-f54a-4b18-9cc2-119eed65e9f1 Task timed out after 60.06 seconds"
}
I have tried using Bluebird (changing get to getAsync()) per this: https://github.com/UtkarshYeolekar/promisify-redis-client/blob/master/redis.js but still got the same behavior.
I also changed the port used to create the client to a random value (like 8088) to see how the connect event behaves for a failed connection. In that case I still get a timed-out error response, but I don't see subs Redis is ready and subs connected to redis in my logs.
Can anyone please point me in the right direction? I don't understand why I'm able to connect to Redis but the get() request times out.
I figured out the issue and am posting it here in case it helps anyone in the future, as the behavior wasn't very intuitive for me.
I had enabled the AuthToken parameter while setting up my Redis cluster. I was passing the token to the Lambda through the environment variables, but I wasn't using it when sending the get()/set() requests. When I disabled the AuthToken requirement in the Redis configuration, the Lambda was able to reach Redis with get/set requests. More details on AuthToken can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html#cfn-elasticache-replicationgroup-authtoken
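If you prefer to keep AUTH enabled instead of disabling it, here is a minimal sketch of passing the token to node_redis. The AuthToken environment variable name is an assumption, and note that ElastiCache only allows an auth token together with in-transit encryption (TLS):
const redis = require('redis');

const client = redis.createClient({
  host: process.env.ElastiCacheEndpoint,
  port: process.env.ElastiCachePort,
  password: process.env.AuthToken, // assumed env var holding the ElastiCache auth token
  tls: {} // AUTH on ElastiCache requires in-transit encryption
});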

How can I implement socket.IO with Cloud Functions?

So basically I'm building a game where the server sends messages to clients, and the client who answers first receives 1 point. I'm trying to create rooms to improve the multiplayer mode, but I'm stuck at this point.
I'm trying to connect socket.io to my Google Firebase functions, but when I call the function it returns this error:
Billing account not configured. External network is not accessible and quotas are severely limited.
Configure billing account to remove these restrictions
10:13:08.239 AM
addStanza
Uncaught exception
10:13:08.242 AM
addStanza
Error: getaddrinfo EAI_AGAIN at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
10:13:08.584 AM
addStanza
Error: function crashed out of request scope Function invocation was interrupted.
This is the code:
//firebase deploy --only functions
const Proverbi = require('./Proverbi.js');
const socketIo = require("socket.io");
const https = require("https");
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

var server = https.createServer();
server.listen(443, "https://us-central1-chip-chop.cloudfunctions.net");
var io = socketIo.listen(server);

// Take the text parameter passed to this HTTP endpoint and insert it into the
// Realtime Database under the path /messages/:pushId/original
exports.addStanza = functions.https.onRequest(async (req, res) => {
  // Grab the text parameter.
  const nome = req.query.nome;
  // Push the new message into the Realtime Database using the Firebase Admin SDK.
  const snapshot = await admin.database().ref('/stanze').push({ giocatori: { giocatore: { nome: nome, punteggio: 0 } } });
  // Redirect with 303 SEE OTHER to the URL of the pushed object in the Firebase console.
  //res.redirect(200, nome.toString());
  var link = snapshot.toString().split('/');
  res.json({ idStanza: link[4] });
});

// Listens for new messages added to /messages/:pushId/original and creates an
// uppercase version of the message to /messages/:pushId/uppercase
exports.addFirstPlayer = functions.database.ref('/stanze/{pushId}/giocatori/giocatore/nome')
  .onCreate((snapshot, context) => {
    // Grab the current value of what was written to the Realtime Database.
    const nome = snapshot.val();
    // const snapshot3 = snapshot.ref('/stanza/{pushId}/giocatori/giocatore').remove();
    const snapshot2 = snapshot.ref.parent.parent.remove();
    var room = snapshot.ref.parent.parent.parent.val();
    // handle incoming connections from clients
    io.sockets.on('connection', function (socket) {
      // once a client has connected, we expect to get a ping from them saying what room they want to join
      socket.on('room', function (room) {
        socket.join(room);
      });
    });
    io.sockets.in(room).emit('message', nome + 'Si è unito alla stanza');
    return snapshot.ref.parent.parent.push({ nome: nome, punteggio: 0, room: room });
  });

exports.addPlayer = functions.https.onRequest(async (req, res) => {
  // Grab the text parameter.
  const nome = req.query.nome;
  const idStanza = req.query.id;
  // Push the new message into the Realtime Database using the Firebase Admin SDK.
  const snapshot = await admin.database().ref('/stanze/' + idStanza + "/giocatori").push({ nome: nome, punteggio: 0 });
  // Redirect with 303 SEE OTHER to the URL of the pushed object in the Firebase console.
  var room = idStanza;
  // handle incoming connections from clients
  io.sockets.on('connection', function (socket) {
    // once a client has connected, we expect to get a ping from them saying what room they want to join
    socket.on('room', function (room) {
      socket.join(room);
    });
  });
  io.sockets.in(room).emit('message', nome + 'Si è unito alla stanza');
  //res.redirect(200, nome.toString());
  res.json({ success: { id: idStanza } });
});
Is the function crashing only because my Firebase plan is limited? Or are there other problems?
It's not possible to use Cloud Functions as a host for socket-based I/O. Calls to "listen" on any port will fail every time. The provided network infrastructure only handles individual HTTP requests with a request and response payload size of 10MB per request. You have no control over how it handles the request and response at the network level.

Firestore Real Time updates connection in NodeJS

I'm developing a Node.js web app to receive real-time updates from the Firestore DB through the Admin SDK.
This is the init code for the Firestore object. It's executed just once, when the app is deployed (on AWS Elastic Beanstalk):
const admin = require('firebase-admin');
var serviceAccount = require('./../key.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});

var db = admin.firestore();
FUNC.firestore = db;
Then I use this Firestore object in a WebSocket communication to send real-time updates to the browser. The idea is to use the server as a proxy between the browser and Firestore.
socket.on('open', function (client) {
  var query = FUNC.firestore.collection("notifications").doc(client.user.id.toString()).collection("global");
  query.onSnapshot(querySnapshot => {
    querySnapshot.docChanges().forEach(change => {
      client.send({ id: change.doc.id, body: change.doc.data(), type: change.type });
    });
  }, err => {
    console.log(`Encountered error: ${err}`);
  });
});

socket.on('close', function (client) {
  var unsub = FUNC.firestore.collection("notifications").doc(client.user.id.toString()).collection("global").onSnapshot(() => {
  });
  unsub();
});
It works well for a while, but after a few hours the client stops receiving onSnapshot() updates, and after a while the server logs the error: Encountered error: Error: 10 ABORTED: The operation was aborted.
What's wrong? Should I initialize Firestore on each connection? Is there some lifecycle mistake?
Thank you
EDIT (A very bad solution)
I've tried to create a separate firebase-admin app instance for each logged-in user and changed my code this way:
const admin = require('firebase-admin');
var serviceAccount = require('./../key.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});

FUNC.getFirestore = function (user) {
  try {
    user.firebase = admin.app(user.id.toString());
    return user.firebase.firestore();
  } catch (e) {
    //ignore
  }
  var app = admin.initializeApp({
    credential: admin.credential.cert(serviceAccount)
  }, user.id.toString());
  user.firebase = app;
  return user.firebase.firestore();
};

FUNC.removeFirebase = function (user) {
  if (user.firebase) {
    user.firebase.delete();
  }
};
And then socket listeners:
self.on('open', function (client) {
  var query = FUNC.getFirestore(client.user).collection("notifications").doc(client.user.id.toString()).collection("global");
  query.onSnapshot(querySnapshot => {
    querySnapshot.docChanges().reverse();
    querySnapshot.docChanges().forEach(change => {
      client.send({ id: change.doc.id, body: change.doc.data(), type: change.type });
    });
  }, err => {
    console.log(`Encountered error: ${err}`);
  });
});

self.on('close', function (client) {
  var unsub = FUNC.getFirestore(client.user).collection("notifications").doc(client.user.id.toString()).collection("global").onSnapshot(() => {
  });
  unsub();
  FUNC.removeFirebase(client.user);
});
So when a client disconnects for any reason, the server removes its Firebase app. It works, but I've noticed a huge memory leak on the server, and I need some help.
UPDATED ANSWER
After a lot of research I understood that this kind of approach is wrong. Of course, the old answer below can work as a workaround, but it is not the real solution, because Firestore was not designed for a flow like: Firestore <--(Admin SDK)--> Server <--(WebSocket)--> Client.
To build this communication properly, I studied and applied Firestore Security Rules (https://firebase.google.com/docs/firestore/security/get-started) together with custom token generation (https://firebase.google.com/docs/auth/admin/create-custom-tokens). So the correct flow is:
Client login request --> Server + Admin SDK generate a custom auth token and return it to the client
Then the real-time communication happens only between the client and Firestore itself: Client + custom auth token <--(Firebase JS SDK)--> Firestore DB
As you can see, the server is no longer involved in the real-time communication; the client receives updates directly from Firestore.
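As an illustration, here is a minimal sketch of that flow; the login handler shape and the way the user id is obtained are assumptions for the example, not part of my actual app:
// Server side (Admin SDK): mint a custom token at login and return it to the client.
const admin = require('firebase-admin');

async function handleLogin(req, res) {
  const uid = req.user.id.toString(); // however your app identifies the user
  const customToken = await admin.auth().createCustomToken(uid);
  res.json({ token: customToken });
}

// Client side (Firebase JS SDK), for reference:
// firebase.auth().signInWithCustomToken(token)
//   .then(() => firebase.firestore()
//     .collection('notifications').doc(uid).collection('global')
//     .onSnapshot(snapshot => { /* handle changes */ }));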
OLD ANSWER
Finally I can answer myself. First of all, the second solution I tried is a very bad one, because each new app created through the Admin SDK is kept in RAM; with 20-30 users the app reaches more than 1 GB of RAM, which is absolutely unacceptable.
So the first implementation was the better solution; however, I got the register/unregister lifecycle of the onSnapshot listener wrong. Each onSnapshot() call returns a different unsubscribe function, even when called on the same reference. So instead of closing the existing listener when the socket closed, I was opening another one. This is how it should be:
socket.on('open', function (client) {
  var query = FUNC.firestore.collection("notifications").doc(client.user.id.toString()).collection("global");
  client.user.firestoreUnsub = query.onSnapshot(querySnapshot => {
    querySnapshot.docChanges().forEach(change => {
      client.send({ id: change.doc.id, body: change.doc.data(), type: change.type });
    });
  }, err => {
    console.log(`Encountered error: ${err}`);
  });
});

socket.on('close', function (client) {
  client.user.firestoreUnsub();
});
After almost 48 hours, the listeners still work without problems and no memory leaks have occurred.

Google Cloud Functions: Could not authenticate request

I am using a Node.js function on Google Cloud Functions to save Pub/Sub messages to GCS (Storage), but it randomly gives me the following error for some messages (most of the messages are successfully written):
"Error: Could not authenticate request. Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information."
It doesn't make sense, since the function uses the same service account for all messages, which has the proper permissions, and all messages come from the same source and go to the same destination. Can someone enlighten me on what I could do?
I'm using @google-cloud/storage version 0.8.0.
/**
 * Triggered from a message on a Cloud Pub/Sub topic.
 *
 * @param {!Object} event The Cloud Functions event.
 * @param {!Function} The callback function.
 */
const bucketName = 'backup-queue-bucket';
const util = require('util');
const gcs = require('@google-cloud/storage')();
const crypto = require('crypto');

exports.backupQueue = function backupQueue(event, callback) {
  // The Cloud Pub/Sub Message object.
  const timestamp = event.timestamp;
  const resources = event.resource.split('/');
  const pubsubMessage = event.data;
  const messageContent = Buffer.from(pubsubMessage.data, 'base64').toString();
  // We're just going to log the message to prove that
  // it worked.
  var queueName = resources[resources.length - 1];
  console.log(`Message received: ${messageContent} in queue ${queueName}`);
  const filename = timestamp + '_' + crypto.createHash('md5').update(messageContent).digest('hex');
  const bucket = gcs.bucket(bucketName);
  const file = bucket.file(queueName + '/' + filename);
  const fs = file.createWriteStream({});
  fs.on('finish', function () {
    console.log(`Message ${filename} successfully written to file.`);
  });
  fs.on('error', function (error) {
    console.warn(`Message ${filename} could not be written to file. Retry will be called. Error: ${error.message}`);
    setTimeout(backupQueue(event, callback), 1000);
  });
  fs.write(Buffer.from(pubsubMessage.data, 'base64').toString());
  fs.end();
  callback();
};
EDIT:
I opened an issue on google-cloud-node and they confirmed this as a bug. It should be fixed in the next release.
